Mathematics for Enzyme Reaction Kinetics and Reactor Performance, 2 Volume Set (Enzyme Reaction Engineering) [1 ed.] 1119490286, 9781119490289



Mathematics for Enzyme Reaction Kinetics and Reactor Performance

Enzyme Reactor Engineering: Forthcoming Titles

ANALYSIS OF ENZYME REACTOR PERFORMANCE
Volume 1 – 1. Ideal Reactors: Single Unit
Volume 2 – 2. Ideal Reactors: Multiple Units
Volume 3 – 3. Nonideal Reactors: Homogeneous with Convection; 4. Nonideal Reactors: Homogeneous and Heterogeneous with Diffusion
Volume 4 – 5. Integration of Chemical Reaction and Physical Separation
Volume 5 – 6. Integration of Chemical Reactor and External Control

ANALYSIS OF ENZYME REACTION KINETICS
Volume 1 – 1. Mathematical Approach to Rate Expressions; 2. Statistical Approach to Rate Expressions
Volume 2 – 3. Physical Modulation of Reaction Rate; 4. Chemical Modulation of Reaction Rate

ENZYME REACTION KINETICS AND REACTOR PERFORMANCE
Volume 1 – 1. Basic Concepts of Reactions and Reactors; 2. Basic Concepts of Hydrodynamics
Volume 2 – 3. Basic Concepts of Mass Transfer; 4. Basic Concepts of Enthalpy Transfer; 5. Basic Concepts of Chemical Reaction; 6. Basic Concepts of Enzymes

Mathematics for Enzyme Reaction Kinetics and Reactor Performance Volume 1

F. Xavier Malcata

Department of Chemical Engineering, University of Porto, Portugal

You cannot teach a man anything; you can only help him find it for himself. Galileo Galilei

This edition first published 2019
© 2019 John Wiley & Sons Ltd

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of F. Xavier Malcata to be identified as the author of this work has been asserted in accordance with the law.

Registered Offices
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial Office
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty
In view of ongoing research, equipment modifications, changes in governmental regulations, and the constant flow of information relating to the use of experimental reagents, equipment, and devices, the reader is urged to review and evaluate the information provided in the package insert or instructions for each chemical, piece of equipment, reagent, or device for, among other things, any changes in the instructions or indication of usage and for added warnings and precautions. While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging-in-Publication Data
Names: Malcata, F. Xavier, author.
Title: Mathematics for enzyme reaction kinetics and reactor performance / F. Xavier Malcata, Department of Chemical Engineering, University of Porto, Portugal.
Description: Hoboken : Wiley, [2019-] | Series: Enzyme reaction engineering | Includes bibliographical references and index.
Identifiers: LCCN 2018022263 (print) | LCCN 2018028979 (ebook) | ISBN 9781119490326 (Adobe PDF) | ISBN 9781119490333 (ePub) | ISBN 9781119490289 (volume 1 : hardcover)
Subjects: LCSH: Enzyme kinetics–Mathematics.
Classification: LCC QP601.3 (ebook) | LCC QP601.3 .M35 2019 (print) | DDC 572/.744–dc23
LC record available at https://lccn.loc.gov/2018022263

Cover Design: Wiley
Cover Image: Background © Zarya Maxim Alexandrovich/Shutterstock, Foreground (left) © Laguna Design/Getty Images, (right) Courtesy of F. Xavier Malcata

Set in 10/12pt Warnock by SPi Global, Pondicherry, India
Printed and bound by CPI Group (UK) Ltd, Croydon, CR0 4YY

10 9 8 7 6 5 4 3 2 1

To my family: António, Mª Engrácia, Ângela, Filipa, and Diogo. For their everlasting understanding, unselfish support, and endless love.

Contents

About the Author  xv
Series Preface  xix
Preface  xxiii

Volume 1

Part 1  Basic Concepts of Algebra  1

1  Scalars, Vectors, Matrices, and Determinants  3

2  Function Features  7
2.1  Series  17
2.1.1  Arithmetic Series  17
2.1.2  Geometric Series  19
2.1.3  Arithmetic/Geometric Series  22
2.2  Multiplication and Division of Polynomials  26
2.2.1  Product  27
2.2.2  Quotient  28
2.2.3  Factorization  31
2.2.4  Splitting  35
2.2.5  Power  43
2.3  Trigonometric Functions  52
2.3.1  Definition and Major Features  52
2.3.2  Angle Transformation Formulae  57
2.3.3  Fundamental Theorem of Trigonometry  73
2.3.4  Inverse Functions  79
2.4  Hyperbolic Functions  80
2.4.1  Definition and Major Features  80
2.4.2  Argument Transformation Formulae  85
2.4.3  Euler's Form of Complex Numbers  89
2.4.4  Inverse Functions  90

3  Vector Operations  97
3.1  Addition of Vectors  99
3.2  Multiplication of Scalar by Vector  101
3.3  Scalar Multiplication of Vectors  103
3.4  Vector Multiplication of Vectors  111

4  Matrix Operations  119
4.1  Addition of Matrices  120
4.2  Multiplication of Scalar by Matrix  121
4.3  Multiplication of Matrices  124
4.4  Transposal of Matrices  131
4.5  Inversion of Matrices  133
4.5.1  Full Matrix  134
4.5.2  Block Matrix  138
4.6  Combined Features  140
4.6.1  Symmetric Matrix  141
4.6.2  Positive Semidefinite Matrix  142

5  Tensor Operations  145

6  Determinants  151
6.1  Definition  152
6.2  Calculation  157
6.2.1  Laplace's Theorem  159
6.2.2  Major Features  161
6.2.3  Tridiagonal Matrix  177
6.2.4  Block Matrix  179
6.2.5  Matrix Inversion  181
6.3  Eigenvalues and Eigenvectors  185
6.3.1  Characteristic Polynomial  186
6.3.2  Cayley and Hamilton's Theorem  190

7  Solution of Algebraic Equations  199
7.1  Linear Systems of Equations  199
7.1.1  Jacobi's Method  203
7.1.2  Explicitation  212
7.1.3  Cramer's Rule  213
7.1.4  Matrix Inversion  216
7.2  Quadratic Equation  220
7.3  Lambert's W Function  224
7.4  Numerical Approaches  228
7.4.1  Double-initial Estimate Methods  229
7.4.1.1  Bisection  229
7.4.1.2  Linear Interpolation  232
7.4.2  Single-initial Estimate Methods  242
7.4.2.1  Newton and Raphson's Method  242
7.4.2.2  Direct Iteration  250

Further Reading  255

Volume 2

Part 2  Basic Concepts of Calculus  259

8  Limits, Derivatives, Integrals, and Differential Equations  261

9  Limits and Continuity  263
9.1  Univariate Limit  263
9.1.1  Definition  263
9.1.2  Basic Calculation  267
9.2  Multivariate Limit  271
9.3  Basic Theorems on Limits  272
9.4  Definition of Continuity  280
9.5  Basic Theorems on Continuity  282
9.5.1  Bolzano's Theorem  282
9.5.2  Weierstrass' Theorem  286

10  Differentials, Derivatives, and Partial Derivatives  291
10.1  Differential  291
10.2  Derivative  294
10.2.1  Definition  294
10.2.1.1  Total Derivative  295
10.2.1.2  Partial Derivatives  300
10.2.1.3  Directional Derivatives  307
10.2.2  Rules of Differentiation of Univariate Functions  308
10.2.3  Rules of Differentiation of Multivariate Functions  325
10.2.4  Implicit Differentiation  325
10.2.5  Parametric Differentiation  327
10.2.6  Basic Theorems of Differential Calculus  331
10.2.6.1  Rolle's Theorem  331
10.2.6.2  Lagrange's Theorem  332
10.2.6.3  Cauchy's Theorem  334
10.2.6.4  L'Hôpital's Rule  337
10.2.7  Derivative of Matrix  349
10.2.8  Derivative of Determinant  356
10.3  Dependence Between Functions  358
10.4  Optimization of Univariate Continuous Functions  362
10.4.1  Constraint-free  362
10.4.2  Subjected to Constraints  364
10.5  Optimization of Multivariate Continuous Functions  367
10.5.1  Constraint-free  367
10.5.2  Subjected to Constraints  371

11  Integrals  373
11.1  Univariate Integral  374
11.1.1  Indefinite Integral  374
11.1.1.1  Definition  374
11.1.1.2  Rules of Integration  377
11.1.2  Definite Integral  386
11.1.2.1  Definition  386
11.1.2.2  Basic Theorems of Integral Calculus  393
11.1.2.3  Reduction Formulae  396
11.2  Multivariate Integral  400
11.2.1  Definition  400
11.2.1.1  Line Integral  400
11.2.1.2  Double Integral  403
11.2.2  Basic Theorems  404
11.2.2.1  Fubini's Theorem  404
11.2.2.2  Green's Theorem  409
11.2.3  Change of Variables  411
11.2.4  Differentiation of Integral  414
11.3  Optimization of Single Integral  416
11.4  Optimization of Set of Derivatives  424

12  Infinite Series and Integrals  429
12.1  Definition and Criteria of Convergence  429
12.1.1  Comparison Test  430
12.1.2  Ratio Test  431
12.1.3  D'Alembert's Test  432
12.1.4  Cauchy's Integral Test  434
12.1.5  Leibnitz's Test  436
12.2  Taylor's Series  437
12.2.1  Analytical Functions  451
12.2.1.1  Exponential Function  451
12.2.1.2  Hyperbolic Functions  458
12.2.1.3  Logarithmic Function  459
12.2.1.4  Trigonometric Functions  463
12.2.1.5  Inverse Trigonometric Functions  466
12.2.1.6  Powers of Binomials  476
12.2.2  Euler's Infinite Product  479
12.3  Gamma Function and Factorial  488
12.3.1  Integral Definition and Major Features  489
12.3.2  Euler's Definition  494
12.3.3  Stirling's Approximation  499

13  Analytical Geometry  505
13.1  Straight Line  505
13.2  Simple Polygons  508
13.3  Conical Curves  510
13.4  Length of Line  516
13.5  Curvature of Line  525
13.6  Area of Plane Surface  530
13.7  Outer Area of Revolution Solid  536
13.8  Volume of Revolution Solid  552

14  Transforms  559
14.1  Laplace's Transform  559
14.1.1  Definition  559
14.1.2  Major Features  571
14.1.3  Inversion  582
14.2  Legendre's Transform  590

15  Solution of Differential Equations  597
15.1  Ordinary Differential Equations  597
15.1.1  First Order  598
15.1.1.1  Nonlinear  598
15.1.1.2  Linear  600
15.1.2  Second Order  602
15.1.2.1  Nonlinear  603
15.1.2.2  Linear  613
15.1.3  Linear Higher Order  650
15.2  Partial Differential Equations  660

16  Vector Calculus  667
16.1  Rectangular Coordinates  667
16.1.1  Definition and Representation  667
16.1.2  Definition of Nabla Operator, ∇  668
16.1.3  Algebraic Properties of ∇  673
16.1.4  Multiple Products Involving ∇  676
16.1.4.1  Calculation of (∇·∇)ϕ  676
16.1.4.2  Calculation of (∇·∇)u  676
16.1.4.3  Calculation of ∇·(ϕu)  677
16.1.4.4  Calculation of ∇·(∇ × u)  679
16.1.4.5  Calculation of ∇·(ϕ∇ψ)  680
16.1.4.6  Calculation of ∇·(uu)  682
16.1.4.7  Calculation of ∇ × (∇ϕ)  684
16.1.4.8  Calculation of ∇(∇·u)  685
16.1.4.9  Calculation of (u·∇)u  690
16.1.4.10  Calculation of ∇·(τ·u)  693
16.2  Cylindrical Coordinates  695
16.2.1  Definition and Representation  695
16.2.2  Redefinition of Nabla Operator, ∇  700
16.3  Spherical Coordinates  705
16.3.1  Definition and Representation  705
16.3.2  Redefinition of Nabla Operator, ∇  715
16.4  Curvature of Three-dimensional Surfaces  729
16.5  Three-dimensional Integration  737

17  Numerical Approaches to Integration  741
17.1  Calculation of Definite Integrals  741
17.1.1  Zeroth Order Interpolation  743
17.1.2  First- and Second-Order Interpolation  750
17.1.2.1  Trapezoidal Rule  751
17.1.2.2  Simpson's Rule  754
17.1.2.3  Higher Order Interpolation  768
17.1.3  Composite Methods  771
17.1.4  Infinite and Multidimensional Integrals  775
17.2  Integration of Differential Equations  777
17.2.1  Single-step Methods  779
17.2.2  Multistep Methods  782
17.2.3  Multistage Methods  790
17.2.3.1  First Order  790
17.2.3.2  Second Order  790
17.2.3.3  General Order  793
17.2.4  Integral Versus Differential Equation  801

Part 3  Basic Concepts of Statistics  807

18  Continuous Probability Functions  809
18.1  Basic Statistical Descriptors  810
18.2  Normal Distribution  815
18.2.1  Derivation  816
18.2.2  Justification  821
18.2.3  Operational Features  826
18.2.4  Moment-generating Function  829
18.2.4.1  Single Variable  829
18.2.4.2  Multiple Variables  835
18.2.5  Standard Probability Density Function  843
18.2.6  Central Limit Theorem  846
18.2.7  Standard Probability Cumulative Function  856
18.3  Other Relevant Distributions  859
18.3.1  Lognormal Distribution  859
18.3.1.1  Probability Density Function  859
18.3.1.2  Mean and Variance  860
18.3.1.3  Probability Cumulative Function  863
18.3.1.4  Mode and Median  865
18.3.2  Chi-square Distribution  866
18.3.2.1  Probability Density Function  866
18.3.2.2  Mean and Variance  870
18.3.2.3  Asymptotic Behavior  871
18.3.2.4  Probability Cumulative Function  873
18.3.2.5  Mode and Median  875
18.3.2.6  Other Features  876
18.3.3  Student's t-distribution  878
18.3.3.1  Probability Density Function  878
18.3.3.2  Mean and Variance  881
18.3.3.3  Asymptotic Behavior  885
18.3.3.4  Probability Cumulative Function  888
18.3.3.5  Mode and Median  890
18.3.4  Fisher's F-distribution  891
18.3.4.1  Probability Density Function  891
18.3.4.2  Mean and Variance  894
18.3.4.3  Asymptotic Behavior  899
18.3.4.4  Probability Cumulative Function  902
18.3.4.5  Mode and Median  902
18.3.4.6  Other Features  906

19  Statistical Hypothesis Testing  919

20  Linear Regression  927
20.1  Parameter Fitting  928
20.2  Residual Characterization  932
20.3  Parameter Inference  935
20.3.1  Multivariate Models  935
20.3.2  Univariate Models  939
20.4  Unbiased Estimation  942
20.4.1  Multivariate Models  942
20.4.2  Univariate Models  945
20.5  Prediction Inference  962
20.6  Multivariate Correction  964

Further Reading  979
Index  985


About the Author

Par est scientia laboris. (Work is ever the mate of science.)

Prof. F. Xavier Malcata was born in Malange (Angola) in 1963, and earned: a B.Sc. degree in Chemical Engineering (5-year program), from the University of Porto (UP, Portugal) in 1986 (with first class honors); a Ph.D. degree in Chemical Engineering (with a distributed minor in Food Science, Statistics and Biochemistry), from the University of Wisconsin (UW, USA) in 1991; an equivalent Doctoral degree in Biotechnology – food science and technology, from the Portuguese Catholic University (UCP, Portugal) in 1998; and a Habilitation degree in Food Science and Engineering, also from UCP, in 2004.

Prof. Malcata has held academic appointments as: Teaching Assistant at UCP in 1985–1987 and at UW in 1988; Lecturer at UW in 1989; Assistant Professor at UCP in 1991–1998; Associate Professor at UCP in 1998–2004; and Full Professor at UCP in 2004–2010, Superior Institute of Maia (ISMAI, Portugal) in 2010–2012, and UP since 2012. He also held professional appointments as: Dean of the College of
Biotechnology of UCP in 1998–2008; President of the Portuguese Society of Biotechnology in 2003–2008; Coordinator of the Northern Chapter of Chemical Engineering of the Portuguese Engineering Accreditation Board in 2004–2009; Official Delegate, in 2002–2013, of the Portuguese Government to the VI and VII Framework Programs of R&D held by the European Union – in such key areas as food quality and safety, and food, agriculture (including fisheries), and biotechnology, respectively; Chief Executive Officer of the University/Industry Extension (nonprofit) Associations AESBUC in 1998–2008 and INTERVIR+ in 2006–2008; and Chief Executive Officer of the Entrepreneurial Biotechnological Support Associations CiDEB in 2005–2008 and INOVAR&CRESCER in 2006–2008. Over the years, the author has received several national and international public recognitions and awards, including: Cristiano P. Spratley Award by UP, in 1985; Centennial Award by UP, in 1986; election for membership in Phi Tau Sigma – honor society of food science (USA), in 1990; election for Sigma Xi – honor society of scientific and engineering research (USA), in 1990; election for Tau Beta Pi – honor society of engineering (USA), in 1991; Ralph H. 
Potts Memorial Award by American Oil Chemists' Society (AOCS, USA), in 1991; election for New York Academy of Sciences (USA), in 1992; Foundation Scholar Award – dairy foods division by American Dairy Science Association (ADSA, USA), in 1998; decoration as Chevalier dans l'Ordre des Palmes Académiques by French Government, in 1999; Young Scientist Research Award by AOCS, in 2001; Canadian/International Constituency Investigator Award in Physical Sciences and Engineering by Sigma Xi, in 2002 and 2004; Excellence Promotion Award by Portuguese Foundation for Science and Technology (Portugal), in 2005; Danisco International Dairy Science Award by ADSA, in 2007; Edgar Cardoso Innovation Award by the Mayor of Gaia, in 2007; Scientist of the Year Award by European Federation of Food Science and Technology (Netherlands), in 2007; Samuel C. Prescott Award by Institute of Food Technologists (IFT, USA), in 2008; International Leadership Award by International Association for Food Protection (IAFP, USA), in 2008; election for Fellow by IFT, in 2011; Elmer Marth Educator Award by IAFP, in 2011; election for Fellow by International Academy of Food Science and Technology (IAFoST), in 2012; Distinguished Service Award by ADSA, in 2012; J. Dairy Sci. Most Cited Paper Award by ADSA, in 2012; election for Fellow by ADSA, in 2013; William V. Cruess Award for excellence in teaching by IFT, in 2014; and election for Fellow by AOCS, in 2014. Among his many scientific interests, Prof. Malcata has focused his research chiefly on four major areas: theoretical simulation and optimization of enzyme reactors, theoretical optimization of thermodynamically and kinetically controlled processes, production and immobilization of oxidoreductases and hydrolases for industrial applications, and design and optimization of bioreactors to produce and process edible oils.
In addition, he has developed work on: microbiological and biochemical characterization and technological improvement of traditional foods, development of nutraceutical ingredients and functional foods, rational application of unit operations to specific agri-food processing, and design and development of novel photobioreactors for cultivation of microalgae, aimed at biofuel or high added-value compound production. To date, he has published more than 400 papers in peer-reviewed international journals that received more than 12000 official citations in all (without self-citations), corresponding to an h-index of 54; he has supervised 30 Ph.D. dissertations successfully concluded; he has written 14 monographs and edited 5 multiauthored books; he has authored more than 50 chapters

About the Author

in edited books and 35 papers in trade journals, besides more than 50 technical publications. He was also a member of about 60 peer-reviewing committees of research projects and fellowships; he has acted as supervisor of 90 individual fellowships, most at Ph.D. and postdoctoral levels, and collaborated in 60 research and development projects – of which he has served as principal investigator in 36; he has participated in 50 organizing/scientific committees of professional meetings; he has delivered 150+ invited lectures worldwide, besides almost 600 volunteer presentations in congresses and workshops; he has served on the editorial boards of 5 major journals in the applied biotechnology and food science and engineering areas; and he has reviewed several hundred manuscripts for journals and encyclopedias. He has been a longstanding member of the American Institute of Chemical Engineers, American Chemical Society, IFT, American Association for the Advancement of Science, AOCS, IAFP, and ADSA.


Series Preface

Ad augusta per angusta. (Toward the top, through hard work.)

Comprehensive mathematical simulation – using mechanistic models as far as possible – constitutes an essential contribution to rationally characterize performance, as well as support design and drive optimization of any enzyme reactor. However, too often studies available in the literature – including text and reference books – deal with extensive modelling of chemical reactors that employ inorganic catalysts, or instead present extensive kinetic analysis of enzymes acting only (and implicitly) in batch apparatuses. Although constraining from an engineering perspective, this status quo is somewhat expected – because chemical engineers typically lack biochemical background, while biochemists miss engineering training. Meanwhile, rising environmental concerns and stricter legislation worldwide have urged the industry to resort to more sustainable, efficient, and cleaner processes – which tend to mimic natural (i.e. enzyme-mediated) pathways; they generate essentially no polluting effluents or residues, require mild conditions of operation, and exhibit low-energy requirements – while taking advantage of the extremely high activity and unique substrate selectivity of enzymes. The advent of genetic engineering has also dramatically contributed to drop the unit price, and enlarge the portfolio of enzymes available for industrial purposes, via overexpression in transformed microorganisms and development of sophisticated purification techniques; and advances in molecular engineering have further permitted specific features, in terms of performance and stability, to be imparted to enzymes for tailored uses, besides overcoming their intrinsic susceptibility to decay.
An innovative approach is thus in order, where fundamental and applied aspects pertaining to enzyme reactors are comprehensively tackled – built upon mathematical simulation, and encompassing various ideal and nonideal configurations, presented and discussed in a consistent and pragmatic way. Enzyme Reactor Engineering pursues this goal, and accordingly conveys the most integrated and complete treatment of the subject of enzyme reactors to date; it will likely materialize a qualitative leap toward more effective strategies of describing, designing, and optimizing said reactors. More than a mere description of technology, true engineering aspects departing from first principles are put forward, and their rationale is systematically emphasized – with special attention paid to stepwise derivation of the underlying equations, so as to permit a self-paced learning program by any student possessing elementary knowledge of algebra, calculus, and statistics. A careful selection of mathematical tools deemed useful for enzyme
reactors is also provided in dedicated volumes, for the more inquisitive students and practitioners – in a straightforward, yet fully justified manner. Furthermore, appropriate examples, based (at least) on Michaelis and Menten's enzymatic kinetics and first-order enzyme decay, are worked out in full – for their being representative of industrial situations, while exhibiting a good compromise between practical applicability and mathematical simplicity. In this regard, the present book collection represents an unparalleled way of viewing enzyme reactors – clearly focused on the reactor component but prone to build an integrated picture, including mixture via momentum and mass transfer, and subsequent transformation via chemical reaction, with underlying enthalpic considerations as found necessary. In a word, Enzyme Reactor Engineering attempts to contribute to a thorough understanding of the engineering concepts behind enzyme reactors – framed by a rigorous mathematical and physically consistent approach, and based on mechanistic expressions describing physical phenomena and typical expressions for enzyme-mediated kinetics and enzyme decay. It takes advantage of a multiplicity of mathematical derivations, but ends up with several useful formulae while highlighting general solutions; and covers from basic definitions and biochemical concepts, through ideal models of flow, eventually to models of actual reactor behavior – including interaction with physical separation and external control. The typical layout of each chapter accordingly includes: introductory considerations, which set the framework for each theme in terms of relevance; objective definition, which entails specific goals and usefulness of ensuing results; and mathematical stepwise development, interwoven with clear physicochemical discussion (wherever appropriate), which resorts to graphical interpretations and presents step-by-step proofs to eventually generate (duly highlighted) milestone formulae.
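The two model expressions named above – Michaelis and Menten's rate law coupled with first-order enzyme decay – can be sketched numerically in a few lines. The Python snippet below is illustrative only; the parameter values vmax0, Km, and kd are hypothetical, not taken from the book:

```python
import math

def michaelis_menten_rate(S, vmax, Km):
    # Michaelis-Menten rate law: v = vmax * S / (Km + S)
    return vmax * S / (Km + S)

def vmax_after_decay(vmax0, kd, t):
    # First-order enzyme decay: vmax(t) = vmax0 * exp(-kd * t)
    return vmax0 * math.exp(-kd * t)

# Hypothetical parameters, for illustration only
vmax0, Km, kd = 1.0, 0.5, 0.1

# Defining property of Km: at S = Km, the rate equals half of vmax
assert abs(michaelis_menten_rate(Km, vmax0, Km) - 0.5 * vmax0) < 1e-12

# Enzyme activity decreases monotonically under first-order decay
assert vmax_after_decay(vmax0, kd, 10.0) < vmax0
```

Evaluating the rate with the decayed vmax at successive times reproduces, in miniature, the coupling of kinetics and enzyme deactivation that the worked examples in this collection rely on.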
All in all, such an approach is aimed at helping one grasp the essence of descriptive functions, as well as the meaning behind hypothesized parameters and attained optima. Selected papers, chapters, or books are listed at the end, for more in-depth, complementary reading – aimed at reinforcing global overviews. Enzyme Reactor Engineering is organized as four major sets, which support a selfconsistent and -contained book collection: Enzyme Reaction Kinetics and Reactor Performance, Analysis of Enzyme Reaction Kinetics, Analysis of Enzyme Reactor Performance, and Mathematics for Enzyme Reaction Kinetics and Reactor Performance. Such a philosophy is primarily intended to help the prospective learner evolve in their knowledge acquisition steps – although it also constitutes standard material suitable for instructors; and allows the reader to first grasp the supporting concepts before proceeding to a deeper and deeper insight on the detailed kinetics of reactions brought about by generic enzymes, and eventually extending said concepts to overall reactor operation using enzymes. Three levels of description are indeed apparent and sequentially considered in the core of this book collection: macroscopic, or ideal; microscopic, or nonideal in terms of hydrodynamics (including homogeneous, nontrivial flow patterns) and mass transfer (including multiphasic systems); and submicroscopic, or nonideal in terms of mixing. The quality of the approximation increases in this order – but so does the complexity of the mathematical models entertained, and the thoroughness of the experimental data required thereby. This treatise on reactors, using enzymes as catalysts, should be usable by and useful to both (advanced) undergraduate and graduate students interested in the fascinating field of white biotechnology, and typically enrolled in chemical or biochemical engineering
degrees; as well as industrial practitioners involved in conceptual design or concerned with rational optimization of enzyme-mediated processes. Enzyme Reactor Engineering has been conceived for hybrid utilization as both text and reference – since virtually all topics of relevance for enzyme reactors are addressed to some extent, and carefully related to each other.

F. Xavier Malcata
Professor of Chemical Engineering
University of Porto (Portugal)


Preface

Quality is not an act, it is a habit.
Aristotle

Mathematics for Enzyme Reaction Kinetics and Reactor Performance is the first set in a unique 11-volume collection on Enzyme Reactor Engineering. This two-volume set relates specifically to the mathematical background required for systematic and rational simulation of both reaction kinetics and reactor performance, and to fully understand and capitalize on the modelling concepts developed; it accordingly reviews basic and useful concepts of Algebra (first volume), and Calculus and Statistics (second volume). A brief overview of such native algebraic entities as scalars, vectors, matrices, and determinants constitutes the starting point of the first volume; the major features of germane functions are then addressed – namely, polynomials and series (and their operative algebra), as well as trigonometric and hyperbolic functions. Vector operations ensue, with results either of scalar or vector nature, complemented by tensor/matrix operations and their properties. The calculation of determinants is considered next – with an emphasis on their underlying characteristics, and use to find eigenvalues and -vectors. Finally, exact methods for solution of selected algebraic equations, including sets of linear equations, are addressed – as well as numerical methods for utilization at large. The second volume departs from introduction of seminal concepts in calculus, i.e. limits, derivatives, integrals, and differential equations; limits, along with continuity, are further expanded afterward, covering uni- and multi-variate cases, as well as classical theorems. After recovering the concept of differential and applying it to generate (regular and partial) derivatives, the most important rules of differentiation of functions (in explicit, implicit, and parametric forms) are retrieved – and the most relevant theorems supporting simpler manipulation thereof are reviewed.
Once the conditions for independence of functions are put forward, the strategies to optimize uni- and multi-variate functions are tackled – either in the presence or absence of constraints. The concept of integral is finally discussed, in both indefinite and definite forms – and the fundamental theorems are brought on board, along with the rules of integration. Furthermore, optimization of integrals is discussed, as part of calculus of variations. Practical applications of the concept of derivative follow – namely, for development of Taylor’s series and setting of associated convergence criteria; and also of the concept of integral – namely, to define the gamma function and to take advantage of its ubiquitous properties. Due to their relevance in reactor modelling, fundamental concepts in analytical geometry are
recalled – with an emphasis on curves, surfaces, and volumes bearing simple (yet representative) shapes. Finally, the workhorse of process modelling is covered to some length – i.e. (ordinary) differential equations, including such useful tools for solution thereof as Laplace's integral (and Legendre's derivative) transforms – with a brief excursion to solution of (first order) partial differential equations. The most important methods of analytical solution available are duly reviewed and eventually complemented with simple numerical approaches to integration. The final topic is vector calculus – where the core del operator is introduced, and the most important applications to scalar and vector entities are developed as identities expressed in rectangular coordinates; an extension is made later to cylindrical and spherical coordinates, for the sake of completeness. The second volume ends with a brief coverage of statistics – starting with continuous probability functions and statistical descriptors, and proceeding to discussion in depth of the normal distribution; such other continuous distributions as lognormal, chi-square, Student's t-, and Fisher's F-distributions are reviewed next – spanning from mathematical derivation, through calculation of major descriptors, to discussion of most relevant features (including generation of distinct continuous probability functions). Statistical hypothesis testing is addressed next, complemented with the alternative approach of parameter and prediction inference – resorting to linear regression analysis as germane mode of parameter estimation.

F. Xavier Malcata
Professor of Chemical Engineering
University of Porto (Portugal)


Part 1 Basic Concepts of Algebra

Reading maketh a full man; conference a ready man; and writing an exact man. Francis Bacon


1 Scalars, Vectors, Matrices, and Determinants

Quantification of any entity or concept requires association to a numerical scale, so as to permit subsequent abstract reasoning and objective comparability; hence, every measurement carried out in the physicochemical world leads to a number, or scalar. Such numbers may be integer, rational (if expressible in the form p/q, where p and q denote integer numbers), or irrational (if not expressible in the previous form, and normally appearing as an infinite, nonrecurring decimal). If considered together, rational and irrational numbers account for the whole of real numbers – each one represented by a point in a straight line domain. Departing from real numbers, related (yet more general) concepts have been invented; this includes notably the complex numbers, z – defined as an ordered pair of two real numbers, say, z ≡ a + ιb, where a and b denote real numbers and ι denotes $\sqrt{-1}$, the imaginary unit. Therefore, z is represented by a point in a plane domain. In the complex number system, a general nth degree polynomial equation holds exactly (and always) n roots, not necessarily distinct though – as originally realized by Italian mathematicians Niccolò F. Tartaglia and Gerolamo Cardano in the sixteenth century; many concepts relevant for engineering purposes, originally conceived to utilize real numbers (as the only ones adhering to physical evidence), may be easily generalized via complex numbers. The next stage of informational content is vectors – each defined by a triplet (a,b,c), where c also denotes a real number; each one is represented by a point in a volume domain and is often denoted via a bold, lowercase letter (e.g. v). Their usual graphical representation is a straight, arrowed segment linking the origin of a Cartesian system of coordinates to said point – where length (equal to $\sqrt{a^2 + b^2 + c^2}$, as per Pythagoras' theorem), coupled with orientations (as per the angles whose tangents equal b/a and $c/\sqrt{a^2 + b^2}$), fully define the said triplet. An alternative representation is as [a b c] or

$$\begin{bmatrix} a \\ b \\ c \end{bmatrix}$$

– also termed row vector or column vector, respectively; when three column vectors are assembled together, say,

$$\begin{bmatrix} a_1 \\ b_1 \\ c_1 \end{bmatrix}\!, \quad \begin{bmatrix} a_2 \\ b_2 \\ c_2 \end{bmatrix}\!, \quad \text{and} \quad \begin{bmatrix} a_3 \\ b_3 \\ c_3 \end{bmatrix}\!,$$

a matrix results, viz.

$$\begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix}\!,$$

termed tensor – which may also be obtained by joining three row vectors, say, [a1 a2 a3], [b1 b2 b3],
Mathematics for Enzyme Reaction Kinetics and Reactor Performance, First Edition. F. Xavier Malcata. © 2019 John Wiley & Sons Ltd. Published 2019 by John Wiley & Sons Ltd.

and [c1 c2 c3]. The concept of matrix may be generalized so as to encompass other possibilities of combination of numbers besides a (3 × 3) layout; in fact, a rectangular (p × q) matrix of the form

$$\begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,q} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,q} \\ \vdots & \vdots & & \vdots \\ a_{p,1} & a_{p,2} & \cdots & a_{p,q} \end{bmatrix}\!,$$

or [a_{i,j}; i = 1, 2, …, p; j = 1, 2, …, q] for short, may easily be devised. Matrices are particularly useful in that they permit algebraic operations (and the like) to be performed at once on a whole set of numbers simultaneously – thus dramatically contributing to bookkeeping, besides their help to structure mathematical reasoning. In specific situations, it is useful to design higher order number structures, such as arrays of (or block) matrices; for instance,

$$\begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix}$$

may be also represented as

$$\begin{bmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{bmatrix}\!,$$

provided that, say, $A_{1,1} \equiv [a_1]$, $A_{1,2} \equiv [a_2 \;\; a_3]$, $A_{2,1} \equiv \begin{bmatrix} b_1 \\ c_1 \end{bmatrix}$, and $A_{2,2} \equiv \begin{bmatrix} b_2 & b_3 \\ c_2 & c_3 \end{bmatrix}$ represent, in turn, smaller matrices. An issue of compatibility arises in terms of the sizes of said blocks, though; for a starting (p × q) matrix A, only (p1 × q1) A1,1, (p1 × q2) A1,2, (p2 × q1) A2,1, and (p2 × q2) A2,2 matrices are allowed – obviously with p1 + p2 = p and q1 + q2 = q. One of the most powerful applications of matrices is in solving sets of linear algebraic equations, say,

$$a_{1,1}x_1 + a_{1,2}x_2 = b_1 \tag{1.1}$$

and

$$a_{2,1}x_1 + a_{2,2}x_2 = b_2 \tag{1.2}$$

in its simplest version – where a1,1, a1,2, a2,1, a2,2, b1, and b2 denote real numbers, and x1 and x2 denote variables; if a1,1 ≠ 0 and a1,1a2,2 − a1,2a2,1 ≠ 0, then one may start by isolating x1 in Eq. (1.1) as

$$x_1 = \frac{b_1 - a_{1,2}x_2}{a_{1,1}}, \tag{1.3}$$

and then replace it in Eq. (1.2) to obtain

$$a_{2,1}\,\frac{b_1 - a_{1,2}x_2}{a_{1,1}} + a_{2,2}x_2 = b_2 \tag{1.4}$$

After factoring x2 out, Eq. (1.4) becomes

$$\left(a_{2,2} - \frac{a_{1,2}a_{2,1}}{a_{1,1}}\right)x_2 = b_2 - \frac{a_{2,1}b_1}{a_{1,1}}, \tag{1.5}$$

so isolation of x2 eventually gives

$$x_2 = \frac{b_2 - \dfrac{a_{2,1}b_1}{a_{1,1}}}{a_{2,2} - \dfrac{a_{1,2}a_{2,1}}{a_{1,1}}} = \frac{a_{1,1}b_2 - a_{2,1}b_1}{a_{1,1}a_{2,2} - a_{1,2}a_{2,1}} \tag{1.6}$$

– which yields a solution only when a1,1a2,2 − a1,2a2,1 ≠ 0; insertion of Eq. (1.6) back in Eq. (1.3) yields

$$x_1 = \frac{b_1}{a_{1,1}} - \frac{a_{1,2}}{a_{1,1}}\,\frac{a_{1,1}b_2 - a_{2,1}b_1}{a_{1,1}a_{2,2} - a_{1,2}a_{2,1}}, \tag{1.7}$$

thus justifying why a solution for x1 requires a1,1 ≠ 0, besides a1,1a2,2 − a1,2a2,1 ≠ 0 (as enforced from the very beginning). Equation (1.6) may be rewritten as

$$x_2 = \frac{\begin{vmatrix} a_{1,1} & b_1 \\ a_{2,1} & b_2 \end{vmatrix}}{\begin{vmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{vmatrix}} \tag{1.8}$$

– provided that one defines

$$\begin{vmatrix} a_{1,1} & b_1 \\ a_{2,1} & b_2 \end{vmatrix} \equiv a_{1,1}b_2 - a_{2,1}b_1, \tag{1.9}$$

complemented with

$$\begin{vmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{vmatrix} \equiv a_{1,1}a_{2,2} - a_{1,2}a_{2,1}; \tag{1.10}$$

the left-hand sides of Eqs. (1.9) and (1.10) are termed (second-order) determinants. If both sides of Eq. (1.2) were multiplied by −a1,2/a2,2, one would get

$$-\frac{a_{1,2}a_{2,1}}{a_{2,2}}x_1 - a_{1,2}x_2 = -\frac{a_{1,2}}{a_{2,2}}b_2 \tag{1.11}$$

– so ordered addition of Eqs. (1.1) and (1.11) produces simply

$$\left(a_{1,1} - \frac{a_{1,2}a_{2,1}}{a_{2,2}}\right)x_1 = b_1 - \frac{a_{1,2}}{a_{2,2}}b_2, \tag{1.12}$$

after having x1 factored out; upon multiplication of both sides by a2,2, Eq. (1.12) becomes

$$\left(a_{1,1}a_{2,2} - a_{1,2}a_{2,1}\right)x_1 = b_1a_{2,2} - a_{1,2}b_2, \tag{1.13}$$

with isolation of x1 unfolding

$$x_1 = \frac{b_1a_{2,2} - a_{1,2}b_2}{a_{1,1}a_{2,2} - a_{1,2}a_{2,1}} \tag{1.14}$$

– a result compatible with Eq. (1.7), once the two fractions are lumped, a1,2a2,1b1 canceled out with its negative afterward, and a1,1 finally dropped from numerator and denominator. Recalling Eq. (1.10), one may redo Eq. (1.14) to

$$x_1 = \frac{\begin{vmatrix} b_1 & a_{1,2} \\ b_2 & a_{2,2} \end{vmatrix}}{\begin{vmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{vmatrix}}, \tag{1.15}$$

as long as

$$\begin{vmatrix} b_1 & a_{1,2} \\ b_2 & a_{2,2} \end{vmatrix} \equiv b_1a_{2,2} - a_{1,2}b_2 \tag{1.16}$$

is put forward; all forms conveyed by Eqs. (1.9), (1.10), and (1.16) do indeed share the form

$$\begin{vmatrix} \alpha_{1,1} & \alpha_{1,2} \\ \alpha_{2,1} & \alpha_{2,2} \end{vmatrix} = \alpha_{1,1}\alpha_{2,2} - \alpha_{1,2}\alpha_{2,1}, \tag{1.17}$$

irrespective of the values taken individually by α1,1, α1,2, α2,1, and α2,2. This is why the concept of determinant was devised – representing a scalar, bearing the unique property that its calculation resorts to subtraction of the product of elements in the secondary diagonal from the product of elements in the main diagonal of the accompanying (2 × 2) matrix. In the case of Eq. (1.10), the representation $\begin{vmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{vmatrix}$ is selected because the underlying set of algebraic equations, see Eqs. (1.1) and (1.2), holds indeed $\begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{bmatrix}$ as coefficient matrix. If a set of p algebraic linear equations in p unknowns is considered, viz.

$$\begin{aligned} a_{1,1}x_1 + a_{1,2}x_2 + \cdots + a_{1,p}x_p &= b_1 \\ a_{2,1}x_1 + a_{2,2}x_2 + \cdots + a_{2,p}x_p &= b_2 \\ &\;\;\vdots \\ a_{p,1}x_1 + a_{p,2}x_2 + \cdots + a_{p,p}x_p &= b_p, \end{aligned} \tag{1.18}$$

then the concept of determinant can be extended in very much the same way to produce

$$x_1 = \frac{\begin{vmatrix} b_1 & a_{1,2} & \cdots & a_{1,p} \\ b_2 & a_{2,2} & \cdots & a_{2,p} \\ \vdots & \vdots & & \vdots \\ b_p & a_{p,2} & \cdots & a_{p,p} \end{vmatrix}}{\begin{vmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,p} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,p} \\ \vdots & \vdots & & \vdots \\ a_{p,1} & a_{p,2} & \cdots & a_{p,p} \end{vmatrix}}, \quad x_2 = \frac{\begin{vmatrix} a_{1,1} & b_1 & \cdots & a_{1,p} \\ a_{2,1} & b_2 & \cdots & a_{2,p} \\ \vdots & \vdots & & \vdots \\ a_{p,1} & b_p & \cdots & a_{p,p} \end{vmatrix}}{\begin{vmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,p} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,p} \\ \vdots & \vdots & & \vdots \\ a_{p,1} & a_{p,2} & \cdots & a_{p,p} \end{vmatrix}}, \quad \ldots, \quad x_p = \frac{\begin{vmatrix} a_{1,1} & a_{1,2} & \cdots & b_1 \\ a_{2,1} & a_{2,2} & \cdots & b_2 \\ \vdots & \vdots & & \vdots \\ a_{p,1} & a_{p,2} & \cdots & b_p \end{vmatrix}}{\begin{vmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,p} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,p} \\ \vdots & \vdots & & \vdots \\ a_{p,1} & a_{p,2} & \cdots & a_{p,p} \end{vmatrix}}; \tag{1.19}$$

however, the mode of calculation of higher order determinants is more complex – as it requires previous conversion to p!/2 second-order determinants (as will be explained in due course), with calculation of each one to follow Eq. (1.17).
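The 2 × 2 scheme conveyed by Eqs. (1.8)–(1.17) lends itself to a quick numerical check – a minimal Python sketch, where function names and sample coefficients are merely illustrative:

```python
# Numerical check of Eqs. (1.6) and (1.14): solving a 2x2 linear system
# via second-order determinants (Cramer's rule). Names are illustrative.

def det2(a11, a12, a21, a22):
    """Second-order determinant, Eq. (1.17): main minus secondary diagonal."""
    return a11 * a22 - a12 * a21

def solve2(a11, a12, a21, a22, b1, b2):
    """Return (x1, x2) per Eqs. (1.14) and (1.6); needs a nonzero denominator."""
    d = det2(a11, a12, a21, a22)
    if d == 0:
        raise ValueError("singular system: a11*a22 - a12*a21 = 0")
    x1 = det2(b1, a12, b2, a22) / d   # Eq. (1.15): b replaces first column
    x2 = det2(a11, b1, a21, b2) / d   # Eq. (1.8): b replaces second column
    return x1, x2

# Example: 2*x1 + 1*x2 = 5 and 1*x1 + 3*x2 = 10, whose solution is x1 = 1, x2 = 3
x1, x2 = solve2(2.0, 1.0, 1.0, 3.0, 5.0, 10.0)
```

For the sample system above, the routine returns x1 = 1 and x2 = 3, which may be confirmed by direct substitution in both equations.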


2 Function Features

If a relationship between two real variables, y and x, is such that y becomes determined whenever x is given, then y is said to be a univariate (real-valued) function of (real-variable) x; this is usually denoted as y ≡ y{x}, where x is termed independent variable and y is termed dependent variable. The same value of y may be obtained for more than one value of x, but no more than one value of y is allowed for each value of x. If more than one independent variable exists, say, x1, x2, …, xn, then a multivariate function arises, y ≡ y{x1, x2, …, xn}. The range of values of x for which y is defined constitutes its interval of definition, and a function may be represented either by an (explicit or implicit) analytical expression relating y to x (preferred), or instead by its plot on a plane (useful and comprehensive, except when x grows unbounded) – whereas selected values of said function may, for convenience, be listed in tabular form. Among the most useful quantitative relationships, polynomial functions stand out – of the form $P_n\{x\} \equiv a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0$, where a0, a1, …, an−1, and an denote (constant) real coefficients and n denotes an integer number; a rational function appears as the ratio of two such polynomials, Pn{x}/Qm{x}, where subscripts n and m denote polynomial degree of numerator and denominator, respectively. Any function y{x} satisfying $P\{x\}y^m + Q\{x\}y^{m-1} + \cdots + U\{x\}y + V\{x\} = 0$, with m denoting an integer, is said to be algebraic; functions that cannot be defined in terms of a finite number of said polynomials, say, P{x}, Q{x}, …, U{x}, V{x}, are termed transcendental – as is the case of exponential and logarithmic functions, as well as trigonometric functions. A function f is said to be even when f{−x} = f{x} and odd if f{−x} = −f{x}; the vertical axis in a Cartesian system serves as axis of symmetry for the plot of the former, whereas the origin of coordinates serves as center of symmetry for the plot of the latter.
Any function may be written as the sum of an even with an odd function; in fact,

$$f\{x\} = \frac{1}{2}f\{x\} + \frac{1}{2}f\{x\} + \frac{1}{2}f\{-x\} - \frac{1}{2}f\{-x\} = \frac{1}{2}\left(f\{x\} + f\{-x\}\right) + \frac{1}{2}\left(f\{x\} - f\{-x\}\right) \tag{2.1}$$

upon splitting f{x} in half, adding and subtracting f{−x}/2, and algebraically rearranging afterward. Note that f{x} + f{−x} remains unaltered when the sign of x is changed, while f{x} − f{−x} reverses sign when x is replaced by −x; therefore, (f{x} + f{−x})/2 is an even function, while (f{x} − f{−x})/2 is an odd function.
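The decomposition conveyed by Eq. (2.1) may be illustrated numerically – a minimal Python sketch, where the exponential is chosen merely as example (its even and odd parts being the hyperbolic cosine and sine, respectively):

```python
# Even/odd decomposition per Eq. (2.1): f = f_even + f_odd, with
# f_even{x} = (f{x} + f{-x})/2 and f_odd{x} = (f{x} - f{-x})/2.
import math

def even_part(f, x):
    return 0.5 * (f(x) + f(-x))

def odd_part(f, x):
    return 0.5 * (f(x) - f(-x))

x = 0.7
f = math.exp                     # sample function (arbitrary choice)
fe, fo = even_part(f, x), odd_part(f, x)

recombined = fe + fo             # recovers f{x}
fe_sym = even_part(f, -x)        # equals fe, i.e. the even part is even
fo_sym = odd_part(f, -x)         # equals -fo, i.e. the odd part is odd
```

For f ≡ eˣ the even part coincides with cosh x and the odd part with sinh x, as expected.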


When the value of a function repeats itself at regular intervals that are multiples of some T, i.e. f{x + nT} = f{x} with n integer, then such a function is termed periodic of period T; a common example is sine and cosine with period 2π rad, as well as tangent with period π rad (as will be seen below). A (monotonically) increasing function satisfies f{x2} > f{x1} whenever x2 > x1, whereas a function is called (monotonically) decreasing otherwise, i.e. when f{x2} < f{x1} for x2 > x1; however, a function may change monotony along its defining range. If y ≡ f{x}, then an inverse function f⁻¹{y} may in principle be defined such that f⁻¹{f{x}} = x – i.e. composition of a function with its inverse retrieves the original argument of the former. The plot of f⁻¹{y} develops around the x-axis in exactly the same way the plot of f{x} develops around the y-axis; in other words, the curve representing f{x} is to be rotated by π rad around the bisector straight line so as to produce the curve describing f⁻¹{y}. Of the several functions worthy of mention for their practical relevance, one may start with absolute value, |x| – defined as

$$|x| \equiv \begin{cases} x, & x \ge 0 \\ -x, & x < 0 \end{cases}$$

Upon division by x1 > 0 from all sides, and further multiplying and dividing the last side by x2/x1, Eq. (2.60) turns to

$$\frac{1+z}{2} \ \ge\ \frac{z-1}{\ln z} \ \ge\ \sqrt{z} \ \ge\ \frac{2z}{1+z} \tag{2.61}$$

– where z denotes an auxiliary variable satisfying

$$z \equiv \frac{x_2}{x_1} \le 1 \tag{2.62}$$
A graphical account of Eq. (2.60) is provided in Fig. 2.3. Inspection of the curves therein not only unfolds a clear and systematic positioning of the various means relative to each other – in general agreement with (so far, postulated) Eq. (2.58) – but also indicates a collective convergence to x1 as x2 approaches it (as expected).

Figure 2.3 Variation of arithmetic mean (arm), logarithmic mean (lom), geometric mean (gem), and harmonic mean (ham) of positive x1 and x2 < x1, normalized by x1, as a function of their ratio, x2/x1.

To provide a quantitative argument in support of the graphical trends above, one may expand the middle left- and middle right-hand sides of Eq. (2.61) via Taylor's series (see discussion later), around z = 1, according to

$$\frac{z-1}{2} + 1 \ \ge\ \frac{z-1}{\left.\ln z\right|_{z=1} + \left.\dfrac{1}{z}\right|_{z=1}(z-1) - \left.\dfrac{1}{z^2}\right|_{z=1}\dfrac{(z-1)^2}{2!} + \left.\dfrac{2}{z^3}\right|_{z=1}\dfrac{(z-1)^3}{3!} - \left.\dfrac{6}{z^4}\right|_{z=1}\dfrac{(z-1)^4}{4!} + \cdots} \\ \ge\ \left.\sqrt{z}\right|_{z=1} + \left.\frac{1}{2\sqrt{z}}\right|_{z=1}(z-1) + \left.\frac{1}{2}\left(\frac{1}{2}-1\right)z^{-\frac{3}{2}}\right|_{z=1}\frac{(z-1)^2}{2!} + \left.\frac{1}{2}\left(\frac{1}{2}-1\right)\left(\frac{1}{2}-2\right)z^{-\frac{5}{2}}\right|_{z=1}\frac{(z-1)^3}{3!} + \cdots, \tag{2.63}$$

where 1/2 in the left-hand side was meanwhile split as 1 − 1/2; note that such series in z are convergent because 0 < z < 1, as per Eq. (2.62). Straightforward simplification of Eq. (2.63) unfolds

$$\frac{z-1}{2} + 1 \ \ge\ \frac{z-1}{(z-1) - \dfrac{(z-1)^2}{2} + \dfrac{(z-1)^3}{3} - \dfrac{(z-1)^4}{4} + \cdots} \ \ge\ 1 + \frac{z-1}{2} - \frac{(z-1)^2}{8} + \frac{(z-1)^3}{16} - \cdots, \tag{2.64}$$

where z − 1 may be further dropped off both numerator and denominator of the middle side to give

$$1 + \frac{z-1}{2} \ \ge\ \frac{1}{1 - \dfrac{z-1}{2} + \dfrac{(z-1)^2}{3} - \dfrac{(z-1)^3}{4} + \cdots} \ \ge\ 1 + \frac{z-1}{2} - \frac{(z-1)^2}{8} + \frac{(z-1)^3}{16} - \cdots; \tag{2.65}$$

after simplifying notation in Eq. (2.65) to

$$1 - \frac{\zeta}{2} \ \ge\ \frac{1}{1 + \dfrac{\zeta}{2} + \dfrac{\zeta^2}{3} + \dfrac{\zeta^3}{4} + \cdots} \ \ge\ 1 - \frac{\zeta}{2} - \frac{\zeta^2}{8} - \frac{\zeta^3}{16} - \cdots, \tag{2.66}$$

with the aid of

$$\zeta \equiv 1 - z \ge 0, \tag{2.67}$$

one may add ζ/2 to all sides to get

$$1 \ \ge\ \frac{\zeta}{2} + \frac{1}{1 + \dfrac{\zeta}{2} + \dfrac{\zeta^2}{3} + \dfrac{\zeta^3}{4} + \cdots} \ \ge\ 1 - \frac{\zeta^2}{8} - \frac{\zeta^3}{16} - \cdots \tag{2.68}$$

– where all terms in the denominator of the middle side are positive, whereas all terms (besides 1) in the right-hand side are negative. Long (polynomial) division of 1 by 1 + ζ/2 + ζ²/3 + ζ³/4 + ⋯ (according to an algorithm to be presented below) allows further transformation of Eq. (2.68) to

$$1 \ \ge\ \frac{\zeta}{2} + 1 - \frac{\zeta}{2} - \frac{\zeta^2}{12} - \frac{\zeta^3}{24} - \cdots \ \ge\ 1 - \frac{\zeta^2}{8} - \frac{\zeta^3}{16} - \cdots, \tag{2.69}$$

where condensation of terms alike in the middle side unfolds

$$1 \ \ge\ 1 - \frac{\zeta^2}{12} - \frac{\zeta^3}{24} - \cdots \ \ge\ 1 - \frac{\zeta^2}{8} - \frac{\zeta^3}{16} - \cdots; \tag{2.70}$$

after having dropped unity from all sides, and then taken their negatives, Eq. (2.70) becomes

$$0 \ \le\ \frac{\zeta^2}{12} + \frac{\zeta^3}{24} + \cdots \ \le\ \frac{\zeta^2}{8} + \frac{\zeta^3}{16} + \cdots \tag{2.71}$$

– which is a universal condition, since 1/12 < 1/8, 1/24 < 1/16, and so on in terms of pairwise comparison. Similar trends for the relative magnitude of the coefficients of similar powers would be found if the series were truncated after higher order terms – so one concludes on the general validity of Eq. (2.58), based on Eq. (2.71) complemented by Eq. (2.55).
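The ordering of means claimed by Eq. (2.61) can be spot-checked numerically over 0 < z < 1 – a short Python sketch, with the probe values of z chosen arbitrarily:

```python
# Spot-check of the chain in Eq. (2.61) for 0 < z < 1:
# (1+z)/2 >= (z-1)/ln z >= sqrt(z) >= 2z/(1+z),
# i.e. arithmetic >= logarithmic >= geometric >= harmonic mean of 1 and z.
import math

def means(z):
    arm = (1 + z) / 2
    lom = (z - 1) / math.log(z)   # logarithmic mean of 1 and z
    gem = math.sqrt(z)
    ham = 2 * z / (1 + z)
    return arm, lom, gem, ham

ordered = all(
    a >= l >= g >= h
    for a, l, g, h in (means(z) for z in (0.001, 0.1, 0.5, 0.9, 0.999))
)
```

For z = 0.5, for instance, the four means come out as 0.750, 0.721, 0.707, and 0.667, in the stated order.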

2.1 Series

If u1, u2, …, ui, … constitute a given (infinitely long) sequence of numbers, one often needs to calculate the sum of the first n terms thereof – or nth partial sum, Sn, defined as

$$S_n \equiv \sum_{i=1}^{n} u_i, \tag{2.72}$$

and also known as series; if the partial sums S1, S2, …, Si, … converge to a finite limit, say, S, according to

$$S = \lim_{n \to \infty} S_n, \tag{2.73}$$

then S can be viewed as the infinite series

$$S = \sum_{i=1}^{\infty} u_i \tag{2.74}$$

– while the said series is termed convergent. Should the sequence of partial sums tend to infinity, or oscillate either finitely or infinitely, then the series would be termed divergent. Despite the great many series that may be devised, two of them possess major practical importance – arithmetic and geometric progressions, as well as their hybrid (i.e. arithmetic–geometric progressions); hence, all three types will be treated below in detail.
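The distinction between convergent and divergent behavior of the partial sums in Eqs. (2.72)–(2.74) may be illustrated as follows – a Python sketch, with the two example series (geometric of ratio 1/2, and harmonic) picked for familiarity:

```python
# Partial sums per Eq. (2.72): sum(1/2**i) converges (to 1, a geometric
# case treated below), whereas the harmonic series sum(1/i) keeps growing.

def partial_sum(term, n):
    """n-th partial sum S_n = sum_{i=1..n} u_i, per Eq. (2.72)."""
    return sum(term(i) for i in range(1, n + 1))

s_conv = [partial_sum(lambda i: 0.5 ** i, n) for n in (10, 20, 40)]
s_div = [partial_sum(lambda i: 1.0 / i, n) for n in (10, 100, 1000)]
# s_conv approaches 1.0; s_div grows without bound (roughly like ln n).
```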

2.1.1 Arithmetic Series

Consider a series with n terms, where each term, ui, equals the previous one, ui−1, added to a constant value, k – according to

$$\sum_{i=0}^{n} u_i = u_0 + (u_0 + k) + (u_0 + 2k) + \cdots + (u_0 + ik) + \cdots + (u_0 + (n-1)k) + (u_0 + nk) \tag{2.75}$$

or, equivalently,

$$\sum_{i=0}^{n} u_i \equiv \sum_{i=0}^{n} (u_0 + ik); \tag{2.76}$$

$\sum_{i=0}^{n} u_i$ is termed a (finite) arithmetic progression, or arithmetic series, of increment k and first term u0. Equation (2.76) may obviously be reformulated to

$$\sum_{i=0}^{n} u_i = (u_n - nk) + (u_n - (n-1)k) + \cdots + (u_n - (n-i)k) + \cdots + (u_n - 2k) + (u_n - k) + u_n \tag{2.77}$$

using the last term,

$$u_n \equiv u_0 + nk, \tag{2.78}$$

instead of the first one, u0, as reference; upon ordered addition of Eqs. (2.75) and (2.77), one obtains

$$2\sum_{i=0}^{n} u_i = \left(u_0 + (u_0 + k) + \cdots + (u_0 + nk)\right) + \left((u_n - nk) + (u_n - (n-1)k) + \cdots + u_n\right) \tag{2.79}$$

– where cancelation of symmetrical terms reduces Eq. (2.79) to

$$2\sum_{i=0}^{n} u_i = (n+1)u_0 + (n+1)u_n \tag{2.80}$$

Upon factoring n + 1 out in the right-hand side, followed by division of both sides by 2, Eq. (2.80) becomes

$$\sum_{i=0}^{n} u_i = (n+1)\frac{u_0 + u_n}{2} \tag{2.81}$$

– i.e. it looks as n + 1 times the arithmetic average of the first and last terms of the series; insertion of Eq. (2.78) permits transformation to

$$\sum_{i=0}^{n} u_i = (n+1)\frac{u_0 + u_0 + nk}{2} \tag{2.82}$$

that breaks down to

$$\sum_{i=0}^{n} u_i = (n+1)\left(u_0 + \frac{n}{2}k\right) \tag{2.83}$$

– valid irrespective of the actual values of u0, k or n. Note that an arithmetic series is never convergent in the sense put forward by Eqs. (2.72) and (2.73), because the magnitude of each individual term keeps increasing without bound as n → ∞; this becomes apparent after dividing both sides of Eq. (2.83) by u0 and retrieving Eq. (2.72), i.e.

$$\frac{S_n}{u_0} = (n+1)\left(1 + \frac{nk}{2u_0}\right), \tag{2.84}$$

which translates to Fig. 2.4. In the particular case of k = 0, Eq. (2.83) simplifies to

$$\left.\sum_{i=0}^{n} u_i\right|_{k=0} = (n+1)u_0, \tag{2.85}$$

consistent with the definition of multiplication – which describes the lowest curve in Fig. 2.4, essentially materialized by a straight line with unit slope for relatively large n; as expected, a large k eventually produces a quadratic growth of Sn with n, in agreement with Eq. (2.84).
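Equation (2.83) may be confronted with direct term-by-term summation – a brief Python check, with the test values of u0, k, and n chosen arbitrarily:

```python
# Check of Eq. (2.83): sum_{i=0..n} (u0 + i*k) = (n+1)*(u0 + n*k/2).

def arithmetic_sum(u0, k, n):
    """Direct summation per Eq. (2.76)."""
    return sum(u0 + i * k for i in range(n + 1))

def closed_form(u0, k, n):
    """Closed form per Eq. (2.83)."""
    return (n + 1) * (u0 + n * k / 2)

cases = [(1.0, 0.0, 10), (2.0, 0.5, 100), (1.0, 3.0, 7)]
agree = all(abs(arithmetic_sum(*c) - closed_form(*c)) < 1e-9 for c in cases)
```

The first case (k = 0) also reproduces Eq. (2.85), i.e. (n + 1)·u0.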

Figure 2.4 Variation of value of n-term arithmetic series, Sn, normalized by first term, u0, as a function of n – for selected values of increment, k, normalized also by u0.

2.1.2 Geometric Series

A geometric series is said to exist when each term is obtained from the previous one via multiplication by a constant parameter, k, viz.

$$\sum_{i=0}^{n} u_i = u_0 + u_0 k + u_0 k^2 + \cdots + u_0 k^i + \cdots + u_0 k^n \tag{2.86}$$

– and is said to possess ratio k and first term u0; after factoring u0 out, Eq. (2.86) becomes

$$\sum_{i=0}^{n} u_i \equiv u_0 \sum_{i=0}^{n} k^i \tag{2.87}$$

An alternative form of calculating $\sum_{i=0}^{n} u_i$ ensues after first rewriting Eq. (2.87) as

$$\sum_{i=0}^{n} u_i = u_0\left(1 + \sum_{i=1}^{n} k^i\right), \tag{2.88}$$

where the first term of the summation was made apparent; multiplication of both sides by k unfolds

$$k\sum_{i=0}^{n} u_i = u_0\left(k + k\sum_{i=1}^{n} k^i\right), \tag{2.89}$$

where straightforward algebraic rearrangement leads to

$$k\sum_{i=0}^{n} u_i = u_0\left(\sum_{i=1}^{n} k^i + k^{n+1}\right) \tag{2.90}$$

Ordered subtraction of Eq. (2.90) from Eq. (2.88) generates

$$\sum_{i=0}^{n} u_i - k\sum_{i=0}^{n} u_i = u_0\left(1 + \sum_{i=1}^{n} k^i\right) - u_0\left(\sum_{i=1}^{n} k^i + k^{n+1}\right), \tag{2.91}$$

where $\sum_{i=0}^{n} u_i$ may be factored out in the left-hand side and u0 in its right counterpart to obtain

$$(1-k)\sum_{i=0}^{n} u_i = u_0\left(1 + \sum_{i=1}^{n} k^i - \sum_{i=1}^{n} k^i - k^{n+1}\right); \tag{2.92}$$

upon canceling out symmetrical terms in the right-hand side, and dividing both sides by 1 − k afterward, Eq. (2.92) becomes

$$\sum_{i=0}^{n} u_i = u_0\,\frac{1 - k^{n+1}}{1-k} \tag{2.93}$$

Note that k = 1 turns nil both numerator and denominator of Eq. (2.93), so it is a (common) root thereof; Ruffini's rule (see below) then permits reformulation of Eq. (2.93) to

$$\sum_{i=0}^{n} u_i = u_0\left(1 + k + k^2 + \cdots + k^n\right) \tag{2.94}$$

that mimics Eq. (2.87), as expected. Revisiting Eq. (2.72) with division of both sides by u0, one may insert Eq. (2.93) to get

$$\frac{S_n}{u_0} = \frac{1 - k^{n+1}}{1-k}; \tag{2.95}$$

Eq. (2.95) is illustrated in Fig. 2.5. Upon inspection of this plot, one anticipates a horizontal asymptote for k = 0.5 (besides k = 0); in general, one indeed finds that

$$\frac{S}{u_0} = \lim_{n \to \infty} \frac{1 - k^{n+1}}{1-k}, \tag{2.96}$$

at the expense of Eqs. (2.72), (2.73), and (2.95), which is equivalent to

$$\frac{S}{u_0} = \frac{1 - \lim\limits_{n \to \infty} k^{n+1}}{1-k} \tag{2.97}$$

Figure 2.5 Variation of value of n-term geometric series, Sn, normalized by first term, u0, as a function of n – for selected values of ratio, k.
– and reduces to merely $S/u_0 = 1/(1-k)$, since $\lim_{n\to\infty} k^{n+1} = 0$ for |k| < 1.

Simultaneous validity of Eqs. (9.33) and (9.44), based on the common hypothesis that |f{x} − b| < ε whenever |x − a| < δ{ε}, eventually supports validity of Eq. (9.38).
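Equations (2.93) and (2.97) are easily confirmed numerically – a Python sketch, with u0 = 1 and k = 0.5 matching one of the curves in Fig. 2.5:

```python
# Check of Eqs. (2.93) and (2.97): partial geometric sums against the
# closed form u0*(1 - k**(n+1))/(1 - k), and their limit u0/(1 - k) for |k| < 1.

def geometric_sum(u0, k, n):
    """Direct summation per Eq. (2.87)."""
    return sum(u0 * k ** i for i in range(n + 1))

u0, k = 1.0, 0.5
closed = lambda n: u0 * (1 - k ** (n + 1)) / (1 - k)   # Eq. (2.93)
match = all(abs(geometric_sum(u0, k, n) - closed(n)) < 1e-12 for n in range(20))
limit_gap = abs(geometric_sum(u0, k, 60) - u0 / (1 - k))  # near the limit S/u0 = 2
```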

Limits and Continuity

Of all possibilities entailed by Eq. (9.2), it is convenient to single out ε > 0, δ > 0

0 < |x − a| < δ

α x 3, and so on. Consider that interval [a,b] used to define f{x} is split in two equally sized pieces; since there are infinitely many points cm in [a,b] due to its compactness, there must also be infinitely many cm's in either the left or the right half of said interval – which will be referred to as [a1,b1], irrespective of its location. One may then select one of those infinitely many cm's belonging to [a1,b1] – which will hereafter be denoted as d1; infinitely many cm's will obviously remain in [a1,b1]. If this interval is split in half, and either the left or the right half is denoted as [a2,b2], one may pick one of the infinitely many cm's therein – and label it d2. This process may be carried out over and over again, thus generating a sequence of points d1, d2, d3, …, dn, … – which possesses two important features: (i) any point of said sequence is one of the original cm's; and (ii) the sequence converges to some point d in [a,b], since the subintervals from which the dn's are taken out become smaller and smaller in amplitude as n gets larger. The latter feature may be mathematically expressed as

$$\lim_{n \to \infty} d_n = d, \tag{9.186}$$

which implies that

$$f\{d\} = f\left\{\lim_{n \to \infty} d_n\right\} \tag{9.187}$$

after applying the f operator to both sides of Eq. (9.186); in view of Eq. (9.108), one may transform Eq. (9.187) to

$$f\{d\} = \lim_{n \to \infty} f\{d_n\}, \tag{9.188}$$

or else

$$f\{d\} = \lim_{m \to \infty} f\{c_m\} \tag{9.189}$$

– because every dn was selected as one of the cm's in the first place. Remember that the cm's were originally chosen such that the values f{cm} increased to infinity – so the limit in the right-hand side of Eq. (9.189) cannot exist; this implies that f{d} should not exist either. Since Eq. (9.189), coupled with Eq. (9.33), contradicts the initial assumption that f{x} is unbounded within closed interval [a,b], the original assumption must be false – thus leaving existence of a (finite) upper bound for f{x} within [a,b] as the only feasible possibility. Note that existence of one such upper bound implies an infinity of upper bounds – as long as they are higher than the former. Let now M denote the smallest of the aforementioned finite upper bounds; it remains to show that there is some point x, comprised between a and b, where f{x} equals M. The corresponding proof will again be developed by contradiction – i.e. one will postulate that there is no value in [a,b] where f{x} = M; an auxiliary function g{x} may consequently (and for convenience) be defined as

$$g\{x\} \equiv \frac{1}{M - f\{x\}} > 0 \tag{9.190}$$

Since, by hypothesis, f{x} does not equal M anywhere, the denominator in Eq. (9.190) cannot be nil; furthermore, M being an upper bound for f{x} implies that M − f{x} > 0, thus justifying the inequality in Eq. (9.190). In view of Eqs. (9.73) and (9.139), M − f{x} will be continuous – since (constant) M and function f{x} are both taken as continuous in the first place; its reciprocal, i.e. g{x} as per Eq. (9.190), will also be continuous due to Eqs. (9.94) and (9.141) pertaining to the ratio of continuous functions 1 and M − f{x}. Remember that the previous derivation guaranteed existence of a maximum of the upper bounds for a continuous function defined in a closed interval – while a similar derivation would guarantee existence of a minimum; therefore, a continuous function is necessarily limited, in agreement with Eq. (9.31). One may accordingly denote the upper bound of g{x} in [a,b] by k, i.e. g{x} ≤ k, with k > 0 owing to the inequality conveyed by Eq. (9.190); consequently, M − 1/k is lower than M, thus allowing reformulation of Eq. (9.194) to

$$f\{x\} \le M - \frac{1}{k} < M \tag{9.195}$$

The original postulate that M is the smallest upper bound of f{x} is thus contradicted by Eq. (9.195) – so such a postulate has to be false; consequently, f{x} must attain its lowest upper bound at a point c located somewhere between a and b – or, in other words, f{x} ≤ M ≡ f{c} as claimed by Eq. (9.184). A similar proof may be laid out for the minimum m, attained by f{x} when x spans interval [a,b] as per Eq. (9.185) – but will not be pursued here, for the sake of bookkeeping.
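Although no substitute for the proof above, the claim of Eq. (9.184) can be illustrated numerically: a refining grid search on a continuous function over a closed interval approaches the attained maximum. The Python sketch below uses x·e⁻ˣ on [0,3] as an arbitrary choice, whose maximum 1/e is attained at x = 1:

```python
# Numerical illustration (not a proof) of Eq. (9.184): a continuous f on a
# closed interval [a,b] attains a maximum M = f{c}; refining a sampling grid
# drives the observed maximum toward M.
import math

def grid_max(f, a, b, n):
    """Largest sampled value of f over n+1 equally spaced points in [a,b]."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return max(f(x) for x in xs)

f = lambda x: x * math.exp(-x)          # continuous on [0, 3]
estimates = [grid_max(f, 0.0, 3.0, n) for n in (10, 100, 10000)]
true_max = math.exp(-1.0)               # attained at c = 1
# The estimates increase toward, and never exceed, the attained maximum 1/e.
```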


10 Differentials, Derivatives, and Partial Derivatives

The concept of differential entails a small (tendentially negligible) variation in a variable x, denoted as dx, or a function f {x}, denoted as df {x}; the associated derivative of f {x} with regard to x is nothing but the ratio of said differentials, i.e. df/dx – usually known as Leibnitz’s formulation. In the case of a bivariate function, say, f {x,y}, differentials can be defined for both independent variables, i.e. dx and dy – so partial derivatives will similarly arise, i.e. ∂f/∂x and ∂f/∂y; operator ∂ is equivalent to operator d, except that its use is exclusive to multivariate functions – in that it stresses existence of more than one independent variable.

10.1 Differential

In calculus, the differential represents the principal part of the change of a function y = f{x} – and its definition reads

$$dy \equiv \frac{df}{dx}dx, \tag{10.1}$$

where df/dx denotes the derivative of f{x} with regard to x; it is normally finite, rather than infinitesimal or infinite – yet the precise meaning of variables dx and df depends on the context of application, and the required level of mathematical accuracy. The concept of differential was indeed introduced via an intuitive (or heuristic) definition by Gottfried W. Leibnitz, a German polymath and philosopher of the eighteenth century; its use was widely criticized until Cauchy defined it based on the derivative – which took the central role thereafter, and left dy free for given dx and df/dx as per Eq. (10.1). A graphical representation of differential is conveyed by Fig. 10.1, and the usefulness of differentials to approximate a function becomes clear from inspection thereof; after viewing dy as a small variation in the vertical direction, viz.

$$dy \cong f\{x + dx\} - f\{x\}, \tag{10.2}$$

one may retrieve Eq. (10.1) to attain a relationship with the corresponding small variation in the horizontal direction, i.e.

$$f\{x + dx\} - f\{x\} = \frac{df}{dx}dx \tag{10.3}$$

Figure 10.1 Graphical representation of continuous function, with ordinate f{x} and tangent to its plot described by slope df/dx at abscissa x, and ordinate f{x + dx} when said abscissa undergoes increment of dx.

The actual variation experienced by f{x}, i.e. df ≡ f{x + dx} − f{x}, becomes coincident with the variation predicted by the derivative, i.e. df/dx multiplied by dx, when dx is small enough. In other words, if the interval under scrutiny around a given point is sufficiently narrow, then any function will behave as if it were linear with slope equal to the underlying derivative – so its evolution will be driven by the straight line tangent to its graph at said point. If a bivariate function is considered, then the total differential should be formulated as

$$dy \equiv \frac{\partial f}{\partial x_1}dx_1 + \frac{\partial f}{\partial x_2}dx_2 + \varepsilon\{dx_1, dx_2\}\sqrt{dx_1^2 + dx_2^2} \tag{10.4}$$

such that

$$\lim_{\substack{dx_1 \to 0 \\ dx_2 \to 0}} \varepsilon\{dx_1, dx_2\} = 0; \tag{10.5}$$

the last term in Eq. (10.4) resembles the extra contribution to a hypotenuse of a right triangle, as per Pythagoras' theorem – see Eq. (2.431). Since ε is usually much smaller than the first two terms, one often simplifies Eq. (10.4) to

$$dy \cong \frac{\partial f}{\partial x_1}dx_1 + \frac{\partial f}{\partial x_2}dx_2 \tag{10.6}$$

– or, in the general case,

$$dy \cong \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}dx_i \tag{10.7}$$

pertaining to a multivariate function in n independent variables x1, x2, …, xn; the latter formulation possesses the further advantage of requiring only partial derivatives as departing information. Inspection of Eq. (10.1) indicates that dy is a function of x – but only through df/dx, since dx represents an arbitrary change in the independent variable that is, in turn, independent of x itself; one may apply the concept of differential also to dy, according to

$$d^2y \equiv d\{dy\} \tag{10.8}$$

Application of operator $d \equiv \dfrac{d}{dx}dx$, as per Eq. (10.1), to dy allows transformation of Eq. (10.8) to

$$d^2y = \frac{d}{dx}\left\{\frac{df}{dx}dx\right\}dx; \tag{10.9}$$

and dx being independent of x justifies taking it off the differential sign, viz.

$$d^2y = \frac{d}{dx}\left\{\frac{df}{dx}\right\}dx\,dx = \frac{d^2f}{dx^2}dx^2 \tag{10.10}$$

Here definition of second-order derivative as derivative of the first-order derivative (as will be discussed shortly) was used to advantage, complemented with the simpler notation dx² for the square of dx (distinct from d²x). Higher order differentials do likewise obey

$$d^ny \equiv \frac{d^nf}{dx^n}dx^n \tag{10.11}$$

using Eq. (10.10) as template – where dxⁿ was again used as abbreviated form of (dx)ⁿ. Since differentials are, by definition, small increments, one readily concludes that

$$\left|dx^n\right| \ll \left|dx^{n-1}\right| \ll \cdots \ll \left|dx\right| \tag{10.12}$$

when dx → 0 – which, together with realization that df/dx, d²f/dx², …, dⁿf/dxⁿ are normally finite, implies

$$\left|d^ny\right| \ll \left|d^{n-1}y\right| \ll \cdots \ll \left|dy\right| \tag{10.13}$$

with the aid of Eq. (10.11); under such circumstances, linear combinations of said differentials break down to merely dy, thus emphasizing the usefulness of the concept conveyed by Eq. (10.1). One common application of differentials is in estimating error of some experimental model or approximate function; Eq. (10.1) may be directly used to simulate propagation of error of the independent variable dx through the dependent variable dy. With regard to the relative error, one should divide both sides of Eq. (10.1) by y (or f, for that matter), according to

$$\frac{dy}{y} = \frac{df}{f} = \frac{\dfrac{df}{dx}}{f}dx; \tag{10.14}$$

and then multiply and divide the right-hand side by x, viz.

$$\frac{dy}{y} = \frac{x\dfrac{df}{dx}}{f}\,\frac{dx}{x} \tag{10.15}$$

As will be proven later, the derivative of a logarithm is given by the derivative of its argument divided by the argument itself, so Eq. (10.15) can be rephrased as

$$\frac{dy}{y} = \frac{d\ln f}{d\ln x}\,\frac{dx}{x}; \tag{10.16}$$

hence, the relative error dx/x of the independent variable echoes upon dy/y via the logarithmic derivative, i.e. d ln f/d ln x. One may finally revisit Eq. (10.1) as

$$dy = \frac{d\Phi\{f\{x\}\}}{dx}dx = \frac{d\Phi\{f\}}{df}\,\frac{df}{dx}dx \tag{10.17}$$

encompassing a composite function Φ of f{x} rather than f{x} itself, and resorting to chain differentiation (to be addressed shortly) – where cancellation of dx between numerator and denominator produces

$$dy = \frac{d\Phi\{f\}}{df}df; \tag{10.18}$$

in other words, Φ{f} may, for the sake of differentiation, be treated as a function of variable f (despite f being a function of x). By the same token, Eq. (10.7) suggests

$$dy = \frac{d\Psi\{f_1, f_2, \ldots, f_n\}}{dx}dx = \frac{\partial\Psi}{\partial f_1}\frac{df_1}{dx}dx + \frac{\partial\Psi}{\partial f_2}\frac{df_2}{dx}dx + \cdots + \frac{\partial\Psi}{\partial f_n}\frac{df_n}{dx}dx, \tag{10.19}$$

where Ψ denotes any set of algebraic operations upon univariate functions f1{x}, f2{x}, …, fn{x} – which leads to

$$dy = \sum_{i=1}^{n} \frac{\partial\Psi}{\partial f_i}df_i, \tag{10.20}$$

again after dropping dx from both numerator and denominator. Therefore, the differential of any function of functions may be calculated via direct application of the rules of differentiation, as long as the differentials of the corresponding constitutive functions are made available.
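The approximation underlying Eq. (10.1), and the error-propagation reading of Eq. (10.16), can be illustrated numerically – a Python sketch, with f ≡ sin x and f ≡ x³ as arbitrary sample functions:

```python
# Illustration of Eq. (10.1): dy = (df/dx)dx approximates the actual change
# f{x+dx} - f{x} ever better as dx shrinks; and of Eq. (10.16): the relative
# error propagates through the logarithmic derivative d ln f/d ln x.
import math

f = math.sin                 # sample function (arbitrary choice)
dfdx = math.cos              # its known derivative
x = 0.8

gaps = []
for dx in (0.1, 0.01, 0.001):
    actual = f(x + dx) - f(x)
    differential = dfdx(x) * dx          # dy per Eq. (10.1)
    gaps.append(abs(actual - differential))

# For f = x**3, d ln f/d ln x = 3: a 1% error in x yields ~3% error in y.
rel_x = 0.01
rel_y = ((x * (1 + rel_x)) ** 3 - x ** 3) / x ** 3
```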

10.2 Derivative

10.2.1 Definition

The derivative of a function f{x}, at a given point x0, is defined as

$$\left.\frac{df\{x\}}{dx}\right|_{x_0} \equiv \lim_{h \to 0} \frac{\left.f\{x\}\right|_{x_0+h} - \left.f\{x\}\right|_{x_0}}{h}; \tag{10.21}$$

graphically speaking, this corresponds to the limit of a straight line secant to the plot of f{x} at points with coordinates (x0, f{x}|x0) and (x0 + h, f{x}|x0+h), as emphasized in Fig. 10.2. Assuming a fixed point of abscissa x0 and ordinate f{x}|x0, the point with coordinates (x0 + h, f{x}|x0+h) slides indeed along the curve representing f{x} as h is decreased, thus approaching the point with coordinates (x0, f{x}|x0) – until actually superimposing said point; the trigonometric tangent of the angle defined by said tangent line with the horizontal axis then equals the derivative of f{x} at x0, i.e. df{x}/dx|x0.

Differentials, Derivatives, and Partial Derivatives

Figure 10.2 Graphical representation of a continuous function f{x}, and of the tangent to its plot at abscissa x0 – described by slope df{x}/dx|x0 – when said abscissa undergoes an increment of h.
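The secant-to-tangent limit of Eq. (10.21) and Fig. 10.2 can be illustrated numerically; a brief sketch, assuming the arbitrary choices f = sin and x0 = 1:

```python
import math

# Slope of the secant through (x0, f(x0)) and (x0 + h, f(x0 + h)), as in Eq. (10.21)
def secant_slope(f, x0, h):
    return (f(x0 + h) - f(x0)) / h

x0 = 1.0
# Shrinking h drives the secant slope toward the tangent slope, here cos(x0)
slopes = [secant_slope(math.sin, x0, 10.0 ** -k) for k in (1, 3, 5)]
errors = [abs(s - math.cos(x0)) for s in slopes]
print(errors)
```

The error shrinks roughly in proportion to h, mirroring the limit process described in the text.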

Equation (10.21) supports the definition of derivative only at a given point x0, i.e. df{x}/dx|x0; if x0 spans the whole domain of f{x}, then a derivative function – denoted as df{x}/dx – will result, and such a notation will be utilized hereafter. Furthermore, Leibnitz's notation for the derivative as a ratio of differentials was elected in Eq. (10.21), and will also be employed from now on; in fact, it immediately indicates which is the dependent variable undergoing differentiation – and which is the independent variable with respect to which differentiation is performed. It also allows performance of algebraic operations directly on the differentials df{x} and dx – of special relevance in integration (as will be discussed in due course).

10.2.1.1 Total Derivative

Based on Eqs. (9.30), (9.86), and (9.94) – pertaining to the limit of a constant, of a difference, and of a quotient, respectively – Eq. (10.21) always leads to an unknown quantity, viz.

$$\frac{df}{dx} \equiv \lim_{h \to 0}\frac{f\{x+h\}-f\{x\}}{h} = \frac{f\{x\}-f\{x\}}{0} = \frac{0}{0}\tag{10.22}$$

– where x0 was, for simplicity of notation, replaced by x; therefore, some mathematical artifact must be used to circumvent said difficulty. One example pertains to f ≡ x^n (where n represents, for the moment, an integer number) – which indeed transforms Eq. (10.21) to

$$\frac{dx^n}{dx} \equiv \lim_{h \to 0}\frac{(x+h)^n-x^n}{h} = \frac{(x+0)^n-x^n}{0} = \frac{0}{0},\tag{10.23}$$

with the aid of Eqs. (9.30) and (9.108); expansion of the nth power of the binomial as per Newton's formula, labeled as Eq. (2.236), transforms Eq. (10.23) instead to

$$\frac{dx^n}{dx} = \lim_{h \to 0}\frac{\sum_{i=0}^{n}\binom{n}{i}x^i h^{n-i}-x^n}{h}.\tag{10.24}$$


The last term of the summation in Eq. (10.24), and the one before it, may be singled out as

$$\frac{dx^n}{dx} = \lim_{h \to 0}\frac{\sum_{i=0}^{n-2}\binom{n}{i}x^i h^{n-i}+n x^{n-1}h+x^n-x^n}{h};\tag{10.25}$$

after cancelling out symmetrical terms in the numerator, Eq. (10.25) becomes

$$\frac{dx^n}{dx} = \lim_{h \to 0}\frac{\sum_{i=0}^{n-2}\binom{n}{i}x^i h^{n-i}+n x^{n-1}h}{h},\tag{10.26}$$

thus permitting division of both numerator and denominator by h to get

$$\frac{dx^n}{dx} = \lim_{h \to 0}\left\{\sum_{i=0}^{n-2}\binom{n}{i}x^i h^{n-i-1}+n x^{n-1}\right\}\tag{10.27}$$

– or else

$$\frac{dx^n}{dx} = \lim_{h \to 0}\sum_{i=0}^{n-2}\binom{n}{i}x^i h^{n-i-1}+n x^{n-1},\tag{10.28}$$

in view of Eqs. (9.30) and (9.73). Since n − i − 1 ≥ n − (n − 2) − 1 = 2 − 1 = 1 > 0, one realizes that all terms of the summation in Eq. (10.28) possess h (or a power thereof, with positive exponent) as factor, so one gets merely

$$\frac{dx^n}{dx} = n x^{n-1}\tag{10.29}$$

when Eqs. (9.30), (9.81), and (9.93) are recalled. Furthermore, one finds that

$$\frac{d\sqrt{x}}{dx} \equiv \lim_{h \to 0}\frac{\sqrt{x+h}-\sqrt{x}}{h} = \frac{\sqrt{x}-\sqrt{x}}{0} = \frac{0}{0}\tag{10.30}$$

based on Eqs. (9.30), (9.108), and (10.21); however, upon multiplication of both numerator and denominator by √(x+h) + √x, Eq. (10.30) becomes

$$\frac{d\sqrt{x}}{dx} = \lim_{h \to 0}\frac{\left(\sqrt{x+h}-\sqrt{x}\right)\left(\sqrt{x+h}+\sqrt{x}\right)}{h\left(\sqrt{x+h}+\sqrt{x}\right)}\tag{10.31}$$

or, upon application of the distributive property of the product of scalars as per Eq. (2.140),

$$\frac{d\sqrt{x}}{dx} = \lim_{h \to 0}\frac{\left(\sqrt{x+h}\right)^2-\left(\sqrt{x}\right)^2}{h\left(\sqrt{x+h}+\sqrt{x}\right)} = \lim_{h \to 0}\frac{x+h-x}{h\left(\sqrt{x+h}+\sqrt{x}\right)}.\tag{10.32}$$

Once x and −x have cancelled each other in the numerator, one may proceed to cancellation of the resulting h in the numerator with the h already in the denominator – so Eq. (10.32) gives rise to

$$\frac{d\sqrt{x}}{dx} = \lim_{h \to 0}\frac{1}{\sqrt{x+h}+\sqrt{x}} = \frac{1}{\sqrt{x}+\sqrt{x}} = \frac{1}{2\sqrt{x}} = \frac{1}{2}x^{\frac{1}{2}-1} = \left.n x^{n-1}\right|_{n=\frac{1}{2}};\tag{10.33}$$


Table 10.1 List of derivatives obtained via definition.

f{x} | df{x}/dx
x^n | n x^{n−1}
ln x | 1/x
sin x | cos x
cos x | −sin x
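The entries of Table 10.1 can be verified against the limit definition of Eq. (10.21); a brief sketch, with a small but finite h standing in for the limit and an arbitrary test point:

```python
import math

def num_deriv(f, x, h=1e-6):
    # forward-difference approximation of Eq. (10.21)
    return (f(x + h) - f(x)) / h

x = 0.7  # arbitrary test point
# (numerical derivative, tabulated derivative) per entry of Table 10.1
checks = [
    (num_deriv(lambda t: t**3, x), 3 * x**2),   # x^n with n = 3
    (num_deriv(math.log, x), 1 / x),            # ln x
    (num_deriv(math.sin, x), math.cos(x)),      # sin x
    (num_deriv(math.cos, x), -math.sin(x)),     # cos x
]
max_err = max(abs(a - b) for a, b in checks)
print(max_err)
```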

hence, the rule conveyed by Eq. (10.29) still applies when n = ½ – and the same holds for every rational or irrational value selected as exponent n (to be proved later), as indicated in Table 10.1. If n = 0 in particular, then Eq. (10.29) produces

$$\frac{d1}{dx} = \frac{dx^0}{dx} = 0\,x^{-1} = 0\tag{10.34}$$

– so the derivative of unity is nil; note that 1 being a constant enforces a horizontal straight line for the plot of f{x} = 1 – accordingly described by a nil slope. Another example of direct use of the definition of derivative is materialized by ln x, for which Eq. (10.21) reads

$$\frac{d\ln x}{dx} \equiv \lim_{h \to 0}\frac{\ln(x+h)-\ln x}{h} = \frac{\ln x-\ln x}{0} = \frac{0}{0}\tag{10.35}$$

with the aid of Eqs. (9.30) and (9.108) – which may be rewritten as

$$\frac{d\ln x}{dx} = \lim_{h \to 0}\frac{1}{h}\ln\frac{x+h}{x} = \lim_{h \to 0}\ln\left(\frac{x+h}{x}\right)^{\frac{1}{h}},\tag{10.36}$$

in view of the operational features of a logarithm conveyed by Eqs. (2.25) and (2.26), and in an attempt to circumvent the unknown quantity produced; after splitting the emerging fraction as two additive terms, one gets

$$\frac{d\ln x}{dx} = \lim_{h \to 0}\ln\left(1+\frac{h}{x}\right)^{\frac{1}{h}} = \lim_{h \to 0}\ln\left(1+\frac{h}{x}\right)^{\frac{1}{x}\frac{x}{h}}\tag{10.37}$$

as per composition of powers and since x is finite – or else

$$\frac{d\ln x}{dx} = \lim_{\frac{x}{h}\to\infty}\ln\left(1+\frac{1}{\frac{x}{h}}\right)^{\frac{x}{h}\frac{1}{x}} = \frac{1}{x}\ln\left\{\lim_{\frac{x}{h}\to\infty}\left(1+\frac{1}{\frac{x}{h}}\right)^{\frac{x}{h}}\right\},\tag{10.38}$$

because the limit as h → 0 coincides with the limit as x/h → ∞, besides the aid of Eq. (9.108). In view of the definition of Neper's number (to be tackled later), one may rewrite Eq. (10.38) as

$$\frac{d\ln x}{dx} = \frac{1}{x}\ln e;\tag{10.39}$$


composition of the logarithm with the exponential as its inverse function finally gives

$$\frac{d\ln x}{dx} = \frac{1}{x}\tag{10.40}$$

– thus constituting the second entry in Table 10.1. A third possibility pertains to sin x, for which Eq. (10.21) turns to

$$\frac{d\sin x}{dx} \equiv \lim_{h \to 0}\frac{\sin(x+h)-\sin x}{h} = \frac{\sin x-\sin x}{0} = \frac{0}{0}\tag{10.41}$$

upon combination with Eq. (9.108) – where the sine of a sum may, in alternative, be expanded as

$$\frac{d\sin x}{dx} = \lim_{h \to 0}\frac{\sin x\cos h+\cos x\sin h-\sin x}{h},\tag{10.42}$$

at the expense of Eq. (2.328); upon application of Eqs. (9.30) and (9.108), it is possible to rewrite Eq. (10.42) as

$$\begin{aligned}\frac{d\sin x}{dx} &= \frac{\sin x\lim_{h\to 0}\cos h+\cos x\lim_{h\to 0}\sin h-\sin x}{\lim_{h\to 0}h} = \frac{\sin x\cos 0+\cos x\lim_{h\to 0}\sin h-\sin x}{\lim_{h\to 0}h}\\ &= \frac{\sin x+\cos x\lim_{h\to 0}h\lim_{h\to 0}\dfrac{\sin h}{h}-\sin x}{\lim_{h\to 0}h} = \lim_{h\to 0}\frac{\sin x+h\cdot 1\cdot\cos x-\sin x}{h} = \lim_{h\to 0}\frac{h\cos x}{h},\end{aligned}\tag{10.43}$$

where multiplication and division by h allowed appearance of lim_{h→0} sin h/h, equal to unity in agreement with Eq. (9.134), and was coupled with cancellation of symmetrical terms in the numerator. Final dropping of h from numerator and denominator in Eq. (10.43) yields

$$\frac{d\sin x}{dx} = \cos x\tag{10.44}$$

– as per the third entry of Table 10.1. By the same token, one may write

$$\frac{d\cos x}{dx} \equiv \lim_{h \to 0}\frac{\cos(x+h)-\cos x}{h} = \frac{\cos x-\cos x}{0} = \frac{0}{0}\tag{10.45}$$

via blind use of Eqs. (9.30), (9.108), and (10.21) – where attempts to circumvent the unknown quantity may resort to expansion of the cosine of a sum as per Eq. (2.325), i.e.

$$\frac{d\cos x}{dx} = \lim_{h \to 0}\frac{\cos x\cos h-\sin x\sin h-\cos x}{h};\tag{10.46}$$


after invoking Eqs. (9.30), (9.108), and (9.134) again, one may redo Eq. (10.46) to

$$\begin{aligned}\frac{d\cos x}{dx} &= \frac{\cos x\lim_{h\to 0}\cos h-\sin x\lim_{h\to 0}\sin h-\cos x}{\lim_{h\to 0}h} = \frac{\cos x\cos 0-\sin x\lim_{h\to 0}\sin h-\cos x}{\lim_{h\to 0}h}\\ &= \frac{\cos x-\sin x\lim_{h\to 0}h\lim_{h\to 0}\dfrac{\sin h}{h}-\cos x}{\lim_{h\to 0}h} = \lim_{h\to 0}\frac{\cos x-h\cdot 1\cdot\sin x-\cos x}{h} = \lim_{h\to 0}\frac{-h\sin x}{h},\end{aligned}\tag{10.47}$$

where cos x meanwhile cancelled out with its negative. Should h be taken off both numerator and denominator, Eq. (10.47) would become

$$\frac{d\cos x}{dx} = -\sin x\tag{10.48}$$

– thus materializing the last entry in Table 10.1. The process of calculating a derivative can be applied to the derivative function itself, and as many times as intended; for instance, the second-order derivative of a function, at a point x0, is defined as

$$\left.\frac{d^2 f\{x\}}{dx^2}\right|_{x_0} \equiv \lim_{h \to 0}\frac{\left.\dfrac{df\{x\}}{dx}\right|_{x_0+h}-\left.\dfrac{df\{x\}}{dx}\right|_{x_0}}{h}\tag{10.49}$$

and, using Eq. (10.21) as template with df/dx en lieu of f, can be rewritten as

$$\left.\frac{d^2 f\{x\}}{dx^2}\right|_{x_0} = \lim_{h \to 0}\frac{\lim_{h \to 0}\dfrac{f\{x\}|_{x_0+h+h}-f\{x\}|_{x_0+h}}{h}-\lim_{h \to 0}\dfrac{f\{x\}|_{x_0+h}-f\{x\}|_{x_0}}{h}}{h}\tag{10.50}$$

– again with the aid of Eq. (10.21) encompassing f{x} at x0 or x0 + h; this eventually unfolds to

$$\left.\frac{d^2 f\{x\}}{dx^2}\right|_{x_0} = \lim_{h \to 0}\frac{f\{x\}|_{x_0+2h}-2 f\{x\}|_{x_0+h}+f\{x\}|_{x_0}}{h^2},\tag{10.51}$$

after lumping factors or terms (as appropriate). Once again, if x0 moves along the whole domain of df/dx, then one will obtain the corresponding second-order derivative function – denoted as d²f{x}/dx²; its geometrical interpretation pertains to the direction and magnitude of curvature of the concavity of the original curve representing f{x}. A concave curve (i.e. exhibiting a concavity facing upward) will accordingly hold a positive d²f/dx²,
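Equation (10.51) lends itself directly to computation; a short sketch that also illustrates the sign convention for concavity (the test functions and points are arbitrary choices):

```python
import math

def second_deriv(f, x0, h=1e-4):
    # forward second difference of Eq. (10.51)
    return (f(x0 + 2 * h) - 2 * f(x0 + h) + f(x0)) / h**2

d2_convex = second_deriv(math.exp, 1.0)            # exact value: e > 0
d2_concave = second_deriv(lambda t: -t**2, 0.3)    # exact value: -2 < 0
print(d2_convex, d2_concave)
```

A positive result signals a concavity facing upward, a negative one the opposite, exactly as argued in the text.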


whereas a larger value of d²f/dx² entails a more pronounced curvature (or deviation from a straight line) – and vice versa. Higher-order derivatives can similarly be defined, according to

$$\left.\frac{d^n f\{x\}}{dx^n}\right|_{x_0} \equiv \lim_{h \to 0}\frac{\left.\dfrac{d^{n-1}f}{dx^{n-1}}\right|_{x_0+h}-\left.\dfrac{d^{n-1}f}{dx^{n-1}}\right|_{x_0}}{h}\tag{10.52}$$

that essentially mimics Eq. (10.49), again at the expense of derivatives of immediately lower order; one consequently obtains

$$\left.\frac{d^n f\{x\}}{dx^n}\right|_{x_0} = \lim_{h \to 0}\frac{\lim_{h \to 0}\dfrac{\left.\dfrac{d^{n-1}f}{dx^{n-1}}\right|_{x_0+h+h}-\left.\dfrac{d^{n-1}f}{dx^{n-1}}\right|_{x_0+h}}{h}-\lim_{h \to 0}\dfrac{\left.\dfrac{d^{n-1}f}{dx^{n-1}}\right|_{x_0+h}-\left.\dfrac{d^{n-1}f}{dx^{n-1}}\right|_{x_0}}{h}}{h}\tag{10.53}$$

as per the definition, which degenerates to

$$\left.\frac{d^n f\{x\}}{dx^n}\right|_{x_0} = \lim_{h \to 0}\frac{\left.\dfrac{d^{n-1}f}{dx^{n-1}}\right|_{x_0+2h}-2\left.\dfrac{d^{n-1}f}{dx^{n-1}}\right|_{x_0+h}+\left.\dfrac{d^{n-1}f}{dx^{n-1}}\right|_{x_0}}{h^2}.\tag{10.54}$$

After insertion of expressions describing the previous derivative d^{n−1}f/dx^{n−1}, and iteration of the process until getting to f{x} explicitly, one eventually obtains

$$\left.\frac{d^n f\{x\}}{dx^n}\right|_{x_0} = \lim_{h \to 0}\frac{f\{x\}|_{x_0+nh}-n f\{x\}|_{x_0+(n-1)h}+\binom{n}{2}f\{x\}|_{x_0+(n-2)h}+\cdots+(-1)^{n-1}n f\{x\}|_{x_0+h}+(-1)^n f\{x\}|_{x_0}}{h^n}\tag{10.55}$$

– or, using a more condensed notation resorting to binomial coefficients,

$$\left.\frac{d^n f\{x\}}{dx^n}\right|_{x_0} = \lim_{h \to 0}\frac{\sum_{i=0}^{n}(-1)^i\binom{n}{i}f\{x\}|_{x_0+(n-i)h}}{h^n};\quad n = 1, 2, \ldots;\tag{10.56}$$
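Equation (10.56) can be exercised numerically; a sketch for n = 3, using `math.comb` for the binomial coefficients (the specific f and x0 are arbitrary choices for this illustration):

```python
import math

def nth_deriv(f, x0, n, h=1e-2):
    # forward n-th difference of Eq. (10.56):
    # sum of (-1)^i C(n, i) f(x0 + (n - i) h), divided by h^n
    total = sum((-1) ** i * math.comb(n, i) * f(x0 + (n - i) * h)
                for i in range(n + 1))
    return total / h ** n

d3 = nth_deriv(lambda t: t**4, 2.0, 3)  # exact third derivative of x^4 at x = 2: 24 x = 48
print(d3)
```

The tolerance below is deliberately loose: as the text remarks, high-order difference formulas are quite sensitive to the step size and to noise in f.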

the practical usefulness of derivatives of order n ≥ 3 is, nevertheless, marginal – because they become excessively sensitive to disturbances in the departing function f{x}, oftentimes obtained as a fit to experimental data themselves subject to random error.

10.2.1.2 Partial Derivatives

If a multivariate function is at stake, a rationale similar to that of regular derivatives applies – yet more than one derivative can now be calculated; in the specific case of a bivariate function f{x,y}, one accordingly gets

$$\left.\left(\frac{\partial f\{x,y\}}{\partial x}\right)_{y}\right|_{x_0,y_0} \equiv \lim_{h \to 0}\frac{f\{x,y\}|_{x_0+h,y_0}-f\{x,y\}|_{x_0,y_0}}{h}\tag{10.57}$$

and

$$\left.\left(\frac{\partial f\{x,y\}}{\partial y}\right)_{x}\right|_{x_0,y_0} \equiv \lim_{k \to 0}\frac{f\{x,y\}|_{x_0,y_0+k}-f\{x,y\}|_{x_0,y_0}}{k}\tag{10.58}$$

for the two partial (first-order) derivatives with regard to x and y, respectively. The former may be thought of as the ordinary derivative of f{x,y} with respect to x, obtained by treating y as a constant; likewise, the partial derivative of f{x,y} with respect to y may be found by treating x as a constant, and then calculating the ordinary derivative of f{x,y} with respect to y. In both cases, the variable that is supposed to be held constant in the differentiation is indicated as subscript – a notation particularly useful when more than two independent variables are under scrutiny. By the same token, one may define higher-order derivatives – namely,

$$\left.\frac{\partial^2 f\{x,y\}}{\partial x^2}\right|_{x_0,y_0} \equiv \left.\frac{\partial}{\partial x}\left\{\frac{\partial f\{x,y\}}{\partial x}\right\}\right|_{x_0,y_0} = \lim_{h \to 0}\frac{f\{x,y\}|_{x_0+2h,y_0}-2f\{x,y\}|_{x_0+h,y_0}+f\{x,y\}|_{x_0,y_0}}{h^2},\tag{10.59}$$

based on Eq. (10.51) applied under constant y; as well as

$$\left.\frac{\partial^2 f\{x,y\}}{\partial x\,\partial y}\right|_{x_0,y_0} \equiv \left.\frac{\partial}{\partial x}\left\{\frac{\partial f\{x,y\}}{\partial y}\right\}\right|_{x_0,y_0} = \lim_{h \to 0}\frac{\left.\dfrac{\partial f\{x,y\}}{\partial y}\right|_{x_0+h,y_0}-\left.\dfrac{\partial f\{x,y\}}{\partial y}\right|_{x_0,y_0}}{h} = \lim_{h \to 0}\frac{\lim_{k \to 0}\dfrac{f\{x,y\}|_{x_0+h,y_0+k}-f\{x,y\}|_{x_0+h,y_0}}{k}-\lim_{k \to 0}\dfrac{f\{x,y\}|_{x_0,y_0+k}-f\{x,y\}|_{x_0,y_0}}{k}}{h}\tag{10.60}$$

that degenerates to

$$\left.\frac{\partial^2 f\{x,y\}}{\partial x\,\partial y}\right|_{x_0,y_0} = \lim_{\substack{h \to 0\\ k \to 0}}\frac{f\{x,y\}|_{x_0+h,y_0+k}-f\{x,y\}|_{x_0+h,y_0}-f\{x,y\}|_{x_0,y_0+k}+f\{x,y\}|_{x_0,y_0}}{hk},\tag{10.61}$$


based on Eq. (10.21) applied once with regard to x and then once with regard to y;

$$\left.\frac{\partial^2 f\{x,y\}}{\partial y\,\partial x}\right|_{x_0,y_0} \equiv \left.\frac{\partial}{\partial y}\left\{\frac{\partial f\{x,y\}}{\partial x}\right\}\right|_{x_0,y_0} = \lim_{k \to 0}\frac{\left.\dfrac{\partial f\{x,y\}}{\partial x}\right|_{x_0,y_0+k}-\left.\dfrac{\partial f\{x,y\}}{\partial x}\right|_{x_0,y_0}}{k} = \lim_{k \to 0}\frac{\lim_{h \to 0}\dfrac{f\{x,y\}|_{x_0+h,y_0+k}-f\{x,y\}|_{x_0,y_0+k}}{h}-\lim_{h \to 0}\dfrac{f\{x,y\}|_{x_0+h,y_0}-f\{x,y\}|_{x_0,y_0}}{h}}{k},\tag{10.62}$$

which simplifies to

$$\left.\frac{\partial^2 f\{x,y\}}{\partial y\,\partial x}\right|_{x_0,y_0} = \lim_{\substack{h \to 0\\ k \to 0}}\frac{f\{x,y\}|_{x_0+h,y_0+k}-f\{x,y\}|_{x_0,y_0+k}-f\{x,y\}|_{x_0+h,y_0}+f\{x,y\}|_{x_0,y_0}}{hk}\tag{10.63}$$

– based again on Eq. (10.21), applied once with regard to y and then once with regard to x; and finally

$$\left.\frac{\partial^2 f\{x,y\}}{\partial y^2}\right|_{x_0,y_0} \equiv \left.\frac{\partial}{\partial y}\left\{\frac{\partial f\{x,y\}}{\partial y}\right\}\right|_{x_0,y_0} = \lim_{k \to 0}\frac{f\{x,y\}|_{x_0,y_0+2k}-2f\{x,y\}|_{x_0,y_0+k}+f\{x,y\}|_{x_0,y_0}}{k^2}\tag{10.64}$$

that materializes Eq. (10.59) after replacement of x by y, and of h by k. One realizes that the total number of second-order partial derivatives is four – conveyed by Eqs. (10.59), (10.61), (10.63), and (10.64); in general, the nth-order derivatives of a function containing m independent variables add up to m^n – instead of just one in the case of univariate functions. In view of the commutative property of addition of scalars, the terms f{x,y}|x0+h,y0 and f{x,y}|x0,y0+k in Eq. (10.61) are interchangeable – and, in doing so, Eq. (10.63) may be promptly generated from Eq. (10.61); one therefore finds that

$$\frac{\partial^2 f\{x,y\}}{\partial x\,\partial y} = \frac{\partial^2 f\{x,y\}}{\partial y\,\partial x}\tag{10.65}$$
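Equality of the mixed partials in Eq. (10.65) can be checked by nesting one-sided differences in either order; a sketch with an arbitrary smooth test function:

```python
def f(x, y):
    # arbitrary smooth test function; analytically, d2f/dxdy = 6 x^2 y + 1
    return x**3 * y**2 + x * y

h = k = 1e-5

def dfdx(x, y):
    return (f(x + h, y) - f(x, y)) / h

def dfdy(x, y):
    return (f(x, y + k) - f(x, y)) / k

x0, y0 = 1.0, 2.0
d2_xy = (dfdy(x0 + h, y0) - dfdy(x0, y0)) / h   # d/dx of df/dy, as in Eq. (10.61)
d2_yx = (dfdx(x0, y0 + k) - dfdx(x0, y0)) / k   # d/dy of df/dx, as in Eq. (10.63)
print(d2_xy, d2_yx)
```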

– which constitutes the mathematical statement of both Young's and Schwarz's theorems on mixed partial derivatives. The former – due to William H. Young, an English mathematician who lived in the nineteenth century – states that Eq. (10.65) is valid provided that ∂f/∂x and ∂f/∂y exist in a neighborhood of (x0,y0), and are differentiable therein; the latter – due to Karl H. A. Schwarz, a German mathematician who lived in the nineteenth century, and Alexis C. Clairaut, a French mathematician and astronomer who lived in the eighteenth century – states that if ∂f/∂x and ∂f/∂y exist in a neighborhood of (x0,y0), and either ∂²f/∂x∂y or ∂²f/∂y∂x is continuous, then the other cross derivative also exists, is continuous, and is identical thereto. Besides the aforementioned interchangeability, it should be stressed that Δf{h,k}, defined as f{x,y}|x0+h,y0+k − f{x,y}|x0+h,y0 − f{x,y}|x0,y0+k + f{x,y}|x0,y0, becomes nil when either h = 0 or k = 0 – so the graph of Δf{h,k} for a fixed k begins at the origin, and has base equal to h and height equal to Δf, while the graph of Δf{h,k} for a fixed h begins also at the origin, and has base equal to k and height equal to Δf. Should Δf be continuous (as a consequence of continuity of f in both the x- and y-directions), Lagrange's theorem (to be derived soon) will apply, thus guaranteeing existence of the first-order partial derivatives, ∂f/∂x and ∂f/∂y; a similar reasoning applied to (continuous) ∂f/∂x and ∂f/∂y, coupled with their definition, assures existence of the second-order cross derivatives – which are equal to each other, as seen above – thus conveying a formal proof of Young's and Schwarz's theorems. Equation (10.65) is obviously extensible to any number of second-order, cross partial derivatives; for instance, ∂²f/∂x∂z = ∂²f/∂z∂x and ∂²f/∂y∂z = ∂²f/∂z∂y. In the case of composite functions, the chain differentiation rule must take all intermediate variables into account; for instance, if f ≡ f{x, y, u, v}, where in turn u ≡ u{x, y} and v ≡ v{x, y} – with x and y serving as independent variables – then the first partial derivatives read

$$\left(\frac{\partial f}{\partial x}\right)_{y} = \left(\frac{\partial f}{\partial x}\right)_{y,u,v}+\left(\frac{\partial f}{\partial u}\right)_{x,y,v}\left(\frac{\partial u}{\partial x}\right)_{y}+\left(\frac{\partial f}{\partial v}\right)_{x,y,u}\left(\frac{\partial v}{\partial x}\right)_{y}\tag{10.66}$$

and likewise

$$\left(\frac{\partial f}{\partial y}\right)_{x} = \left(\frac{\partial f}{\partial y}\right)_{x,u,v}+\left(\frac{\partial f}{\partial u}\right)_{x,y,v}\left(\frac{\partial u}{\partial y}\right)_{x}+\left(\frac{\partial f}{\partial v}\right)_{x,y,u}\left(\frac{\partial v}{\partial y}\right)_{x};\tag{10.67}$$

in other words, the derivative of f with regard to x must take into account not only the direct derivative at stake, i.e. (∂f/∂x)y,u,v, but also the indirect derivatives via u and v, i.e. (∂f/∂u)x,y,v(∂u/∂x)y and (∂f/∂v)x,y,u(∂v/∂x)y, respectively – each one handled via the chain differentiation rule (to be explained shortly). The four second-order partial derivatives will consequently look like

$$\frac{\partial^2 f}{\partial x^2} = \frac{\partial}{\partial x}\left\{\frac{\partial f}{\partial x}+\frac{\partial f}{\partial u}\frac{\partial u}{\partial x}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial x}\right\}+\frac{\partial u}{\partial x}\frac{\partial}{\partial u}\left\{\frac{\partial f}{\partial x}+\frac{\partial f}{\partial u}\frac{\partial u}{\partial x}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial x}\right\}+\frac{\partial v}{\partial x}\frac{\partial}{\partial v}\left\{\frac{\partial f}{\partial x}+\frac{\partial f}{\partial u}\frac{\partial u}{\partial x}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial x}\right\},\tag{10.68}$$

where advantage was again gained from the chain differentiation rule as applied to Eq. (10.66);

$$\frac{\partial^2 f}{\partial x\,\partial y} = \frac{\partial}{\partial x}\left\{\frac{\partial f}{\partial y}+\frac{\partial f}{\partial u}\frac{\partial u}{\partial y}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial y}\right\}+\frac{\partial u}{\partial x}\frac{\partial}{\partial u}\left\{\frac{\partial f}{\partial y}+\frac{\partial f}{\partial u}\frac{\partial u}{\partial y}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial y}\right\}+\frac{\partial v}{\partial x}\frac{\partial}{\partial v}\left\{\frac{\partial f}{\partial y}+\frac{\partial f}{\partial u}\frac{\partial u}{\partial y}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial y}\right\},\tag{10.69}$$


stemming from partial differentiation of Eq. (10.67) with regard to x;

$$\frac{\partial^2 f}{\partial y\,\partial x} = \frac{\partial}{\partial y}\left\{\frac{\partial f}{\partial x}+\frac{\partial f}{\partial u}\frac{\partial u}{\partial x}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial x}\right\}+\frac{\partial u}{\partial y}\frac{\partial}{\partial u}\left\{\frac{\partial f}{\partial x}+\frac{\partial f}{\partial u}\frac{\partial u}{\partial x}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial x}\right\}+\frac{\partial v}{\partial y}\frac{\partial}{\partial v}\left\{\frac{\partial f}{\partial x}+\frac{\partial f}{\partial u}\frac{\partial u}{\partial x}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial x}\right\},\tag{10.70}$$

based on partial differentiation of Eq. (10.66) with regard to y – which (although not apparent at this stage) does in fact coincide with Eq. (10.69) once f is specified, in general agreement with Eq. (10.65); and finally

$$\frac{\partial^2 f}{\partial y^2} = \frac{\partial}{\partial y}\left\{\frac{\partial f}{\partial y}+\frac{\partial f}{\partial u}\frac{\partial u}{\partial y}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial y}\right\}+\frac{\partial u}{\partial y}\frac{\partial}{\partial u}\left\{\frac{\partial f}{\partial y}+\frac{\partial f}{\partial u}\frac{\partial u}{\partial y}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial y}\right\}+\frac{\partial v}{\partial y}\frac{\partial}{\partial v}\left\{\frac{\partial f}{\partial y}+\frac{\partial f}{\partial u}\frac{\partial u}{\partial y}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial y}\right\},\tag{10.71}$$

departing from Eq. (10.67) upon partial differentiation with regard to y – where subscripts indicating the variable(s) kept constant during partial differentiation were in all cases omitted to simplify notation. A similar rationale may be followed to calculate higher-order derivatives – yet the underlying complexity increases dramatically. Consider now a bivariate function of two independent variables, x and y – which, for convenience of notation, will be labeled as z, i.e.

$$z \equiv z\{x,y\};\tag{10.72}$$

and which is subjected to the condition

$$z\{x,y\} = 0\tag{10.73}$$

where, by hypothesis,

$$y \equiv y\{x\}\tag{10.74}$$

– so z ends up being a function of only x, should Eqs. (10.72) and (10.74) be considered simultaneously. Calculation of the total derivative of (univariate) z will accordingly proceed through

$$\frac{dz}{dx} = \left(\frac{\partial z}{\partial x}\right)_{y}+\left(\frac{\partial z}{\partial y}\right)_{x}\frac{dy}{dx},\tag{10.75}$$

arising from the functional form of Eq. (10.72); on the other hand, application of the total derivative to Eq. (10.73) enforces

$$\frac{dz}{dx} = 0.\tag{10.76}$$

Combination of Eqs. (10.75) and (10.76) leads to

$$\left(\frac{\partial z}{\partial x}\right)_{y}+\left(\frac{\partial z}{\partial y}\right)_{x}\frac{dy}{dx} = 0;\tag{10.77}$$


one may replace Eq. (10.77) by

$$\left(\frac{\partial z}{\partial x}\right)_{y}+\left(\frac{\partial z}{\partial y}\right)_{x}\left(\frac{\partial y}{\partial x}\right)_{z} = 0,\tag{10.78}$$

where the new notation (∂y/∂x)z emphasizes that dy/dx is valid only when z is a constant, in agreement with Eq. (10.73). After multiplying both sides by (∂x/∂z)y, Eq. (10.78) becomes

$$\left(\frac{\partial z}{\partial x}\right)_{y}\left(\frac{\partial x}{\partial z}\right)_{y}+\left(\frac{\partial z}{\partial y}\right)_{x}\left(\frac{\partial y}{\partial x}\right)_{z}\left(\frac{\partial x}{\partial z}\right)_{y} = 0\tag{10.79}$$

– which turns equivalent to

$$\left(\frac{\partial z}{\partial x}\right)_{y}\left(\frac{\partial x}{\partial z}\right)_{y}+\left(\frac{\partial z}{\partial y}\right)_{x}\left(\frac{\partial y}{\partial x}\right)_{z}\left(\frac{\partial x}{\partial z}\right)_{y} = 1+\left(\frac{\partial z}{\partial y}\right)_{x}\left(\frac{\partial y}{\partial x}\right)_{z}\left(\frac{\partial x}{\partial z}\right)_{y} = 0\tag{10.80}$$

upon cancellation of ∂z and ∂x between numerator and denominator of the first term; one may finally reformulate Eq. (10.80) to

$$\left(\frac{\partial z}{\partial y}\right)_{x}\left(\frac{\partial y}{\partial x}\right)_{z}\left(\frac{\partial x}{\partial z}\right)_{y} = -1.\tag{10.81}$$
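The triple product rule of Eq. (10.81) can be verified numerically; a sketch with the arbitrary choice z{x,y} = x²y, for which each constrained partial derivative can be evaluated by holding the appropriate variable fixed:

```python
import math

def z(x, y):
    return x**2 * y   # arbitrary test surface for this sketch

h = 1e-6
x0, y0 = 1.3, 0.7
z0 = z(x0, y0)

dz_dy = (z(x0, y0 + h) - z0) / h              # (dz/dy) at constant x
dy_dx = (z0 / (x0 + h)**2 - y0) / h           # (dy/dx) at constant z: y = z0 / x^2
dx_dz = (math.sqrt((z0 + h) / y0) - x0) / h   # (dx/dz) at constant y: x = sqrt(z / y0)

product = dz_dy * dy_dx * dx_dz
print(product)
```

The product comes out very close to −1, not +1, precisely because a different variable is held constant in each factor.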

Note that ∂z, ∂y, and ∂x cannot drop out despite appearing sequentially in numerator and denominator of Eq. (10.81) – as happened with ∂z and ∂x between Eqs. (10.79) and (10.80); this is so because the variable kept constant during calculation of each of the three partial derivatives in Eq. (10.81) is not the same, whereas y was kept constant during calculation of both ∂z/∂x and ∂x/∂z in Eq. (10.80). On the other hand, Eq. (10.81) is particularly useful in the thermodynamics (and physical chemistry) of constant-composition systems – because of the underlying (typical) two degrees of freedom, thus allowing calculation of an intended derivative via another two derivatives that may be easier to access from experimental data. Another useful relationship between partial derivatives corresponds to having one dependent variable, z, and two (truly) independent variables, x and y – when one is interested specifically in a partial derivative under a given constraint, say, constant w; starting with the general definition of total differential as per Eq. (10.6),

$$dz = \left(\frac{\partial z}{\partial x}\right)_{y}dx+\left(\frac{\partial z}{\partial y}\right)_{x}dy,\tag{10.82}$$

one may proceed to division of both sides by dx along a line of constant w to get

$$\left(\frac{\partial z}{\partial x}\right)_{w} = \left(\frac{\partial z}{\partial x}\right)_{y}+\left(\frac{\partial z}{\partial y}\right)_{x}\left(\frac{\partial y}{\partial x}\right)_{w}.\tag{10.83}$$

This general relationship permits change of constraints under two degrees of freedom – as it enables (∂z/∂x)w to be calculated from (∂z/∂x)y and (∂z/∂y)x, besides (∂y/∂x)w. A function f{x,y} is said to be homogeneous of degree m when

$$f\{kx, ky\} = k^m f\{x,y\},\tag{10.84}$$


where k denotes a constant; a similar definition applies to any number of independent variables. In fact, if f{x1, x2, …, xn} denotes a homogeneous differentiable function of degree m in the independent variables x1, x2, …, xn, then

$$x_1\frac{\partial f}{\partial x_1}+x_2\frac{\partial f}{\partial x_2}+\cdots+x_n\frac{\partial f}{\partial x_n} = mf;\tag{10.85}$$

Eq. (10.85) consubstantiates Euler's theorem on homogeneous functions. To prove this theorem, one may to advantage define a set of new variables y1, y2, …, yn abiding by

$$x_i \equiv k y_i;\quad i = 1, 2, \ldots, n;\tag{10.86}$$

by hypothesis, f{y1, y2, …, yn} is homogeneous of degree m, so Eq. (10.84) supports

$$f\{x_1, x_2, \ldots, x_n\} \equiv f\{ky_1, ky_2, \ldots, ky_n\} = k^m f\{y_1, y_2, \ldots, y_n\}\tag{10.87}$$

with the aid of Eq. (10.86). Total differentiation of f{x1, x2, …, xn} with regard to k looks like

$$\frac{df}{dk} = \frac{\partial f}{\partial x_1}\frac{dx_1}{dk}+\frac{\partial f}{\partial x_2}\frac{dx_2}{dk}+\cdots+\frac{\partial f}{\partial x_n}\frac{dx_n}{dk},\tag{10.88}$$

at the expense of the chain (partial) differentiation rule as in Eq. (10.66) or Eq. (10.67) – where Eq. (10.86) supports transformation to

$$\frac{df}{dk} = y_1\frac{\partial f}{\partial x_1}+y_2\frac{\partial f}{\partial x_2}+\cdots+y_n\frac{\partial f}{\partial x_n};\tag{10.89}$$

a similar rationale, applied to both sides of Eq. (10.87), will instead unfold

$$\frac{df}{dk} = m k^{m-1} f\tag{10.90}$$

with the aid of Eq. (10.29) – because the yi's are independent of k, unlike what happens with the xi's as per Eq. (10.86). Elimination of df/dk between Eqs. (10.89) and (10.90) gives rise to

$$y_1\frac{\partial f}{\partial x_1}+y_2\frac{\partial f}{\partial x_2}+\cdots+y_n\frac{\partial f}{\partial x_n} = m k^{m-1} f,\tag{10.91}$$

whereas multiplication of both sides by k yields

$$k y_1\frac{\partial f}{\partial x_1}+k y_2\frac{\partial f}{\partial x_2}+\cdots+k y_n\frac{\partial f}{\partial x_n} = m k^{m} f;\tag{10.92}$$

Eq. (10.92) may be rewritten as

$$x_1\frac{\partial f}{\partial x_1}+x_2\frac{\partial f}{\partial x_2}+\cdots+x_n\frac{\partial f}{\partial x_n} = m k^{m} f\{y_1, y_2, \ldots, y_n\}\tag{10.93}$$


in view of Eq. (10.86) – where replacement of k^m f{y1, y2, …, yn} by f{ky1, ky2, …, kyn} ≡ f{x1, x2, …, xn}, as per the multivariate analogue of Eq. (10.84), reduces the right-hand side to mf, thus proving Eq. (10.85). If f{x1, x2, …, xn} is twice differentiable, then one may differentiate both sides of Eq. (10.85) with regard to variable xj (j = 1, 2, …, n), according to

$$\frac{\partial}{\partial x_j}\left\{x_1\frac{\partial f}{\partial x_1}+x_2\frac{\partial f}{\partial x_2}+\cdots+x_{j-1}\frac{\partial f}{\partial x_{j-1}}+x_j\frac{\partial f}{\partial x_j}+x_{j+1}\frac{\partial f}{\partial x_{j+1}}+\cdots+x_n\frac{\partial f}{\partial x_n}\right\} = \frac{\partial(mf)}{\partial x_j}\tag{10.94}$$

where the (j − 1)th, jth, and (j + 1)th terms were made explicit for convenience; Eq. (10.94) yields

$$x_1\frac{\partial^2 f}{\partial x_j\,\partial x_1}+x_2\frac{\partial^2 f}{\partial x_j\,\partial x_2}+\cdots+x_{j-1}\frac{\partial^2 f}{\partial x_j\,\partial x_{j-1}}+\frac{\partial f}{\partial x_j}+x_j\frac{\partial^2 f}{\partial x_j^2}+x_{j+1}\frac{\partial^2 f}{\partial x_j\,\partial x_{j+1}}+\cdots+x_n\frac{\partial^2 f}{\partial x_j\,\partial x_n} = m\frac{\partial f}{\partial x_j}\tag{10.95}$$

upon application of the classical theorems on differentiation of a sum and of a product (as will be derived next), and knowing that ∂xi/∂xj = 0 when i ≠ j and ∂xi/∂xj = 1 when i = j. The term ∂f/∂xj may then be moved to the right-hand side to get

$$x_1\frac{\partial}{\partial x_1}\left\{\frac{\partial f}{\partial x_j}\right\}+x_2\frac{\partial}{\partial x_2}\left\{\frac{\partial f}{\partial x_j}\right\}+\cdots+x_n\frac{\partial}{\partial x_n}\left\{\frac{\partial f}{\partial x_j}\right\} = (m-1)\frac{\partial f}{\partial x_j},\tag{10.96}$$
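Euler's theorem, Eq. (10.85), and the degree-(m − 1) homogeneity of ∂f/∂xj implied by Eq. (10.96), can be illustrated with an arbitrary homogeneous function of degree m = 3 (the specific f and test point are assumptions of this sketch):

```python
def f(x, y):
    return x**2 * y + x * y**2   # homogeneous of degree m = 3

def fx(x, y):
    return 2 * x * y + y**2      # df/dx, computed analytically

def fy(x, y):
    return x**2 + 2 * x * y      # df/dy, computed analytically

x0, y0, m, k = 1.5, 2.5, 3, 2.0
euler_lhs = x0 * fx(x0, y0) + y0 * fy(x0, y0)   # left-hand side of Eq. (10.85)
euler_rhs = m * f(x0, y0)
# per Eq. (10.96), df/dx is itself homogeneous of degree m - 1 = 2
scaled = fx(k * x0, k * y0)
expected = k ** (m - 1) * fx(x0, y0)
print(euler_lhs, euler_rhs, scaled, expected)
```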

together with exchange in the order of differentiation of the second-order cross derivatives, as allowed by Eq. (10.65). Inspection of Eq. (10.96) vis-à-vis Eq. (10.85) indicates that Euler's theorem applies to ∂f/∂xj irrespective of xj (with f replaced by ∂f/∂xj), and with m − 1 serving as degree of homogeneity.

10.2.1.3 Directional Derivatives

Should f{x,y} denote a bivariate function of independent variables x and y, the directional derivative along vector v – defined by angle θ with the x-axis – at a point of coordinates (x0,y0) looks like

$$\left.\frac{df\{x,y\}}{dv}\right|_{x_0,y_0} \equiv \lim_{l \to 0}\frac{f\{x,y\}|_{(x_0,y_0)+l\mathbf{v}}-f\{x,y\}|_{x_0,y_0}}{l};\tag{10.97}$$

here v is described by

$$\mathbf{v} \equiv \mathbf{j}_x\cos\theta+\mathbf{j}_y\sin\theta,\tag{10.98}$$

and its direction is simply denoted as v. One may redo Eq. (10.97) as

$$\left.\frac{df\{x,y\}}{dv}\right|_{x_0,y_0} = \lim_{l \to 0}\frac{f\{x,y\}|_{x_0+l\cos\theta,\,y_0+l\sin\theta}-f\{x,y\}|_{x_0,y_0}}{l},\tag{10.99}$$


or else

$$\left.\frac{df\{x,y\}}{dv}\right|_{x_0,y_0} = \lim_{h \to 0}\frac{f\{x,y\}|_{x_0+h,y_0}-f\{x,y\}|_{x_0,y_0}}{h}\cos\theta+\lim_{k \to 0}\frac{f\{x,y\}|_{x_0,y_0+k}-f\{x,y\}|_{x_0,y_0}}{k}\sin\theta\tag{10.100}$$

– based on the normal projections of the variation of f{x,y}, accompanying l along v, onto the horizontal and vertical axes, via h ≡ l cos θ and k ≡ l sin θ, respectively; in view of Eqs. (10.57) and (10.58), one obtains

$$\left.\frac{df\{x,y\}}{dv}\right|_{x_0,y_0} = \left.\frac{\partial f\{x,y\}}{\partial x}\right|_{x_0,y_0}\cos\theta+\left.\frac{\partial f\{x,y\}}{\partial y}\right|_{x_0,y_0}\sin\theta\tag{10.101}$$
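Equation (10.101) can be checked against the direct limit of Eq. (10.99); a brief sketch, where the test function, point, and angle are arbitrary choices:

```python
import math

def f(x, y):
    return x**2 + 3 * x * y   # arbitrary test function

x0, y0, theta = 1.0, 2.0, math.pi / 6
l = 1e-6

# Direct definition, Eq. (10.99): step of length l along (cos theta, sin theta)
direct = (f(x0 + l * math.cos(theta), y0 + l * math.sin(theta)) - f(x0, y0)) / l

# Eq. (10.101): combine the two partial derivatives (known analytically here)
fx = 2 * x0 + 3 * y0
fy = 3 * x0
formula = fx * math.cos(theta) + fy * math.sin(theta)
print(direct, formula)
```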

from Eq. (10.100). Inspection of Eq. (10.101) indicates that the partial derivatives ∂f/∂x and ∂f/∂y are but particular cases of the directional derivative – attained when θ equals 0 or π/2 rad, respectively.

10.2.2 Rules of Differentiation of Univariate Functions

The rule of differentiation of a sum of two functions, f{x} and g{x}, may proceed via direct application of Eq. (10.21), viz.

$$\left.\frac{d(f\{x\}+g\{x\})}{dx}\right|_{x_0} \equiv \lim_{h \to 0}\frac{(f\{x\}+g\{x\})|_{x_0+h}-(f\{x\}+g\{x\})|_{x_0}}{h} = \lim_{h \to 0}\frac{f\{x\}|_{x_0+h}+g\{x\}|_{x_0+h}-f\{x\}|_{x_0}-g\{x\}|_{x_0}}{h},\tag{10.102}$$

which may be rearranged as

$$\left.\frac{d(f\{x\}+g\{x\})}{dx}\right|_{x_0} = \lim_{h \to 0}\frac{\left(f\{x\}|_{x_0+h}-f\{x\}|_{x_0}\right)+\left(g\{x\}|_{x_0+h}-g\{x\}|_{x_0}\right)}{h};\tag{10.103}$$

after splitting the fraction, Eq. (10.103) becomes

$$\left.\frac{d(f\{x\}+g\{x\})}{dx}\right|_{x_0} = \lim_{h \to 0}\frac{f\{x\}|_{x_0+h}-f\{x\}|_{x_0}}{h}+\lim_{h \to 0}\frac{g\{x\}|_{x_0+h}-g\{x\}|_{x_0}}{h}\tag{10.104}$$

with the help of Eq. (9.73). Based on Eq. (10.21), one may reformulate Eq. (10.104) as

$$\left.\frac{d(f\{x\}+g\{x\})}{dx}\right|_{x_0} = \left.\frac{df\{x\}}{dx}\right|_{x_0}+\left.\frac{dg\{x\}}{dx}\right|_{x_0}\tag{10.105}$$

– and thus, in general,

$$\frac{d(f\{x\}+g\{x\})}{dx} = \frac{df\{x\}}{dx}+\frac{dg\{x\}}{dx},\tag{10.106}$$


applicable to every value of x; Eq. (10.106) may be generalized to

$$\frac{d}{dx}\sum_{i=1}^{N}f_i\{x\} = \sum_{i=1}^{N}\frac{df_i\{x\}}{dx},\tag{10.107}$$

following sequential application to every subset of two terms, i.e. f1{x} and Σ_{i=2}^{N} fi{x}, then f2{x} and Σ_{i=3}^{N} fi{x}, and so on. A direct application of this rule is toward calculation of the derivative of the hyperbolic sine, viz.

$$\frac{d\sinh x}{dx} = \frac{d}{dx}\left\{\frac{e^{x}-e^{-x}}{2}\right\} = \frac{d}{dx}\left\{\frac{e^{x}}{2}\right\}-\frac{d}{dx}\left\{\frac{e^{-x}}{2}\right\} = \frac{1}{2}\frac{de^{x}}{dx}-\frac{1}{2}\frac{de^{-x}}{dx},\tag{10.108}$$

based on Eqs. (2.472) and (10.106); as will be seen below, the derivative of a plain exponential coincides therewith and, together with the chain differentiation rule and the rule of differentiation of a multiple of its argument (all to be addressed next), produces

$$\frac{d\sinh x}{dx} = \frac{e^{x}+e^{-x}}{2} \equiv \cosh x\tag{10.109}$$

also with the aid of Eq. (2.473) – as outlined in Table 10.2. One similarly obtains

$$\frac{d\cosh x}{dx} = \frac{d}{dx}\left\{\frac{e^{x}+e^{-x}}{2}\right\} = \frac{d}{dx}\left\{\frac{e^{x}}{2}\right\}+\frac{d}{dx}\left\{\frac{e^{-x}}{2}\right\} = \frac{1}{2}\frac{de^{x}}{dx}+\frac{1}{2}\frac{de^{-x}}{dx}\tag{10.110}$$

after differentiating both sides of Eq. (2.473) with regard to x and recalling Eq. (10.106) – which degenerates to

$$\frac{d\cosh x}{dx} = \frac{e^{x}-e^{-x}}{2} \equiv \sinh x\tag{10.111}$$

with the aid of Eq. (2.472); this result is again tabulated in Table 10.2.

Table 10.2 List of derivatives obtained via the theorem on the sum of functions.

f{x} | df{x}/dx
sinh x | cosh x
cosh x | sinh x

A proof of the rule of differentiation of a product of two functions, f{x} and g{x}, may as well proceed by applying Eq. (10.21) as

$$\left.\frac{d(f\{x\}g\{x\})}{dx}\right|_{x_0} \equiv \lim_{h \to 0}\frac{(f\{x\}g\{x\})|_{x_0+h}-(f\{x\}g\{x\})|_{x_0}}{h},\tag{10.112}$$


which may be algebraically rearranged as

$$\left.\frac{d(f\{x\}g\{x\})}{dx}\right|_{x_0} = \lim_{h \to 0}\frac{\left(f\{x\}|_{x_0}+f\{x\}|_{x_0+h}-f\{x\}|_{x_0}\right)\left(g\{x\}|_{x_0}+g\{x\}|_{x_0+h}-g\{x\}|_{x_0}\right)-f\{x\}|_{x_0}\,g\{x\}|_{x_0}}{h}\tag{10.113}$$

via addition and subtraction of f{x}|x0 within the first factor, and of g{x}|x0 likewise within the second factor; the distributive property of the product of scalars then supports transformation to

$$\left.\frac{d(f\{x\}g\{x\})}{dx}\right|_{x_0} = \lim_{h \to 0}\frac{\begin{aligned}&f\{x\}|_{x_0}\,g\{x\}|_{x_0}+f\{x\}|_{x_0}\left(g\{x\}|_{x_0+h}-g\{x\}|_{x_0}\right)+g\{x\}|_{x_0}\left(f\{x\}|_{x_0+h}-f\{x\}|_{x_0}\right)\\&\quad+\left(f\{x\}|_{x_0+h}-f\{x\}|_{x_0}\right)\left(g\{x\}|_{x_0+h}-g\{x\}|_{x_0}\right)-f\{x\}|_{x_0}\,g\{x\}|_{x_0}\end{aligned}}{h}\tag{10.114}$$

or else

$$\left.\frac{d(f\{x\}g\{x\})}{dx}\right|_{x_0} = \lim_{h \to 0}\left\{g\{x\}|_{x_0}\frac{f\{x\}|_{x_0+h}-f\{x\}|_{x_0}}{h}+f\{x\}|_{x_0}\frac{g\{x\}|_{x_0+h}-g\{x\}|_{x_0}}{h}+\left(f\{x\}|_{x_0+h}-f\{x\}|_{x_0}\right)\frac{g\{x\}|_{x_0+h}-g\{x\}|_{x_0}}{h}\right\}\tag{10.115}$$

upon cancelling f{x}|x0 g{x}|x0 with its negative, and factoring h in. Sequential application of Eq. (9.73), and of either Eq. (9.87) or Eq. (9.92), justifies transformation of Eq. (10.115) to

$$\left.\frac{d(f\{x\}g\{x\})}{dx}\right|_{x_0} = g\{x\}|_{x_0}\lim_{h \to 0}\frac{f\{x\}|_{x_0+h}-f\{x\}|_{x_0}}{h}+f\{x\}|_{x_0}\lim_{h \to 0}\frac{g\{x\}|_{x_0+h}-g\{x\}|_{x_0}}{h}+\lim_{h \to 0}\left(f\{x\}|_{x_0+h}-f\{x\}|_{x_0}\right)\lim_{h \to 0}\frac{g\{x\}|_{x_0+h}-g\{x\}|_{x_0}}{h}\tag{10.116}$$

– which breaks down to

$$\left.\frac{d(f\{x\}g\{x\})}{dx}\right|_{x_0} = \left.\frac{df\{x\}}{dx}\right|_{x_0}g\{x\}|_{x_0}+f\{x\}|_{x_0}\left.\frac{dg\{x\}}{dx}\right|_{x_0}+\lim_{h \to 0}\left(f\{x\}|_{x_0+h}-f\{x\}|_{x_0}\right)\left.\frac{dg\{x\}}{dx}\right|_{x_0}\tag{10.117}$$

= x0

df x dx

g x x0

x0

+f x

dg x x0 dx

10 118 x0

Eventual extension of Eq. (10.118) to every x0 in the real domain of f {x} or g{x} unfolds d f x g x dx

=

df x dg x g x +f x dx dx

10 119

– which is distinct from the simpler form of Eq. (9.87) pertaining to the limit of a product. Equation (10.119) degenerates to d ag x dx

=a

dg x dx

10 120

whenever f {x} = a, with a denoting a constant; in fact, Eq. (10.21) has it that da dx

a = lim x0

x0 + h

−a

x0

h

h 0

= lim h 0

a− a 0 = lim = lim 0 = 0 h 0h h 0 h

10 121

arises as generalization of Eq. (10.34) – where a − a becomes nil irrespective of the actual value taken by h. This justifies why 0/0 can be immediately circumvented in this case, since a constant is a continuous function as per Eq. (9.145). Combination of Eqs. (10.106) and (10.120) produces, in general, d N a i fi x = dx i = 1

N

ai i=1

df i x dx

10 122

– thus serving as rule for differentiation of a linear combination of functions. If a = N, then Eq. (10.120) becomes d Nf x dx

=N

df x ; dx

10 123

311

312

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

Eq. (10.123) may also be obtained from Eq. (10.107) – after setting f1 = f2 = =fN = f, = dfN/dx = df/dx – while recalling the and consequently df1/dx = df2/dx = definition of multiplication as summation of identical terms. On the other hand, a product of N functions yields d N fi x = dx i = 1

N

i=1

df i x dx

N

10 124

fj x j=1 j i

as generalization of Eq. (10.119) – where factor functions are differentiated one at a time; if such functions are identical to f, one accordingly gets d N f x dx i = 1

d N f x = dx

N

i=1

df x dx

N

N

f x = j=1 j i

df x N −1 f x = f N −1 x dx i=1

N

df x dx i=1 10 125

after factoring f N−1 out and recalling the definition of power as product of identical factors – which is equivalent to df N x df x = N f N −1 x , dx dx

10 126

written with the aid of the definition of product itself, see also Table 10.3. Another derivative of interest pertains to the logarithm of a function in a base other than Neper’s number, to be formulated as d loga x d = loga e ln x dx dx

10 127

supported by the rule of change in base of a logarithm labeled as Eq. (2.31); application of Eq. (10.120) gives rise to d loga x d ln x = loga e , dx dx

10 128

Table 10.3 List of derivatives obtained via the theorem on product of functions. f {x}

f N{x} loga x

df x dx

N f N −1 x loga e x 1 x ln a

df x dx

Differentials, Derivatives, and Partial Derivatives

which becomes d loga x loga e = dx x

10 129

after combination with Eq. (10.40) – see also Table 10.3. The very same Eq. (2.31) allows one to write loga e ln a

loga e loge a = loge e

ln e = 1

10 130

after setting x ≡ b ≡ e, and consequently loga e =

1 ; ln a

10 131

hence, Eq. (10.129) often appears as d loga x 1 = , dx x ln a

10 132

see last entry to Table 10.3. Proof of the rule of differentiation of a ratio of two functions, f{x} and g{x}, may as well proceed via direct application of Eq. (10.21), according to

$$\left.\frac{d}{dx}\frac{f\{x\}}{g\{x\}}\right|_{x_0} \equiv \lim_{h\to 0}\frac{\left.\frac{f\{x\}}{g\{x\}}\right|_{x_0+h}-\left.\frac{f\{x\}}{g\{x\}}\right|_{x_0}}{h} = \lim_{h\to 0}\frac{\frac{f\{x\}|_{x_0+h}}{g\{x\}|_{x_0+h}}-\frac{f\{x\}|_{x_0}}{g\{x\}|_{x_0}}}{h}; \tag{10.133}$$

after lumping the two fractions in the numerator, one obtains

$$\left.\frac{d}{dx}\frac{f\{x\}}{g\{x\}}\right|_{x_0} = \lim_{h\to 0}\frac{f\{x\}|_{x_0+h}\,g\{x\}|_{x_0}-f\{x\}|_{x_0}\,g\{x\}|_{x_0+h}}{h\,g\{x\}|_{x_0}\,g\{x\}|_{x_0+h}}, \tag{10.134}$$

where addition and subtraction of $f\{x\}|_{x_0}\,g\{x\}|_{x_0}$ to the numerator unfolds

$$\left.\frac{d}{dx}\frac{f\{x\}}{g\{x\}}\right|_{x_0} = \lim_{h\to 0}\frac{f\{x\}|_{x_0+h}\,g\{x\}|_{x_0}-f\{x\}|_{x_0}\,g\{x\}|_{x_0}-f\{x\}|_{x_0}\,g\{x\}|_{x_0+h}+f\{x\}|_{x_0}\,g\{x\}|_{x_0}}{h\,g\{x\}|_{x_0}\,g\{x\}|_{x_0+h}}. \tag{10.135}$$

Upon factoring out of $g\{x\}|_{x_0}$ and $f\{x\}|_{x_0}$ (as appropriate) in the numerator, Eq. (10.135) becomes

$$\left.\frac{d}{dx}\frac{f\{x\}}{g\{x\}}\right|_{x_0} = \lim_{h\to 0}\frac{g\{x\}|_{x_0}\left(f\{x\}|_{x_0+h}-f\{x\}|_{x_0}\right)-f\{x\}|_{x_0}\left(g\{x\}|_{x_0+h}-g\{x\}|_{x_0}\right)}{h\,g\{x\}|_{x_0}\,g\{x\}|_{x_0+h}} = \lim_{h\to 0}\frac{g\{x\}|_{x_0}\frac{f\{x\}|_{x_0+h}-f\{x\}|_{x_0}}{h}-f\{x\}|_{x_0}\frac{g\{x\}|_{x_0+h}-g\{x\}|_{x_0}}{h}}{g\{x\}|_{x_0}\,g\{x\}|_{x_0+h}}; \tag{10.136}$$

Eqs. (9.86), (9.87), and (9.30) may then be invoked in sequence to write

$$\left.\frac{d}{dx}\frac{f\{x\}}{g\{x\}}\right|_{x_0} = \frac{g\{x\}|_{x_0}\lim_{h\to 0}\frac{f\{x\}|_{x_0+h}-f\{x\}|_{x_0}}{h}-f\{x\}|_{x_0}\lim_{h\to 0}\frac{g\{x\}|_{x_0+h}-g\{x\}|_{x_0}}{h}}{g\{x\}|_{x_0}\lim_{h\to 0}g\{x\}|_{x_0+h}} = \frac{g\{x\}|_{x_0}\left.\frac{df\{x\}}{dx}\right|_{x_0}-f\{x\}|_{x_0}\left.\frac{dg\{x\}}{dx}\right|_{x_0}}{g^{2}\{x\}|_{x_0}}, \tag{10.137}$$

where Eq. (10.21) was eventually taken advantage of, while common factors were lumped. For every x₀ belonging to the domain of f{x}, Eq. (10.137) will appear as

$$\frac{d}{dx}\frac{f\{x\}}{g\{x\}} = \frac{g\{x\}\frac{df\{x\}}{dx}-f\{x\}\frac{dg\{x\}}{dx}}{g^{2}\{x\}} \tag{10.138}$$

– again rather different, in functional form, from Eq. (9.94); if f{x} ≡ 1, then Eq. (10.138) reduces to

$$\frac{d}{dx}\frac{1}{g\{x\}} = -\frac{\frac{dg\{x\}}{dx}}{g^{2}\{x\}}, \tag{10.139}$$

Table 10.4 List of derivatives obtained via the theorem of quotient of functions.

  f{x}        df{x}/dx
  tan x       sec² x
  cotan x     − cosec² x
  cosec x     − cotan x cosec x
  sec x       tan x sec x
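Eqs. (10.138) and (10.139) can be verified against central finite differences; the functions f{x} = sin x and g{x} = 2 + cos x below are hypothetical choices added purely for illustration (g was picked so as to stay away from zero).

```python
import math

def f(x): return math.sin(x)
def g(x): return 2.0 + math.cos(x)   # bounded away from zero

x0, h = 1.2, 1e-6

# Central finite difference of f/g.
numeric = (f(x0 + h) / g(x0 + h) - f(x0 - h) / g(x0 - h)) / (2 * h)

# Eq. (10.138): (g df/dx - f dg/dx) / g^2.
df, dg = math.cos(x0), -math.sin(x0)
analytic = (g(x0) * df - f(x0) * dg) / g(x0) ** 2

# Eq. (10.139) as the particular case f = 1.
numeric_rec = (1.0 / g(x0 + h) - 1.0 / g(x0 - h)) / (2 * h)
analytic_rec = -dg / g(x0) ** 2

assert abs(numeric - analytic) < 1e-8
assert abs(numeric_rec - analytic_rec) < 1e-8
```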

due to Eq. (10.34) – known as the rule of differentiation of the (arithmetic) reciprocal. One example of application of Eq. (10.138) encompasses tangent, viz.

$$\frac{d\tan x}{dx} = \frac{d}{dx}\frac{\sin x}{\cos x} = \frac{\frac{d\sin x}{dx}\cos x-\sin x\frac{d\cos x}{dx}}{\cos^{2}x} \tag{10.140}$$

as per Eq. (2.299), where Eqs. (10.44) and (10.48) allow transformation to

$$\frac{d\tan x}{dx} = \frac{\cos x\cos x-\sin x\,(-\sin x)}{\cos^{2}x}; \tag{10.141}$$

after straightforward algebraic rearrangement, Eq. (10.141) becomes

$$\frac{d\tan x}{dx} = \frac{\cos^{2}x+\sin^{2}x}{\cos^{2}x} \tag{10.142}$$

– so the fundamental theorem of trigonometry, as per Eq. (2.442), may be retrieved to get

$$\frac{d\tan x}{dx} = \frac{1}{\cos^{2}x} \equiv \sec^{2}x \tag{10.143}$$

also with the aid of Eq. (2.309), and as included in Table 10.4. A related example encompasses cotangent, according to

$$\frac{d\operatorname{cotan}x}{dx} = \frac{d}{dx}\frac{\cos x}{\sin x} = \frac{\frac{d\cos x}{dx}\sin x-\cos x\frac{d\sin x}{dx}}{\sin^{2}x}, \tag{10.144}$$

with the aid of Eqs. (2.304) and (10.138) – where replacement of the derivatives of sin x and cos x, as conveyed by Eqs. (10.44) and (10.48), respectively, generates

$$\frac{d\operatorname{cotan}x}{dx} = \frac{-\sin x\sin x-\cos x\cos x}{\sin^{2}x}; \tag{10.145}$$

condensation of factors and factoring out of −1 unfold

$$\frac{d\operatorname{cotan}x}{dx} = -\frac{\sin^{2}x+\cos^{2}x}{\sin^{2}x}, \tag{10.146}$$

so one may again resort to the fundamental theorem of trigonometry labeled as Eq. (2.442) to get

$$\frac{d\operatorname{cotan}x}{dx} = -\frac{1}{\sin^{2}x} \equiv -\operatorname{cosec}^{2}x \tag{10.147}$$

– as per the second entry in Table 10.4.
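The first two entries of Table 10.4, as just derived, may be spot-checked numerically; the test point x₀ = 0.6 is an arbitrary choice added for illustration.

```python
import math

x0, h = 0.6, 1e-6

# Eq. (10.143): d tan x/dx = sec^2 x = 1/cos^2 x.
num_tan = (math.tan(x0 + h) - math.tan(x0 - h)) / (2 * h)
assert abs(num_tan - 1.0 / math.cos(x0) ** 2) < 1e-7

# Eq. (10.147): d cotan x/dx = -cosec^2 x = -1/sin^2 x.
cotan = lambda t: math.cos(t) / math.sin(t)
num_cot = (cotan(x0 + h) - cotan(x0 - h)) / (2 * h)
assert abs(num_cot + 1.0 / math.sin(x0) ** 2) < 1e-7
```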

An application resorting to Eq. (10.139) encompasses the derivative of cosecant, i.e.

$$\frac{d\operatorname{cosec}x}{dx} = \frac{d}{dx}\frac{1}{\sin x} = -\frac{\frac{d\sin x}{dx}}{\sin^{2}x}, \tag{10.148}$$

in view of Eq. (2.314); combination with Eq. (10.44) gives rise to

$$\frac{d\operatorname{cosec}x}{dx} = -\frac{\cos x}{\sin^{2}x} = -\frac{\cos x}{\sin x}\,\frac{1}{\sin x}. \tag{10.149}$$

Definition of cotan x and cosec x as put forward by Eqs. (2.304) and (2.314), respectively, allows final transformation of Eq. (10.149) to

$$\frac{d\operatorname{cosec}x}{dx} = -\operatorname{cotan}x\operatorname{cosec}x, \tag{10.150}$$

as depicted also in Table 10.4. A final example pertains to secant, where application of Eq. (10.139), after recalling Eq. (2.309), has it that

$$\frac{d\sec x}{dx} = \frac{d}{dx}\frac{1}{\cos x} = -\frac{\frac{d\cos x}{dx}}{\cos^{2}x} \tag{10.151}$$

– so Eq. (10.48) supports transformation to

$$\frac{d\sec x}{dx} = -\frac{-\sin x}{\cos^{2}x} = \frac{\sin x}{\cos x}\,\frac{1}{\cos x}; \tag{10.152}$$

in view of the definitions of tan x and sec x as conveyed by Eqs. (2.299) and (2.309), respectively, one may redo Eq. (10.152) to

$$\frac{d\sec x}{dx} = \tan x\sec x \tag{10.153}$$

– as apparent in Table 10.4 as well.

The approach to differentiate an inverse function, f⁻¹{x}, may as well proceed by applying Eq. (10.21) as

$$\left.\frac{d\,f^{-1}\{x\}}{dx}\right|_{x_0} \equiv \lim_{h\to 0}\frac{f^{-1}\{x\}|_{x_0+h}-f^{-1}\{x\}|_{x_0}}{h}; \tag{10.154}$$

since $f^{-1}\{x\}|_{x_0+h}-f^{-1}\{x\}|_{x_0}\to 0$ as $h\to 0$, one may reformulate Eq. (10.154) to read

$$\left.\frac{d\,f^{-1}\{x\}}{dx}\right|_{x_0} = \lim_{h\to 0}\frac{1}{\frac{h}{f^{-1}\{x\}|_{x_0+h}-f^{-1}\{x\}|_{x_0}}} \tag{10.155}$$

after taking the reciprocal of the reciprocal of the right-hand side. Equation (10.155) may, in turn, be rewritten as

$$\left.\frac{d\,f^{-1}\{x\}}{dx}\right|_{x_0} = \frac{1}{\displaystyle\lim_{f^{-1}\{x\}|_{x_0+h}-f^{-1}\{x\}|_{x_0}\to 0}\frac{x_0+h-x_0}{f^{-1}\{x\}|_{x_0+h}-f^{-1}\{x\}|_{x_0}}} \tag{10.156}$$

in view of Eqs. (9.30) and (9.94), after adding and subtracting x₀; if one sets

$$y \equiv f^{-1}\{x\}, \tag{10.157}$$

then application of operator f to both sides yields

$$f\{y\} = f\{f^{-1}\{x\}\} = x, \tag{10.158}$$

– thus implying

$$x|_{y=f^{-1}\{x\}|_{x_0+h}} = x_0+h = f\{y\}|_{y=y_0+k}, \tag{10.159}$$

on the hypothesis that x₀ and h in the x domain map into y₀ and k in the y domain; consequently,

$$x|_{y=f^{-1}\{x\}|_{x_0}} = x_0 = f\{y\}|_{y_0} \tag{10.160}$$

after setting k = 0 (and thus h = 0) in Eq. (10.159). Owing to Eqs. (10.157), (10.159), and (10.160), one may rewrite Eq. (10.156) as

$$\left.\frac{d\,f^{-1}\{x\}}{dx}\right|_{x_0} = \frac{1}{\displaystyle\lim_{y_0+k-y_0\to 0}\frac{f\{y\}|_{y_0+k}-f\{y\}|_{y_0}}{y_0+k-y_0}} \tag{10.161}$$

that breaks down to

$$\left.\frac{d\,f^{-1}\{x\}}{dx}\right|_{x_0} = \frac{1}{\displaystyle\lim_{k\to 0}\frac{f\{y\}|_{y_0+k}-f\{y\}|_{y_0}}{k}} \tag{10.162}$$

after cancelling out y₀ with its negative. Because of the definition conveyed by Eq. (10.21), one may redo Eq. (10.162) to

$$\left.\frac{d\,f^{-1}\{x\}}{dx}\right|_{x_0} = \frac{1}{\left.\frac{d\,f\{y\}}{dy}\right|_{y_0}}; \tag{10.163}$$

upon extension to the whole real domain, Eq. (10.163) may finally appear as

$$\frac{d\,f^{-1}\{y\}}{dy} = \frac{1}{\frac{d\,f\{x\}}{dx}} \tag{10.164}$$

– where dummy variables x and y were exchanged for consistency of notation. It should be stressed that the derivatives of f and f⁻¹ must be calculated with regard to the corresponding independent variable, and taken at the corresponding coordinate.

One example of application of Eq. (10.164) encompasses the derivative of the exponential function, viz.

$$\frac{d\,e^{y}}{dy} = \frac{1}{\frac{d\ln x}{dx}}, \tag{10.165}$$

Table 10.5 List of derivatives obtained via theorem of inverse function.

  f{x}         df{x}/dx
  e^x          e^x
  sin⁻¹ x      1/√(1 − x²)
  cos⁻¹ x      −1/√(1 − x²)
  tan⁻¹ x      1/(1 + x²)
  cotan⁻¹ x    −1/(1 + x²)

or else

$$\frac{d\,e^{y}}{dy} = \frac{1}{\frac{1}{x}} = x \tag{10.166}$$

in view of Eq. (10.40); should y be defined as

$$y \equiv \ln x, \tag{10.167}$$

one immediately realizes that

$$e^{y} = e^{\ln x} = x \tag{10.168}$$

after taking exponentials of both sides – so insertion in Eq. (10.166) produces

$$\frac{d\,e^{y}}{dy} = e^{y} \tag{10.169}$$

as tabulated in Table 10.5 (after swapping y for x) – the exponential being the only function that coincides with its own derivative. Another example pertains to the inverse sine, viz.

$$\frac{d\sin^{-1}y}{dy} = \frac{1}{\frac{d\sin x}{dx}}, \tag{10.170}$$

upon direct use of Eq. (10.164); recalling Eq. (10.44), one may transform Eq. (10.170) to

$$\frac{d\sin^{-1}y}{dy} = \frac{1}{\cos x}. \tag{10.171}$$

If one sets

$$x \equiv \sin^{-1}y, \tag{10.172}$$

then application of sine to both sides unfolds

$$\sin x = \sin\{\sin^{-1}y\} = y, \tag{10.173}$$

as per composition of inverse functions with each other. Due to the fundamental theorem of trigonometry, see Eq. (2.442), one may write

$$\cos^{2}x = 1-\sin^{2}x \tag{10.174}$$

– or, after taking square roots of both sides,

$$\cos x = \sqrt{1-\sin^{2}x}; \tag{10.175}$$

insertion of Eq. (10.175) transforms Eq. (10.171) to

$$\frac{d\sin^{-1}y}{dy} = \frac{1}{\sqrt{1-\sin^{2}x}}, \tag{10.176}$$

or else

$$\frac{d\sin^{-1}y}{dy} = \frac{1}{\sqrt{1-y^{2}}}, \tag{10.177}$$

at the expense of Eq. (10.173) – as depicted also in Table 10.5, once y is relabeled as x. By the same token, the inverse cosine entails

$$\frac{d\cos^{-1}y}{dy} = \frac{1}{\frac{d\cos x}{dx}} \tag{10.178}$$

based again on Eq. (10.164), where Eq. (10.48) may be invoked to get

$$\frac{d\cos^{-1}y}{dy} = \frac{1}{-\sin x}; \tag{10.179}$$

upon setting

$$x \equiv \cos^{-1}y, \tag{10.180}$$

one may take cosine of both sides to obtain

$$\cos x = \cos\{\cos^{-1}y\} = y. \tag{10.181}$$

After rearranging Eq. (10.174) to

$$\sin^{2}x = 1-\cos^{2}x, \tag{10.182}$$

square roots can be taken of both sides as

$$\sin x = \sqrt{1-\cos^{2}x}; \tag{10.183}$$

after Eq. (10.183) is borne in mind, Eq. (10.179) can be rewritten as

$$\frac{d\cos^{-1}y}{dy} = -\frac{1}{\sqrt{1-\cos^{2}x}}, \tag{10.184}$$

which is equivalent to

$$\frac{d\cos^{-1}y}{dy} = -\frac{1}{\sqrt{1-y^{2}}} \tag{10.185}$$

after inserting Eq. (10.181) – as tabulated in Table 10.5, upon replacement of y by x.
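Eqs. (10.177) and (10.185) can be confirmed numerically against Python's built-in inverse trigonometric functions; the test point y₀ = 0.4 in ]−1,1[ is an arbitrary choice added for illustration.

```python
import math

y0, h = 0.4, 1e-6

# Eq. (10.177): d sin^{-1} y/dy = 1/sqrt(1 - y^2).
num_asin = (math.asin(y0 + h) - math.asin(y0 - h)) / (2 * h)
assert abs(num_asin - 1.0 / math.sqrt(1.0 - y0 ** 2)) < 1e-7

# Eq. (10.185): d cos^{-1} y/dy = -1/sqrt(1 - y^2).
num_acos = (math.acos(y0 + h) - math.acos(y0 - h)) / (2 * h)
assert abs(num_acos + 1.0 / math.sqrt(1.0 - y0 ** 2)) < 1e-7
```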

Another possibility pertains to the inverse tangent, viz.

$$\frac{d\tan^{-1}y}{dy} = \frac{1}{\frac{d\tan x}{dx}}, \tag{10.186}$$

obtained at the expense of Eq. (10.164) once more; this is equivalent to writing

$$\frac{d\tan^{-1}y}{dy} = \frac{1}{\frac{1}{\cos^{2}x}} = \cos^{2}x, \tag{10.187}$$

after combination with Eq. (10.143). If one postulates that

$$x \equiv \tan^{-1}y, \tag{10.188}$$

then tangent may be applied to both sides to produce

$$\tan x = \tan\{\tan^{-1}y\} = y, \tag{10.189}$$

whereas Eqs. (2.309) and (2.471) may be revisited as

$$\cos^{2}x = \frac{1}{1+\tan^{2}x}; \tag{10.190}$$

insertion of Eq. (10.189) justifies transformation of Eq. (10.190) to

$$\cos^{2}x = \frac{1}{1+y^{2}}, \tag{10.191}$$

so Eq. (10.187) eventually becomes

$$\frac{d\tan^{-1}y}{dy} = \frac{1}{1+y^{2}} \tag{10.192}$$

– as apparent in Table 10.5, with dummy variables y and x being equivalent to each other. Finally, consider the similar situation of the inverse cotangent, i.e.

$$\frac{d\operatorname{cotan}^{-1}y}{dy} = \frac{1}{\frac{d\operatorname{cotan}x}{dx}}, \tag{10.193}$$

at the expense of Eq. (10.164), where Eq. (10.147) can be invoked to write

$$\frac{d\operatorname{cotan}^{-1}y}{dy} = \frac{1}{-\frac{1}{\sin^{2}x}} = -\sin^{2}x; \tag{10.194}$$

should one define x as

$$x \equiv \operatorname{cotan}^{-1}y, \tag{10.195}$$

cotangent may then be taken of both sides to obtain

$$\operatorname{cotan}x = \operatorname{cotan}\{\operatorname{cotan}^{-1}y\} = y. \tag{10.196}$$

The alternative formulation of the fundamental theorem of trigonometry as per Eqs. (2.314) and (2.469) supports

$$\sin^{2}x = \frac{1}{1+\operatorname{cotan}^{2}x}; \tag{10.197}$$

insertion of Eq. (10.197) converts Eq. (10.194) to

$$\frac{d\operatorname{cotan}^{-1}y}{dy} = -\frac{1}{1+\operatorname{cotan}^{2}x}, \tag{10.198}$$

where Eq. (10.196) permits, in turn, transformation to

$$\frac{d\operatorname{cotan}^{-1}y}{dy} = -\frac{1}{1+y^{2}} \tag{10.199}$$

– as included in Table 10.5, after exchanging y for x.

A proof for the rule of differentiation of a composite function, f{g{x}}, can now be constructed via application of Eq. (10.21) as

$$\left.\frac{d\,f\{g\{x\}\}}{dx}\right|_{x_0} \equiv \lim_{h\to 0}\frac{f\{g\{x\}\}|_{x_0+h}-f\{g\{x\}\}|_{x_0}}{h}, \tag{10.200}$$

which is equivalent to

$$\left.\frac{d\,f\{g\{x\}\}}{dx}\right|_{x_0} = \lim_{h\to 0}\frac{f\{g\{x\}|_{x_0+h}\}-f\{g\{x\}|_{x_0}\}}{h}; \tag{10.201}$$

multiplication and division by $g\{x\}|_{x_0+h}-g\{x\}|_{x_0}$ transforms Eq. (10.201) to

$$\left.\frac{d\,f\{g\{x\}\}}{dx}\right|_{x_0} = \lim_{h\to 0}\frac{f\{g\{x\}|_{x_0+h}\}-f\{g\{x\}|_{x_0}\}}{g\{x\}|_{x_0+h}-g\{x\}|_{x_0}}\;\lim_{h\to 0}\frac{g\{x\}|_{x_0+h}-g\{x\}|_{x_0}}{h}, \tag{10.202}$$

also at the expense of Eq. (9.87). Since $g\{x\}|_{x_0+h}-g\{x\}|_{x_0}\to 0$ when $h\to 0$, one may redo Eq. (10.202) as

$$\left.\frac{d\,f\{g\{x\}\}}{dx}\right|_{x_0} = \lim_{g\{x\}|_{x_0+h}-g\{x\}|_{x_0}\to 0}\frac{f\{g\{x\}|_{x_0+h}\}-f\{g\{x\}|_{x_0}\}}{g\{x\}|_{x_0+h}-g\{x\}|_{x_0}}\;\lim_{h\to 0}\frac{g\{x\}|_{x_0+h}-g\{x\}|_{x_0}}{h}; \tag{10.203}$$

owing again to the conceptual definition of derivative conveyed by Eq. (10.21), one may rewrite Eq. (10.203) as

$$\left.\frac{d\,f\{g\{x\}\}}{dx}\right|_{x_0} = \left.\frac{d\,f\{g\{x\}\}}{d\,g\{x\}}\right|_{g\{x\}|_{x_0}}\left.\frac{d\,g\{x\}}{dx}\right|_{x_0} \tag{10.204}$$

– which, upon generalization to every x of the domain of g{x} such that g{x} belongs to the domain of f{x}, becomes

$$\frac{d\,f\{g\{x\}\}}{dx} = \frac{d\,f\{g\{x\}\}}{d\,g\{x\}}\,\frac{d\,g\{x\}}{dx}, \tag{10.205}$$

often referred to as the chain differentiation rule. After recalling the definition of inverse function, i.e.

$$f^{-1}\{f\{y\}\} \equiv y, \tag{10.206}$$

fully compatible with Eq. (10.158), one may differentiate both sides with regard to y to get

$$\frac{d\,f^{-1}\{f\{y\}\}}{dy} = \frac{dy}{dy}; \tag{10.207}$$

application of Eq. (10.29) with n = 1 and Eq. (10.205) then supports transformation to

$$\frac{d\,f^{-1}\{f\{y\}\}}{d\,f\{y\}}\,\frac{d\,f\{y\}}{dy} = 1 \tag{10.208}$$

since g{x} was hereby replaced by f{y} – which simplifies to

$$\frac{d\,f^{-1}\{x\}}{dx}\,\frac{d\,f\{y\}}{dy} = 1, \tag{10.209}$$

consistent again with Eq. (10.158). One may finally isolate df⁻¹{x}/dx in Eq. (10.209) as

$$\frac{d\,f^{-1}\{x\}}{dx} = \frac{1}{\frac{d\,f\{y\}}{dy}} \tag{10.210}$$

– where a final exchange of x with y (for being dummy variables) retrieves Eq. (10.164), thus providing an alternative proof for the rule of differentiation of the inverse function.

An example of application of the chain differentiation rule as per Eq. (10.205) pertains to the derivative of the logarithm of a function, viz.

$$\frac{d\ln g\{x\}}{dx} = \frac{d\ln g\{x\}}{d\,g\{x\}}\,\frac{d\,g\{x\}}{dx} \tag{10.211}$$

– where Eq. (10.40) may be retrieved to obtain

$$\frac{d\ln g\{x\}}{dx} = \frac{1}{g\{x\}}\,\frac{d\,g\{x\}}{dx} \tag{10.212}$$

pending on replacement of x by g{x}; this is included in Table 10.6 for easier reference. A directly related result encompasses

$$f\{x\} \equiv x^{n}, \tag{10.213}$$

Table 10.6 List of derivatives obtained via theorem of composite function.

  f{x}          df{x}/dx
  ln{f{x}}      (1/f{x}) df{x}/dx
  exp{g{x}}     exp{g{x}} dg{x}/dx
  a^x           a^x ln a
  f{x}^g{x}     f{x}^g{x} (ln f{x} dg{x}/dx + (g{x}/f{x}) df{x}/dx)
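The entries of Table 10.6 may be checked against central finite differences; in this added sketch the choices f{x} = 1 + x², g{x} = sin x, a = 2, and the test point are all hypothetical, picked only so that ln f and f^g are well defined.

```python
import math

x0, h = 0.8, 1e-6

def f(x): return 1.0 + x * x          # f > 0, so ln f and f^g are well defined
def g(x): return math.sin(x)
df, dg = 2 * x0, math.cos(x0)          # analytical df/dx and dg/dx at x0

# ln{f{x}} -> (1/f) df/dx
num_ln = (math.log(f(x0 + h)) - math.log(f(x0 - h))) / (2 * h)
assert abs(num_ln - df / f(x0)) < 1e-7

# a^x -> a^x ln a, with a = 2
a = 2.0
num_ax = (a ** (x0 + h) - a ** (x0 - h)) / (2 * h)
assert abs(num_ax - a ** x0 * math.log(a)) < 1e-7

# f^g -> f^g (ln f dg/dx + (g/f) df/dx), i.e. the last entry of Table 10.6
num_fg = (f(x0 + h) ** g(x0 + h) - f(x0 - h) ** g(x0 - h)) / (2 * h)
ana_fg = f(x0) ** g(x0) * (math.log(f(x0)) * dg + g(x0) / f(x0) * df)
assert abs(num_fg - ana_fg) < 1e-6
```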

where previous application of logarithms to both sides produces

$$\ln f\{x\} = n\ln x \tag{10.214}$$

in agreement with Eq. (2.25); differentiation of both sides of Eq. (10.214) unfolds

$$\frac{d\ln f\{x\}}{dx} = \frac{d}{dx}\left(n\ln x\right), \tag{10.215}$$

where insertion of Eq. (10.120) leads to

$$\frac{d\ln f\{x\}}{dx} = n\,\frac{d\ln x}{dx} \tag{10.216}$$

– and further insertion of Eqs. (10.40) and (10.212) yields

$$\frac{1}{f\{x\}}\,\frac{d\,f\{x\}}{dx} = n\,\frac{1}{x}. \tag{10.217}$$

Isolation of the derivative in the left-hand side of Eq. (10.217) gives rise to

$$\frac{d\,f\{x\}}{dx} = n\,\frac{1}{x}\,f\{x\}, \tag{10.218}$$

whereas combination with Eq. (10.213) generates

$$\frac{d\,f\{x\}}{dx} = n\,\frac{1}{x}\,x^{n} = n\,x^{n-1}; \tag{10.219}$$

Eq. (10.219) confirms validity of the first entry in Table 10.1 for every real n – since no constraint was imposed here upon the value of n, unlike done before when deriving Eq. (10.29) from Eq. (10.23) via the (simplest version of) Newton's binomial formula. One may also apply Eq. (10.205) to calculate the derivative of the exponential of a function, viz.

$$\frac{d\exp\{g\{x\}\}}{dx} = \frac{d\exp\{g\{x\}\}}{d\,g\{x\}}\,\frac{d\,g\{x\}}{dx}; \tag{10.220}$$

upon combination with Eq. (10.169), once y has been replaced by g{x}, Eq. (10.220) becomes

$$\frac{d\exp\{g\{x\}\}}{dx} = \exp\{g\{x\}\}\,\frac{d\,g\{x\}}{dx} \tag{10.221}$$

– as also tabulated in Table 10.6.
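Eq. (10.221), i.e. the chain rule of Eq. (10.205) applied to an exponential, can be illustrated with the hypothetical inner function g{x} = x², an arbitrary choice made only for this check.

```python
import math

x0, h = 0.9, 1e-6

def g(x): return x * x        # hypothetical inner function
dg = 2 * x0                    # its analytical derivative at x0

# Eq. (10.221): d exp{g{x}}/dx = exp{g{x}} dg/dx
numeric = (math.exp(g(x0 + h)) - math.exp(g(x0 - h))) / (2 * h)
analytic = math.exp(g(x0)) * dg

assert abs(numeric - analytic) < 1e-5
```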

If the base of the exponential function were not Neper's number, its derivative would become accessible via the composite function theorem, according to

$$\frac{d\,a^{x}}{dx} = \frac{d\exp\{\ln a^{x}\}}{dx}, \tag{10.222}$$

since the natural exponential is the inverse of the natural logarithm – meaning that their composition cancels out the effects of the operators taken independently; in view of the operational feature of logarithms conveyed by Eq. (2.25), one may write

$$\frac{d\,a^{x}}{dx} = \frac{d\exp\{x\ln a\}}{dx}, \tag{10.223}$$

where Eq. (10.205) supports transformation to

$$\frac{d\,a^{x}}{dx} = \frac{d\exp\{x\ln a\}}{d\left(x\ln a\right)}\,\frac{d\left(x\ln a\right)}{dx}. \tag{10.224}$$

Equations (10.120) and (10.169) then support conversion of Eq. (10.224) to

$$\frac{d\,a^{x}}{dx} = \exp\{x\ln a\}\,\ln a, \tag{10.225}$$

or else

$$\frac{d\,a^{x}}{dx} = \exp\{\ln a^{x}\}\,\ln a \tag{10.226}$$

again with the aid of Eq. (2.25); one finds that Eq. (10.226) is equivalent to

$$\frac{d\,a^{x}}{dx} = a^{x}\ln a, \tag{10.227}$$

as included in Table 10.6 – because the exponential is the inverse of the logarithm (with the same base). A final example of interest pertains to the derivative of a function, say, f{x}, raised to another function, say, g{x}, according to

$$\frac{d\,f\{x\}^{g\{x\}}}{dx} = \frac{d\exp\{\ln f\{x\}^{g\{x\}}\}}{dx} = \frac{d\exp\{g\{x\}\ln f\{x\}\}}{dx} \tag{10.228}$$

– where advantage was meanwhile taken of the redundant nature of composing an exponential function with a logarithmic function as per Eq. (2.33), coupled to Eq. (2.25); the rule of differentiation of a composite function as per Eq. (10.205) permits transformation to

$$\frac{d\,f\{x\}^{g\{x\}}}{dx} = \frac{d\exp\{g\{x\}\ln f\{x\}\}}{d\left(g\{x\}\ln f\{x\}\right)}\,\frac{d\left(g\{x\}\ln f\{x\}\right)}{dx} \tag{10.229}$$

– where the rule of differentiation of an exponential with regard to its argument, i.e. Eq. (10.169), may be used to redo the first factor, while the rule of differentiation of a product, as per Eq. (10.119), may be used to redo the second factor as

$$\frac{d\,f\{x\}^{g\{x\}}}{dx} = \exp\{g\{x\}\ln f\{x\}\}\left(\frac{d\,g\{x\}}{dx}\ln f\{x\}+g\{x\}\,\frac{d\ln f\{x\}}{dx}\right). \tag{10.230}$$

Equations (2.25) and (10.212) may again be invoked to transform Eq. (10.230) to

$$\frac{d\,f\{x\}^{g\{x\}}}{dx} = \exp\{\ln f\{x\}^{g\{x\}}\}\left(\ln f\{x\}\,\frac{d\,g\{x\}}{dx}+\frac{g\{x\}}{f\{x\}}\,\frac{d\,f\{x\}}{dx}\right); \tag{10.231}$$

straightforward algebraic rearrangement finally leads to

$$\frac{d\,f\{x\}^{g\{x\}}}{dx} = f\{x\}^{g\{x\}}\left(\ln f\{x\}\,\frac{d\,g\{x\}}{dx}+\frac{g\{x\}}{f\{x\}}\,\frac{d\,f\{x\}}{dx}\right), \tag{10.232}$$

based on composition of the exponential function with its inverse – as again included in Table 10.6.

10.2.3 Rules of Differentiation of Multivariate Functions

According to Eq. (10.57), the partial derivative of f{x,y} with regard to x can be calculated via application of Eq. (10.21) – as long as y is maintained constant (at, say, y₀); similarly, the analogy between Eqs. (10.58) and (10.21), after having exchanged dummy variables h and k, indicates that ∂f/∂y becomes also accessible via differentiation in the regular mode – under the hypothesis that x remains constant (and equal to, say, x₀). Therefore, the rules of partial differentiation are, in essence, analogous to those of univariate differentiation – thus justifying no further discussion here.

10.2.4 Implicit Differentiation

A function y ≡ y{x} is said to be implicit when it is defined via

$$F\{x,y\} = 0, \tag{10.233}$$

where F represents some set of algebraic operations on x and y. It is possible to differentiate y with regard to x even when y cannot (or will not) be made explicit on x – by applying the chain partial differentiation rule to Eq. (10.233) as

$$\left(\frac{\partial F}{\partial x}\right)_{y}\frac{dx}{dx}+\left(\frac{\partial F}{\partial y}\right)_{x}\frac{dy}{dx} = 0, \tag{10.234}$$

where the right-hand side is nil on account of F in Eq. (10.233) being constant (and coincidentally equal to zero, combined with d0 = 0); since dx/dx is trivially equal to unity as per Eq. (10.29) with n = 1, one may redo Eq. (10.234) to

$$\frac{dy}{dx} = -\frac{\left(\frac{\partial F}{\partial x}\right)_{y}}{\left(\frac{\partial F}{\partial y}\right)_{x}} \tag{10.235}$$

– known as implicit derivative of y with regard to x. The second-order derivative becomes accessible by recalling its definition, viz.

$$\frac{d^{2}y}{dx^{2}} = \frac{d}{dx}\left(-\frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}}\right) \tag{10.236}$$

as stemming from Eq. (10.235) – where application of Eq. (10.138) unfolds

$$\frac{d^{2}y}{dx^{2}} = -\frac{\frac{d}{dx}\!\left(\frac{\partial F}{\partial x}\right)\frac{\partial F}{\partial y}-\frac{\partial F}{\partial x}\frac{d}{dx}\!\left(\frac{\partial F}{\partial y}\right)}{\left(\frac{\partial F}{\partial y}\right)^{2}}; \tag{10.237}$$

since ∂F/∂x and ∂F/∂y are, in general, also bivariate functions of x and y, one should proceed as

$$\frac{d}{dx}\frac{\partial F}{\partial x} = \frac{\partial}{\partial x}\frac{\partial F}{\partial x}+\frac{dy}{dx}\frac{\partial}{\partial y}\frac{\partial F}{\partial x}, \tag{10.238}$$

and likewise

$$\frac{d}{dx}\frac{\partial F}{\partial y} = \frac{\partial}{\partial x}\frac{\partial F}{\partial y}+\frac{dy}{dx}\frac{\partial}{\partial y}\frac{\partial F}{\partial y}. \tag{10.239}$$

Owing again to the definition of second-order derivative, Eq. (10.238) becomes

$$\frac{d}{dx}\frac{\partial F}{\partial x} = \frac{\partial^{2}F}{\partial x^{2}}+\frac{dy}{dx}\frac{\partial^{2}F}{\partial x\,\partial y} \tag{10.240}$$

– also with the aid of Schwarz's theorem, see Eq. (10.65); whereas Eq. (10.239) readily transforms to

$$\frac{d}{dx}\frac{\partial F}{\partial y} = \frac{\partial^{2}F}{\partial x\,\partial y}+\frac{dy}{dx}\frac{\partial^{2}F}{\partial y^{2}}. \tag{10.241}$$

Insertion of Eqs. (10.240) and (10.241) then converts Eq. (10.237) to

$$\frac{d^{2}y}{dx^{2}} = \frac{\frac{\partial F}{\partial x}\left(\frac{\partial^{2}F}{\partial x\,\partial y}+\frac{dy}{dx}\frac{\partial^{2}F}{\partial y^{2}}\right)-\frac{\partial F}{\partial y}\left(\frac{\partial^{2}F}{\partial x^{2}}+\frac{dy}{dx}\frac{\partial^{2}F}{\partial x\,\partial y}\right)}{\left(\frac{\partial F}{\partial y}\right)^{2}}; \tag{10.242}$$

further combination with Eq. (10.235), coupled with elimination of parentheses and cancellation of common factors between numerator and denominator, supports, in turn, transformation of Eq. (10.242) to

$$\frac{d^{2}y}{dx^{2}} = \frac{\frac{\partial F}{\partial x}\frac{\partial^{2}F}{\partial x\,\partial y}-\frac{\left(\frac{\partial F}{\partial x}\right)^{2}}{\frac{\partial F}{\partial y}}\frac{\partial^{2}F}{\partial y^{2}}-\frac{\partial F}{\partial y}\frac{\partial^{2}F}{\partial x^{2}}+\frac{\partial F}{\partial x}\frac{\partial^{2}F}{\partial x\,\partial y}}{\left(\frac{\partial F}{\partial y}\right)^{2}} \tag{10.243}$$

– while condensation of similar terms finally yields

$$\frac{d^{2}y}{dx^{2}} = -\frac{\frac{\partial^{2}F}{\partial x^{2}}\left(\frac{\partial F}{\partial y}\right)^{2}-2\,\frac{\partial F}{\partial x}\frac{\partial F}{\partial y}\frac{\partial^{2}F}{\partial x\,\partial y}+\left(\frac{\partial F}{\partial x}\right)^{2}\frac{\partial^{2}F}{\partial y^{2}}}{\left(\frac{\partial F}{\partial y}\right)^{3}} \tag{10.244}$$
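Eqs. (10.235) and (10.244) can be made concrete with the implicitly defined circle F{x,y} = x² + y² − 1 = 0, a worked example added here for illustration; its upper branch y = √(1 − x²) has dy/dx = −x/y and d²y/dx² = −1/y³.

```python
import math

x0 = 0.3
y0 = math.sqrt(1.0 - x0 ** 2)          # point on the upper half of the circle

Fx, Fy = 2 * x0, 2 * y0                # first-order partial derivatives of F
Fxx, Fyy, Fxy = 2.0, 2.0, 0.0          # second-order partial derivatives

dy_dx = -Fx / Fy                                                           # Eq. (10.235)
d2y_dx2 = -(Fxx * Fy ** 2 - 2 * Fxy * Fx * Fy + Fyy * Fx ** 2) / Fy ** 3   # Eq. (10.244)

assert abs(dy_dx - (-x0 / y0)) < 1e-12
assert abs(d2y_dx2 - (-1.0 / y0 ** 3)) < 1e-12
```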

A similar rationale applies to computation of higher-order derivatives – which are, however, of much lesser practical usefulness. Remember that validity of Eq. (10.233) has led to Eq. (10.234) – which may be rephrased as

$$\left(\frac{\partial F}{\partial x}\right)_{y}+\left(\frac{\partial F}{\partial y}\right)_{x}\left(\frac{\partial y}{\partial x}\right)_{F} = 0, \tag{10.245}$$

since dx/dx = 1 and F remains constant during differentiation of y with regard to x; division of both sides by $\left(\frac{\partial F}{\partial x}\right)_{y}$ produces

$$1+\frac{\left(\frac{\partial F}{\partial y}\right)_{x}}{\left(\frac{\partial F}{\partial x}\right)_{y}}\left(\frac{\partial y}{\partial x}\right)_{F} = 0 \tag{10.246}$$

on the hypothesis that $\left(\frac{\partial F}{\partial x}\right)_{y}\neq 0$. After revisiting Eq. (10.164) as

$$\frac{dx}{dy} = \frac{1}{\frac{dy}{dx}} \tag{10.247}$$

with the aid of Eqs. (10.157) and (10.158), one may transform Eq. (10.246) to

$$1+\left(\frac{\partial F}{\partial y}\right)_{x}\left(\frac{\partial y}{\partial x}\right)_{F}\left(\frac{\partial x}{\partial F}\right)_{y} = 0; \tag{10.248}$$

upon replacement of F by z and addition of −1 to both sides, Eq. (10.248) retrieves Eq. (10.81) – thus confirming overall consistency, whereas Eq. (10.247) highlights the algebraic advantage of Leibnitz's notation of a derivative as ratio of two differentials.

10.2.5 Parametric Differentiation

Although a univariate function is normally expressed in the explicit form y ≡ f{x}, it is also possible to express y as a function of x via a third variable (or parameter), say, z – according to

$$x = f\{z\}, \tag{10.249}$$

complemented with

$$y = g\{z\}; \tag{10.250}$$

the set of Eqs. (10.249) and (10.250) is called a parametric function. Calculation of the derivative of y with regard to x is again possible in a straightforward fashion, by first assuming that Eq. (10.249) can be reformulated to

$$z = f^{-1}\{x\}, \tag{10.251}$$

which would allow Eq. (10.250) be rewritten as

$$y = g\{f^{-1}\{x\}\}; \tag{10.252}$$

the chain differentiation rule as per Eq. (10.205) may then be applied to give

$$\frac{dy}{dx} = \frac{d\,g\{f^{-1}\{x\}\}}{d\,f^{-1}\{x\}}\,\frac{d\,f^{-1}\{x\}}{dx} \tag{10.253}$$

or, after recalling Eq. (10.251),

$$\frac{dy}{dx} = \frac{dg}{dz}\,\frac{dz}{dx}. \tag{10.254}$$

In view of the rule of differentiation of the inverse function conveyed by Eq. (10.247), one may recoin Eq. (10.254) as

$$\frac{dy}{dx} = \frac{\frac{dg}{dz}}{\frac{dx}{dz}} \tag{10.255}$$

– or, after insertion of Eq. (10.249),

$$\frac{dy}{dx} = \frac{\frac{dg}{dz}}{\frac{df}{dz}}; \tag{10.256}$$

note that dg/dz in the numerator of Eq. (10.256) is directly accessible from Eq. (10.250), while df/dz in its denominator becomes readily accessible via Eq. (10.249). It is also possible to reach the same result by separately differentiating Eq. (10.249) with regard to z, viz.

$$\frac{dx}{dz} = \frac{df}{dz}, \tag{10.257}$$

and likewise Eq. (10.250) also with regard to z, i.e.

$$\frac{dy}{dz} = \frac{dg}{dz}; \tag{10.258}$$

ordered division of Eq. (10.258) by Eq. (10.257) unfolds

$$\frac{\frac{dy}{dz}}{\frac{dx}{dz}} = \frac{\frac{dg}{dz}}{\frac{df}{dz}}, \tag{10.259}$$

where dz may be dropped from both denominators in the left-hand side to reproduce Eq. (10.256) – again a mere consequence of Leibnitz's notation. The second-order derivative of y with regard to x ensues from Eq. (10.256) as

$$\frac{d^{2}y}{dx^{2}} = \frac{d}{dx}\frac{dy}{dx} = \frac{dz}{dx}\,\frac{d}{dz}\frac{dy}{dx} = \frac{dz}{dx}\,\frac{d}{dz}\frac{\frac{dg}{dz}}{\frac{df}{dz}} \tag{10.260}$$

by resorting to the chain differentiation rule using z as intermediate variable; combination with Eqs. (10.138) and (10.247) transforms Eq. (10.260) to

$$\frac{d^{2}y}{dx^{2}} = \frac{1}{\frac{dx}{dz}}\;\frac{\frac{d}{dz}\frac{dg}{dz}\,\frac{df}{dz}-\frac{dg}{dz}\,\frac{d}{dz}\frac{df}{dz}}{\left(\frac{df}{dz}\right)^{2}} \tag{10.261}$$

or, after straightforward algebraic manipulation – namely, lumping of dx/dz as per Eq. (10.249) with (df/dz)² –

$$\frac{d^{2}y}{dx^{2}} = \frac{\frac{df}{dz}\frac{d^{2}g}{dz^{2}}-\frac{dg}{dz}\frac{d^{2}f}{dz^{2}}}{\left(\frac{df}{dz}\right)^{3}}. \tag{10.262}$$
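Eqs. (10.256) and (10.262) may likewise be illustrated with the (hypothetical) parametric circle x = f{z} = cos z, y = g{z} = sin z, for which dy/dx = −cos z/sin z and d²y/dx² = −1/sin³z; this worked example is an addition for illustration only.

```python
import math

z0 = 1.1                                  # arbitrary parameter value

df, dg = -math.sin(z0), math.cos(z0)      # df/dz and dg/dz
d2f, d2g = -math.cos(z0), -math.sin(z0)   # d2f/dz2 and d2g/dz2

dy_dx = dg / df                                  # Eq. (10.256)
d2y_dx2 = (df * d2g - dg * d2f) / df ** 3        # Eq. (10.262)

y0 = math.sin(z0)
assert abs(dy_dx + math.cos(z0) / math.sin(z0)) < 1e-12
assert abs(d2y_dx2 + 1.0 / y0 ** 3) < 1e-12
```

For a circle, the parametric result indeed coincides with what an implicit treatment of x² + y² = 1 yields, namely d²y/dx² = −1/y³.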

Here df/dz, dg/dz, d²f/dz², and d²g/dz² may all be calculated based on Eqs. (10.249) and (10.250). By the same token, the third derivative of y with regard to x may be obtained from Eq. (10.262), viz.

$$\frac{d^{3}y}{dx^{3}} = \frac{d}{dx}\frac{d^{2}y}{dx^{2}} = \frac{dz}{dx}\,\frac{d}{dz}\frac{d^{2}y}{dx^{2}} = \frac{dz}{dx}\,\frac{d}{dz}\frac{\frac{df}{dz}\frac{d^{2}g}{dz^{2}}-\frac{dg}{dz}\frac{d^{2}f}{dz^{2}}}{\left(\frac{df}{dz}\right)^{3}}, \tag{10.263}$$

at the expense of the chain differentiation rule via (intermediate) z; insertion of Eqs. (10.138) and (10.247) again supports transformation to

$$\frac{d^{3}y}{dx^{3}} = \frac{1}{\frac{dx}{dz}}\;\frac{\frac{d}{dz}\!\left(\frac{df}{dz}\frac{d^{2}g}{dz^{2}}-\frac{dg}{dz}\frac{d^{2}f}{dz^{2}}\right)\left(\frac{df}{dz}\right)^{3}-\left(\frac{df}{dz}\frac{d^{2}g}{dz^{2}}-\frac{dg}{dz}\frac{d^{2}f}{dz^{2}}\right)\frac{d}{dz}\!\left(\frac{df}{dz}\right)^{3}}{\left(\frac{df}{dz}\right)^{6}} \tag{10.264}$$

Application of Eqs. (10.106), (10.119), and (10.205) to the numerator of Eq. (10.264), and lumping of exponents in the denominator with the aid of the differential version of Eq. (10.249) yield

$$\frac{d^{3}y}{dx^{3}} = \frac{\left(\frac{d^{2}f}{dz^{2}}\frac{d^{2}g}{dz^{2}}+\frac{df}{dz}\frac{d^{3}g}{dz^{3}}-\frac{d^{2}g}{dz^{2}}\frac{d^{2}f}{dz^{2}}-\frac{dg}{dz}\frac{d^{3}f}{dz^{3}}\right)\left(\frac{df}{dz}\right)^{3}-3\left(\frac{df}{dz}\frac{d^{2}g}{dz^{2}}-\frac{dg}{dz}\frac{d^{2}f}{dz^{2}}\right)\left(\frac{df}{dz}\right)^{2}\frac{d^{2}f}{dz^{2}}}{\left(\frac{df}{dz}\right)^{7}}; \tag{10.265}$$

once notation is simplified, Eq. (10.265) breaks down to

$$\frac{d^{3}y}{dx^{3}} = \frac{\left(\frac{d^{2}f}{dz^{2}}\frac{d^{2}g}{dz^{2}}+\frac{df}{dz}\frac{d^{3}g}{dz^{3}}-\frac{d^{2}g}{dz^{2}}\frac{d^{2}f}{dz^{2}}-\frac{dg}{dz}\frac{d^{3}f}{dz^{3}}\right)\frac{df}{dz}-3\left(\frac{df}{dz}\frac{d^{2}g}{dz^{2}}-\frac{dg}{dz}\frac{d^{2}f}{dz^{2}}\right)\frac{d^{2}f}{dz^{2}}}{\left(\frac{df}{dz}\right)^{5}} \tag{10.266}$$

along with division of both numerator and denominator by (df/dz)² – where elimination of parentheses unfolds

$$\frac{d^{3}y}{dx^{3}} = \frac{\frac{df}{dz}\frac{d^{2}f}{dz^{2}}\frac{d^{2}g}{dz^{2}}+\left(\frac{df}{dz}\right)^{2}\frac{d^{3}g}{dz^{3}}-\frac{df}{dz}\frac{d^{2}f}{dz^{2}}\frac{d^{2}g}{dz^{2}}-\frac{df}{dz}\frac{dg}{dz}\frac{d^{3}f}{dz^{3}}-3\,\frac{df}{dz}\frac{d^{2}f}{dz^{2}}\frac{d^{2}g}{dz^{2}}+3\,\frac{dg}{dz}\left(\frac{d^{2}f}{dz^{2}}\right)^{2}}{\left(\frac{df}{dz}\right)^{5}}. \tag{10.267}$$

After dropping symmetrical terms, Eq. (10.267) finally becomes

$$\frac{d^{3}y}{dx^{3}} = -\frac{\frac{df}{dz}\frac{dg}{dz}\frac{d^{3}f}{dz^{3}}-\left(\frac{df}{dz}\right)^{2}\frac{d^{3}g}{dz^{3}}+3\,\frac{df}{dz}\frac{d^{2}f}{dz^{2}}\frac{d^{2}g}{dz^{2}}-3\,\frac{dg}{dz}\left(\frac{d^{2}f}{dz^{2}}\right)^{2}}{\left(\frac{df}{dz}\right)^{5}} \tag{10.268}$$

– where, once again, all outstanding terms may be calculated from Eqs. (10.249) and (10.250), i.e. df/dz, dg/dz, d²f/dz², d²g/dz², d³f/dz³, and d³g/dz³; higher-order derivatives are, nevertheless, of little practical interest, so no more iterations will be carried out in this process.

10.2.6 Basic Theorems of Differential Calculus

10.2.6.1 Rolle’s Theorem

In calculus, Rolle's theorem essentially states that any real-valued differentiable function that attains equal values at two distinct points must have a stationary point somewhere in between – i.e. a point characterized by nil first-order derivative. The corresponding mathematical statement may be coined as

$$f\{a\}=f\{b\}\;\Rightarrow\;\exists\,c\in\,]a,b[\;:\;\left.\frac{df\{x\}}{dx}\right|_{x=c}=0, \tag{10.269}$$

provided that f{x} is continuous within (closed interval) [a,b] and differentiable within (open interval) ]a,b[; this is illustrated in Fig. 10.3. Inspection of this figure unfolds indeed the existence of (at least) one point where f{x} passes either through a maximum (as in the case plotted), or instead through a minimum. The above theorem has been named after Michel Rolle, the French mathematician who proved it in 1691 for polynomial functions; it was later popularized by Moritz W. Drobisch in 1834 in Germany, and Giusto Bellavitis in 1846 in Italy – even though Indian mathematician Bhaskara II has been credited with its knowledge sometime in the twelfth century.

Figure 10.3 Graphical representation of function f{x}, continuous within interval [a,b] and differentiable within ]a,b[, and taking values f{a} = f{b} at the extremes thereof – with indication of tangent to its plot, at point (c,f{c}), exhibiting nil slope.

In attempts to prove Eq. (10.269), one should first recall Weierstrass' theorem, see Eqs. (9.184) and (9.185) – which states that a continuous function, such as f{x} within [a,b], will necessarily attain (at least once) its maximum, M, and its minimum, m, within said interval. If M = m, then f{x} is constant – so df/dx will be nil at every point belonging to ]a,b[. If M − m > 0 and f{x} attains its maximum M at x = c, then f{c} = M; however, f{c} being the upper limit of f{x} implies that

$$\frac{f\{x\}|_{c+h}-M}{h} = \frac{f\{x\}|_{c+h}-f\{x\}|_{c}}{h} \geq 0,\qquad h<0, \tag{10.270}$$


as well as

$$\frac{f\{x\}|_{c+h}-M}{h} = \frac{f\{x\}|_{c+h}-f\{x\}|_{c}}{h} \leq 0,\qquad h>0, \tag{10.271}$$

with h denoting a (small) increment – since $f\{x\}|_{c+h}-f\{x\}|_{c}\leq 0$, irrespective of the sign of h. According to the conditions of validity for the theorem, the derivative of f{x} exists at x = c, so one may take the limits of both Eqs. (10.270) and (10.271) when h → 0 to get

$$\lim_{h\to 0^{-}}\frac{f\{x\}|_{c+h}-f\{x\}|_{c}}{h} \equiv \left.\frac{df\{x\}}{dx}\right|_{c} \geq 0 \tag{10.272}$$

and

$$\lim_{h\to 0^{+}}\frac{f\{x\}|_{c+h}-f\{x\}|_{c}}{h} \equiv \left.\frac{df\{x\}}{dx}\right|_{c} \leq 0, \tag{10.273}$$

respectively; here Eq. (10.21) was retrieved to advantage. Equations (10.272) and (10.273) will be incompatible unless $\left.\frac{df\{x\}}{dx}\right|_{c}=0$, in agreement with Eq. (9.28) – so a point c inside segment ]a,b[, with nil derivative, must indeed exist; a similar reasoning may then be applied should a minimum m exist within said interval, rather than a maximum M – since compatibility between the two emerging inequalities, now with $f\{x\}|_{c+h}-f\{x\}|_{c}\geq 0$, will again be restricted to the nil solution. A particular case of Eq. (10.269) occurs when $f\{x\}|_{a}=f\{x\}|_{b}=0$; under such circumstances, there will be (at least) one zero of the derivative df/dx between two consecutive zeros of the original function f{x}. On the other hand, failure of differentiability at an interior point of the interval no longer guarantees validity of Rolle's theorem.

10.2.6.2 Lagrange's Theorem

Lagrange's theorem – also known as the mean value theorem – states that, given a planar arc describing a continuous function between two endpoints, there is at least one point at which the tangent to said arc is parallel to the secant through the endpoints; this may be more exactly expressed as

$$\exists\,c\in\,]a,b[\;:\;\left.\frac{df\{x\}}{dx}\right|_{x=c} = \frac{f\{b\}-f\{a\}}{b-a} \tag{10.274}$$

– applicable to a function f{x} continuous within [a,b] and differentiable within ]a,b[, as emphasized in Fig. 10.4. Note the existence of (at least) one point between a and b – in the specific case represented, both c₁ and c₂ satisfy this requirement – where the tangent to the curve representing f{x} is parallel to the straight line defined by points (a, f{a}) and (b, f{b}). The aforementioned theorem appears to have been first described by Parameshvara, an Indian mathematician and astronomer of the fifteenth century – even though it has classically been associated with French mathematician Joseph-Louis Lagrange; a formal proof ensues, upon definition of an auxiliary function g{x} as

$$g\{x\} \equiv f\{x\}-f\{a\}-\Phi\left(x-a\right), \tag{10.275}$$

Figure 10.4 Graphical representation of function f{x}, continuous within interval [a,b] and differentiable within ]a,b[, and taking values f{a} and f{b} at the extremes thereof – with indication of tangents to its plot, at points of abscissae c₁ and c₂, parallel to secant thereto at (a,f{a}) and (b,f{b}).

where constant Φ is given by

$$\Phi \equiv \frac{f\{b\}-f\{a\}}{b-a}. \tag{10.276}$$

The secant in Fig. 10.4 satisfies indeed

$$y = f\{x\}|_{x=a}+\Phi\left(x-a\right), \tag{10.277}$$

since it passes through the point of abscissa a and ordinate f{a}, i.e.

$$y|_{x=a} = f\{a\}+\Phi\left(a-a\right) = f\{a\} \tag{10.278}$$

– as well as through the point of coordinates b and f{b}, i.e.

$$y|_{x=b} = f\{a\}+\frac{f\{b\}-f\{a\}}{b-a}\left(b-a\right) = f\{a\}+f\{b\}-f\{a\} = f\{b\}; \tag{10.279}$$

here Eq. (10.276) was taken advantage of, together with cancellation of b − a between numerator and denominator, followed by cancellation of f{a} with its negative. In view of Eq. (10.277), one may rewrite Eq. (10.275) as

$$g\{x\} = f\{x\}-y; \tag{10.280}$$

hence, g{x} represents the difference in ordinate, for any given abscissa, between the curve in Fig. 10.4 and its secant at points of abscissae a and b, satisfying Eq. (10.277). In view of Eq. (10.280), one realizes that g{x} is continuous within [a,b] and differentiable within ]a,b[ – because both f{x} and y (for its being a straight line) are so, see Eqs. (9.139) and (10.106). Therefore, one may apply Rolle's theorem as per Eq. (10.269), and conclude that there is a point c belonging to interval ]a,b[ such that

$$\left.\frac{dg\{x\}}{dx}\right|_{x=c} = 0; \tag{10.281}$$

this is so because g{a} = g{b} = 0 after Eq. (10.280), as a consequence of a and b being the abscissae of the intercepts of the curve and the secant straight line. Differentiation of both sides of Eq. (10.280) with regard to x unfolds, in turn,

$$\frac{dg\{x\}}{dx} = \frac{df\{x\}}{dx}-\frac{dy}{dx} = \frac{df\{x\}}{dx}-\Phi \tag{10.282}$$

due to Eq. (10.277), and thus

$$\left.\frac{dg\{x\}}{dx}\right|_{x=c} = \left.\frac{df\{x\}}{dx}\right|_{x=c}-\Phi; \tag{10.283}$$

elimination of $\left.\frac{dg\{x\}}{dx}\right|_{x=c}$ between Eqs. (10.281) and (10.283) gives rise to

$$\left.\frac{df\{x\}}{dx}\right|_{x=c}-\Phi = 0, \tag{10.284}$$

or else

$$\Phi = \left.\frac{df\{x\}}{dx}\right|_{x=c}. \tag{10.285}$$

Insertion of Eq. (10.285) finally converts Eq. (10.276) to Eq. (10.274) – thus supporting general validity of Lagrange’s theorem (once the underlying hypotheses are satisfied). If f {a} = f {b}, then Eq. (10.274) reduces to df x dx

= x=c

f a −f a = 0, b−a

10 286

thus recovering Eq. (10.269); therefore, Rolle’s theorem is a particular case of Lagrange’s theorem. Furthermore, Eq. (10.274) indicates that an increasing function, i.e. one for which f {b} − f {a} > 0 when b > a, holds a positive derivative, df {x}/dx, within any vicinity of c with amplitude b − a, as long as b a – since c must be comprised between a and b. By the same token, when b a, one notices that c a (or c b, for that matter), so Eq. (10.274) assures that a decreasing function in that interval [a,b] of infinitesimal amplitude, described by f {b} − f {a} < 0 for b > a, exhibits a negative derivative, df {x}/dx. One corollary of this realization is that a function cannot reverse monotony between two consecutive zeros of the corresponding derivative function – so f {x} either does not cross or crosses the horizontal axis once (at most) between those consecutive zeros of df {x}/dx; this is in general agreement with Bolzano’s theorem. Such a conclusion is useful when numerically searching for zeros of a function, should the zeros of its derivative be easier to find (as often happens with polynomial functions).
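Lagrange's theorem lends itself to a quick numerical illustration. The sketch below (not part of the original text; helper names are hypothetical) locates a point c in ]a,b[ where the derivative of f{x} = x³ matches the slope Φ of the secant, by bisection on df{x}/dx − Φ – i.e. on dg{x}/dx of Eq. (10.282).

```python
def mean_value_point(f, df, a, b, tol=1e-12):
    """Locate c in ]a,b[ with df(c) = (f(b)-f(a))/(b-a), by bisection.

    Assumes df(x) - slope changes sign exactly once over ]a,b[."""
    slope = (f(b) - f(a)) / (b - a)          # the constant Phi of Eq. (10.276)
    g = lambda x: df(x) - slope              # dg/dx as per Eq. (10.282)
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:              # sign change kept in [lo, mid]
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# f{x} = x^3 on [0, 2]: secant slope is (8-0)/2 = 4, and 3c^2 = 4 gives c = 2/sqrt(3)
c = mean_value_point(lambda x: x**3, lambda x: 3 * x**2, 0.0, 2.0)
```

As expected, c falls strictly inside ]0,2[, consistent with the theorem.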

10.2.6.3 Cauchy’s Theorem

If, instead of a single function, two functions f{x} and g{x} are under scrutiny – both continuous in [a,b] and differentiable in ]a,b[, such that dg{x}/dx ≠ 0 therein, then

$$\exists\, c \in\, ]a,b[\;:\quad \frac{\left.\dfrac{df\{x\}}{dx}\right|_{x=c}}{\left.\dfrac{dg\{x\}}{dx}\right|_{x=c}} = \frac{f\{b\}-f\{a\}}{g\{b\}-g\{a\}}; \tag{10.287}$$

Differentials, Derivatives, and Partial Derivatives


Figure 10.5 Graphical representation of (a) functions f {x} and g{x}, both continuous within interval [a,b] and differentiable within ]a, b[, and taking values f {a} and f {b}, and g{a} and g{b}, respectively, at the extremes thereof – with indication of tangent to (b) the g{x}-vs.-f {x} plot, at point (a) with abscissa c and ordinate f {c} or, equivalently, (b) with coordinates (f{c},g{c}), parallel to secant thereto at (f {a},g{a}) and (f {b},g{b}).

Eq. (10.287) entails the formal statement of Cauchy's theorem. Augustin-Louis Cauchy was a French mathematician – who lived in the nineteenth century, and was responsible for the modern notions of continuity in calculus; a graphical illustration of this theorem is provided in Fig. 10.5. Based on the two functions, f{x} and g{x}, plotted in Fig. 10.5a, g{f{x}} is plotted in Fig. 10.5b – where it becomes apparent that there is a point c, located between a and b, such that the tangent to the latter curve at (f{c}, g{c}), with slope df{x}/dg{x}, or (df{x}/dx)/(dg{x}/dx) for that matter, is parallel to the secant characterized by intersection points (f{a}, g{a}) and (f{b}, g{b}). To prove the above theorem, one should start by defining an auxiliary constant, Ψ, according to

$$\Psi \equiv \frac{f\{b\}-f\{a\}}{g\{b\}-g\{a\}}, \tag{10.288}$$

which obviously requires g{b} ≠ g{a}; otherwise, g{b} would coincide with g{a}, thus implying that dg{x}/dx would be nil in some interior point as per Eq. (10.269) – in contradiction to one of Cauchy's postulates. An auxiliary function h{x} may, in addition, be set as

$$h\{x\} \equiv f\{x\}-f\{a\} - \Psi\left(g\{x\}-g\{a\}\right); \tag{10.289}$$

at x = a, Eq. (10.289) leads to

$$h\{x\}\big|_{x=a} = f\{a\}-f\{a\} - \Psi\left(g\{a\}-g\{a\}\right) = 0 \tag{10.290}$$

– and x = b similarly leads to

$$h\{x\}\big|_{x=b} = f\{b\}-f\{a\} - \frac{f\{b\}-f\{a\}}{g\{b\}-g\{a\}}\left(g\{b\}-g\{a\}\right) = f\{b\}-f\{a\} - \left(f\{b\}-f\{a\}\right) = 0, \tag{10.291}$$


with the aid of Eq. (10.288). Since h{a} = h{b}, as per Eqs. (10.290) and (10.291), constitutes the major postulate of Rolle's theorem, and h{x} is continuous in [a,b] and differentiable in ]a,b[ – because it is defined via Eq. (10.289) as a linear combination of f{x} and g{x}, which exhibit those features of continuity and differentiability in the first place – one concludes from Eq. (10.269) that there is a value c, comprised between a and b, such that

$$\frac{dh\{x\}}{dx}\bigg|_{x=c} = 0 \tag{10.292}$$

After differentiating Eq. (10.289) as

$$\frac{dh\{x\}}{dx} = \frac{df\{x\}}{dx} - \Psi\frac{dg\{x\}}{dx} \tag{10.293}$$

and taking x = c, i.e.

$$\frac{dh\{x\}}{dx}\bigg|_{x=c} = \frac{df\{x\}}{dx}\bigg|_{x=c} - \Psi\frac{dg\{x\}}{dx}\bigg|_{x=c}, \tag{10.294}$$

one may eliminate dh{x}/dx|ₓ₌c between Eqs. (10.292) and (10.294) to get

$$\frac{df\{x\}}{dx}\bigg|_{x=c} - \Psi\frac{dg\{x\}}{dx}\bigg|_{x=c} = 0 \tag{10.295}$$

– or, upon isolation of Ψ,

$$\Psi = \frac{\left.\dfrac{df\{x\}}{dx}\right|_{x=c}}{\left.\dfrac{dg\{x\}}{dx}\right|_{x=c}}; \tag{10.296}$$

insertion of Eq. (10.296) transforms Eq. (10.289) to

$$h\{x\} = f\{x\}-f\{a\} - \frac{\left.\dfrac{df\{x\}}{dx}\right|_{x=c}}{\left.\dfrac{dg\{x\}}{dx}\right|_{x=c}}\left(g\{x\}-g\{a\}\right), \tag{10.297}$$

which degenerates to

$$f\{b\}-f\{a\} - \frac{\left.\dfrac{df\{x\}}{dx}\right|_{x=c}}{\left.\dfrac{dg\{x\}}{dx}\right|_{x=c}}\left(g\{b\}-g\{a\}\right) = 0 \tag{10.298}$$

when x = b, as enforced by Eq. (10.291). Isolation of the ratio of derivatives in Eq. (10.298) readily retrieves Eq. (10.287) – thus proving Cauchy's theorem.


Note that Cauchy's theorem cannot be derived from Lagrange's theorem applied to numerator and denominator of f{x}/g{x}, since one would end up with

$$\frac{f\{b\}-f\{a\}}{g\{b\}-g\{a\}} = \frac{\dfrac{f\{b\}-f\{a\}}{b-a}}{\dfrac{g\{b\}-g\{a\}}{b-a}} = \frac{\left.\dfrac{df\{x\}}{dx}\right|_{x=c_1}}{\left.\dfrac{dg\{x\}}{dx}\right|_{x=c_2}} \tag{10.299}$$

as supported by Eq. (10.274); this does not coincide with Eq. (10.287), unless c₁ = c₂ – which is not guaranteed a priori. Conversely, if g{x} looked like

$$g\{x\} \equiv x \tag{10.300}$$

– and thus

$$g\{a\} = a \tag{10.301}$$

and

$$g\{b\} = b, \tag{10.302}$$

as well as

$$\frac{dg\{x\}}{dx} = 1 \;\Rightarrow\; \frac{dg\{x\}}{dx}\bigg|_{x=c} = 1, \tag{10.303}$$

then Eq. (10.287) would be converted to Eq. (10.274) upon insertion of Eqs. (10.301)–(10.303); therefore, Lagrange's theorem is a particular case of Cauchy's theorem.
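The caveat about c₁ ≠ c₂ in Eq. (10.299) can be made concrete with a small numerical example (a sketch, not part of the original text): for f{x} = x³ and g{x} = x² on [0,2], Lagrange's theorem pins distinct mean-value points for numerator and denominator, while Cauchy's theorem yields yet a third point for the pair.

```python
a, b = 0.0, 2.0
# Lagrange's theorem applied separately to f{x} = x^3 and g{x} = x^2
slope_f = (b**3 - a**3) / (b - a)   # 4.0: requires 3*c1**2 = 4
slope_g = (b**2 - a**2) / (b - a)   # 2.0: requires 2*c2 = 2
c1 = (slope_f / 3) ** 0.5           # = 2/sqrt(3), about 1.1547
c2 = slope_g / 2                    # = 1.0
# Cauchy's theorem on the pair: (3c^2)/(2c) = (f(b)-f(a))/(g(b)-g(a)) = 2, so c = 4/3
ratio = (b**3 - a**3) / (b**2 - a**2)
c = 2 * ratio / 3
```

The three points c₁, c₂, and c all lie in ]0,2[ yet differ from one another, as Eq. (10.299) anticipates.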

10.2.6.4 L'Hôpital's Rule

Assume two functions, f{x} and g{x}, both of which are continuous in [a,b] and differentiable in ]a,b[ – and thus susceptible to application of Cauchy's theorem; assume, in addition, that

$$f\{x\}\big|_{x=a} = g\{x\}\big|_{x=a} = 0 \tag{10.304}$$

Although the quotient f{x}/g{x} is not defined at x = a by virtue of Eq. (10.304), this is (in principle) not the case in a vicinity thereof; nevertheless, df/dx and dg/dx are assumed to exist at x = a. Cauchy's theorem as per Eq. (10.287) guarantees that there is at least one point c, belonging to vicinity ]a,x[ with a < x ≤ b, such that

$$\frac{\left.\dfrac{df\{x\}}{dx}\right|_{c}}{\left.\dfrac{dg\{x\}}{dx}\right|_{c}} = \frac{f\{x\}-f\{a\}}{g\{x\}-g\{a\}}; \quad a < c < x; \tag{10.305}$$

in view of Eq. (10.304), one may simplify Eq. (10.305) to

$$\frac{\left.\dfrac{df\{x\}}{dx}\right|_{c}}{\left.\dfrac{dg\{x\}}{dx}\right|_{c}} = \frac{f\{x\}}{g\{x\}} \tag{10.306}$$


When x → a, one concludes that c → a – because a < c < x, as seen above; upon application of said limit to both sides, Eq. (10.306) produces

$$\lim_{x\to a}\frac{f\{x\}}{g\{x\}} = \lim_{x\to a}\frac{\left.\dfrac{df\{x\}}{dx}\right|_{c}}{\left.\dfrac{dg\{x\}}{dx}\right|_{c}} = \lim_{c\to a}\frac{\left.\dfrac{df\{x\}}{dx}\right|_{c}}{\left.\dfrac{dg\{x\}}{dx}\right|_{c}}, \tag{10.307}$$

where a change of (dummy) variable from c to x in the second equality gives rise to

$$\lim_{x\to a}\frac{f\{x\}}{g\{x\}} = \lim_{x\to a}\frac{df\{x\}/dx}{dg\{x\}/dx}\bigg|_{f\{a\}=g\{a\}=0} \tag{10.308}$$

As discussed previously – see for instance Eqs. (9.19) and (9.23), existence of the limits of f{x} and g{x} as x → a does not require f{x} and g{x} be defined at x = a; since f{a} and g{a} are not explicitly used in the equality labeled as Eq. (10.308), one may actually settle with lim_{x→a} f{x} = 0 and lim_{x→a} g{x} = 0 as per the continuity requirement – in which case Eq. (10.308) degenerates to

$$\lim_{x\to a}\frac{f\{x\}}{g\{x\}} = \lim_{x\to a}\frac{df\{x\}/dx}{dg\{x\}/dx}\bigg|_{\lim_{x\to a} f\{x\}=\lim_{x\to a} g\{x\}=0} \tag{10.309}$$

The conditions of applicability of Eq. (10.309) are often lumped into

$$\lim_{x\to a}\frac{f\{x\}}{g\{x\}} = \frac{0}{0} \tag{10.310}$$

– in view of Eq. (9.94), where 0/0 is often referred to as unknown quantity. Equation (10.309) is usually known as l'Hôpital's rule; although French mathematician Guillaume F. Antoine, Marquis de l'Hôpital, published said rule in 1696 in his book Analyse des infiniment petits pour l'intelligence des lignes courbes – probably the first book ever written on differential calculus, it is believed that Swiss mathematician Johann Bernoulli was the first to prove it. It should be emphasized that, in general, (df/dx)/(dg/dx), as in the right-hand side of Eq. (10.309), differs from d(f/g)/dx = (g(df/dx) − f(dg/dx))/g² in agreement with Eq. (10.138). Furthermore, when lim_{x→a} df{x}/dx = lim_{x→a} dg{x}/dx = 0, then the derivative-based approach conveyed by Eq. (10.309) can be sequentially applied for i = 1, then for i = 2, and so on as many times as necessary – according to

$$\lim_{x\to a}\frac{d^{\,i}f\{x\}/dx^{\,i}}{d^{\,i}g\{x\}/dx^{\,i}} = \lim_{x\to a}\frac{d^{\,i+1}f\{x\}/dx^{\,i+1}}{d^{\,i+1}g\{x\}/dx^{\,i+1}}\bigg|_{\lim_{x\to a} d^{\,i}f\{x\}/dx^{\,i}=\lim_{x\to a} d^{\,i}g\{x\}/dx^{\,i}=0}; \quad i=1,2,\ldots \tag{10.311}$$
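The sequential application of the rule can be illustrated numerically (a sketch, not part of the original text; the helper name is hypothetical): for f{x} = 1 − cos x and g{x} = x², both functions and both first derivatives vanish at a = 0, so two applications of Eq. (10.309) are required before the ratio of second derivatives, cos x / 2, resolves the limit as 1/2.

```python
import math

def lhopital_00(derivs_f, derivs_g, a):
    """Sequential l'Hopital for a 0/0 form, in the spirit of Eq. (10.311):
    derivs_f[i] is the i-th derivative of f (index 0 being f itself).
    Walk down both lists until the denominator stops vanishing at x = a,
    then return the ratio of derivatives there; assumes the numerator
    derivatives vanish alongside the denominator ones (true for 0/0)."""
    for dfi, dgi in zip(derivs_f, derivs_g):
        if abs(dgi(a)) > 1e-9:
            return dfi(a) / dgi(a)
    raise ValueError("all supplied derivatives vanish at a")

# f{x} = 1 - cos x and g{x} = x^2 both vanish at a = 0, and so do their
# first derivatives sin x and 2x -- two applications of the rule are needed
ratio = lhopital_00(
    [lambda x: 1 - math.cos(x), math.sin, math.cos],
    [lambda x: x**2, lambda x: 2 * x, lambda x: 2.0],
    0.0,
)
raw = (1 - math.cos(1e-4)) / (1e-4) ** 2   # the raw quotient near a, for comparison
```

The raw quotient evaluated close to the accumulation point agrees with the derivative-based value, as Eq. (10.309) promises.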


in attempts to ultimately calculate lim_{x→a} f{x}/g{x}. Finally, one may resort to Eq. (10.311) even when lim_{x→a} dⁱf{x}/dxⁱ and lim_{x→a} dⁱg{x}/dxⁱ do not individually exist, as long as lim_{x→a} (dⁱf{x}/dxⁱ)/(dⁱg{x}/dxⁱ) exists. Despite the somewhat restricted form entailed by Eq. (10.309) – as only 0/0 is eligible, this realization may be misleading; in fact, l'Hôpital's rule proves applicable to a much wider range of practical situations, where other types of unknown quantities arise, than expected at first sight – as explored below. If the accumulation point for the independent variable is not finite, viz.

$$\lim_{x\to\infty} f\{x\} = \lim_{x\to\infty} g\{x\} = 0, \tag{10.312}$$

then one may resort to z ≡ 1/x as auxiliary variable to transform Eq. (10.312) to

$$\lim_{z\to 0} f\left\{\frac{1}{z}\right\} = \lim_{z\to 0} g\left\{\frac{1}{z}\right\} = 0 \tag{10.313}$$

– because z → 0 (in terms of reciprocal) when x → ∞; one accordingly finds

$$\lim_{x\to\infty}\frac{f\{x\}}{g\{x\}} = \lim_{z\to 0}\frac{f\{1/z\}}{g\{1/z\}} = \lim_{z\to 0}\frac{\dfrac{df\{1/z\}}{d(1/z)}\dfrac{d(1/z)}{dz}}{\dfrac{dg\{1/z\}}{d(1/z)}\dfrac{d(1/z)}{dz}} \tag{10.314}$$

with the aid of Eqs. (10.205) and (10.309) – where cancellation of common factors between numerator and denominator, and realization that 1/z → x, will eventually lead to

$$\lim_{x\to\infty}\frac{f\{x\}}{g\{x\}} = \lim_{x\to\infty}\frac{df\{x\}/dx}{dg\{x\}/dx}\bigg|_{\lim_{x\to\infty} f\{x\}=\lim_{x\to\infty} g\{x\}=0}, \tag{10.315}$$

thus extending validity of Eq. (10.309) to an infinite value of a. Transformations are often required on the dependent variable itself; for instance, if

$$\lim_{x\to a} f\{x\} = \lim_{x\to a} g\{x\} = \infty, \tag{10.316}$$

then one would be led to

$$\lim_{x\to a}\frac{f\{x\}}{g\{x\}} = \frac{\infty}{\infty} \tag{10.317}$$


via straight application of Eq. (9.94) – again an undefined quantity, but not of the type in principle eligible for application of Eq. (10.309). However, the ratio in the left-hand side of Eq. (10.317) may be rearranged to

$$\lim_{x\to a}\frac{f\{x\}}{g\{x\}} = \lim_{x\to a}\frac{1/g\{x\}}{1/f\{x\}} = \frac{\lim_{x\to a}\dfrac{1}{g\{x\}}}{\lim_{x\to a}\dfrac{1}{f\{x\}}} = \frac{0}{0}, \tag{10.318}$$

at the expense of Eq. (10.316) – so application of Eq. (10.309) unfolds

$$\lim_{x\to a}\frac{f\{x\}}{g\{x\}} = \lim_{x\to a}\frac{\dfrac{d}{dx}\dfrac{1}{g\{x\}}}{\dfrac{d}{dx}\dfrac{1}{f\{x\}}}; \tag{10.319}$$

upon recalling the rule of differentiation of a reciprocal as per Eq. (10.139), complemented by Eq. (9.108), one gets

$$\lim_{x\to a}\frac{f\{x\}}{g\{x\}} = \lim_{x\to a}\frac{-\dfrac{1}{g^2\{x\}}\dfrac{dg\{x\}}{dx}}{-\dfrac{1}{f^2\{x\}}\dfrac{df\{x\}}{dx}} = \lim_{x\to a}\frac{f^2\{x\}}{g^2\{x\}}\,\frac{dg\{x\}/dx}{df\{x\}/dx} = \left(\lim_{x\to a}\frac{f\{x\}}{g\{x\}}\right)^{2}\lim_{x\to a}\frac{dg\{x\}/dx}{df\{x\}/dx}, \tag{10.320}$$

and finally

$$\frac{\lim_{x\to a}\dfrac{f\{x\}}{g\{x\}}}{\left(\lim_{x\to a}\dfrac{f\{x\}}{g\{x\}}\right)^{2}} = \frac{1}{\lim_{x\to a}\dfrac{f\{x\}}{g\{x\}}} = \lim_{x\to a}\frac{dg\{x\}/dx}{df\{x\}/dx} \;\Rightarrow\; \lim_{x\to a}\frac{f\{x\}}{g\{x\}} = \lim_{x\to a}\frac{df\{x\}/dx}{dg\{x\}/dx} \tag{10.321}$$

after lumping factors alike between sides and applying Eq. (10.108) again. An alternative approach is, however, possible – starting with arbitration of points α and x in the vicinity of a such that, say, α < a < x; Cauchy's theorem guarantees existence of c such that

$$\frac{\left.\dfrac{df\{x\}}{dx}\right|_{c}}{\left.\dfrac{dg\{x\}}{dx}\right|_{c}} = \frac{f\{x\}-f\{\alpha\}}{g\{x\}-g\{\alpha\}}, \tag{10.322}$$

as per Eq. (10.287) – with α < c < x. After factoring out f{x} or g{x} (as appropriate), Eq. (10.322) becomes

$$\frac{f\{x\}}{g\{x\}}\,\frac{1-\dfrac{f\{\alpha\}}{f\{x\}}}{1-\dfrac{g\{\alpha\}}{g\{x\}}} = \frac{\left.\dfrac{df\{x\}}{dx}\right|_{c}}{\left.\dfrac{dg\{x\}}{dx}\right|_{c}}; \tag{10.323}$$


isolation of f{x}/g{x} then produces

$$\frac{f\{x\}}{g\{x\}} = \frac{\left.\dfrac{df\{x\}}{dx}\right|_{c}}{\left.\dfrac{dg\{x\}}{dx}\right|_{c}}\,\frac{1-\dfrac{g\{\alpha\}}{g\{x\}}}{1-\dfrac{f\{\alpha\}}{f\{x\}}} \tag{10.324}$$

On the other hand, Eqs. (9.30), (9.86), and (9.94) have it that

$$\lim_{x\to a}\frac{1-\dfrac{g\{\alpha\}}{g\{x\}}}{1-\dfrac{f\{\alpha\}}{f\{x\}}} = \frac{1-\dfrac{g\{\alpha\}}{\lim_{x\to a} g\{x\}}}{1-\dfrac{f\{\alpha\}}{\lim_{x\to a} f\{x\}}} = \frac{1-\dfrac{g\{\alpha\}}{\infty}}{1-\dfrac{f\{\alpha\}}{\infty}} = \frac{1-0}{1-0} = 1, \tag{10.325}$$

after recalling Eq. (10.316) and realizing that f{α} and g{α} are both finite; according to Eq. (9.2) as definition of limit, Eq. (10.325) suggests

$$\left|\frac{1-\dfrac{g\{\alpha\}}{g\{x\}}}{1-\dfrac{f\{\alpha\}}{f\{x\}}} - 1\right| < \delta \tag{10.326}$$

as working vicinity, with δ being positive and sufficiently small – where definition of absolute value allows rewriting as

$$1-\delta < \frac{1-\dfrac{g\{\alpha\}}{g\{x\}}}{1-\dfrac{f\{\alpha\}}{f\{x\}}} < 1+\delta \tag{10.327}$$

Successive combination of Eq. (10.327) with Eq. (10.324) – through the chain of inequalities conveyed by Eqs. (10.328)–(10.339) – bounds f{x}/g{x} by the ratio of derivatives to within any preset tolerance; denoting by κ the limit of (df{x}/dx)/(dg{x}/dx) as x → a, one eventually obtains

$$0 < \left|\frac{df\{x\}/dx}{dg\{x\}/dx} - \kappa\right| < \delta \;\Rightarrow\; \left|\frac{f\{x\}}{g\{x\}} - \kappa\right| < \varepsilon \tag{10.340}$$


from Eqs. (10.329) and (10.339) – as it suffices to define δ ≡ δ{ε} consistently with Eq. (10.337); recalling Eqs. (9.1) and (9.2), one finally attains

$$\lim_{x\to a}\frac{f\{x\}}{g\{x\}} = \lim_{x\to a}\frac{df\{x\}/dx}{dg\{x\}/dx}\bigg|_{\lim_{x\to a} f\{x\}=\lim_{x\to a} g\{x\}=\infty} \tag{10.341}$$

from Eqs. (10.316) and (10.340) – thus extending Eq. (10.309) so as to encompass the unknown quantity ∞/∞, on the hypothesis that lim_{x→a} (df{x}/dx)/(dg{x}/dx) exists (being either finite or infinite). If reciprocals were taken of both sides of Eq. (10.321), then one would get

$$\frac{1}{\lim_{x\to a}\dfrac{f\{x\}}{g\{x\}}} = \frac{1}{\lim_{x\to a}\dfrac{df\{x\}/dx}{dg\{x\}/dx}}; \tag{10.342}$$

where Eq. (9.108) supports transformation to

$$\lim_{x\to a}\frac{g\{x\}}{f\{x\}} = \frac{1}{\lim_{x\to a}\dfrac{f\{x\}}{g\{x\}}} = \frac{1}{\lim_{x\to a}\dfrac{df\{x\}/dx}{dg\{x\}/dx}} = \lim_{x\to a}\frac{dg\{x\}/dx}{df\{x\}/dx}; \tag{10.343}$$

Eq. (10.343) mimics Eq. (10.341) in functional form – so the two alternative derivations above are equivalent, as expected. Consider now the case described by

$$\lim_{x\to a} f\{x\} = 0 \;\wedge\; \lim_{x\to a} g\{x\} = \infty; \tag{10.344}$$

this suggests that the limit of the product of f{x} by g{x} should read

$$\lim_{x\to a} f\{x\}\,g\{x\} = 0\cdot\infty \tag{10.345}$$

upon direct application of Eq. (9.87) – i.e. another unknown quantity. Equation (10.345) may, nevertheless, be rearranged as

$$\lim_{x\to a} f\{x\}\,g\{x\} = \lim_{x\to a}\frac{f\{x\}}{1/g\{x\}} = \frac{0}{0}, \tag{10.346}$$


at the expense again of Eq. (10.344) – thus allowing immediate application of Eq. (10.309) to get

$$\lim_{x\to a} f\{x\}\,g\{x\} = \lim_{x\to a}\frac{\dfrac{df\{x\}}{dx}}{\dfrac{d}{dx}\dfrac{1}{g\{x\}}}; \tag{10.347}$$

using Eq. (10.139), one can manipulate Eq. (10.347) as

$$\lim_{x\to a} f\{x\}\,g\{x\} = \lim_{x\to a}\frac{\dfrac{df\{x\}}{dx}}{-\dfrac{1}{g^2\{x\}}\dfrac{dg\{x\}}{dx}}, \tag{10.348}$$

or else

$$\lim_{x\to a} f\{x\}\,g\{x\} = -\lim_{x\to a}\frac{g^2\{x\}\,\dfrac{df\{x\}}{dx}}{\dfrac{dg\{x\}}{dx}}\Bigg|_{\lim_{x\to a} f\{x\}=0,\;\lim_{x\to a} g\{x\}=\infty} \tag{10.349}$$

following straightforward algebraic rearrangement – provided its right-hand side takes a (finite or infinite) known value. It is possible to proceed from Eq. (10.345) alternatively via

$$\lim_{x\to a} f\{x\}\,g\{x\} = \lim_{x\to a}\frac{g\{x\}}{1/f\{x\}} = \frac{\infty}{\infty} \tag{10.350}$$

– in which case the unknown quantity ∞/∞ may be circumvented via application of Eq. (10.341) as

$$\lim_{x\to a} f\{x\}\,g\{x\} = \lim_{x\to a}\frac{\dfrac{dg\{x\}}{dx}}{\dfrac{d}{dx}\dfrac{1}{f\{x\}}}; \tag{10.351}$$

performance of the stated differentiations with the aid again of Eq. (10.139) unfolds

$$\lim_{x\to a} f\{x\}\,g\{x\} = \lim_{x\to a}\frac{\dfrac{dg\{x\}}{dx}}{-\dfrac{1}{f^2\{x\}}\dfrac{df\{x\}}{dx}} \tag{10.352}$$

that breaks down to

$$\lim_{x\to a} f\{x\}\,g\{x\} = -\lim_{x\to a}\frac{f^2\{x\}\,\dfrac{dg\{x\}}{dx}}{\dfrac{df\{x\}}{dx}}\Bigg|_{\lim_{x\to a} f\{x\}=0,\;\lim_{x\to a} g\{x\}=\infty}, \tag{10.353}$$

and leads to the same result as that conveyed by Eq. (10.349), except if an unknown quantity remains.


One may instead be faced with

$$\lim_{x\to a} f\{x\} = \infty \;\wedge\; \lim_{x\to a} g\{x\} = \infty \tag{10.354}$$

when searching for the limit of the difference between f{x} and g{x}, i.e.

$$\lim_{x\to a}\left(f\{x\}-g\{x\}\right) = \infty - \infty \tag{10.355}$$

– written with the aid of Eq. (9.86). The (algebraic) undefined quantity in the right-hand side of Eq. (10.355), i.e. ∞ − ∞, can be overcome after preliminary algebraic manipulation as

$$\lim_{x\to a}\left(f\{x\}-g\{x\}\right) = \lim_{x\to a} f\{x\}\,g\{x\}\left(\frac{1}{g\{x\}}-\frac{1}{f\{x\}}\right) \tag{10.356}$$

supported by previous factoring out of f{x} and g{x}, complemented with

$$\lim_{x\to a}\left(f\{x\}-g\{x\}\right) = \lim_{x\to a}\frac{\dfrac{1}{g\{x\}}-\dfrac{1}{f\{x\}}}{\dfrac{1}{f\{x\}\,g\{x\}}} = \frac{\dfrac{1}{\infty}-\dfrac{1}{\infty}}{\dfrac{1}{\infty}} = \frac{0-0}{0} = \frac{0}{0} \tag{10.357}$$

stemming from Eq. (10.354); application of Eq. (10.309) to Eq. (10.357) is now in order, according to

$$\lim_{x\to a}\left(f\{x\}-g\{x\}\right) = \lim_{x\to a}\frac{\dfrac{d}{dx}\left(\dfrac{1}{g\{x\}}-\dfrac{1}{f\{x\}}\right)}{\dfrac{d}{dx}\dfrac{1}{f\{x\}\,g\{x\}}} \tag{10.358}$$

Differentiation of the numerator of Eq. (10.358) may proceed with the aid of Eq. (10.106), viz.

$$\lim_{x\to a}\left(f\{x\}-g\{x\}\right) = \lim_{x\to a}\frac{\dfrac{d}{dx}\dfrac{1}{g\{x\}}-\dfrac{d}{dx}\dfrac{1}{f\{x\}}}{\dfrac{d}{dx}\dfrac{1}{f\{x\}\,g\{x\}}}, \tag{10.359}$$

whereas Eqs. (10.119) and (10.139) allow calculation of the outstanding derivatives as

$$\lim_{x\to a}\left(f\{x\}-g\{x\}\right) = \lim_{x\to a}\frac{-\dfrac{1}{g^2\{x\}}\dfrac{dg\{x\}}{dx}+\dfrac{1}{f^2\{x\}}\dfrac{df\{x\}}{dx}}{-\dfrac{g\{x\}\dfrac{df\{x\}}{dx}+f\{x\}\dfrac{dg\{x\}}{dx}}{\left(f\{x\}\,g\{x\}\right)^2}}; \tag{10.360}$$

Eq. (10.360) eventually degenerates to

$$\lim_{x\to a}\left(f\{x\}-g\{x\}\right) = \lim_{x\to a}\frac{f^2\{x\}\dfrac{dg\{x\}}{dx}-g^2\{x\}\dfrac{df\{x\}}{dx}}{f\{x\}\dfrac{dg\{x\}}{dx}+g\{x\}\dfrac{df\{x\}}{dx}}\Bigg|_{\lim_{x\to a} f\{x\}=\lim_{x\to a} g\{x\}=\infty} \tag{10.361}$$


after multiplication of both numerator and denominator by −(f{x}g{x})². Undefined quantities may sometimes appear in the form of powers; this is the case of

$$\lim_{x\to a} f\{x\} = 1 \;\wedge\; \lim_{x\to a} g\{x\} = \infty, \tag{10.362}$$

in which case the limit of the corresponding power would look like

$$\lim_{x\to a} f\{x\}^{\,g\{x\}} = 1^{\infty} \tag{10.363}$$

– with 1^∞ representing an unknown quantity of the power type. Application of logarithms to both sides of Eq. (10.363) leads, however, to

$$\ln \lim_{x\to a} f\{x\}^{\,g\{x\}} = \lim_{x\to a}\ln f\{x\}^{\,g\{x\}} = \lim_{x\to a} g\{x\}\ln f\{x\} = \infty\cdot\ln 1 = \infty\cdot 0, \tag{10.364}$$

where Eqs. (2.25) and (9.108) were taken into account – besides Eq. (10.362) obviously; the undefined quantity appearing in Eq. (10.364) is the same already found in Eq. (10.345), so a similar approach may be followed hereafter, i.e.

$$\ln \lim_{x\to a} f\{x\}^{\,g\{x\}} = \lim_{x\to a}\frac{\ln f\{x\}}{1/g\{x\}} = \frac{0}{0} \tag{10.365}$$

since 1/g{x} → 0 when g{x} → ∞, where Eq. (10.309) allows transformation to

$$\ln \lim_{x\to a} f\{x\}^{\,g\{x\}} = \lim_{x\to a}\frac{\dfrac{d\ln f\{x\}}{dx}}{\dfrac{d}{dx}\dfrac{1}{g\{x\}}} \tag{10.366}$$

in view of 0/0 being an unknown quantity (of the algebraic type). After recalling Eqs. (10.139) and (10.212), one may transform Eq. (10.366) to

$$\ln \lim_{x\to a} f\{x\}^{\,g\{x\}} = \lim_{x\to a}\frac{\dfrac{1}{f\{x\}}\dfrac{df\{x\}}{dx}}{-\dfrac{1}{g^2\{x\}}\dfrac{dg\{x\}}{dx}} = -\lim_{x\to a}\frac{\dfrac{df\{x\}}{dx}}{f\{x\}\dfrac{dg\{x\}/dx}{g^2\{x\}}}; \tag{10.367}$$

exponentials may finally be taken of both left- and right-hand sides to reach

$$\lim_{x\to a} f\{x\}^{\,g\{x\}} = \exp\left(-\lim_{x\to a}\frac{\dfrac{df\{x\}}{dx}}{f\{x\}\dfrac{dg\{x\}/dx}{g^2\{x\}}}\right)\Bigg|_{\lim_{x\to a} f\{x\}=1,\;\lim_{x\to a} g\{x\}=\infty} \tag{10.368}$$


If one proceeds using ∞/∞ as intermediate unknown quantity, then Eq. (10.364) should be redone to

$$\ln \lim_{x\to a} f\{x\}^{\,g\{x\}} = \lim_{x\to a}\frac{g\{x\}}{1/\ln f\{x\}} = \frac{\infty}{\infty} \tag{10.369}$$

since 1/ln f{x} → 1/ln 1 = 1/0 = ∞ when x → a, in agreement with Eqs. (9.108) and (10.362); application of Eq. (10.341) is now in order, viz.

$$\ln \lim_{x\to a} f\{x\}^{\,g\{x\}} = \lim_{x\to a}\frac{\dfrac{dg\{x\}}{dx}}{\dfrac{d}{dx}\dfrac{1}{\ln f\{x\}}} \tag{10.370}$$

Differentiation of the denominator as reciprocal of a composite function may follow Eqs. (10.139) and (10.205) to yield

$$\ln \lim_{x\to a} f\{x\}^{\,g\{x\}} = \lim_{x\to a}\frac{\dfrac{dg\{x\}}{dx}}{-\dfrac{\dfrac{df\{x\}}{dx}}{f\{x\}\ln^2 f\{x\}}} = -\lim_{x\to a}\frac{f\{x\}\ln^2 f\{x\}\,\dfrac{dg\{x\}}{dx}}{\dfrac{df\{x\}}{dx}}; \tag{10.371}$$

after taking exponentials of both sides, Eq. (10.371) turns to

$$\lim_{x\to a} f\{x\}^{\,g\{x\}} = \exp\left(-\lim_{x\to a}\frac{f\{x\}\ln^2 f\{x\}\,\dfrac{dg\{x\}}{dx}}{\dfrac{df\{x\}}{dx}}\right)\Bigg|_{\lim_{x\to a} f\{x\}=1,\;\lim_{x\to a} g\{x\}=\infty} \tag{10.372}$$

as alternative to Eq. (10.368), yet yielding an identical final result (if an unknown quantity does not persist). Another situation of interest arises when

$$\lim_{x\to a} f\{x\} = 0^{+} \;\wedge\; \lim_{x\to a} g\{x\} = 0, \tag{10.373}$$

in which case the limit of the corresponding power emerges as

$$\lim_{x\to a} f\{x\}^{\,g\{x\}} = 0^{0} \tag{10.374}$$

following direct application of Eq. (9.108) – thus leading to another unknown quantity, of the exponential type. The strategy is again to first apply logarithms to both sides of Eq. (10.374), i.e.

$$\ln \lim_{x\to a} f\{x\}^{\,g\{x\}} = \lim_{x\to a}\ln f\{x\}^{\,g\{x\}} = \lim_{x\to a} g\{x\}\ln f\{x\} = 0\cdot\ln 0^{+} = 0\cdot\left(-\infty\right) = -\,0\cdot\infty, \tag{10.375}$$


where Eqs. (2.25), (9.108), and (10.373) were meanwhile taken on board; this result is analogous to Eq. (10.364), so one may proceed directly to

$$\lim_{x\to a} f\{x\}^{\,g\{x\}} = \exp\left(-\lim_{x\to a}\frac{\dfrac{df\{x\}}{dx}}{f\{x\}\dfrac{dg\{x\}/dx}{g^2\{x\}}}\right)\Bigg|_{\lim_{x\to a} f\{x\}=\lim_{x\to a} g\{x\}=0} \tag{10.376}$$

using Eq. (10.368) as template – or, equivalently,

$$\lim_{x\to a} f\{x\}^{\,g\{x\}} = \exp\left(-\lim_{x\to a}\frac{f\{x\}\ln^2 f\{x\}\,\dfrac{dg\{x\}}{dx}}{\dfrac{df\{x\}}{dx}}\right)\Bigg|_{\lim_{x\to a} f\{x\}=\lim_{x\to a} g\{x\}=0} \tag{10.377}$$

as suggested by Eq. (10.372). A third case corresponds to

$$\lim_{x\to a} f\{x\} = \infty \;\wedge\; \lim_{x\to a} g\{x\} = 0 \tag{10.378}$$

– which, upon constructing a power using f{x} as base and g{x} as exponent, gives rise to another unknown quantity of the exponential type, i.e.

$$\lim_{x\to a} f\{x\}^{\,g\{x\}} = \infty^{0}, \tag{10.379}$$

via direct application of the classical theorems on limits. As done before, logarithms of both sides of Eq. (10.379) may be taken to produce

$$\ln \lim_{x\to a} f\{x\}^{\,g\{x\}} = \lim_{x\to a}\ln f\{x\}^{\,g\{x\}} = \lim_{x\to a} g\{x\}\ln f\{x\} = 0\cdot\ln\infty = 0\cdot\infty \tag{10.380}$$

at the expense of Eqs. (2.25), (9.108), and (10.378); in view of the similarity of Eq. (10.380) to Eq. (10.364), one may either jump to

$$\lim_{x\to a} f\{x\}^{\,g\{x\}} = \exp\left(-\lim_{x\to a}\frac{\dfrac{df\{x\}}{dx}}{f\{x\}\dfrac{dg\{x\}/dx}{g^2\{x\}}}\right)\Bigg|_{\lim_{x\to a} f\{x\}=\infty,\;\lim_{x\to a} g\{x\}=0} \tag{10.381}$$

as analogue of Eq. (10.368) via transformation of the initial unknown quantity to 0/0, or instead to

$$\lim_{x\to a} f\{x\}^{\,g\{x\}} = \exp\left(-\lim_{x\to a}\frac{f\{x\}\ln^2 f\{x\}\,\dfrac{dg\{x\}}{dx}}{\dfrac{df\{x\}}{dx}}\right)\Bigg|_{\lim_{x\to a} f\{x\}=\infty,\;\lim_{x\to a} g\{x\}=0} \tag{10.382}$$

in parallel to Eq. (10.372) via the (intermediate) unknown quantity ∞/∞. A somewhat related situation encompasses

$$\lim_{x\to a} f\{x\} = 0^{+} \;\wedge\; \lim_{x\to a} g\{x\} = \infty, \tag{10.383}$$

associated with a power of the form

$$\lim_{x\to a} f\{x\}^{\,g\{x\}} = 0^{\infty} \tag{10.384}$$

obtained via direct application of Eq. (9.108). After taking logarithms again of both sides, Eq. (10.384) turns to

$$\ln \lim_{x\to a} f\{x\}^{\,g\{x\}} = \lim_{x\to a}\ln f\{x\}^{\,g\{x\}} = \lim_{x\to a} g\{x\}\ln f\{x\} = \infty\cdot\ln 0^{+} = \infty\cdot\left(-\infty\right) = -\,\infty\cdot\infty = -\infty \tag{10.385}$$

with the aid of Eqs. (2.25), (9.108), and (10.383) – thus implying

$$\lim_{x\to a} f\{x\}^{\,g\{x\}} = e^{-\infty} = 0\,\Big|_{\lim_{x\to a} f\{x\}=0,\;\lim_{x\to a} g\{x\}=\infty} \tag{10.386}$$

after taking exponentials of both sides of Eq. (10.385); therefore, 0^∞ is not an unknown quantity after all – unlike happened with ∞⁰ in Eq. (10.379).
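The 1^∞ recipe can be checked numerically (a sketch, not part of the original text): for f{x} = 1 + x and g{x} = 1/x, the power f^g is the classical (1+x)^(1/x), whose limit as x → 0⁺ is e; evaluating the exponential expression of Eq. (10.368) close to the accumulation point recovers that value.

```python
import math

# 1^inf form: f{x} = 1 + x, g{x} = 1/x, so f^g = (1+x)^(1/x) -> e as x -> 0+
x = 1e-8                       # point close to the accumulation point a = 0
f, df = 1 + x, 1.0             # f{x} and df/dx
g, dg = 1 / x, -1 / x**2       # g{x} and dg/dx
# Eq. (10.368): exp of minus (df/dx) over f * (dg/dx)/g^2, evaluated near a
val = math.exp(-(df / (f * dg / g**2)))
raw = (1 + x) ** (1 / x)       # the power itself, for comparison
```

Both routes agree with e to well within the discretization error, illustrating that the exponential-type unknown quantity resolves to a finite value here.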

10.2.7 Derivative of Matrix

Consider the (m × 1) column vector y, viz.

$$\mathbf{y} \equiv \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}, \tag{10.387}$$

obtained as

$$\mathbf{y} = \mathbf{A}\mathbf{x} \tag{10.388}$$

– where A denotes an (m × n) matrix, i.e.

$$\mathbf{A} \equiv \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{bmatrix}, \tag{10.389}$$

while x denotes an (n × 1) column vector, namely,

$$\mathbf{x} \equiv \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}; \tag{10.390}$$

consider also that A is independent of x. Since the ith element of y as given by Eq. (10.388) reads


$$y_i = \sum_{k=1}^{n} a_{i,k}\,x_k \tag{10.391}$$

in agreement with Eq. (4.47), it follows that

$$\frac{\partial y_i}{\partial x_j} = \frac{\partial}{\partial x_j}\sum_{k=1}^{n} a_{i,k}\,x_k = \sum_{k=1}^{n}\frac{\partial}{\partial x_j}\left(a_{i,k}\,x_k\right) = \sum_{k=1}^{j-1}\frac{\partial}{\partial x_j}\left(a_{i,k}\,x_k\right) + \frac{\partial}{\partial x_j}\left(a_{i,j}\,x_j\right) + \sum_{k=j+1}^{n}\frac{\partial}{\partial x_j}\left(a_{i,k}\,x_k\right), \tag{10.392}$$

owing to the linearity of the differential operator as conveyed by Eq. (10.107); one finally finds

$$\frac{\partial y_i}{\partial x_j} = a_{i,j}; \quad i=1,2,\ldots,m;\; j=1,2,\ldots,n, \tag{10.393}$$

since a_{i,j} in Eq. (10.392) does not depend on x_j by hypothesis – or else

$$\frac{\partial \mathbf{y}}{\partial \mathbf{x}} \equiv \frac{\partial}{\partial \mathbf{x}}\left(\mathbf{A}\mathbf{x}\right) = \mathbf{A}, \tag{10.394}$$

in a more condensed fashion and encompassing all elements of y, besides retrieving Eq. (10.388). Suppose now that x is a function of the (n × 1) vector z, defined as

$$\mathbf{z} \equiv \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_n \end{bmatrix} \tag{10.395}$$

– while A remains independent of z; differentiation of both sides of Eq. (10.391) with regard to z_j accordingly yields

$$\frac{\partial y_i}{\partial z_j} = \frac{\partial}{\partial z_j}\sum_{k=1}^{n} a_{i,k}\,x_k = \sum_{k=1}^{n}\frac{\partial}{\partial z_j}\left(a_{i,k}\,x_k\right) = \sum_{k=1}^{n} a_{i,k}\,\frac{\partial x_k}{\partial z_j}; \quad i=1,2,\ldots,m;\; j=1,2,\ldots,n, \tag{10.396}$$

again upon exchange of differential and summation operators, while recalling Eq. (10.120) – which is equivalent, in general, to

$$\frac{\partial \mathbf{y}}{\partial \mathbf{z}} \equiv \frac{\partial}{\partial \mathbf{z}}\left(\mathbf{A}\,\mathbf{x}\{\mathbf{z}\}\right) = \mathbf{A}\,\frac{\partial \mathbf{x}}{\partial \mathbf{z}} = \frac{\partial \mathbf{y}}{\partial \mathbf{x}}\,\frac{\partial \mathbf{x}}{\partial \mathbf{z}}, \tag{10.397}$$

after retrieving the algorithm of multiplication of matrices as per Eq. (4.47), and coupling with Eqs. (10.388) and (10.394). Consider now a scalar ω that relates to (n × 1) column vectors x, as per Eq. (10.390), and y, as per Eq. (10.387) with m replaced by n, via

$$\omega \equiv \mathbf{y}^{\mathrm{T}}\mathbf{x} \tag{10.398}$$


– where x ≡ x{z} and y ≡ y{z}, with vector z still given by Eq. (10.395); one then realizes that

$$\omega = \sum_{i=1}^{n} y_i\,x_i \tag{10.399}$$

as per Eqs. (4.47) and (10.398). Hence, differentiation of both sides of Eq. (10.399) with regard to z_k produces

$$\frac{\partial \omega}{\partial z_k} = \frac{\partial}{\partial z_k}\sum_{i=1}^{n} y_i\,x_i = \sum_{i=1}^{n}\frac{\partial}{\partial z_k}\left(y_i\,x_i\right) = \sum_{i=1}^{n}\frac{\partial y_i}{\partial z_k}\,x_i + \sum_{i=1}^{n} y_i\,\frac{\partial x_i}{\partial z_k}; \quad k=1,2,\ldots,n \tag{10.400}$$

at the expense of Eqs. (10.107) and (10.119). A more condensed version of Eq. (10.400) looks like

$$\frac{\partial \omega}{\partial \mathbf{z}} \equiv \frac{\partial}{\partial \mathbf{z}}\left(\mathbf{y}\{\mathbf{z}\}^{\mathrm{T}}\mathbf{x}\{\mathbf{z}\}\right) = \mathbf{x}^{\mathrm{T}}\frac{\partial \mathbf{y}}{\partial \mathbf{z}} + \mathbf{y}^{\mathrm{T}}\frac{\partial \mathbf{x}}{\partial \mathbf{z}} = \frac{\partial \omega}{\partial \mathbf{y}}\frac{\partial \mathbf{y}}{\partial \mathbf{z}} + \frac{\partial \omega}{\partial \mathbf{x}}\frac{\partial \mathbf{x}}{\partial \mathbf{z}} \tag{10.401}$$

with the aid of Eqs. (4.47) and (10.398). Note that

$$\frac{\partial \omega}{\partial y_k} = \frac{\partial}{\partial y_k}\sum_{i=1}^{n} y_i\,x_i = \frac{\partial}{\partial y_k}\left(\sum_{i=1}^{k-1} y_i\,x_i + y_k\,x_k + \sum_{i=k+1}^{n} y_i\,x_i\right) = \frac{\partial}{\partial y_k}\left(y_k\,x_k\right) = x_k \tag{10.402}$$

based on Eq. (10.399), and since x_k is independent of y_k; and similarly

$$\frac{\partial \omega}{\partial x_k} = \frac{\partial}{\partial x_k}\sum_{i=1}^{n} y_i\,x_i = \frac{\partial}{\partial x_k}\left(\sum_{i=1}^{k-1} y_i\,x_i + y_k\,x_k + \sum_{i=k+1}^{n} y_i\,x_i\right) = \frac{\partial}{\partial x_k}\left(y_k\,x_k\right) = y_k \tag{10.403}$$

as y_k is a fortiori independent of x_k. Therefore

$$\frac{\partial \omega}{\partial \mathbf{y}} = \mathbf{x}^{\mathrm{T}} \tag{10.404}$$

stemming from Eq. (10.402), as well as

$$\frac{\partial \omega}{\partial \mathbf{x}} = \mathbf{y}^{\mathrm{T}} \tag{10.405}$$

based on Eq. (10.403) – thus justifying the last equality in Eq. (10.401). If scalar ω is instead defined by

$$\omega \equiv \mathbf{y}^{\mathrm{T}}\mathbf{A}\mathbf{x} \tag{10.406}$$

– with A satisfying Eq. (10.389), and being independent of both x as per Eq. (10.390) and y as per Eq. (10.387), then one may for convenience define the auxiliary (n × 1) column vector w via

$$\mathbf{w}^{\mathrm{T}} \equiv \mathbf{y}^{\mathrm{T}}\mathbf{A}; \tag{10.407}$$

this allows reformulation of Eq. (10.406) to

$$\omega = \mathbf{w}^{\mathrm{T}}\mathbf{x} \tag{10.408}$$


One may now retrieve the result conveyed by Eq. (10.394) to write

$$\frac{\partial \omega}{\partial \mathbf{x}} = \mathbf{w}^{\mathrm{T}}, \tag{10.409}$$

where ω and wᵀ now play the role of y and A, respectively – while insertion of Eqs. (10.406) and (10.407) produces

$$\frac{\partial \omega}{\partial \mathbf{x}} \equiv \frac{\partial}{\partial \mathbf{x}}\left(\mathbf{y}^{\mathrm{T}}\mathbf{A}\mathbf{x}\right) = \mathbf{y}^{\mathrm{T}}\mathbf{A} \tag{10.410}$$

On the other hand, ω being a scalar allows Eq. (10.406) to be rephrased as

$$\omega = \omega^{\mathrm{T}} = \left(\mathbf{y}^{\mathrm{T}}\mathbf{A}\mathbf{x}\right)^{\mathrm{T}} = \mathbf{x}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}}\left(\mathbf{y}^{\mathrm{T}}\right)^{\mathrm{T}} = \mathbf{x}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}}\mathbf{y}, \tag{10.411}$$

at the expense of definition of transposal of a matrix as per Eq. (4.105), rule of transposal of product of matrices as per Eq. (4.120), and composition of transposal with itself as per Eq. (4.110); one may now invoke Eqs. (10.401), (10.406), and (10.411) to write

$$\frac{\partial \omega}{\partial \mathbf{y}} \equiv \frac{\partial}{\partial \mathbf{y}}\left(\mathbf{y}^{\mathrm{T}}\mathbf{A}\mathbf{x}\right) = \frac{\partial}{\partial \mathbf{y}}\left(\mathbf{x}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}}\mathbf{y}\right) = \mathbf{x}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}} \tag{10.412}$$

The situation where y coincides with x and A is an (n × n) square matrix – in which case ω is also known as a quadratic form – is of particular practical interest, as it transforms Eq. (10.406) to

$$\omega \equiv \mathbf{x}^{\mathrm{T}}\mathbf{A}\mathbf{x}, \tag{10.413}$$

with A once more independent of x; according to the algorithm of multiplication of matrices, see Eq. (4.47),

$$\mathbf{x}^{\mathrm{T}}\mathbf{A} = \left[\;\sum_{k=1}^{n} x_k\,a_{k,1}\quad \sum_{k=1}^{n} x_k\,a_{k,2}\quad \cdots\quad \sum_{k=1}^{n} x_k\,a_{k,n}\;\right], \tag{10.414}$$

while a second application of Eq. (4.47) permits calculation of the product of xᵀA by x, viz.

$$\mathbf{x}^{\mathrm{T}}\mathbf{A}\mathbf{x} = \sum_{k=1}^{n} a_{k,1}\,x_k\,x_1 + \sum_{k=1}^{n} a_{k,2}\,x_k\,x_2 + \cdots + \sum_{k=1}^{n} a_{k,n}\,x_k\,x_n = \sum_{j=1}^{n}\sum_{i=1}^{n} a_{i,j}\,x_i\,x_j \tag{10.415}$$

Differentiation of ω as given by Eqs. (10.413) and (10.415), with regard to x_k, may then be calculated as

$$\begin{aligned}\frac{\partial \omega}{\partial x_k} = \frac{\partial}{\partial x_k}\sum_{j=1}^{n}\sum_{i=1}^{n} a_{i,j}\,x_i\,x_j = \frac{\partial}{\partial x_k}\Bigg(&\sum_{j=1}^{k-1}\left(\sum_{i=1}^{k-1} a_{i,j}\,x_i\,x_j + a_{k,j}\,x_k\,x_j + \sum_{i=k+1}^{n} a_{i,j}\,x_i\,x_j\right)\\ &+ \sum_{i=1}^{k-1} a_{i,k}\,x_i\,x_k + a_{k,k}\,x_k\,x_k + \sum_{i=k+1}^{n} a_{i,k}\,x_i\,x_k\\ &+ \sum_{j=k+1}^{n}\left(\sum_{i=1}^{k-1} a_{i,j}\,x_i\,x_j + a_{k,j}\,x_k\,x_j + \sum_{i=k+1}^{n} a_{i,j}\,x_i\,x_j\right)\Bigg)\end{aligned} \tag{10.416}$$


– where the double summations were judiciously split for mathematical convenience; the differential and summation operators may be swapped due to the intrinsic linearity enforced by Eq. (10.107), thus leading to

$$\frac{\partial \omega}{\partial x_k} = \sum_{j=1}^{k-1}\frac{\partial}{\partial x_k}\left(a_{k,j}\,x_k\,x_j\right) + \sum_{i=1}^{k-1}\frac{\partial}{\partial x_k}\left(a_{i,k}\,x_i\,x_k\right) + \frac{\partial}{\partial x_k}\left(a_{k,k}\,x_k^{\,2}\right) + \sum_{i=k+1}^{n}\frac{\partial}{\partial x_k}\left(a_{i,k}\,x_i\,x_k\right) + \sum_{j=k+1}^{n}\frac{\partial}{\partial x_k}\left(a_{k,j}\,x_k\,x_j\right) \tag{10.417}$$

– where advantage was meanwhile taken of the fact that ∂ζ/∂x_k = 0 when x_k does not appear explicitly in ζ. Application of the rule of differentiation of a product, labeled as Eq. (10.119), to Eq. (10.417) gives rise to

$$\begin{aligned}\frac{\partial \omega}{\partial x_k} &= \sum_{j=1}^{k-1} a_{k,j}\,x_j + \sum_{i=1}^{k-1} a_{i,k}\,x_i + 2a_{k,k}\,x_k + \sum_{i=k+1}^{n} a_{i,k}\,x_i + \sum_{j=k+1}^{n} a_{k,j}\,x_j\\ &= \left(\sum_{j=1}^{k-1} a_{k,j}\,x_j + a_{k,k}\,x_k + \sum_{j=k+1}^{n} a_{k,j}\,x_j\right) + \left(\sum_{i=1}^{k-1} a_{i,k}\,x_i + a_{k,k}\,x_k + \sum_{i=k+1}^{n} a_{i,k}\,x_i\right)\\ &= \sum_{j=1}^{n} x_j\,a_{j,k}^{\mathrm{T}} + \sum_{i=1}^{n} x_i\,a_{i,k}; \quad k=1,2,\ldots,n\end{aligned} \tag{10.418}$$

together with some algebraic rearrangement complemented by use of Eq. (4.105) – which withstands a matrix equivalent looking like

$$\frac{\partial \omega}{\partial \mathbf{x}} = \mathbf{x}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}} + \mathbf{x}^{\mathrm{T}}\mathbf{A} \tag{10.419}$$

in view of Eq. (4.47); after retrieving Eq. (10.413) and factoring xᵀ out, one finally obtains

$$\frac{\partial \omega}{\partial \mathbf{x}} \equiv \frac{\partial}{\partial \mathbf{x}}\left(\mathbf{x}^{\mathrm{T}}\mathbf{A}\mathbf{x}\right) = \mathbf{x}^{\mathrm{T}}\left(\mathbf{A}^{\mathrm{T}}+\mathbf{A}\right) \tag{10.420}$$

from Eq. (10.419). A similar derivative of xᵀAx may be put forward, but using xᵀ as independent variable rather than x; in this case, Eq. (10.420) takes the form

$$\frac{\partial}{\partial \mathbf{x}^{\mathrm{T}}}\left(\mathbf{x}^{\mathrm{T}}\mathbf{A}\mathbf{x}\right) = \left(\frac{\partial}{\partial \mathbf{x}}\left(\mathbf{x}^{\mathrm{T}}\mathbf{A}\mathbf{x}\right)\right)^{\mathrm{T}} = \left(\mathbf{x}^{\mathrm{T}}\left(\mathbf{A}^{\mathrm{T}}+\mathbf{A}\right)\right)^{\mathrm{T}} = \left(\mathbf{A}^{\mathrm{T}}+\mathbf{A}\right)^{\mathrm{T}}\left(\mathbf{x}^{\mathrm{T}}\right)^{\mathrm{T}} = \left(\mathbf{A}+\mathbf{A}^{\mathrm{T}}\right)\mathbf{x}, \tag{10.421}$$

at the expense of Eqs. (4.110), (4.114), (4.120), and (10.420). In the particular case where A is symmetric, Eq. (10.420) degenerates to


$$\frac{\partial}{\partial \mathbf{x}}\left(\mathbf{x}^{\mathrm{T}}\mathbf{A}\mathbf{x}\right) = \mathbf{x}^{\mathrm{T}}\left(\mathbf{A}+\mathbf{A}\right) = 2\,\mathbf{x}^{\mathrm{T}}\mathbf{A} \tag{10.422}$$

in view of Eqs. (4.16) and (4.24), since Aᵀ = A as per Eq. (4.107); similarly, Eq. (10.421) would break down to

$$\frac{\partial}{\partial \mathbf{x}^{\mathrm{T}}}\left(\mathbf{x}^{\mathrm{T}}\mathbf{A}\mathbf{x}\right) = \left(\mathbf{A}+\mathbf{A}\right)\mathbf{x} = 2\,\mathbf{A}\mathbf{x} \tag{10.423}$$
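The gradient of the quadratic form can be verified against a finite-difference approximation (a pure-Python sketch, not part of the original text), using a deliberately non-symmetric matrix so that the full xᵀ(Aᵀ + A) expression – rather than the 2xᵀA shortcut – is exercised.

```python
def quad_form(A, x):
    """omega = x^T A x for a square matrix A (nested lists) and vector x."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def grad_fd(A, x, h=1e-6):
    """Central finite-difference gradient of x^T A x with regard to x."""
    g = []
    for k in range(len(x)):
        xp, xm = list(x), list(x)
        xp[k] += h
        xm[k] -= h
        g.append((quad_form(A, xp) - quad_form(A, xm)) / (2 * h))
    return g

A = [[1.0, 2.0], [3.0, 4.0]]   # deliberately non-symmetric
x = [0.5, -1.5]
# Eq. (10.420): gradient row vector x^T (A^T + A), component by component
analytic = [sum(x[i] * (A[j][i] + A[i][j]) for i in range(2)) for j in range(2)]
numeric = grad_fd(A, x)
```

Since ω is quadratic, the central difference is exact up to rounding, so the two gradients coincide to machine-level accuracy.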

Another possibility may be conceived stemming from Eq. (10.406), but with x ≡ x{z} and y ≡ y{z} – while A remains independent of z. Under such circumstances, Eq. (10.401) becomes

$$\frac{\partial \omega}{\partial \mathbf{z}} \equiv \frac{\partial}{\partial \mathbf{z}}\left(\mathbf{y}^{\mathrm{T}}\mathbf{A}\mathbf{x}\right) = \frac{\partial}{\partial \mathbf{z}}\left(\mathbf{w}^{\mathrm{T}}\mathbf{x}\right) = \mathbf{x}^{\mathrm{T}}\frac{\partial \mathbf{w}}{\partial \mathbf{z}} + \mathbf{w}^{\mathrm{T}}\frac{\partial \mathbf{x}}{\partial \mathbf{z}} \tag{10.424}$$

after replacing yᵀA by wᵀ in agreement with Eq. (10.407); following combination with Eq. (10.401), one can rewrite Eq. (10.424) as

$$\frac{\partial \omega}{\partial \mathbf{z}} = \mathbf{x}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}}\frac{\partial \mathbf{y}}{\partial \mathbf{z}} + \mathbf{y}^{\mathrm{T}}\frac{\partial \mathbf{A}}{\partial \mathbf{z}} + \mathbf{y}^{\mathrm{T}}\mathbf{A}\frac{\partial \mathbf{x}}{\partial \mathbf{z}} \tag{10.425}$$

– or else

$$\frac{\partial \omega}{\partial \mathbf{z}} \equiv \frac{\partial}{\partial \mathbf{z}}\left(\mathbf{y}\{\mathbf{z}\}^{\mathrm{T}}\mathbf{A}\,\mathbf{x}\{\mathbf{z}\}\right) = \mathbf{x}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}}\frac{\partial \mathbf{y}}{\partial \mathbf{z}} + \mathbf{y}^{\mathrm{T}}\mathbf{A}\frac{\partial \mathbf{x}}{\partial \mathbf{z}}, \tag{10.426}$$

since A is, by hypothesis, independent of z. Consider now the product of two (compatible) matrices, A and B, i.e.

$$\mathbf{Y} = \mathbf{A}\mathbf{B}, \tag{10.427}$$

with the latter (n × p) matrix defined as

$$\mathbf{B} \equiv \begin{bmatrix} b_{1,1} & b_{1,2} & \cdots & b_{1,p} \\ b_{2,1} & b_{2,2} & \cdots & b_{2,p} \\ \vdots & \vdots & & \vdots \\ b_{n,1} & b_{n,2} & \cdots & b_{n,p} \end{bmatrix}, \tag{10.428}$$

A abiding by Eq. (10.389), and the (m × p) matrix Y defined as

$$\mathbf{Y} \equiv \begin{bmatrix} y_{1,1} & y_{1,2} & \cdots & y_{1,p} \\ y_{2,1} & y_{2,2} & \cdots & y_{2,p} \\ \vdots & \vdots & & \vdots \\ y_{m,1} & y_{m,2} & \cdots & y_{m,p} \end{bmatrix} \tag{10.429}$$


– besides both A and B being dependent on scalar parameter ω; the element of Y located in the ith row and jth column will thus read

$$y_{i,j} = \sum_{k=1}^{n} a_{i,k}\,b_{k,j} \tag{10.430}$$

as per Eq. (4.47). The derivative of y_{i,j} with regard to ω looks like

$$\frac{\partial y_{i,j}}{\partial \omega} = \frac{\partial}{\partial \omega}\sum_{k=1}^{n} a_{i,k}\,b_{k,j} = \sum_{k=1}^{n}\frac{\partial}{\partial \omega}\left(a_{i,k}\,b_{k,j}\right) = \sum_{k=1}^{n}\frac{\partial a_{i,k}}{\partial \omega}\,b_{k,j} + \sum_{k=1}^{n} a_{i,k}\,\frac{\partial b_{k,j}}{\partial \omega} \tag{10.431}$$

in view of Eqs. (10.107), (10.119), and (10.430); a more condensed form is possible, viz.

$$\frac{\partial \mathbf{Y}}{\partial \omega} \equiv \frac{\partial}{\partial \omega}\left(\mathbf{A}\mathbf{B}\right) = \frac{\partial \mathbf{A}}{\partial \omega}\,\mathbf{B} + \mathbf{A}\,\frac{\partial \mathbf{B}}{\partial \omega}, \tag{10.432}$$

with the aid of Eqs. (4.47), (10.389), and (10.428). A useful application of the above result encompasses calculation of the derivative of A⁻¹ – on the hypothesis that A is an (m × m) nonsingular matrix, with constitutive elements dependent on scalar parameter ω; recalling the definition of inverse matrix as per Eq. (4.124), one may differentiate both sides to get

$$\frac{\partial}{\partial \omega}\left(\mathbf{A}^{-1}\mathbf{A}\right) = \frac{\partial \mathbf{I}_m}{\partial \omega} = \mathbf{0}_{m\times m} \tag{10.433}$$

since I_m is, by definition, independent of ω – whereas Eq. (10.432) has it that

$$\frac{\partial}{\partial \omega}\left(\mathbf{A}^{-1}\mathbf{A}\right) = \frac{\partial \mathbf{A}^{-1}}{\partial \omega}\,\mathbf{A} + \mathbf{A}^{-1}\frac{\partial \mathbf{A}}{\partial \omega}, \tag{10.434}$$

after replacing A by A⁻¹ and B by A. Elimination of ∂(A⁻¹A)/∂ω between Eqs. (10.433) and (10.434) unfolds

$$\frac{\partial \mathbf{A}^{-1}}{\partial \omega}\,\mathbf{A} + \mathbf{A}^{-1}\frac{\partial \mathbf{A}}{\partial \omega} = \mathbf{0}_{m\times m}, \tag{10.435}$$

or else

$$\frac{\partial \mathbf{A}^{-1}}{\partial \omega}\,\mathbf{A} = \mathbf{0}_{m\times m} - \mathbf{A}^{-1}\frac{\partial \mathbf{A}}{\partial \omega} = -\mathbf{A}^{-1}\frac{\partial \mathbf{A}}{\partial \omega} \tag{10.436}$$

after moving A⁻¹∂A/∂ω to the right-hand side; post-multiplication of both sides by A⁻¹ transforms Eq. (10.436) to

$$\frac{\partial \mathbf{A}^{-1}}{\partial \omega}\,\mathbf{A}\mathbf{A}^{-1} = \frac{\partial \mathbf{A}^{-1}}{\partial \omega}\,\mathbf{I}_m = -\mathbf{A}^{-1}\frac{\partial \mathbf{A}}{\partial \omega}\,\mathbf{A}^{-1} \tag{10.437}$$

with the aid of Eqs. (4.56) and (4.124), which readily becomes

$$\frac{\partial \mathbf{A}^{-1}}{\partial \omega} = -\mathbf{A}^{-1}\frac{\partial \mathbf{A}}{\partial \omega}\,\mathbf{A}^{-1} \tag{10.438}$$

as per Eq. (4.61).
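The derivative of the inverse can likewise be spot-checked numerically (a sketch, not part of the original text; the parameter-dependent matrix below is a hypothetical example): the right-hand side −A⁻¹(∂A/∂ω)A⁻¹ is compared against a central finite difference of the inverse itself.

```python
def inv2(M):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(P, Q):
    """Product of two 2x2 matrices."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def A(w):
    """A hypothetical nonsingular 2x2 matrix depending on a scalar w."""
    return [[1 + w, w], [0.0, 2 - w]]

def dA(w):
    """Elementwise derivative of A with regard to w."""
    return [[1.0, 1.0], [0.0, -1.0]]

w, h = 0.3, 1e-6
Ai = inv2(A(w))
prod = matmul2(matmul2(Ai, dA(w)), Ai)
analytic = [[-prod[i][j] for j in range(2)] for i in range(2)]  # -A^{-1} A' A^{-1}
Ap, Am = inv2(A(w + h)), inv2(A(w - h))
numeric = [[(Ap[i][j] - Am[i][j]) / (2 * h) for j in range(2)] for i in range(2)]
```

Agreement to within the finite-difference truncation error confirms the identity derived above.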


10.2.8 Derivative of Determinant

Remember Laplace's theorem applied to an (n × n) matrix A ≡ A{α,β} as per Eq. (6.41), dependent on scalars α and β; since |A| is a scalar directly related to the (scalar) elements a_{i,j}{α,β} of A, i.e. |A| ≡ |A|{a_{i,j}{α,β}} for i = 1, 2, …, n and j = 1, 2, …, n, one may apply the chain differentiation rule as per Eq. (10.205) to write

$$\frac{\partial |\mathbf{A}|}{\partial \alpha} = \sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial |\mathbf{A}|}{\partial a_{i,j}}\,\frac{\partial a_{i,j}}{\partial \alpha} \tag{10.439}$$

as derivative of |A| with regard to α. Laplace's theorem allows index i, pertaining to row-based expansion, be chosen at will, so one may write

$$\frac{\partial |\mathbf{A}|}{\partial a_{i,j}} = \frac{\partial}{\partial a_{i,j}}\sum_{k=1}^{n}\left(-1\right)^{i+k} a_{i,k}\left|\mathbf{A}_{-i,-k}\right| \tag{10.440}$$

with the aid of Eq. (6.41); linearity of the differential operator as per Eq. (10.107), coupled with the rule of differentiation of a product as per Eq. (10.119), then yields

$$\frac{\partial |\mathbf{A}|}{\partial a_{i,j}} = \sum_{k=1}^{n}\frac{\partial}{\partial a_{i,j}}\left(\left(-1\right)^{i+k} a_{i,k}\left|\mathbf{A}_{-i,-k}\right|\right) = \sum_{k=1}^{n}\left(-1\right)^{i+k}\frac{\partial a_{i,k}}{\partial a_{i,j}}\left|\mathbf{A}_{-i,-k}\right| + \sum_{k=1}^{n}\left(-1\right)^{i+k} a_{i,k}\,\frac{\partial \left|\mathbf{A}_{-i,-k}\right|}{\partial a_{i,j}} \tag{10.441}$$

Determinant |A_{−i,−k}| will not be a function of a_{i,j}, since (by definition) the ith row is missing; therefore,

$$\frac{\partial \left|\mathbf{A}_{-i,-k}\right|}{\partial a_{i,j}} = 0 \tag{10.442}$$

Equation (10.442) supports simplification of Eq. (10.441) to

$$\frac{\partial |\mathbf{A}|}{\partial a_{i,j}} = \sum_{k=1}^{n}\left(-1\right)^{i+k}\left|\mathbf{A}_{-i,-k}\right|\frac{\partial a_{i,k}}{\partial a_{i,j}} = \sum_{k=1}^{j-1}\left(-1\right)^{i+k}\left|\mathbf{A}_{-i,-k}\right|\frac{\partial a_{i,k}}{\partial a_{i,j}} + \left(-1\right)^{i+j}\left|\mathbf{A}_{-i,-j}\right|\frac{\partial a_{i,j}}{\partial a_{i,j}} + \sum_{k=j+1}^{n}\left(-1\right)^{i+k}\left|\mathbf{A}_{-i,-k}\right|\frac{\partial a_{i,k}}{\partial a_{i,j}}, \tag{10.443}$$

along with splitting of the outstanding summation for convenience. Furthermore, all elements of A are independent of each other by hypothesis, so

$$\frac{\partial a_{i,k}}{\partial a_{i,j}} = \delta_{k,j}, \tag{10.444}$$

where δ_{k,j} denotes Kronecker's delta function defined by Eq. (6.167) – thus implying that Eq. (10.443) reduces to merely

$$\frac{\partial |\mathbf{A}|}{\partial a_{i,j}} = \left(-1\right)^{i+j}\left|\mathbf{A}_{-i,-j}\right|, \tag{10.445}$$

Differentials, Derivatives, and Partial Derivatives

with concomitant simplification of Eq. (10.439) to

∂|A|/∂α = Σ_{i=1}^{n} Σ_{j=1}^{n} (−1)^{i+j} |A−i,−j| (∂ai,j/∂α) = Σ_{j=1}^{n} Σ_{i=1}^{n} (−1)^{j+i} |(A^T)−j,−i| (∂ai,j/∂α)    (10.446)

On the other hand, if B and C are two (n × n) matrices, then the algorithm of their product, labeled as Eq. (4.47), supports

(BC)j,k = Σ_{i=1}^{n} bj,i ci,k;  j = 1, 2, …, n; k = 1, 2, …, n;    (10.447)

after taking traces of both sides, Eq. (10.447) becomes

tr{BC} = Σ_{j=1}^{n} Σ_{i=1}^{n} bj,i ci,j    (10.448)

Comparative inspection of Eqs. (10.446) and (10.448) indicates that

bj,i = (−1)^{j+i} |(A^T)−j,−i|    (10.449)

and

ci,j = ∂ai,j/∂α    (10.450)

support the statement

∂|A|/∂α = tr{Â(∂A/∂α)}    (10.451)

– where Eq. (6.171) was recalled, as per the definition of Â. Multiplication and division by |A| in the argument of the trace in the right-hand side transforms Eq. (10.451) to

∂|A|/∂α = tr{|A| (Â/|A|) (∂A/∂α)};    (10.452)

the inverse matrix may be expressed as Â divided by |A|, in agreement with Eq. (6.178), so one may rewrite Eq. (10.452) as

∂|A|/∂α = tr{|A| A−1 (∂A/∂α)}    (10.453)

The linear nature of the trace operator permits |A| be taken out for being a scalar, thus transforming Eq. (10.453) to

∂|A|/∂α = |A| tr{A−1 (∂A/∂α)};    (10.454)

Eq. (10.454) is often known as Jacobi's formula, and applies also to partial differentiation with regard to β – after mere swapping of α by β. Once in possession of Eq. (10.454), one may proceed to the second (cross) derivative of |A| as

∂²|A|/∂β∂α = (∂/∂β)(∂|A|/∂α) = (∂/∂β){|A| tr{A−1 (∂A/∂α)}};    (10.455)

– where the rule of differentiation of a product of scalars yields

∂²|A|/∂β∂α = (∂|A|/∂β) tr{A−1 (∂A/∂α)} + |A| (∂/∂β)tr{A−1 (∂A/∂α)},    (10.456)

consistent with Eq. (10.119); both trace and derivative are linear operators, so exchange thereof may be effected as

∂²|A|/∂β∂α = (∂|A|/∂β) tr{A−1 (∂A/∂α)} + |A| tr{(∂/∂β)(A−1 (∂A/∂α))},    (10.457)

while differentiation of a product of matrices may to advantage resort to Eq. (10.432) to yield

∂²|A|/∂β∂α = (∂|A|/∂β) tr{A−1 (∂A/∂α)} + |A| tr{(∂A−1/∂β)(∂A/∂α) + A−1 (∂²A/∂β∂α)}    (10.458)

After recalling the results labeled as Eqs. (10.438) and (10.454), one may reformulate Eq. (10.458) to

∂²|A|/∂β∂α = |A| tr{A−1 (∂A/∂β)} tr{A−1 (∂A/∂α)} + |A| tr{−A−1 (∂A/∂β) A−1 (∂A/∂α) + A−1 (∂²A/∂β∂α)},    (10.459)

where |A| may, in turn, be factored out to give

∂²|A|/∂β∂α = |A| [tr{A−1 (∂A/∂β)} tr{A−1 (∂A/∂α)} + tr{−A−1 (∂A/∂β) A−1 (∂A/∂α) + A−1 (∂²A/∂β∂α)}];    (10.460)

Eq. (10.460) frequently appears as

∂²|A|/∂β∂α = |A| [tr{A−1 (∂A/∂β)} tr{A−1 (∂A/∂α)} − tr{A−1 (∂A/∂β) A−1 (∂A/∂α)} + tr{A−1 (∂²A/∂β∂α)}],    (10.461)

since the trace of a sum of matrices is equal to the sum of traces of said matrices, in view of Eqs. (4.4) and (4.94).
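Jacobi's formula, Eq. (10.454), can be verified numerically; the sketch below (Python, with a hypothetical 2 × 2 matrix A{α} not taken from the text) compares a central finite-difference estimate of ∂|A|/∂α with |A| tr{A−1 (∂A/∂α)}.

```python
# Numerical check of Jacobi's formula, Eq. (10.454):
# d|A|/da = |A| tr(A^{-1} dA/da), for the hypothetical A(a) = [[2+a, 1], [a^2, 3]].

def A(a):
    return [[2.0 + a, 1.0], [a*a, 3.0]]

def dA(a):  # exact elementwise derivative dA/da
    return [[1.0, 0.0], [2.0*a, 0.0]]

def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

def inv2(m):
    d = det2(m)
    return [[m[1][1]/d, -m[0][1]/d], [-m[1][0]/d, m[0][0]/d]]

def trprod(p, q):  # tr(PQ) for 2x2 matrices
    return sum(p[i][k]*q[k][i] for i in range(2) for k in range(2))

a, h = 0.7, 1e-6
lhs = (det2(A(a + h)) - det2(A(a - h))) / (2*h)   # finite-difference d|A|/da
rhs = det2(A(a)) * trprod(inv2(A(a)), dA(a))      # Jacobi's formula
```

Here |A|{a} = 3(2 + a) − a², so both sides should evaluate to 3 − 2a.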

10.3 Dependence Between Functions

Two continuous univariate functions, say, f{x} and g{x}, are said to be linearly independent of each other when

k1 f{x} + k2 g{x} = 0    (10.462)

can be satisfied only by taking

k1 = k2 = 0    (10.463)

Equation (10.462) being identically satisfied implies that

k1 df{x}/dx + k2 dg{x}/dx = 0    (10.464)

should also be satisfied, on the hypothesis that both f{x} and g{x} are differentiable – and as obtained via differentiation of both sides, with the aid of Eq. (10.122). The set constituted by Eqs. (10.462) and (10.464) may be rewritten in matrix form as

[ f{x}  g{x} ; df{x}/dx  dg{x}/dx ][ k1 ; k2 ] = [ 0 ; 0 ];    (10.465)

Eq. (10.465) is to be solved for k1 and k2 as unknowns by resorting to Cramer's rule as conveyed by Eq. (7.59), thus attaining

k1 = | 0  g{x} ; 0  dg{x}/dx | / |W| = (0·dg{x}/dx − 0·g{x}) / |W| = 0/|W|    (10.466)

coupled with

k2 = | f{x}  0 ; df{x}/dx  0 | / |W| = (0·f{x} − 0·df{x}/dx) / |W| = 0/|W|    (10.467)

– where the rule of calculation of a second-order determinant, labeled as Eq. (1.10), was taken advantage of. The determinant in denominator of Eqs. (10.466) and (10.467) is called determinant of the Wronskian matrix, W, associated with functions f{x} and g{x}, defined as

W ≡ [ f{x}  g{x} ; df{x}/dx  dg{x}/dx ];    (10.468)

this concept was introduced in 1776 by Józef M. Hoene-Wroński, a Polish Messianist philosopher and mathematician, and so named in 1882 by Sir Thomas Muir, a Scottish mathematician. Based on Eqs. (10.466) and (10.467), one concludes that

k1 = k2 = 0 ⇐ |W| ≠ 0    (10.469)

– i.e. k1 = k2 = 0 is guaranteed as the sole solution provided that the Wronskian determinant is significant; in other words, |W| ≠ 0 holds as condition for linear independence of f{x} and g{x}. The above rationale may be readily extended to any number of functions, say, f1{x}, f2{x}, …, fn{x}, via enforcement of

|W| ≡ | f1  f2  ⋯  fn ; df1/dx  df2/dx  ⋯  dfn/dx ; ⋮ ; d^{n−1}f1/dx^{n−1}  d^{n−1}f2/dx^{n−1}  ⋯  d^{n−1}fn/dx^{n−1} | ≠ 0,    (10.470)

encompassing a generalized nth order Wronskian determinant – which contains derivatives up to the (n − 1)th order. Suppose now that f{x,y} and g{x,y} are two continuous functions of independent variables x and y, bearing continuous first-order partial derivatives; said functions are said to be functionally dependent on each other when a relationship of the type F{f{x,y}, g{x,y}}

= 0    (10.471)

exists, which holds true for every x and y within the region of definition of f{x,y} and g{x,y}. One consequence of the two sides of Eq. (10.471) being identically equal is

dF{f,g} = 0,    (10.472)

consistent with Eq. (10.1), or else

(∂F/∂x)dx + (∂F/∂y)dy = 0,    (10.473)

stemming from Eq. (10.6); consequently, one finds that

∂F/∂x = ∂F/∂y = 0    (10.474)

is required to guarantee (general) validity of Eq. (10.472), irrespective of the actual variation in independent variables dx and dy. A similar reasoning supports

(∂F/∂f)df + (∂F/∂g)dg = 0    (10.475)

when applied to Eq. (10.471), so as to satisfy Eq. (10.472) – using the alternative formulation of intermediate functions suggested by Eq. (10.20); division of both sides by dx transforms Eq. (10.475) to

(∂F/∂f)(∂f/∂x) + (∂F/∂g)(∂g/∂x) = 0,    (10.476)

and likewise

(∂F/∂f)(∂f/∂y) + (∂F/∂g)(∂g/∂y) = 0    (10.477)

when division by dy is performed instead. After rewriting Eqs. (10.476) and (10.477) in matrix form as

[ ∂f/∂x  ∂g/∂x ; ∂f/∂y  ∂g/∂y ][ ∂F/∂f ; ∂F/∂g ] = [ 0 ; 0 ],    (10.478)

one may again invoke Cramer's rule to write

∂F/∂f = | 0  ∂g/∂x ; 0  ∂g/∂y | / |J| = (0·∂g/∂y − 0·∂g/∂x) / |J| = 0/|J|    (10.479)

complemented by

∂F/∂g = | ∂f/∂x  0 ; ∂f/∂y  0 | / |J| = (0·∂f/∂x − 0·∂f/∂y) / |J| = 0/|J|;    (10.480)

the determinant in denominator of Eqs. (10.479) and (10.480) is termed determinant of the Jacobian matrix, J, of functions f{x,y} and g{x,y} – defined, in turn, as

|J| ≡ ∂(f,g)/∂(x,y) ≡ | ∂f/∂x  ∂g/∂x ; ∂f/∂y  ∂g/∂y |    (10.481)

One concludes from Eqs. (10.479) and (10.480) that

∂F/∂f = ∂F/∂g = 0 ⇐ |J| ≠ 0    (10.482)

is the sole solution of Eq. (10.472), when |J| is significant – since |J| = 0 would lead to unknown quantities of the type 0/0; in other words, ∂F/∂f = ∂F/∂g = 0 will not be the single solution unless the associated Jacobian determinant is non-nil. For a set of n functions, f1{x1, x2, …, xn}, f2{x1, x2, …, xn}, …, fn{x1, x2, …, xn}, functional independence will therefore exist when

|J| ≡ ∂(f1, f2, …, fn)/∂(x1, x2, …, xn) ≡ | ∂f1/∂x1  ∂f2/∂x1  ⋯  ∂fn/∂x1 ; ∂f1/∂x2  ∂f2/∂x2  ⋯  ∂fn/∂x2 ; ⋮ ; ∂f1/∂xn  ∂f2/∂xn  ⋯  ∂fn/∂xn | ≠ 0    (10.483)

– where the generalized Jacobian determinant encompasses first-order partial derivatives of f1, f2, …, fn with regard to x1, x2, …, xn. Note that functional independence is a more general concept than linear independence – in that Eq. (10.471) is less constrained in the algebraic operations contained under label F than the strict linear combination underlying Eq. (10.462).
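Both tests above are easy to exercise numerically; the sketch below (Python, with illustrative functions not taken from the text) evaluates the two-function Wronskian determinant of Eq. (10.468) for sin x and cos x, and the Jacobian determinant of Eq. (10.481) for the functionally dependent pair f = x + y, g = (x + y)².

```python
import math

# Wronskian determinant, Eq. (10.468): for f = sin x, g = cos x,
# |W| = sin x (-sin x) - cos x cos x = -1 != 0, so the pair is
# linearly independent at every x.
def wronskian(f, df, g, dg, x):
    return f(x)*dg(x) - g(x)*df(x)

W_sin_cos = wronskian(math.sin, math.cos, math.cos,
                      lambda x: -math.sin(x), 0.3)

# Jacobian determinant, Eq. (10.481): f = x + y and g = (x + y)^2 satisfy
# F{f,g} = g - f^2 = 0, so |J| must vanish identically.
def jacobian_det(x, y):
    dfdx, dfdy = 1.0, 1.0                  # partials of f = x + y
    dgdx, dgdy = 2*(x + y), 2*(x + y)      # partials of g = (x + y)^2
    return dfdx*dgdy - dgdx*dfdy

J = jacobian_det(1.2, -0.4)
```

The Wronskian evaluates to −1 (linear independence), whereas the Jacobian determinant vanishes (functional dependence).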

10.4 Optimization of Univariate Continuous Functions

The ultimate purpose of optimization is to determine the best (e.g. the most profitable) state of a system, or set of operating conditions for a process; normally, an infinite number of solutions exist, so the choice of one particular solution should follow some objective criterion. The case of interest here pertains to functions of a single variable that exhibit a (local) point of maximum or minimum not coincident with the boundaries of the range of interest – which may (or not) be subjected to mathematical restrictions, typically arising from physicochemical constraints.

10.4.1 Constraint-free

If f{x} denotes the objective function of interest on independent variable x, defined within interval [a,b] (with b > a), then the necessary condition for an optimum (or stationary) point to exist at x = ζ looks like

df{x}/dx|ζ = 0;  a < ζ < b;    (10.484)

despite being necessary, Eq. (10.484) does not convey a sufficient condition for an optimum (since an inflection point may arise as well) – nor does it discriminate its nature, i.e. minimum or maximum, in the case of a true local optimum. To appropriately address this issue, suppose that the derivatives of every order of f{x} – from the first-order, as included in Eq. (10.484), up to the nth-order – exist and are nil at x = ζ; assume, in addition, that the (n + 1)th-order derivative exists and is continuous in a vicinity of x = ζ, but is the first one taking a non-nil value in interval ]ζ,x[ or ]x,ζ[ (whatever appropriate). One may accordingly express the said function via Taylor's expansion around x = ζ, viz.

f{x} − f{x}|ζ = (d^{n+1}f{x}/dx^{n+1}|ξ)(x − ζ)^{n+1}/(n + 1)!;  ζ < ξ < x ∨ x < ξ < ζ;    (10.485)

when n + 1 is even, (x − ζ)^{n+1} is positive on both sides of ζ, so a positive (n + 1)th-order derivative enforces

d^{n+1}f{x}/dx^{n+1}|ξ > 0 ⇒ f{x} > f{x}|ζ;  x > ζ,    (10.486)

complemented with

d^{n+1}f{x}/dx^{n+1}|ξ > 0 ⇒ f{x} > f{x}|ζ;  x < ζ    (10.487)

– which may be condensed to

d^{i}f{x}/dx^{i}|ζ = 0 ∧ d^{n+1}f{x}/dx^{n+1}|ξ > 0;  1 ≤ i ≤ n, n odd;  ζ < ξ < x ∨ x < ξ < ζ;    (10.488)

Eq. (10.488) constitutes the sufficient condition for a local minimum of f{x} at x = ζ, which already encompasses Eq. (10.484) when i = 1. By the same token, a first non-nil derivative of (even) order n + 1 that is negative at ξ constitutes the sufficient condition for a local maximum of f{x} at x = ζ.
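The higher-order test of Eq. (10.488) can be illustrated numerically; the sketch below (Python, with the illustrative function f{x} = x⁴, not taken from the text) estimates the first four derivatives at ζ = 0 by repeated central differences – the second-derivative test is inconclusive, yet the fourth derivative (n + 1 = 4, n = 3 odd) is positive, flagging a minimum.

```python
# Sufficient condition for a minimum, Eq. (10.488), on f(x) = x^4 at z = 0:
# derivatives of orders 1..3 vanish, while the 4th is positive.
def f(x):
    return x**4

def deriv(g, x, k, h=1e-2):
    # k-th derivative by k nested central differences (adequate for small k)
    if k == 0:
        return g(x)
    return (deriv(g, x + h, k - 1, h) - deriv(g, x - h, k - 1, h)) / (2*h)

d1, d2, d3, d4 = (deriv(f, 0.0, k) for k in (1, 2, 3, 4))
# direct confirmation that x = 0 is a local minimum
is_min = all(f(x) > f(0.0) for x in (-0.1, -0.01, 0.01, 0.1))
```

For a quartic, the nested central differences reproduce the fourth derivative (24) exactly up to rounding.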

If f{x,y} denotes the objective function on independent variables x and y, defined within intervals [a,b] (with b > a) and [c,d] (with d > c), then the necessary condition for an optimum at point of coordinates (ζ,ξ) – with a < ζ < b and c < ξ < d – in the x-direction is analogous to Eq. (10.484), viz.

(∂f/∂x)y |ζ,ξ = 0,    (10.523)

because (∂f/∂x)y behaves as a univariate function in x; by the same token,

(∂f/∂y)x |ζ,ξ = 0    (10.524)

conveys the necessary condition for an optimum at (ζ,ξ) in the y-direction, since (∂f/∂y)x represents a univariate function in y. Unlike a univariate function that evolves in a single direction (say, x), there are now infinite directions for the potential variation of f{x,y}; hence, one should set conditions, similar to either Eq. (10.523) or Eq. (10.524), to encompass every such s-direction – thus guaranteeing that (ζ,ξ) is an actual extremum. One may accordingly resort to the definition of directional derivative, viz.

(∂f/∂s)|ζ,ξ = (∂f/∂x)|ζ,ξ cos θ + (∂f/∂y)|ζ,ξ sin θ,    (10.525)

where θ denotes the angle formed between the s- and x-directions, in agreement with Eq. (10.101) – which, in view of Eqs. (10.523) and (10.524), coupled with the bounded nature of sin θ and cos θ, gives rise to

(∂f/∂s)|ζ,ξ = 0;    (10.526)

hence, Eqs. (10.523) and (10.524) are indeed equivalent to Eq. (10.526), as long as a necessary condition for extremum behavior is concerned. The nature of the aforementioned extremum should again be ascertained via the behavior of higher order partial derivatives – so it is convenient to expand f{x,y} as a bivariate Taylor's series about (ζ,ξ), according to

f{x,y} − f{x,y}|ζ,ξ = (∂f{x,y}/∂x)|ζ,ξ (x − ζ) + (∂f{x,y}/∂y)|ζ,ξ (y − ξ)
  + (∂²f{x,y}/∂x²)|ζ,ξ (x − ζ)²/2 + (∂²f{x,y}/∂x∂y)|ζ,ξ (x − ζ)(y − ξ)/2
  + (∂²f{x,y}/∂y∂x)|ζ,ξ (y − ξ)(x − ζ)/2 + (∂²f{x,y}/∂y²)|ζ,ξ (y − ξ)²/2
  + β0 (√((x − ζ)² + (y − ξ)²))³;    (10.527)

here β0 tends to 0 when √((x − ζ)² + (y − ξ)²) tends to zero, and plays a role similar to that of Lagrange's remainder, after taking Eqs. (10.4) and (10.5) into account. The hypotheses

laid down by Eqs. (10.523) and (10.524), coupled with Schwarz's theorem on the similarity of cross derivatives of continuous functions, permit simplification of Eq. (10.527) to

f{x,y} − f{x,y}|ζ,ξ = (β11/2)(x − ζ)² + β12 (x − ζ)(y − ξ) + (β22/2)(y − ξ)² + β0 ρ³;    (10.528)

auxiliary constants β11, β12, β22, and ρ are, for convenience, defined as

β11 ≡ (∂²f{x,y}/∂x²)|ζ,ξ,    (10.529)

β12 ≡ (∂²f{x,y}/∂x∂y)|ζ,ξ,    (10.530)

β22 ≡ (∂²f{x,y}/∂y²)|ζ,ξ,    (10.531)

and

ρ ≡ √((x − ζ)² + (y − ξ)²)    (10.532)

– with ρ accounting for the distance of point (x,y) to reference point (ζ,ξ). On the other hand, trigonometric considerations consistent with Eqs. (2.288) and (2.290), coupled with Fig. 2.10a, and Pythagoras' theorem as per Eq. (2.431) allow one to write

cos θ = (x − ζ)/ρ    (10.533)

and

sin θ = (y − ξ)/ρ    (10.534)

– since θ describes the amplitude of the angle formed by the straight line defined by points (x,y) and (ζ,ξ) with the horizontal axis; Eqs. (10.533) and (10.534) support transformation of Eq. (10.528) to

f{x,y} − f{x,y}|ζ,ξ = (β11/2)ρ² cos²θ + β12 ρ² sin θ cos θ + (β22/2)ρ² sin²θ + β0 ρ³    (10.535)

or, after factoring out ρ²/2,

f{x,y} − f{x,y}|ζ,ξ = ½ρ²(β11 cos²θ + 2β12 sin θ cos θ + β22 sin²θ + 2β0 ρ)    (10.536)

Algebraic rearrangement of the expression in parenthesis, via addition and subtraction of (β12²/β11)sin²θ, unfolds

f{x,y} − f{x,y}|ζ,ξ = ½ρ²(β11 cos²θ + 2β12 sin θ cos θ + (β12²/β11)sin²θ − (β12²/β11)sin²θ + β22 sin²θ + 2β0 ρ)    (10.537)

– whereas factoring out of 1/β11 separately in the first three and fifth terms in parenthesis leads, in turn, to

f{x,y} − f{x,y}|ζ,ξ = ½ρ²((β11² cos²θ + 2β11β12 sin θ cos θ + β12² sin²θ)/β11 − (β12² sin²θ)/β11 + (β11β22 sin²θ)/β11 + 2β0 ρ),    (10.538)

on the obvious assumption that β11 ≠ 0; Newton's binomial theorem, as conveyed by Eq. (2.237), may be called upon to transform Eq. (10.538) to

f{x,y} − f{x,y}|ζ,ξ = ½ρ²((β11 cos θ + β12 sin θ)²/β11 + ((β11β22 − β12²)/β11)sin²θ + 2β0 ρ),    (10.539)

where the second and third fractions were meanwhile collapsed, and sin²θ factored out. Note that (β11 cos θ + β12 sin θ)², sin²θ, and ρ²/2 in Eq. (10.539) take only nonnegative values, whereas the second term therein may be written in determinant form as

f{x,y} − f{x,y}|ζ,ξ = ½ρ²((β11 cos θ + β12 sin θ)²/β11 + (sin²θ/β11)|β11 β12 ; β12 β22| + 2β0 ρ),    (10.540)

after retrieving Eq. (1.10). Various possibilities for the sign of the determinant in Eq. (10.540) are now to be explored; if |β11 β12 ; β12 β22| > 0 and β11 > 0, then the first two terms in parenthesis are nonnegative – so, for ρ sufficiently small, i.e. in the immediate vicinity of (ζ,ξ), one finds that

lim_{ρ→0} (f{x,y} − f{x,y}|ζ,ξ) = lim_{ρ→0} ½ρ²((β11 cos θ + β12 sin θ)²/β11 + (sin²θ/β11)|β11 β12 ; β12 β22| + 2β0 ρ) > 0 ⇐ β11 > 0 ∧ |β11 β12 ; β12 β22| > 0,    (10.541)

irrespective of the sign of β0 ρ, because (as seen above) β0 → 0 when ρ → 0; based on Eqs. (10.529)–(10.531) and (10.541), one concludes that

lim_{ρ→0} f{x,y} > f{x,y}|ζ,ξ ⇐ (ζ,ξ) minimum ⇐ ∂²f/∂x² > 0 ∧ |∂²f/∂x²  ∂²f/∂x∂y ; ∂²f/∂x∂y  ∂²f/∂y²| > 0    (10.542)

for every (close) position of (x,y) relative to (ζ,ξ) – so (ζ,ξ) corresponds to a minimum. When |β11 β12 ; β12 β22| > 0 but β11 < 0, then the first two terms in parenthesis in Eq. (10.540) are negative, meaning that sufficiently small values of ρ lead to

lim_{ρ→0} (f{x,y} − f{x,y}|ζ,ξ) = lim_{ρ→0} ½ρ²((β11 cos θ + β12 sin θ)²/β11 + (sin²θ/β11)|β11 β12 ; β12 β22| + 2β0 ρ) < 0    (10.543)

for either sign of β0 ρ, in view of its being infinitesimal – which is equivalent to

lim_{ρ→0} f{x,y} < f{x,y}|ζ,ξ ⇐ (ζ,ξ) maximum ⇐ ∂²f/∂x² < 0 ∧ |∂²f/∂x²  ∂²f/∂x∂y ; ∂²f/∂x∂y  ∂²f/∂y²| > 0    (10.544)

for every location in the neighborhood of (ζ,ξ); this point accordingly represents a maximum. If |β11 β12 ; β12 β22| < 0, then the signs of (β11 cos θ + β12 sin θ)²/β11 and (sin²θ/β11)|β11 β12 ; β12 β22| will be opposite, so their sum may be positive or negative depending on the direction characterized by angle θ – knowing that the contribution of 2β0 ρ will, once again, be negligible for points sufficiently close to (ζ,ξ); a saddle point arises in this case. For |β11 β12 ; β12 β22| = 0 and β11 ≠ 0, Eq. (10.540) becomes

f{x,y} − f{x,y}|ζ,ξ = ½ρ²((β11 cos θ + β12 sin θ)²/β11 + 2β0 ρ)    (10.545)
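The classification implied by Eqs. (10.542) and (10.544), together with the saddle and degenerate cases, can be condensed into a short routine; the sketch below (Python, with illustrative second derivatives evaluated at stationary points of hypothetical functions) is not taken from the text.

```python
# Second-order classification of a stationary point (z, x) via b11 = f_xx,
# b12 = f_xy, b22 = f_yy, following Eqs. (10.542), (10.544) and the
# saddle/degenerate discussion around Eq. (10.545).
def classify(b11, b12, b22):
    det = b11*b22 - b12*b12        # determinant |b11 b12 ; b12 b22|
    if det > 0:
        return "minimum" if b11 > 0 else "maximum"
    if det < 0:
        return "saddle"
    return "inconclusive"          # Eq. (10.545): higher-order terms needed

kind_bowl   = classify( 2.0, 0.0,  2.0)  # f = x^2 + y^2 at (0,0)
kind_saddle = classify( 2.0, 0.0, -2.0)  # f = x^2 - y^2 at (0,0)
kind_cap    = classify(-2.0, 0.0, -2.0)  # f = -(x^2 + y^2) at (0,0)
```
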

– which does not provide sufficient information for a final conclusion; for instance, if β11 cos θ + β12 sin θ = 0, i.e. when tan θ is given by −β11/β12, then the sign of f{x,y} − f{x,y}|ζ,ξ depends solely on the sign of β0 ρ, no matter its magnitude – while other directions witness disparate patterns. Hence, a larger number of terms of Taylor's expansion will be required, further to those included in Eq. (10.527).

10.5.2 Subjected to Constraints

The approach here essentially mimics that followed for univariate functions – although generalized to more variables. Consider, in this regard, a function f of N variables, x1, x2, …, xN, viz.

f ≡ f{x1, x2, …, xN},    (10.546)

subjected to M constraints, g1, g2, …, gM, of the form

gj{x1, x2, …, xN} = 0;  j = 1, 2, …, M;    (10.547)

Lagrange's multiplier method may then be extended, via definition of auxiliary function

ϕ{x1, x2, …, xN, λ1, λ2, …, λM} ≡ f{x1, x2, …, xN} − Σ_{j=1}^{M} λj gj{x1, x2, …, xN}    (10.548)

– in much the same way Eq. (10.512) was put forward. The local extremum will be found by simultaneously solving the set of N (algebraic) equations given by

(∂ϕ/∂xi)|_{xk≠i, λ} = 0;  i = 1, 2, …, N    (10.549)

– where subscript λ denotes all λ1, λ2, …, λM remaining fixed – coupled to the set of M (algebraic) equations

(∂ϕ/∂λj)|_{x, λk≠j} = 0;  j = 1, 2, …, M    (10.550)

– where subscript x means that all x1, x2, …, xN are kept constant. Therefore, a total of N + M equations will have to be solved – versus just 2 + 1 in the univariate case subjected to a single constraint, described by Eqs. (10.513)–(10.515).
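The N + M equations of Eqs. (10.549) and (10.550) can be written out for a small illustrative problem; the sketch below (Python, with the hypothetical objective f = x1·x2 constrained by g1 = x1 + x2 − 1 = 0, not taken from the text) verifies the closed-form solution and confirms it by a brute-force scan along the constraint.

```python
# Lagrange's multiplier method, Eqs. (10.548)-(10.550), with N = 2, M = 1:
#   phi = x1*x2 - lam*(x1 + x2 - 1)
# The N + M = 3 stationarity equations read
#   d(phi)/dx1  = x2 - lam = 0
#   d(phi)/dx2  = x1 - lam = 0
#   d(phi)/dlam = -(x1 + x2 - 1) = 0
# whose simultaneous solution is x1 = x2 = lam = 1/2.
x1 = x2 = lam = 0.5
residuals = (x2 - lam, x1 - lam, -(x1 + x2 - 1.0))

# brute-force scan over the constraint confirms f is largest at x1 = x2 = 1/2
f_best = max(t/1000 * (1 - t/1000) for t in range(1001))
```
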


11 Integrals

Integration may be regarded as the inverse operation to differentiation; function F{x} is accordingly said to be the (indefinite) integral of f{x}, viz.

F{x} ≡ ∫f{x}dx ⇐ dF/dx = f,    (11.1)

provided that dF/dx equals f{x}. Function f{x} is called integrand, and f{x} is integrable when F{x} exists. If the focus of one's study is the area, Af,[a,b], below a curve of equation y = f{x} – lower bounded by the horizontal axis, described by y = 0, left bounded by the straight line of equation x = a, and right bounded by the straight line of equation x = b – then a constant, termed definite integral, will be at stake, such that

∫_a^b f{x}dx ≡ Af,[a,b];    (11.2)

as will be seen, ∫_a^b f{x}dx is normally calculated via a linear combination of the values taken by F{x}, as per Eq. (11.1), when x = a, i.e. F{x}|a, and when x = b, i.e. F{x}|b. More than one independent variable may be of interest, namely, toward calculation of the volume of a solid, e.g.

∫_a^b ∫_{g{x}}^{h{x}} f{x,y} dy dx ≡ Vf,g,h,[a,b],    (11.3)

encompassing a bivariate function, z = f{x,y}, developing along a direction normal to the x0y plane, and defined within a (fully convex) surface domain on the x0y plane such that abscissa x varies between a and b > a – and, at each x, ordinate y varies between g{x} and h{x} > g{x}; this is illustrated in Fig. 11.1. Therefore, three functions – i.e. the integrand function, f{x,y}, as well as the lower- and upper-boundary functions, g{x} and h{x}, respectively – need to be known to fully define a solid with associated volume Vf,g,h,[a,b]; note that the definition of g{x} may be split as g{x}|_{x ≤ g−1{A}} for a ≤ x ≤ g−1{A} and g{x}|_{x ≥ g−1{A}} for g−1{A} ≤ x ≤ b, and h{x} may likewise be defined as h{x}|_{x ≤ h−1{B}} for a ≤ x ≤ h−1{B} and as h{x}|_{x ≥ h−1{B}} for h−1{B} ≤ x ≤ b.
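The area interpretation in Eq. (11.2) can be illustrated numerically; the sketch below (Python, with the illustrative curve y = x² on [0,1], not taken from the text) compares a midpoint Riemann sum against F{b} − F{a} with F{x} = x³/3.

```python
# Eq. (11.2) in action: area below y = x^2 between x = 0 and x = 1,
# by a midpoint Riemann sum versus the antiderivative F = x^3/3.
def f(x):
    return x*x

n = 100000
h = 1.0 / n
area = sum(f((i + 0.5)*h) for i in range(n)) * h   # midpoint rule
exact = 1.0**3/3 - 0.0**3/3                        # F(1) - F(0)
```
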

Mathematics for Enzyme Reaction Kinetics and Reactor Performance, First Edition. F. Xavier Malcata. © 2019 John Wiley & Sons Ltd. Published 2019 by John Wiley & Sons Ltd.

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

B g{a} = h{a} h{x} g{x}

y

374

g{b} = h{b} A

0

a h–1{B}

g–1{A} b x

Figure 11.1 Graphical representation of functions g{x} and h{x} on the x0y plane, both departing from abscissa a and ordinate g{a} = h{a} and attaining abscissa b and ordinate g{b} = h{b}, or departing from abscissa g−1{A} and ordinate A and attaining abscissa h−1{B} and ordinate B – serving as lower and upper boundaries, respectively, for the integration region.

As will be justified below, the order of integration may (for convenience) be exchanged between x and y, yet the domain of integration must remain the same; since said domain is fully convex, variable y now varies between A and B > A, so the interval of integration should now be split three ways – according to

Vf,g−1,h−1,[A,B] ≡ ∫_A^{g{b}=h{b}} ∫_{g−1{y}|x≤g−1{A}}^{g−1{y}|x≥g−1{A}} f{x,y} dx dy + ∫_{g{b}=h{b}}^{g{a}=h{a}} ∫_{g−1{y}|x≤g−1{A}}^{h−1{y}|x≥h−1{B}} f{x,y} dx dy + ∫_{g{a}=h{a}}^{B} ∫_{h−1{y}|x≤h−1{B}}^{h−1{y}|x≥h−1{B}} f{x,y} dx dy = Vf,g,h,[a,b];    (11.4)

this accommodates the fact that g−1{y} and h−1{y} are not true functions, but should rather be handled as g−1{y}|x≤g−1{A} and g−1{y}|x≥g−1{A} as separate functions within A ≤ y ≤ g{b}, and similarly h−1{y}|x≤h−1{B} and h−1{y}|x≥h−1{B} as separate functions within h{a} ≤ y ≤ B.
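Exchanging the order of integration, in the spirit of Eq. (11.4), can be checked numerically; the sketch below (Python, on the illustrative convex region bounded by g{x} = x² below and h{x} = √x above for 0 ≤ x ≤ 1 – a hypothetical example where each inverse is single-valued, so no three-way split arises) integrates the same function with either variable outermost.

```python
# Order-of-integration swap on the region x^2 <= y <= sqrt(x), 0 <= x <= 1;
# here g^{-1}{y} = sqrt(y) and h^{-1}{y} = y^2, and the exact value of
# the double integral of f = x + 2y over this region is 0.45.
def f(x, y):
    return x + 2*y

n = m = 300

def inner(func, lo, hi):
    # midpoint rule over the inner variable
    d = (hi - lo) / m
    return sum(func(lo + (j + 0.5)*d) for j in range(m)) * d

h_out = 1.0 / n
# x outermost: y runs from x^2 to sqrt(x)
v_xy = sum(inner(lambda y, x=(i + 0.5)*h_out: f(x, y),
                 ((i + 0.5)*h_out)**2, ((i + 0.5)*h_out)**0.5)
           for i in range(n)) * h_out
# y outermost: x runs from y^2 to sqrt(y)
v_yx = sum(inner(lambda x, y=(i + 0.5)*h_out: f(x, y),
                 ((i + 0.5)*h_out)**2, ((i + 0.5)*h_out)**0.5)
           for i in range(n)) * h_out
```

Both orders of integration reproduce the same volume, as Eq. (11.4) requires.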

11.1 Univariate Integral

11.1.1 Indefinite Integral

11.1.1.1 Definition

As materialized in Eq. (11.1), the (indefinite) integral is the inverse function of the derivative, i.e.

∫(df{x}/dx)dx = ∫df{x} = f{x},    (11.5)

which may be restated as

d(∫f{x}dx)/dx = f{x}    (11.6)

with the aid of Eq. (11.1) and hypothesis dF/dx = f; either Eq. (11.5) or Eq. (11.6) justifies why composition of integration and differentiation of a given function retrieves the original function. Unlike the derivative of an elementary function – which normally exists and is another elementary function (as long as the function under scrutiny is continuous) – the same cannot be stated about an integral; there are elementary functions that do not hold an integral expressible as a finite combination of elementary functions (e.g. e^{−x²}), and furthermore it cannot be told a priori whether a given function can be integrated. One also realizes that the uniqueness of the derivative does not extend to an integral. Consider, in this regard, that F1{x} and F2{x} represent two distinct (indefinite) integrals of f{x} – both continuous and differentiable functions; according to Eq. (11.6), this means that

dF1{x}/dx = f{x}    (11.7)

as well as

dF2{x}/dx = f{x}    (11.8)

Ordered subtraction of Eq. (11.7) from Eq. (11.8) produces

dF2{x}/dx − dF1{x}/dx = f{x} − f{x} = 0,    (11.9)

where the rule of differentiation of an (algebraic) sum as per Eq. (10.106) allows reformulation to

d(F2{x} − F1{x})/dx ≡ dΦ{x}/dx = 0    (11.10)

Lagrange's theorem may be retrieved here because its conditions of validity are met – namely, Φ{x} ≡ F2{x} − F1{x} is a continuous function within interval [a,b], since both F1{x} and F2{x} are so separately, within the same interval, as per Eq. (9.139); according to Eq. (10.274), for an arbitrary x comprised between a and b (and thus defining an interval ]a,x[), one can always find a ξ ∈ ]a,x[ such that

dΦ{x}/dx|_{x=ξ} = (Φ{x} − Φ{a})/(x − a)    (11.11)

However, Eq. (11.10) enforces a nil value for dΦ{x}/dx, irrespective of the actual value taken by x – so Eq. (11.11) becomes

(Φ{x} − Φ{a})/(x − a) = 0;    (11.12)

Eq. (11.12) breaks down to

Φ{x} − Φ{a} = 0    (11.13)

after multiplying both sides by x − a ≠ 0, or else

Φ{x} = Φ{a}    (11.14)

after isolating Φ{x}. Equation (11.14) indicates that Φ{x} keeps, along ]a,b[, the value taken at x = a, since x was hypothesized as a generic value lying between a and b; in other words, Φ{x} must be a constant function, say, equal to k. One may accordingly state that

F2{x} − F1{x} = k,    (11.15)

in view of the definition of Φ as per Eq. (11.10) – so one will, in general, conclude that

F2{x} = F1{x} + k;    (11.16)

hence, there is one degree of freedom associated with an indefinite integral – or, equivalently, there are an infinite number of other integrals for any given function if one integral can be found. A list of the most common integrals is provided in Table 11.1. Confirmation of all integrals tabulated therein, with regard to the corresponding function f{x}, may be obtained via application of Eq. (11.6) to the right column, at the expense of the rules of differentiation presented previously; note the existence of an arbitrary constant, k, as part of all integrals – consistent with Eq. (11.16).

Table 11.1 List of elementary indefinite integrals.

  f{x}             ∫f{x}dx
  x^n, n ≠ −1      x^{n+1}/(n + 1) + k
  1/x              ln x + k
  e^x              e^x + k
  e^{−x}           −e^{−x} + k
  sin x            −cos x + k
  cos x            sin x + k
  cosec² x         −cotan x + k
  sec² x           tan x + k
  sec x tan x      sec x + k
  1/√(1 − x²)      sin⁻¹x + k (or −cos⁻¹x + k)
  1/(1 + x²)       tan⁻¹x + k

Recalling the definition of differential as per Eq. (10.1), one may redo Eq. (11.6) to

d∫f{x}dx = (d/dx)(∫f{x}dx) dx = f{x}dx    (11.17)
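The entries of Table 11.1 can be spot-checked numerically through Eq. (11.6); the sketch below (Python, a partial check over a few representative rows, not taken from the text) differentiates each tabulated antiderivative by central differences and compares with the corresponding integrand.

```python
import math

# Spot-check of Table 11.1 via Eq. (11.6): d(F)/dx must return f.
def ddx(F, x, h=1e-6):
    return (F(x + h) - F(x - h)) / (2*h)

x = 0.4
checks = [
    (lambda t: t**4/4, lambda t: t**3),              # x^n with n = 3
    (math.log, lambda t: 1/t),                       # 1/x
    (math.exp, math.exp),                            # e^x
    (lambda t: -math.cos(t), math.sin),              # sin x
    (math.tan, lambda t: 1/math.cos(t)**2),          # sec^2 x
    (math.asin, lambda t: 1/math.sqrt(1 - t*t)),     # 1/sqrt(1-x^2)
    (math.atan, lambda t: 1/(1 + t*t)),              # 1/(1+x^2)
]
max_err = max(abs(ddx(F, x) - f(x)) for F, f in checks)
```
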

– which may be readily obtained via multiplication of both sides by dx; hence, the differential of an indefinite integral coincides with the function in its kernel multiplied by the differential of its independent variable. Moreover, after revisiting Eq. (11.6) as

dF/dx = f,    (11.18)

one promptly obtains

dF = f dx    (11.19)

after having again multiplied both sides by dx; integration of both sides, i.e.

∫dF = ∫f dx,    (11.20)

will eventually read

∫dF = F    (11.21)

with the aid of Eq. (11.1) – so the integral of the differential of a function is but the said function, in agreement with Eq. (11.5) as well.

11.1.1.2 Rules of Integration

To calculate a nonelementary integral, one may in general resort to integration by decomposition, by parts, or by change of variable; the former is consubstantiated in

∫(a1 f1{x} + a2 f2{x})dx = a1∫f1{x}dx + a2∫f2{x}dx,    (11.22)

i.e. the integral of a linear combination of functions equals the linear combination of the integrals of such functions. In fact, the outcome of differentiating both sides of Eq. (11.22), viz.

(d/dx)∫(a1 f1{x} + a2 f2{x})dx = (d/dx)(a1∫f1{x}dx + a2∫f2{x}dx),    (11.23)

may be rewritten as

a1 f1{x} + a2 f2{x} = a1 (d/dx)∫f1 dx + a2 (d/dx)∫f2 dx    (11.24)

following combination with Eqs. (10.106) and (10.120), with the aid of Eq. (11.6); a second application of Eq. (11.6) unfolds

a1 f1{x} + a2 f2{x} = a1 f1{x} + a2 f2{x}    (11.25)

that stands as a universal condition, so Eq. (11.22) as departing relationship is always valid.


Table 11.2 List of indefinite integrals obtained by rule of decomposition.

  f{x}      ∫f{x}dx
  cosh x    sinh x + k
  sinh x    cosh x + k

As a consequence of Eq. (11.22), one may calculate the integral of hyperbolic cosine as per Eq. (2.473) – according to

∫cosh x dx ≡ ∫((e^x + e^{−x})/2)dx = ∫(½e^x + ½e^{−x})dx = ½∫e^x dx + ½∫e^{−x}dx    (11.26)

– after setting a1 ≡ a2 ≡ ½, f1 ≡ e^x, and f2 ≡ e^{−x}; upon inspection of Table 11.1, and recalling Eq. (2.472), one obtains

∫cosh x dx = (e^x − e^{−x})/2 ≡ sinh x    (11.27)

– as depicted in Table 11.2, together with the standard additive (arbitrary) constant. A similar example pertains to hyperbolic sine as per Eq. (2.472), which integrates to

∫sinh x dx ≡ ∫((e^x − e^{−x})/2)dx = ∫(½e^x − ½e^{−x})dx = ½∫e^x dx − ½∫e^{−x}dx    (11.28)

again at the expense of Eq. (11.22) – where a1 ≡ ½ and a2 ≡ −½, besides again f1 ≡ e^x and f2 ≡ e^{−x}; the very same Table 11.1 supports

∫sinh x dx = ½e^x − ½(−e^{−x}) = (e^x + e^{−x})/2 ≡ cosh x    (11.29)
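Eqs. (11.27) and (11.29) can be checked numerically via the decomposition rule itself; the sketch below (Python, illustrative, not taken from the text) integrates each exponential term separately from 0 to x by the midpoint rule, with the arbitrary constant fixed by the lower limit.

```python
import math

# Decomposition-rule check of Eqs. (11.26)-(11.29): term-by-term integration
# of (e^t +/- e^-t)/2 from 0 to x must reproduce sinh x and cosh x - 1.
def midpoint(f, a, b, n=20000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5)*h) for i in range(n)) * h

x = 1.3
# a1*I(e^t) + a2*I(e^-t) with a1 = a2 = 1/2, per Eq. (11.26)
I_cosh = 0.5*midpoint(math.exp, 0, x) + 0.5*midpoint(lambda t: math.exp(-t), 0, x)
# a1 = 1/2, a2 = -1/2, per Eq. (11.28)
I_sinh = 0.5*midpoint(math.exp, 0, x) - 0.5*midpoint(lambda t: math.exp(-t), 0, x)
```
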

– as also tabulated in Table 11.2. On the other hand, the rule of differentiation of a product, applied to functions f1{x} and f2{x}, has it that

d(f1 f2)/dx = f2 (df1/dx) + f1 (df2/dx)    (11.30)

in full agreement with Eq. (10.119) – where integration of both sides, after multiplication thereof by dx, unfolds

∫(d(f1 f2)/dx)dx = ∫d(f1 f2) = ∫(f2 df1 + f1 df2) = ∫(f2 (df1/dx) + f1 (df2/dx))dx;    (11.31)

Eq. (11.21) applied to the left-hand side, coupled with Eq. (11.22) applied to the right-hand side, transform Eq. (11.31) to

f1 f2 = ∫f2 df1 + ∫f1 df2    (11.32)

or else

∫f1 df2 = f1 f2 − ∫f2 df1    (11.33)

after isolation of ∫f1 df2. For function f1{x}, Eq. (10.1) supports

df1 = (df1/dx)dx,    (11.34)

so Eq. (11.33) becomes

∫f1 df2 = f1 f2 − ∫f2 (df1/dx)dx;    (11.35)

if (dummy) variable f2 is replaced by F2, then Eq. (11.35) turns to

∫f1 dF2 = f1 F2 − ∫F2 (df1/dx)dx    (11.36)

On the other hand, Eq. (11.6) has it that

dF2/dx = f2    (11.37)

– or, after multiplication of both sides by dx,

dF2 = f2 dx;    (11.38)

combination with Eq. (11.38) converts Eq. (11.36) finally to

∫f1{x} f2{x}dx = f1{x} F2{x} − ∫F2{x} (df1{x}/dx)dx;    (11.39)

in other words, the integral of the product of two functions, f1 and f2, can be calculated as the product of the first, f1, by the integral of the second, F2, with further negative correction via the integral of the product of F2 by the derivative of f1, i.e. df1/dx. The aforementioned method – known as integration by parts, since it encompasses integration first of f2 to F2, and then of the product of F2 itself by df1/dx – may be used to ascertain the integral of the natural logarithm, viz.

∫ln x dx = ∫1·ln x dx = x ln x − ∫x (1/x)dx = x ln x − ∫dx = x ln x − x    (11.40)

after setting f1 ≡ ln x and f2 ≡ 1 in Eq. (11.39), recalling Eq. (10.40), and retrieving the first entry of Table 11.1 with n = 0; one may further transform Eq. (11.40) to

∫ln x dx = x(ln x − 1)    (11.41)

upon factoring out of x – as also depicted in Table 11.3. Table 11.3 List of indefinite integrals obtained by rule of integration by parts. f {x}

f {x}dx

ln x

x (ln x − 1) + k

tan−1x

1 x tan − 1 x − ln 1 + x2 + k 2

379

380

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

Another example pertains to inverse tangent – relying again on preliminary multiplication by unity, viz. tan−1 x dx = 1 tan− 1 x dx = x tan−1 x − x

1 dx, 1 + x2

11 42

where Eq. (11.39) was applied with f1 ≡ tan−1x and f2 ≡ 1, as well as Eq. (10.192) to calculate dtan−1 x/dx; recalling the concept of differential as per Eq. (10.1), one finds that d 1 + x2 dx = 2xdx, dx thus allowing Eq. (11.41) be redone as d 1 + x2 =

tan−1 x dx = xtan− 1 x−

1 2xdx 1 d 1 + x2 −1 = x tan x − 2 1 + x2 2 1 + x2

11 43

11 44

Since 1 + x2 appears as the new variable under the integral sign in Eq. (11.44), one may resort to the second entry in Table 11.1 to obtain 1 tan−1 x dx = x tan− 1 x− ln 1 + x2 2

11 45

by analogy – as also included in Table 11.3. Selection of f2 normally lies on the factor function in the kernel that is easier to integrate; this leaves the other factor function, f1, to be differentiated – usually an easier, and almost always feasible process. The third – and most powerful, method of integration is via change of variable; toward this goal, consider ϕ ξ

x

11 46

where ξ denotes a new variable of integration and ϕ denotes a function that is continuous and differentiable, besides being invertible. Recalling Eq. (10.1) again, one may rewrite Eq. (11.46) as dx

dϕ dξ dξ

11 47

in differential form – so the integral of f {x} may look like f x dx =

f ϕ ξ

dϕ ξ dξ dξ

11 48

at the expense of Eqs. (11.46) and (11.47); cancellation of dξ between numerator and denominator of the kernel finally unfolds f x dx =

f ϕ dϕ,

11 49

so the integration variable has been swapped from x to ϕ{ξ} – which is a useful deed, as long as the integral of f {ϕ} can be calculated (and more easily than the integral of f {x} in the first place). Once F{ϕ{ξ}} ≡ f {ϕ}dϕ has been obtained the original functionality on x can be retrieved via Eq. (11.46), i.e. F{x} ≡ F{ϕ−1{ξ}}; conversion between Eqs. (11.44) and (11.45) already took advantage of this rule, with ϕ−1{ξ} ≡ 1 + x2. To confirm validity of Eq. (11.48), or Eq. (11.49) for that matter, one may differentiate its left-hand side to get

Integrals

d f x dx = f x dx

11 50

with the aid of Eq. (11.6); the derivative at stake may be assessed via the chain differentiation rule as per Eq. (10.205), viz. d f ϕ ξ dx

dϕ dξ d dξ = f ϕ ξ dξ dx dξ

dϕ dξ, dξ

11 51

along with Eq. (11.48) – where a second application of Eq. (11.6) unfolds d f ϕ ξ dx

dϕ dξ dξ = f ϕ ξ dξ dx

dϕ dξ

11 52

After recalling Eq. (10.247), one may reformulate Eq. (11.52) to d f ϕ ξ dx

dϕ 1 f ϕ ξ dξ = dx dξ dξ

dϕ , dξ

11 53

where coincidence of x with ϕ as per Eq. (11.46) – and thus of

dx dϕ with , permit simdξ dξ

plification of Eq. (11.53) to d f ϕ ξ dx

dϕ 1 f ϕ ξ dξ = dϕ dξ dξ

dϕ =f ϕ ξ dξ

;

11 54

Eq. (11.46) may be invoked once more to get d f ϕ ξ dx

dϕ dξ = f x dξ

11 55

Since the right-hand sides of Eqs. (11.50) and (11.55) coincide, their left-hand sides dϕ must also coincide – so f {x} dx and f ϕ ξ dξ, or f ϕ dϕ for that matter dξ should differ by a constant (as expected), see Eq. (11.16). This method finds application in the integration of tangent, viz. tan x dx

$$\int \tan x \, dx = \int \frac{\sin x}{\cos x} \, dx , \tag{11.56}$$

with the aid of Eq. (2.299), where a change of variable to

$$\xi \equiv \cos x \tag{11.57}$$

– with the corresponding inverse function playing the role of $\phi$ – permits its differential to be calculated as

$$d\xi = - \sin x \, dx , \tag{11.58}$$

in agreement with Eqs. (10.1) and (10.48); insertion of Eqs. (11.57) and (11.58) allows transformation of Eq. (11.56) to

$$\int \tan x \, dx = - \int \frac{d\xi}{\xi} , \tag{11.59}$$


Mathematics for Enzyme Reaction Kinetics and Reactor Performance

Table 11.4 List of indefinite integrals obtained by the rule of integration by change of variable.

    f{x}         ∫ f{x} dx
    tan x        ln |sec x| + k
    cotan x      ln |sin x| + k
    sec x        ln |sec x + tan x| + k

where the second entry of Table 11.1 may be taken advantage of to get

$$\int \tan x \, dx = - \ln \lvert \xi \rvert = - \ln \lvert \cos x \rvert , \tag{11.60}$$

with the original notation recovered via Eq. (11.57). Further algebraic manipulation is possible through Eqs. (2.26) and (2.309), which generates

$$\int \tan x \, dx = \ln \frac{1}{\lvert \cos x \rvert} = \ln \lvert \sec x \rvert , \tag{11.61}$$

as included specifically in Table 11.4. A similar application pertains to the cotangent, viz.

$$\int \operatorname{cotan} x \, dx = \int \frac{\cos x}{\sin x} \, dx \tag{11.62}$$

in view of Eq. (2.304); using Eqs. (11.57) and (11.58) as template, one may suggest

$$\xi \equiv \sin x , \tag{11.63}$$

which implies

$$d\xi = \cos x \, dx , \tag{11.64}$$

based on both Eqs. (10.1) and (10.44), with $\phi$ again representing the corresponding inverse function. Combination with Eqs. (11.63) and (11.64) supports transformation of Eq. (11.62) to

$$\int \operatorname{cotan} x \, dx = \int \frac{d\xi}{\xi} = \ln \lvert \xi \rvert , \tag{11.65}$$

where the second entry of Table 11.1 has been invoked – to finally produce

$$\int \operatorname{cotan} x \, dx = \ln \lvert \sin x \rvert \tag{11.66}$$

with the aid of Eq. (11.63); for the sake of completeness, Eq. (11.66) is also included in Table 11.4, together with a universal (integration) constant. A third example refers to the integral of the secant – which, for convenience, may be rewritten as

$$\int \sec x \, dx = \int \frac{\sec x \left( \sec x + \tan x \right)}{\sec x + \tan x} \, dx = \int \frac{\sec^{2} x + \tan x \sec x}{\sec x + \tan x} \, dx , \tag{11.67}$$

following previous multiplication of numerator and denominator of the kernel by $\sec x + \tan x$; if $\xi$ is now defined as

$$\xi \equiv \sec x + \tan x , \tag{11.68}$$

the associated differential will read

$$d\xi = \left( \sec x \tan x + \sec^{2} x \right) dx \tag{11.69}$$

as per Eqs. (10.1), (10.143), and (10.153). Combination with Eqs. (11.68) and (11.69) simplifies Eq. (11.67) to

$$\int \sec x \, dx = \int \frac{d\xi}{\xi} , \tag{11.70}$$

with the second entry in Table 11.1 supporting

$$\int \sec x \, dx = \ln \lvert \xi \rvert ; \tag{11.71}$$

insertion of Eq. (11.68) finally gives

$$\int \sec x \, dx = \ln \lvert \sec x + \tan x \rvert , \tag{11.72}$$

as also tabulated in Table 11.4.
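The three entries of Table 11.4 can be spot-checked numerically – the short Python sketch below (illustrative, not part of the original text) differentiates each tabulated antiderivative by central finite differences and compares the result against the corresponding integrand at a few points inside $(0, \pi/2)$:

```python
import math

# Candidate antiderivatives from Table 11.4 (integration constant k omitted)
antiderivatives = {
    "tan":   (math.tan,                    lambda x: math.log(abs(1.0 / math.cos(x)))),
    "cotan": (lambda x: 1.0 / math.tan(x), lambda x: math.log(abs(math.sin(x)))),
    "sec":   (lambda x: 1.0 / math.cos(x),
              lambda x: math.log(abs(1.0 / math.cos(x) + math.tan(x)))),
}

def max_mismatch(f, F, points, h=1e-6):
    """Largest |dF/dx - f{x}| over the test points, via central differences."""
    return max(abs((F(x + h) - F(x - h)) / (2.0 * h) - f(x)) for x in points)

points = [0.3, 0.7, 1.0, 1.2]  # safely inside (0, pi/2)
errors = {name: max_mismatch(f, F, points) for name, (f, F) in antiderivatives.items()}
for name, err in errors.items():
    print(f"d/dx of tabulated antiderivative matches {name} x to within {err:.2e}")
```

Each mismatch comes out at roundoff level, as expected from Eqs. (11.61), (11.66), and (11.72).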

A final method of integration bearing wide usefulness applies specifically to (regular) rational fractions of the form labeled as Eq. (2.187), viz. $\int \left( P_{n}\{x\} / P_{m}\{x\} \right) dx = \int Q_{n-m}\{x\} \, dx + \cdots$, where the remainder is expanded as a sum of partial fractions over the $R$ distinct roots $r_{l}$ of $P_{m}\{x\}$; the partial integral

$$\int \frac{A_{l,p+1}}{\left( x - r_{l} \right)^{s_{l}-p}} \, dx = \frac{A_{l,p+1} \left( x - r_{l} \right)^{p-s_{l}+1}}{p - s_{l} + 1} + k ; \quad p = 0, 1, \ldots, s_{l} - 2 \tag{11.76}$$

then results,


corresponding to a multiple (real) root – whereas the case associated with $p = s_{l} - 1$ degenerates to Eq. (11.75). The third situation of practical interest encompasses (conjugate) complex roots; since they appear in pairs (as $P_{m}\{x\}$ was implicitly assumed to bear real coefficients only), they are usually combined as done in Eq. (2.206) – to eventually yield the result labeled as Eq. (2.211). The associated partial integral will then look like

$$\int \left( \frac{A_{l,1}}{x - \left( \alpha + \iota\beta \right)} + \frac{A_{l,2}}{x - \left( \alpha - \iota\beta \right)} \right) dx = \int \frac{B_{l,1} x + B_{l,2}}{x^{2} + B_{l,3} x + B_{l,4}} \, dx \tag{11.77}$$

– where $B_{l,1}/2$ may be factored out, and then $B_{l,3}$ added and subtracted in the numerator of the kernel, to yield

$$\int \frac{B_{l,1} x + B_{l,2}}{x^{2} + B_{l,3} x + B_{l,4}} \, dx = \frac{B_{l,1}}{2} \int \frac{2x + \dfrac{2 B_{l,2}}{B_{l,1}}}{x^{2} + B_{l,3} x + B_{l,4}} \, dx = \frac{B_{l,1}}{2} \int \frac{\left( 2x + B_{l,3} \right) + \left( \dfrac{2 B_{l,2}}{B_{l,1}} - B_{l,3} \right)}{x^{2} + B_{l,3} x + B_{l,4}} \, dx . \tag{11.78}$$

Equation (11.22) allows splitting of the last integral in Eq. (11.78) as

$$\int \frac{B_{l,1} x + B_{l,2}}{x^{2} + B_{l,3} x + B_{l,4}} \, dx = \frac{B_{l,1}}{2} \int \frac{2x + B_{l,3}}{x^{2} + B_{l,3} x + B_{l,4}} \, dx + \frac{B_{l,1}}{2} \left( \frac{2 B_{l,2}}{B_{l,1}} - B_{l,3} \right) \int \frac{dx}{x^{2} + B_{l,3} x + B_{l,4}} , \tag{11.79}$$

whereas the denominator of the last integral may be redone to

$$\int \frac{B_{l,1} x + B_{l,2}}{x^{2} + B_{l,3} x + B_{l,4}} \, dx = \frac{B_{l,1}}{2} \int \frac{2x + B_{l,3}}{x^{2} + B_{l,3} x + B_{l,4}} \, dx + \left( B_{l,2} - \frac{B_{l,1} B_{l,3}}{2} \right) \int \frac{dx}{x^{2} + B_{l,3} x + \dfrac{B_{l,3}^{2}}{4} + B_{l,4} - \dfrac{B_{l,3}^{2}}{4}} \tag{11.80}$$

via addition and subtraction of $B_{l,3}^{2}/4$ thereto (while $B_{l,1}/2$ was meanwhile factored in); Newton's binomial formula, i.e. Eq. (2.237), may then be retrieved to rewrite Eq. (11.80) as

$$\int \frac{B_{l,1} x + B_{l,2}}{x^{2} + B_{l,3} x + B_{l,4}} \, dx = \frac{B_{l,1}}{2} \int \frac{2x + B_{l,3}}{x^{2} + B_{l,3} x + B_{l,4}} \, dx + \left( B_{l,2} - \frac{B_{l,1} B_{l,3}}{2} \right) \int \frac{dx}{\left( x + \dfrac{B_{l,3}}{2} \right)^{2} + \left( B_{l,4} - \dfrac{B_{l,3}^{2}}{4} \right)} , \tag{11.81}$$

where a final division of both numerator and denominator of the second kernel by $B_{l,4} - B_{l,3}^{2}/4$ yields

$$\int \frac{B_{l,1} x + B_{l,2}}{x^{2} + B_{l,3} x + B_{l,4}} \, dx = \frac{B_{l,1}}{2} \int \frac{2x + B_{l,3}}{x^{2} + B_{l,3} x + B_{l,4}} \, dx + \frac{B_{l,2} - \dfrac{B_{l,1} B_{l,3}}{2}}{B_{l,4} - \dfrac{B_{l,3}^{2}}{4}} \int \frac{dx}{\left( \dfrac{x + \dfrac{B_{l,3}}{2}}{\sqrt{B_{l,4} - \dfrac{B_{l,3}^{2}}{4}}} \right)^{2} + 1} . \tag{11.82}$$

Two changes of variable are now in order, viz.

$$\xi \equiv x^{2} + B_{l,3} x + B_{l,4} , \tag{11.83}$$

germane to the first integral in Eq. (11.82), and

$$\zeta \equiv \frac{x + \dfrac{B_{l,3}}{2}}{\sqrt{B_{l,4} - \dfrac{B_{l,3}^{2}}{4}}} , \tag{11.84}$$

relevant for the second one; after taking differentials of both sides, Eq. (11.83) becomes

$$d\xi = \left( 2x + B_{l,3} \right) dx , \tag{11.85}$$

consistent with Eq. (10.1), while Eq. (11.84) unfolds

$$d\zeta = \frac{dx}{\sqrt{B_{l,4} - \dfrac{B_{l,3}^{2}}{4}}} . \tag{11.86}$$

Insertion of Eqs. (11.83)–(11.86) permits simplification of Eq. (11.82) to

$$\int \frac{B_{l,1} x + B_{l,2}}{x^{2} + B_{l,3} x + B_{l,4}} \, dx = \frac{B_{l,1}}{2} \int \frac{d\xi}{\xi} + \frac{2 B_{l,2} - B_{l,1} B_{l,3}}{\sqrt{4 B_{l,4} - B_{l,3}^{2}}} \int \frac{d\zeta}{\zeta^{2} + 1} \tag{11.87}$$

– with simultaneous multiplication of both numerator and denominator of the factor preceding the last integral by $2$ (or $4$, under the square root); the second and last entries in Table 11.1 may then be retrieved to write

$$\int \frac{B_{l,1} x + B_{l,2}}{x^{2} + B_{l,3} x + B_{l,4}} \, dx = \frac{B_{l,1}}{2} \ln \lvert \xi \rvert + \frac{2 B_{l,2} - B_{l,1} B_{l,3}}{\sqrt{4 B_{l,4} - B_{l,3}^{2}}} \tan^{-1} \zeta . \tag{11.88}$$

The original notation can finally be recovered with the aid of Eqs. (11.83) and (11.84), viz.

$$\int \left( \frac{A_{l,1}}{x - \left( \alpha + \iota\beta \right)} + \frac{A_{l,2}}{x - \left( \alpha - \iota\beta \right)} \right) dx = \frac{B_{l,1}}{2} \ln \left( x^{2} + B_{l,3} x + B_{l,4} \right) + \frac{2 B_{l,2} - B_{l,1} B_{l,3}}{\sqrt{4 B_{l,4} - B_{l,3}^{2}}} \tan^{-1} \frac{2x + B_{l,3}}{\sqrt{4 B_{l,4} - B_{l,3}^{2}}} + k , \tag{11.89}$$

where both numerator and denominator of the argument of the inverse tangent were multiplied by $2$ for convenience; remember that $B_{l,1}$, $B_{l,2}$, $B_{l,3}$, and $B_{l,4}$ are related to the original $A_{l,1}$, $A_{l,2}$, $\alpha$, and $\beta$ in Eq. (11.77) via Eqs. (2.212)–(2.215). Despite the above (usually simpler) approach, the case of a pair of complex (conjugate, and thus distinct) roots may also be handled via Eq. (11.75) – knowing that $r_{l}$ will be of the form $\alpha \pm \iota\beta$, and taking advantage of Euler's form of complex numbers, see Eq. (2.557).
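The closed form in Eq. (11.89) lends itself to a quick numerical spot-check – the minimal Python sketch below (not part of the original text; the coefficient values are illustrative choices with $4B_{l,4} - B_{l,3}^{2} > 0$) differentiates the right-hand side by central differences and compares it against the rational kernel of Eq. (11.77):

```python
import math

# Illustrative coefficients (chosen, not from the text); the condition
# 4*B4 - B3**2 > 0 corresponds to a pair of complex-conjugate roots
B1, B2, B3, B4 = 2.0, 3.0, 1.0, 3.0
disc = math.sqrt(4.0 * B4 - B3 ** 2)

def integrand(x):
    """Kernel of Eq. (11.77), already combined over a common denominator."""
    return (B1 * x + B2) / (x ** 2 + B3 * x + B4)

def antiderivative(x):
    """Right-hand side of Eq. (11.89), with k = 0."""
    return (B1 / 2.0) * math.log(x ** 2 + B3 * x + B4) \
        + ((2.0 * B2 - B1 * B3) / disc) * math.atan((2.0 * x + B3) / disc)

# Differentiating the antiderivative should recover the integrand
h = 1e-6
err = max(abs((antiderivative(x + h) - antiderivative(x - h)) / (2.0 * h) - integrand(x))
          for x in [-2.0, -0.5, 0.0, 1.0, 3.0])
print(f"max |dF/dx - f| = {err:.2e}")
```

The mismatch sits at roundoff level, consistent with the algebra leading to Eq. (11.89).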

11.1.2 Definite Integral

11.1.2.1 Definition

According to Georg F. B. Riemann, a German mathematician of the nineteenth century, a definite integral is defined as

$$\int_{a}^{b} f\{x\} \, dx \equiv \lim_{N \to \infty} \sum_{i=1}^{N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} ; \tag{11.90}$$

a graphical interpretation is provided in Fig. 11.2. One consequently realizes that the definite integral of function $f\{x\}$, taken between $a$ and $b$, is but the limit of the summation of areas of adjacent rectangles – each with base of length $(b-a)/N$ and height given by $f\{a + i(b-a)/N\}$ with $i = 1, 2, \ldots, N$ – when $N$ grows unbounded. Said limiting geometrical figure is also known as a trapezoid – bounded above by $f\{x\}$ and below by a straight segment of length $b-a$ lying on the horizontal axis, besides vertical segments serving as left and right sides. As long as $N \to \infty$, the relative width of said rectangles is immaterial – because it becomes infinitesimally small in every case; hence, Eq. (11.90) may, in general, be coined as

$$\int_{a}^{b} f\{x\} \, dx \equiv \lim_{\substack{N \to \infty \\ \sum_{i=1}^{N} \left( x_{i} - x_{i-1} \right) = b-a}} \sum_{i=1}^{N} f\{x_{i}\} \left( x_{i} - x_{i-1} \right) , \tag{11.91}$$

where the constraint of the sum of the $(x_{i} - x_{i-1})$'s remaining finite was made explicit. An immediate consequence of Eq. (11.90) pertains to a constant function, say, $f\{x\} = c$ – which simplifies the summation therein to

$$\sum_{i=1}^{N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} = \sum_{i=1}^{N} c \, \frac{b-a}{N} = c \, \frac{b-a}{N} \sum_{i=1}^{N} 1 = c \, \frac{b-a}{N} \, N = c \left( b-a \right) , \tag{11.92}$$

where constant $c(b-a)/N$ was meanwhile factored out, and the definition of multiplication was taken into account to get rid of the summation; one may now apply limits to both sides of Eq. (11.92) to obtain

Figure 11.2 Graphical representation of continuous function, $f\{x\}$, within interval $[a,b]$, subdivided in $N$ subintervals of identical amplitude, $(b-a)/N$, defined by abscissae $x_{0} = a$, $x_{1}$, $x_{2}$, …, $x_{N-1}$, $x_{N} = b$, and corresponding to rectangles of height given by $f\{x_{1}\}$, $f\{x_{2}\}$, …, $f\{x_{N-1}\}$, $f\{x_{N}\}$ as ordinates, respectively.

$$\lim_{N \to \infty} \sum_{i=1}^{N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} = \lim_{N \to \infty} c \left( b-a \right) , \tag{11.93}$$

or else

$$\int_{a}^{b} c \, dx = c \left( b-a \right) , \tag{11.94}$$

with the aid of Eqs. (9.30) and (11.90). For a generic function $f\{x\}$, one also realizes that

$$\int_{a}^{b} c f\{x\} \, dx \equiv \lim_{N \to \infty} \sum_{i=1}^{N} c f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} , \tag{11.95}$$

again at the expense of Eq. (11.90); the constancy of $c$ allows it to be factored out of the summation – which, together with Eq. (9.92), leads to

$$\int_{a}^{b} c f\{x\} \, dx = \lim_{N \to \infty} c \sum_{i=1}^{N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} = c \lim_{N \to \infty} \sum_{i=1}^{N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} . \tag{11.96}$$

Equation (11.90) may be invoked once more to conclude

$$\int_{a}^{b} c f\{x\} \, dx = c \int_{a}^{b} f\{x\} \, dx , \tag{11.97}$$

i.e. a constant may simply be taken off a definite integral. Using a similar strategy, one can address a sum of functions as

$$\int_{a}^{b} \left( f\{x\} + g\{x\} \right) dx \equiv \lim_{N \to \infty} \sum_{i=1}^{N} \left( f\left\{ a + i \frac{b-a}{N} \right\} + g\left\{ a + i \frac{b-a}{N} \right\} \right) \frac{b-a}{N} , \tag{11.98}$$

using Eq. (11.90) as template; the distributive property of multiplication of scalars and the associative property of addition of scalars allow transformation of Eq. (11.98) to

$$\int_{a}^{b} \left( f\{x\} + g\{x\} \right) dx = \lim_{N \to \infty} \left( \sum_{i=1}^{N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} + \sum_{i=1}^{N} g\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} \right) , \tag{11.99}$$

whereas Eq. (9.73) permits further transformation to

$$\int_{a}^{b} \left( f\{x\} + g\{x\} \right) dx = \lim_{N \to \infty} \sum_{i=1}^{N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} + \lim_{N \to \infty} \sum_{i=1}^{N} g\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} . \tag{11.100}$$

In view of Eq. (11.90), one eventually finds

$$\int_{a}^{b} \left( f\{x\} + g\{x\} \right) dx = \int_{a}^{b} f\{x\} \, dx + \int_{a}^{b} g\{x\} \, dx ; \tag{11.101}$$

in other words, the integral and sum operators may be interchanged – owing to their intrinsic linearity. Combination of Eqs. (11.97) and (11.101) gives rise to

$$\int_{a}^{b} \left( c_{1} f_{1}\{x\} + c_{2} f_{2}\{x\} \right) dx = c_{1} \int_{a}^{b} f_{1}\{x\} \, dx + c_{2} \int_{a}^{b} f_{2}\{x\} \, dx \tag{11.102}$$

– which represents the definite integral counterpart of Eq. (11.22), valid for indefinite integrals. When the negative of a function is at stake, one may redo Eq. (11.90) to

$$\int_{a}^{b} \left( -f\{x\} \right) dx \equiv \lim_{N \to \infty} \sum_{i=1}^{N} \left( -f\left\{ a + i \frac{b-a}{N} \right\} \right) \frac{b-a}{N} = \lim_{N \to \infty} \sum_{i=1}^{N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{a-b}{N} , \tag{11.103}$$

where the explicit minus sign initially preceding $f$ was lumped with $b-a$ of the increment to produce $a-b$; Eqs. (11.90) and (11.97) with $c = -1$ give rise to

$$- \int_{a}^{b} f\{x\} \, dx = \int_{b}^{a} f\{x\} \, dx , \tag{11.104}$$

so reversal of the limits of integration is equivalent to taking the negative of the original integral. Suppose now that the amplitude of the integration interval, $b-a$, is partitioned as the sum of $b-c$ and $c-a$ – with $a < c < b$, such that

$$c \equiv a + \alpha \left( b-a \right) = a + \alpha N \, \frac{b-a}{N} , \tag{11.105}$$

and thus

$$b = c + \left( 1-\alpha \right) \left( b-a \right) = c + \left( 1-\alpha \right) N \, \frac{b-a}{N} , \tag{11.106}$$

where multiplication and division by $N$ were meanwhile performed – with $0 < \alpha < 1$ denoting a real number. Equation (11.105) may be algebraically rearranged to

$$\alpha \left( b-a \right) = c-a , \tag{11.107}$$

while Eq. (11.106) may similarly produce

$$\left( 1-\alpha \right) \left( b-a \right) = b-c , \tag{11.108}$$

or the expected result

$$b = a + \alpha \left( b-a \right) + \left( 1-\alpha \right) \left( b-a \right) = a + \left( \alpha + 1 - \alpha \right) \left( b-a \right) = a + \left( b-a \right) \tag{11.109}$$

after ordered addition of Eqs. (11.107) and (11.108), followed by factoring out of $b-a$ and cancellation of symmetrical terms. In view of the interval splitting proposed, Eq. (11.90) may be rewritten as

$$\int_{a}^{b} f\{x\} \, dx \equiv \lim_{N \to \infty} \left( \sum_{i=1}^{\alpha N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} + \sum_{i=\alpha N+1}^{N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} \right) , \tag{11.110}$$

corresponding to splitting of the summation itself (as long as $\alpha N$ represents an integer); exchangeability between addition and limit operators, see Eq. (9.73), supports further transformation to

$$\int_{a}^{b} f\{x\} \, dx = \lim_{N \to \infty} \sum_{i=1}^{\alpha N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} + \lim_{N \to \infty} \sum_{i=\alpha N+1}^{N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} . \tag{11.111}$$

Upon multiplication and division of both $(b-a)/N$ terms under the first summation by $\alpha$, and likewise of the outer $(b-a)/N$ term under the second summation by $1-\alpha$, one gets

$$\int_{a}^{b} f\{x\} \, dx = \lim_{N \to \infty} \sum_{i=1}^{\alpha N} f\left\{ a + i \frac{\alpha \left( b-a \right)}{\alpha N} \right\} \frac{\alpha \left( b-a \right)}{\alpha N} + \lim_{N \to \infty} \sum_{i=\alpha N+1}^{N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{\left( 1-\alpha \right) \left( b-a \right)}{\left( 1-\alpha \right) N} \tag{11.112}$$

from Eq. (11.111); insertion of Eqs. (11.107) and (11.108) then permits simplification to

$$\int_{a}^{b} f\{x\} \, dx = \lim_{N \to \infty} \sum_{i=1}^{\alpha N} f\left\{ a + i \frac{c-a}{\alpha N} \right\} \frac{c-a}{\alpha N} + \lim_{N \to \infty} \sum_{i-\alpha N=1}^{N-\alpha N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-c}{\left( 1-\alpha \right) N} . \tag{11.113}$$

One may, in parallel, revisit Eq. (11.108) as

$$b-a = \frac{b-c}{1-\alpha} \tag{11.114}$$

following division of both sides by $1-\alpha$, where multiplication of both sides by $j/N$ leads to

$$j \, \frac{b-a}{N} = \frac{j}{N} \, \frac{b-c}{1-\alpha} ; \tag{11.115}$$

further addition of $a + \alpha (b-a)$ to both sides unfolds

$$a + \alpha \left( b-a \right) + j \, \frac{b-a}{N} = a + \alpha \left( b-a \right) + \frac{j}{N} \, \frac{b-c}{1-\alpha} , \tag{11.116}$$

with Eq. (11.105) allowing simplification of the right-hand side to

$$a + \alpha N \, \frac{b-a}{N} + j \, \frac{b-a}{N} = c + j \, \frac{b-c}{\left( 1-\alpha \right) N} \tag{11.117}$$

– while $\alpha (b-a)$ in the left-hand side was meanwhile replaced by $\alpha N (b-a)/N$. After factoring $(b-a)/N$ out in the left-hand side, Eq. (11.117) becomes

$$a + \left( j + \alpha N \right) \frac{b-a}{N} = c + j \, \frac{b-c}{\left( 1-\alpha \right) N} , \tag{11.118}$$

so definition of $i$ as

$$i \equiv j + \alpha N \tag{11.119}$$

arises logically – and allows conversion of Eq. (11.118) to

$$a + i \, \frac{b-a}{N} = c + j \, \frac{b-c}{\left( 1-\alpha \right) N} ; \tag{11.120}$$

combination of Eqs. (11.113), (11.119), and (11.120) unfolds

$$\int_{a}^{b} f\{x\} \, dx = \lim_{N \to \infty} \sum_{i=1}^{\alpha N} f\left\{ a + i \frac{c-a}{\alpha N} \right\} \frac{c-a}{\alpha N} + \lim_{N \to \infty} \sum_{j=1}^{\left( 1-\alpha \right) N} f\left\{ c + j \frac{b-c}{\left( 1-\alpha \right) N} \right\} \frac{b-c}{\left( 1-\alpha \right) N} . \tag{11.121}$$

Since $\alpha N \to \infty$, and likewise $(1-\alpha) N \to \infty$, when $N \to \infty$, Eq. (11.121) can also appear as

$$\int_{a}^{b} f\{x\} \, dx = \lim_{\alpha N \to \infty} \sum_{i=1}^{\alpha N} f\left\{ a + i \frac{c-a}{\alpha N} \right\} \frac{c-a}{\alpha N} + \lim_{\left( 1-\alpha \right) N \to \infty} \sum_{j=1}^{\left( 1-\alpha \right) N} f\left\{ c + j \frac{b-c}{\left( 1-\alpha \right) N} \right\} \frac{b-c}{\left( 1-\alpha \right) N} ; \tag{11.122}$$

on the other hand, $\alpha N$ and $(1-\alpha) N$ are dummy variables, so they may be exchanged with $P \equiv \alpha N$ and $Q \equiv (1-\alpha) N$, respectively, thus leaving Eq. (11.122) as

$$\int_{a}^{b} f\{x\} \, dx = \lim_{P \to \infty} \sum_{i=1}^{P} f\left\{ a + i \frac{c-a}{P} \right\} \frac{c-a}{P} + \lim_{Q \to \infty} \sum_{j=1}^{Q} f\left\{ c + j \frac{b-c}{Q} \right\} \frac{b-c}{Q} \tag{11.123}$$

– which, according to Eq. (11.90), is equivalent to stating

$$\int_{a}^{b} f\{x\} \, dx = \int_{a}^{c} f\{x\} \, dx + \int_{c}^{b} f\{x\} \, dx . \tag{11.124}$$

A similar derivation is possible when $a < b < c$, in which case Eq. (11.124) may be revisited as

$$\int_{a}^{c} f\{x\} \, dx = \int_{a}^{b} f\{x\} \, dx + \int_{b}^{c} f\{x\} \, dx , \tag{11.125}$$

because $b$ is now located between $a$ and $c$ – where isolation of the first term in the right-hand side produces

$$\int_{a}^{b} f\{x\} \, dx = \int_{a}^{c} f\{x\} \, dx - \int_{b}^{c} f\{x\} \, dx ; \tag{11.126}$$

Eq. (11.104) may then be used to transform Eq. (11.126) to Eq. (11.124) – thus extending its validity to $c > b$. By the same token, Eq. (11.124) guarantees

$$\int_{c}^{b} f\{x\} \, dx = \int_{c}^{a} f\{x\} \, dx + \int_{a}^{b} f\{x\} \, dx \tag{11.127}$$

when $c < a < b$, or else

$$\int_{a}^{b} f\{x\} \, dx = \int_{c}^{b} f\{x\} \, dx - \int_{c}^{a} f\{x\} \, dx \tag{11.128}$$

upon isolation of $\int_{a}^{b} f\{x\}\,dx$. In view of

$$- \int_{c}^{a} f\{x\} \, dx = \int_{a}^{c} f\{x\} \, dx \tag{11.129}$$

as per Eq. (11.104), one concludes on the equivalence between Eqs. (11.128) and (11.124). Taking all the above cases into account, one realizes that a definite integral evaluated between $a$ and $b$ can be split into two definite integrals, using $c$ as intermediate boundary – irrespective of the location of $c$ relative to $a$ and $b$.

One may now proceed to addition of $\int_{a}^{b} f\{x\}\,dx$ to its negative, to get

$$\int_{a}^{b} f\{x\} \, dx + \left( - \int_{a}^{b} f\{x\} \, dx \right) = \int_{a}^{b} f\{x\} \, dx - \int_{a}^{b} f\{x\} \, dx = 0 ; \tag{11.130}$$


on the other hand, Eq. (11.104) has it that

$$\int_{a}^{b} f\{x\} \, dx + \left( - \int_{a}^{b} f\{x\} \, dx \right) = \int_{a}^{b} f\{x\} \, dx + \int_{b}^{a} f\{x\} \, dx , \tag{11.131}$$

which may be rewritten as

$$\int_{a}^{b} f\{x\} \, dx + \left( - \int_{a}^{b} f\{x\} \, dx \right) = \int_{a}^{a} f\{x\} \, dx \tag{11.132}$$

with the aid of Eq. (11.124). Elimination of $\int_{a}^{b} f\{x\}\,dx + \left( - \int_{a}^{b} f\{x\}\,dx \right)$ between Eqs. (11.130) and (11.132) leaves merely

$$\int_{a}^{a} f\{x\} \, dx = 0 \tag{11.133}$$

– meaning that an integration interval with nil amplitude delivers a nil value for the associated definite integral; this very same conclusion would be reached if $(b-a)/N$ in Eq. (11.90) were replaced by $0$. Another property of relevance exhibited by definite integrals reads

$$\int_{a}^{b} f\{x\} \, dx \leq \left. \int_{a}^{b} g\{x\} \, dx \right|_{f\{x\} \leq g\{x\}} ; \tag{11.134}$$

in fact, if $f\{x\}$ does not exceed $g\{x\}$, then at every point $x_{i}$ within $[a,b]$ one realizes that $f\{x_{i}\} \leq g\{x_{i}\}$, so multiplication of both sides by $(b-a)/N > 0$ unfolds

$$f\{x_{i}\} \, \frac{b-a}{N} \leq g\{x_{i}\} \, \frac{b-a}{N} . \tag{11.135}$$

If one further sets

$$x_{i} \equiv a + i \, \frac{b-a}{N} , \tag{11.136}$$

then Eq. (11.135) becomes

$$f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} \leq g\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} ; \quad i = 1, 2, \ldots, N ; \tag{11.137}$$

following ordered addition of Eq. (11.137) from $i = 1$ up to $i = N$, one obtains

$$\sum_{i=1}^{N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} \leq \sum_{i=1}^{N} g\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} , \tag{11.138}$$

whereas a corollary of Eq. (9.121) supports eventual conversion to

$$\lim_{N \to \infty} \sum_{i=1}^{N} f\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} \leq \lim_{N \to \infty} \sum_{i=1}^{N} g\left\{ a + i \frac{b-a}{N} \right\} \frac{b-a}{N} \tag{11.139}$$

– coincident with Eq. (11.134), after recalling Eq. (11.90).


11.1.2.2 Basic Theorems of Integral Calculus

11.1.2.2.1 Mean Value Theorem for Integrals

If $f\{x\}$ is a continuous function between $a$ and $b > a$, then there is at least one $c$, located between $a$ and $b$, such that

$$\int_{a}^{b} f\{x\} \, dx = \left( b-a \right) \left. f\{x\} \right|_{x=c} ; \quad a \leq c \leq b \tag{11.140}$$

holds true; Eq. (11.140) is usually known as the mean value theorem for integrals. In attempts to prove the above theorem, one may resort to Weierstrass' theorem as per Eqs. (9.184) and (9.185) – which guarantees existence of a minimum $m$ and a maximum $M$ for $f\{x\}$ in any interval $[a,b]$ where it is defined, i.e.

$$m \leq f\{x\} \leq M , \tag{11.141}$$

since the said function is continuous (by hypothesis); after applying Eq. (11.134) twice to Eq. (11.141), one obtains

$$\int_{a}^{b} m \, dx \leq \int_{a}^{b} f\{x\} \, dx \leq \int_{a}^{b} M \, dx , \tag{11.142}$$

whereas Eq. (11.94) supports transformation to

$$m \left( b-a \right) \leq \int_{a}^{b} f\{x\} \, dx \leq M \left( b-a \right) . \tag{11.143}$$

Division of all three sides by $b-a > 0$ converts Eq. (11.143) to

$$m \leq \frac{\int_{a}^{b} f\{x\} \, dx}{b-a} \leq M , \tag{11.144}$$

where $\int_{a}^{b} f\{x\}\,dx \big/ (b-a)$ obviously represents some real number belonging to $[m,M]$; Bolzano's theorem, see Eq. (9.146), then guarantees existence of (at least) one value of $c \in [\,f^{-1}\{m\}, f^{-1}\{M\}\,] \subset [a,b]$ such that

$$f\{c\} = \frac{\int_{a}^{b} f\{x\} \, dx}{b-a} \tag{11.145}$$

– which degenerates to Eq. (11.140) after both sides are multiplied by $b-a$.

11 145

11.1.2.2.2 First Fundamental Theorem of Integral Calculus

Assume that $f\{x\}$ denotes a continuous function in $[a,b]$, as well as $F\{x\}$ – defined as

$$F\{x\} \equiv \int_{a}^{x} f\{\xi\} \, d\xi , \tag{11.146}$$

with $a < x < b$; $F\{x\}$ accordingly represents the area under the graph of $f\{\xi\}$, when $\xi$ is varied from $a$ up to $x > a$, in agreement with Fig. 11.2 after setting $N \to \infty$. As a consequence of Eq. (11.146), one has it that

$$F\{x+h\} = \int_{a}^{x+h} f\{\xi\} \, d\xi , \tag{11.147}$$

with $F\{x+h\}$ likewise representing the area under the graph of $f\{\xi\}$ when $\xi$ spans $[a,x+h]$, again with $a < x+h < b$. Ordered subtraction of Eq. (11.146) from Eq. (11.147) unfolds

$$F\{x+h\} - F\{x\} = \int_{a}^{x+h} f\{\xi\} \, d\xi - \int_{a}^{x} f\{\xi\} \, d\xi , \tag{11.148}$$

so sequential combination with Eqs. (11.104) and (11.124) gives rise to

$$F\{x+h\} - F\{x\} = \int_{x}^{a} f\{\xi\} \, d\xi + \int_{a}^{x+h} f\{\xi\} \, d\xi = \int_{x}^{x+h} f\{\xi\} \, d\xi ; \tag{11.149}$$

according to Eq. (11.140), there is at least one number $c$, located between $x$ and $x+h$, such that

$$\int_{x}^{x+h} f\{\xi\} \, d\xi = \left. \left( x+h-x \right) f\{c\} \right|_{x \leq c \leq x+h} = h f\{c\} . \tag{11.150}$$

On the other hand, Eq. (10.21) supports

$$\frac{dF\{x\}}{dx} \equiv \lim_{h \to 0} \frac{F\{x+h\} - F\{x\}}{h} , \tag{11.151}$$

so insertion of Eq. (11.149) unfolds

$$\frac{dF\{x\}}{dx} = \lim_{h \to 0} \frac{\int_{x}^{x+h} f\{\xi\} \, d\xi}{h} ; \tag{11.152}$$

combination with Eq. (11.150) permits further transformation of Eq. (11.152) to

$$\frac{dF\{x\}}{dx} = \lim_{h \to 0} \frac{h f\{c\}}{h} = \lim_{h \to 0} f\{c\} . \tag{11.153}$$

Note the double inequality satisfied by $c$ as mentioned in Eq. (11.150), i.e.

$$x \leq c \leq x+h ; \tag{11.154}$$

this is to be complemented with

$$\lim_{h \to 0} x = x \tag{11.155}$$

as per Eq. (9.30) – since $x$ is a constant with regard to $h$ – and further with

$$\lim_{h \to 0} \left( x+h \right) = x , \tag{11.156}$$

as guaranteed by Eq. (9.47), because $h$ is infinitesimally small. Therefore, one realizes that the conditions of validity of Eq. (9.121) are satisfied – as per Eq. (11.154), coupled with $\lim_{h \to 0} x = \lim_{h \to 0} (x+h) = x$ as per Eqs. (11.155) and (11.156) – so one can state

$$\lim_{h \to 0} c = x ; \tag{11.157}$$

consequently,

$$\lim_{h \to 0} f\{c\} = \lim_{c \to x} f\{c\} = f\{x\} \tag{11.158}$$

owing to the definition of continuity as per Eq. (9.138). One finally concludes that

$$\frac{d}{dx} \int_{a}^{x} f\{\xi\} \, d\xi = f\{x\} , \tag{11.159}$$

following combination of Eqs. (11.146), (11.153), and (11.158); the above result means that $\int_{a}^{x} f\{\xi\}\,d\xi$ plays the role of an indefinite integral of $f\{x\}$, in agreement with Eqs. (11.1) and (11.6) – with $\int_{a}^{x} f\{\xi\}\,d\xi$ being a function rather than a number, because the upper limit of integration is itself a variable.

f ξ dξ being a function rather than a number, because Eqs. (11.1) and (11.6) – with a the upper limit of integration is itself a variable. 11.1.2.2.3

Second Fundamental Theorem of Integral Calculus

Consider finally a function f {x}, continuous within closed interval [a,b]; if the nomenclature underlying Eq. (11.1) still holds, then b

f x dx = F b − F a

a

F x

b a

11 160

This theorem is, on its own, also referred to as fundamental theorem of integral calculus, or Newton and Leibnitz’s theorem; although such scientists never conveyed Eq. (11.160) as such or in equivalent form, they entertained major contributions to the calculation of definite integrals by hypothesizing the nuclear link between integration and differentiation. To prove the above theorem, one may start by defining function ψ{x} as ψ x

x

f ξ dξ;

11 161

a

in view of Eq. (11.133), one realizes that ψ a

a

f ξ dξ = 0,

11 162

f ξ dξ

11 163

a

while ψ b

b a

– both stemming directly from the definition conveyed by Eq. (11.161). Furthermore, Eqs. (11.159) and (11.161) assure that dψ x =f x dx – while Eq. (11.6) indicates that dF x =f x , dx in general; furthermore, Eq. (11.16) has it that F x −ψ x = k,

11 164

11 165

11 166

since ψ{x} is an indefinite integral of f {x}, as is also the case of F{x}. For x = a, one obtains k = F a −ψ a

11 167

395

396

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

from Eq. (11.166), where insertion of Eq. (11.162) allows simplification to k =F a ;

11 168

by the same token, one finds that ψ b = F b −k

11 169

after setting x = b in Eq. (11.166), with insertion of Eq. (11.163) unfolding b

f ξ dξ = F b −k

11 170

a

Owing to Eq. (11.168), one finally obtains b

f ξ dξ = F b −F a

11 171

a

from Eq. (11.170) – which retrieves Eq. (11.160), since ξ is a dummy variable that may be replaced by any other variable, say, x. It is instructive here to revisit the major result of Lagrange’s theorem, as per Eq. (10.274), in the form df x dx

= x=c

f b −f a = b−a

f x

b

−f x

x −x x

b a

f x

a b

a

x

11 172

a

with a < c < b; in view of Eq. (11.160), one may transform Eq. (11.172) to b

b df x df x dx dx df x dx dx = a b = a b , dx x = c dx dx dx a dx a a result consistent with Eq. (11.6). If an auxiliary variable g{x} is defined via

df x , dx then Eq. (11.173) can be rewritten as g x

11 173

11 174

b

g x dx g x

x=c

=

a

;

b

11 175

dx a

the right-hand side of Eq. (11.175) is but the definition of average of a continuous function within interval [a,b] – and justifies why Lagrange’s theorem is commonly referred to as theorem of the mean, given specifically by g{x} evaluated at some c [a,b] (or g{x}|x = c) when x in g{x} spans [a,b].
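The statement of Eq. (11.160) is easily demonstrated numerically – a minimal Python sketch (illustrative, not from the original text) compares a Riemann sum for $f\{x\} = e^{x}$ on $[0,1]$ with $F\{b\} - F\{a\}$:

```python
import math

def riemann(f, a, b, N=100000):
    """Right-endpoint Riemann sum approximating the definite integral, Eq. (11.90)."""
    h = (b - a) / N
    return sum(f(a + i * h) for i in range(1, N + 1)) * h

# Eq. (11.160) with f{x} = exp x on [0, 1]: the sum should approach e - 1
a, b = 0.0, 1.0
numeric = riemann(math.exp, a, b)
exact = math.exp(b) - math.exp(a)   # F{b} - F{a}, with F = exp
print(f"Riemann sum = {numeric:.6f}, F(b) - F(a) = {exact:.6f}")
```

The finite sum approaches $e - 1 \approx 1.718282$ as $N$ grows, in agreement with the second fundamental theorem.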

11.1.2.3 Reduction Formulae

Integration by parts, as introduced above, also proves a useful method to calculate definite integrals – and may be directly adapted from Eq. (11.39), after taking Eq. (11.160) on board, i.e.

$$\int_{a}^{b} f_{1} f_{2} \, dx = \left. \left( f_{1} F_{2} \right) \right|_{a}^{b} - \int_{a}^{b} F_{2} \, \frac{df_{1}}{dx} \, dx , \tag{11.176}$$

which reads

$$\int_{a}^{b} f_{1}\{x\} f_{2}\{x\} \, dx = \left. f_{1}\{x\} F_{2}\{x\} \right|_{a}^{b} - \int_{a}^{b} F_{2}\{x\} \, \frac{df_{1}\{x\}}{dx} \, dx \tag{11.177}$$

using an expanded notation. By the same token, Eq. (11.48) takes the form

$$\int_{a}^{b} f\{x\} \, dx = \int_{\phi^{-1}\{a\}}^{\phi^{-1}\{b\}} f\{\phi\{\xi\}\} \, \frac{d\phi\{\xi\}}{d\xi} \, d\xi \tag{11.178}$$

when a definite integral is at stake and Eq. (11.46) applies; the right-hand side, for being a constant, does not require any further manipulation arising from the change of variable – unlike happens with Eq. (11.48), which, for being a function, still has to be combined with Eq. (11.46) as per $\xi \equiv \phi^{-1}\{x\}$ to retrieve the original variable.

The method conveyed by Eq. (11.177) may be iterated as needed, leading to the so-called reduction formulae; one of the most useful of such formulae was originally derived by John Wallis, a mathematician of the seventeenth century – and pertains to the integral of even powers of the cosine, between $0$ and $\pi/2$. In fact, after splitting the original power as

$$\int \cos^{2n} \theta \, d\theta = \int \cos^{2n-2} \theta \cos^{2} \theta \, d\theta , \tag{11.179}$$

one may bring along the fundamental theorem of trigonometry as per Eq. (2.442) to write

$$\int \cos^{2n} \theta \, d\theta = \int \cos^{2n-2} \theta \left( 1 - \sin^{2} \theta \right) d\theta = \int \cos^{2n-2} \theta \, d\theta - \int \cos^{2n-2} \theta \sin^{2} \theta \, d\theta \tag{11.180}$$

– also with the aid of the distributive property of multiplication of scalars, complemented with Eq. (11.22); integration of the second term in Eq. (11.180) may now proceed by parts, viz.

$$\begin{aligned} \int \cos^{2n} \theta \, d\theta &= \int \cos^{2n-2} \theta \, d\theta - \int \left( \cos^{2n-2} \theta \sin \theta \right) \sin \theta \, d\theta \\ &= \int \cos^{2n-2} \theta \, d\theta - \left( - \frac{\cos^{2n-1} \theta}{2n-1} \sin \theta + \int \frac{\cos^{2n-1} \theta}{2n-1} \cos \theta \, d\theta \right) \\ &= \int \cos^{2n-2} \theta \, d\theta + \frac{\sin \theta \cos^{2n-1} \theta}{2n-1} - \frac{1}{2n-1} \int \cos^{2n} \theta \, d\theta , \end{aligned} \tag{11.181}$$

at the expense of Eq. (11.39) with $f_{1} \equiv \sin \theta$ and $f_{2} \equiv \cos^{2n-2} \theta \sin \theta$, complemented by Eqs. (10.44), (10.48), and (10.205). The last integral in the right-hand side is similar to that in the left-hand side, so Eq. (11.181) may be redone as

$$\left( 1 + \frac{1}{2n-1} \right) \int \cos^{2n} \theta \, d\theta = \int \cos^{2n-2} \theta \, d\theta + \frac{\sin \theta \cos^{2n-1} \theta}{2n-1} \tag{11.182}$$

upon collapsing terms alike, or else

$$\frac{2n}{2n-1} \int \cos^{2n} \theta \, d\theta = \int \cos^{2n-2} \theta \, d\theta + \frac{\sin \theta \cos^{2n-1} \theta}{2n-1} \tag{11.183}$$

after lumping the terms in parenthesis; $\int \cos^{2n} \theta \, d\theta$ may finally be isolated to give

$$\int \cos^{2n} \theta \, d\theta = \frac{2n-1}{2n} \int \cos^{2n-2} \theta \, d\theta + \frac{2n-1}{2n} \, \frac{\sin \theta \cos^{2n-1} \theta}{2n-1} = \frac{2n-1}{2n} \int \cos^{2n-2} \theta \, d\theta + \frac{\sin \theta \cos^{2n-1} \theta}{2n} , \tag{11.184}$$

along with straightforward algebraic rearrangement.

along with straightforward algebraic rearrangement. Following integration between 0 and π/2, Eq. (11.184) turns to π 2

2n −1 cos θ dθ = 2n 0 2n

π 2

cos

2n− 2

0

sin θ cos 2n −1 θ θ dθ + 2n

π 2

11 185

0

in the line of Eqs. (11.160) and (11.177) – which breaks down to π 2

2n −1 cos θ dθ = 2n 0 2n

π 2

cos 2n− 2 θ dθ +

0

sin θ cos 2n −1 θ sin θ cos 2n− 1 θ − , π 2n 2n 0

11 186

2

or else π 2

2n − 1 cos θ dθ = 2n 0 2n

= =

2n − 1 2n 2n − 1 2n

π 2 0 π 2 0 π 2

cos

2n−2

π π sin cos 2n− 1 2n −1 0 2 2 − sin 0 cos θ dθ + 2n 2n

cos 2n−2 θ dθ +

1 02n− 1 0 12n− 1 − 2n 2n

;

11 187

cos 2n−2 θ dθ

0 π 2

a recursive relationship has thus arisen, where

0

π 2

cos 2n θ dθ is obtained at the expense of

cos 2n−2 θ dθ, following correction by factor (2n − 1)/2n. The latter integral in Eq. (11.187) may, in turn, be constructed as 0

π 2

cos

2n −2

0

π 2

2n − 2 −1 θ dθ = 2n − 2

cos

2n−2 − 2

0

2n −3 θ dθ = 2n −2

π 2

cos 2n− 4 θ dθ

11 188

0

after having replaced 2n by 2n − 2 in Eq. (11.187); insertion of Eq. (11.188) transforms Eq. (11.187) to π 2

2n −1 2n −3 cos θ dθ = 2n 2n −2 0 2n

π 2

cos 2n− 4 θ dθ

11 189

0

– and this process may be iterated up to n times, until attaining π 2 0

cos 2n θ dθ =

2n −1 2n −3 2n −5 2n 2n −2 2n −4

31 42

π 2 0

dθ,

11 190

Integrals

where the first entry of Table 11.1 for n = 0 eventually leads to π 2

π

n

cos

2n

0

2i− 1 2 θ = θ dθ = 2i 0 i=1

n

2i− 1 θ −θ 2i π i=1

11 191

0

2

Equation (11.191) reduces to just π 2

cos 2n θ dθ =

0

π n 2i− 1 2 i = 1 2i

11 192

– usually known as Wallis’ formula; it is sometimes redone to n π 2

n

2i− 1

cos

2n

0

π θ dθ = i = 1 n 2

n

2i− 1

π = i =n1 2

2i

i=1

n

2

i=1

i

2i −1 π i=1 , = 2 2n n

11 193

i=1

based on the definition of power and factorial. π 2
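Wallis' formula can be checked numerically – the Python sketch below (illustrative, not part of the original text) compares the product in Eq. (11.192) with a midpoint approximation of the defining integral for a few values of $n$:

```python
import math

def wallis(n):
    """Right-hand side of Eq. (11.192): (pi/2) * product of (2i-1)/(2i), i = 1..n."""
    prod = 1.0
    for i in range(1, n + 1):
        prod *= (2 * i - 1) / (2 * i)
    return (math.pi / 2.0) * prod

def cos_power_integral(n, N=20000):
    """Midpoint approximation of the integral of cos**(2n) over [0, pi/2]."""
    h = (math.pi / 2.0) / N
    return sum(math.cos((i + 0.5) * h) ** (2 * n) for i in range(N)) * h

for n in (1, 2, 5):
    print(n, wallis(n), cos_power_integral(n))
```

For $n = 1$ both routes return $\pi/4$, in line with the classical value of $\int_{0}^{\pi/2} \cos^{2} \theta \, d\theta$.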

Finally, it should be emphasized that $\int_{0}^{\pi/2} \cos^{2n} \theta \, d\theta$ may also be accessible via the angle transformation formula labeled as Eq. (2.397), viz.

$$\cos^{2n} \theta = \frac{1}{4^{n}} \left( \binom{2n}{n} + 2 \sum_{j=0}^{n-1} \binom{2n}{j} \cos \left( 2n-2j \right) \theta \right) , \tag{11.194}$$

or, equivalently,

$$\cos^{2n} \theta = \frac{1}{4^{n}} \left( \binom{2n}{n} + 2 \sum_{i=1}^{n} \binom{2n}{n-i} \cos 2i\theta \right) \tag{11.195}$$

after replacing the counting variable $j$ by $i \equiv n-j$. Upon integration with the aid of Eq. (11.22), one can transform Eq. (11.195) to

$$\int \cos^{2n} \theta \, d\theta = \frac{1}{4^{n}} \left( \binom{2n}{n} \int d\theta + 2 \sum_{i=1}^{n} \binom{2n}{n-i} \int \cos 2i\theta \, d\theta \right) , \tag{11.196}$$

where calculation of the outstanding integrals at the expense of Eqs. (10.29), (10.44), and (10.205) gives rise to

$$\int \cos^{2n} \theta \, d\theta = \frac{1}{4^{n}} \left( \binom{2n}{n} \theta + 2 \sum_{i=1}^{n} \binom{2n}{n-i} \frac{\sin 2i\theta}{2i} \right) = \frac{1}{4^{n}} \left( \binom{2n}{n} \theta + \sum_{i=1}^{n} \binom{2n}{n-i} \frac{\sin 2i\theta}{i} \right) ; \tag{11.197}$$

integration between $0$ and $\pi/2$ then unfolds

$$\int_{0}^{\pi/2} \cos^{2n} \theta \, d\theta = \frac{1}{4^{n}} \left. \left( \binom{2n}{n} \theta + \sum_{i=1}^{n} \binom{2n}{n-i} \frac{\sin 2i\theta}{i} \right) \right|_{0}^{\pi/2} \tag{11.198}$$

in view of Eq. (11.160), where straightforward algebraic manipulation eventually generates

$$\begin{aligned} \int_{0}^{\pi/2} \cos^{2n} \theta \, d\theta &= \frac{1}{4^{n}} \left( \binom{2n}{n} \frac{\pi}{2} + \sum_{i=1}^{n} \binom{2n}{n-i} \frac{\sin 2i\frac{\pi}{2}}{i} - \binom{2n}{n} \cdot 0 - \sum_{i=1}^{n} \binom{2n}{n-i} \frac{\sin 0}{i} \right) \\ &= \frac{1}{4^{n}} \left( \binom{2n}{n} \frac{\pi}{2} + \sum_{i=1}^{n} \binom{2n}{n-i} \frac{\sin i\pi}{i} \right) = \frac{1}{4^{n}} \binom{2n}{n} \frac{\pi}{2} \end{aligned} \tag{11.199}$$

– since $\sin 0 = \sin i\pi = 0$ for $i = 1, 2, \ldots$ . Recalling the mode of calculation of binomial coefficients as per Eq. (2.240), and the rule of calculation of composite powers, one may rewrite Eq. (11.199) as

$$\int_{0}^{\pi/2} \cos^{2n} \theta \, d\theta = \frac{1}{2^{2n}} \, \frac{\left( 2n \right)!}{n! \left( 2n-n \right)!} \, \frac{\pi}{2} = \frac{\left( 2n \right)!}{2^{2n} \, n! \, n!} \, \frac{\pi}{2} , \tag{11.200}$$

where the factorial of $2n$ may, in turn, be split as two extended products of sequential even and odd integers, i.e.

$$\int_{0}^{\pi/2} \cos^{2n} \theta \, d\theta = \frac{\pi}{2} \, \frac{\prod_{i=1}^{n} \left( 2i-1 \right) \prod_{i=1}^{n} 2i}{2^{2n} \, n! \, n!} = \frac{\pi}{2} \, \frac{\prod_{i=1}^{n} \left( 2i-1 \right) \, 2^{n} \prod_{i=1}^{n} i}{2^{2n} \, n! \, n!} = \frac{\pi}{2} \, \frac{\prod_{i=1}^{n} \left( 2i-1 \right)}{2^{n} \, n!} \tag{11.201}$$

– where straightforward algebraic transformations meanwhile took place, including splitting $\prod_{i=1}^{n} 2i$ as the product of $\prod_{i=1}^{n} 2$ by $\prod_{i=1}^{n} i$; Eq. (11.201) coincides with Eq. (11.193), as expected.
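The agreement between Eqs. (11.200) and (11.193) can also be confirmed in floating-point arithmetic – a short Python sketch (illustrative, not from the original text):

```python
import math

def binom_form(n):
    """Right-hand side of Eq. (11.200): (1/4**n) * C(2n, n) * pi/2."""
    return math.comb(2 * n, n) / 4 ** n * (math.pi / 2.0)

def wallis_product(n):
    """Eq. (11.193): (pi/2) * product of odd integers up to 2n-1, over 2**n * n!."""
    odd = 1
    for i in range(1, n + 1):
        odd *= 2 * i - 1
    return (math.pi / 2.0) * odd / (2 ** n * math.factorial(n))

vals = [(binom_form(n), wallis_product(n)) for n in range(1, 8)]
print(all(abs(x - y) < 1e-12 for x, y in vals))
```

Both closed forms agree to machine precision, since they are algebraically identical term by term.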

11.2 Multivariate Integral

11.2.1 Definition

11.2.1.1 Line Integral

Suppose $y \equiv f\{x\}$ denotes a real, single-valued, monotonic, and continuous function of $x$ within some interval $[x_{1},x_{2}]$, as represented by curve $C$ of analytical equation $f\{x\}$ in Fig. 11.3 – with endpoints $A$ and $B$, accordingly described by coordinates $(x_{1}, f\{x_{1}\})$ and $(x_{2}, f\{x_{2}\})$, respectively. If $P\{x,y\}$ and $Q\{x,y\}$ are two real, single-valued, and continuous functions of $x$ and $y$ for all points of $C$, then either integral $\int_{C_{x}\{A \to B\}} P\{x,y\}\,dx$ or $\int_{C_{y}\{A \to B\}} Q\{x,y\}\,dy$ is termed a line (or curvilinear) integral – with integration taking place between $A$ and $B$, along curve $C$. Since $y$ can be expressed in terms of $x$ via $f\{x\}$, this is equivalent to an ordinary integral with regard to $x$, i.e.

Figure 11.3 Graphical representation of continuous function $f\{x\}$ on the $x0y$ plane via curve $C$, departing from point $A$ of coordinates $(x_{1}, f\{x_{1}\})$, and reaching point $B$ of coordinates $(x_{2}, f\{x_{2}\})$.

x2

P x, y dx = Cx A

B

P x, f x

11 202

dx,

x1

and likewise x2

Q x, y dy = Cy A

B

Q x, f x x1

df x dx; dx

11 203

note the convenient multiplication and division of the kernel in Eq. (11.203) by dx as per the definition of differential of f, with dy/dx coinciding with df/dx. Line integrals may be evaluated in two directions, since the path may be traversed from A to B or from B to A; due to their equivalence to ordinary integrals, one may write

$$\int_{C_x,\,A\to B} P\{x,y\}\,dx = -\int_{x_2}^{x_1} P\{x,f\{x\}\}\,dx = -\int_{C_x,\,B\to A} P\{x,y\}\,dx \qquad (11.204)$$

stemming from Eqs. (11.104) and (11.202) – and likewise

$$\int_{C_y,\,A\to B} Q\{x,y\}\,dy = -\int_{x_2}^{x_1} Q\{x,f\{x\}\}\,\frac{df\{x\}}{dx}\,dx = -\int_{C_y,\,B\to A} Q\{x,y\}\,dy, \qquad (11.205)$$

upon combination of Eq. (11.104) with Eq. (11.203). One promptly realizes that the value of a line integral depends on both x1 and x2; when such limits of integration are fixed, the value of the line integral depends, in principle, on the path chosen between them – although this is not always the case. Consider, in this regard, F{x,y}, such that its partial derivatives with regard to x and y are single-valued, continuous functions of x and y; and consider also curve C – defined parametrically by x ≡ f{t} and y ≡ g{t}, such that A(x1, y1) ≡ A{t1} and B(x2, y2) ≡ B{t2} account for the initial and final points of integration along C. In view of Eq. (10.6), the line integral of the total differential coefficient reads

$$\int_{A}^{B} dF = \int_{t_1}^{t_2} \frac{dF}{dt}\,dt = \int_{t_1}^{t_2}\left(\frac{\partial F}{\partial x}\frac{dx}{dt} + \frac{\partial F}{\partial y}\frac{dy}{dt}\right)dt = \int_{A}^{B}\left(\frac{\partial F}{\partial x}\,dx + \frac{\partial F}{\partial y}\,dy\right), \qquad (11.206)$$


Mathematics for Enzyme Reaction Kinetics and Reactor Performance

where the kernel was meanwhile multiplied and divided by dt; recalling Eqs. (11.21) and (11.160), one may redo Eq. (11.206) to

$$\int_{A}^{B} dF = F\Big|_{t_1}^{t_2} = F\{t_2\} - F\{t_1\} = F\{x_2,y_2\} - F\{x_1,y_1\}, \qquad (11.207)$$

which implies that $\int_A^B dF$ depends only on the initial and final points, A and B, respectively, of the integration path – and not on its particular shape. Inspection of the form of Eq. (11.206) indicates that a function F{x,y} – simultaneously satisfying

$$\frac{\partial F\{x,y\}}{\partial x} = P\{x,y\} \qquad (11.208)$$

and

$$\frac{\partial F\{x,y\}}{\partial y} = Q\{x,y\}, \qquad (11.209)$$

turns $\int_{C_{x,y}} P\{x,y\}\,dx + Q\{x,y\}\,dy$ independent of path Cx,y, in view of Eq. (11.207).
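The reduction of a line integral to an ordinary integral, as per Eq. (11.203), lends itself to a quick numerical check; the sketch below (an illustration only, with hypothetical names) takes C as y = x² from (0,0) to (1,1) and Q{x,y} = xy, for which the exact value is ∫₀¹ x·x²·2x dx = 2/5:

```python
def line_integral_dy(Q, f, dfdx, x1, x2, n=100000):
    # midpoint-rule approximation of Eq. (11.203):
    # integral of Q(x, f(x)) * df/dx over [x1, x2]
    h = (x2 - x1) / n
    total = 0.0
    for i in range(n):
        x = x1 + (i + 0.5) * h
        total += Q(x, f(x)) * dfdx(x) * h
    return total

val = line_integral_dy(lambda x, y: x * y,   # Q{x,y}
                       lambda x: x * x,      # f{x}, the curve C
                       lambda x: 2 * x,      # df/dx
                       0.0, 1.0)
print(val)  # close to 2/5
```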

Closed line (or cycle) integrals arise when the final and initial points of the integration path do coincide, i.e.

$$\oint_{C_{x,y}} \left(P\{x,y\}\,dx + Q\{x,y\}\,dy\right) \equiv \int_{A\to B\to A} \left(P\{x,y\}\,dx + Q\{x,y\}\,dy\right). \qquad (11.210)$$

If the conditions underlying Eq. (11.206) hold, then

$$\oint_{C_{x,y}} dF = 0 \qquad (11.211)$$

is enforced by Eq. (11.207), analogous to Eq. (11.133) but in two dimensions – because the initial and final points of the integration path coincide; in this case, F is termed an exact differential. If P{x,y} and Q{x,y} are still differentiable with regard to x and y, then one obtains

$$\frac{\partial P\{x,y\}}{\partial y} = \frac{\partial}{\partial y}\frac{\partial F\{x,y\}}{\partial x} = \frac{\partial^{2} F\{x,y\}}{\partial y\,\partial x} \qquad (11.212)$$

from Eq. (11.208), and similarly

$$\frac{\partial Q\{x,y\}}{\partial x} = \frac{\partial}{\partial x}\frac{\partial F\{x,y\}}{\partial y} = \frac{\partial^{2} F\{x,y\}}{\partial x\,\partial y} = \frac{\partial^{2} F\{x,y\}}{\partial y\,\partial x} \qquad (11.213)$$

from Eq. (11.209) – where the latter already took advantage of Young's (or Schwarz's) theorem as per Eq. (10.65). Elimination of ∂²F{x,y}/∂y∂x between Eqs. (11.212) and (11.213) gives rise to

$$\frac{\partial P\{x,y\}}{\partial y} = \frac{\partial Q\{x,y\}}{\partial x} \qquad (11.214)$$

as necessary and sufficient condition for independency of path – or else

$$\oint_{C_{x,y}} \left(P\{x,y\}\,dx + Q\{x,y\}\,dy\right) = 0, \quad \text{with } \frac{\partial P\{x,y\}}{\partial y} = \frac{\partial Q\{x,y\}}{\partial x}, \qquad (11.215)$$

as a more useful form.
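The path-independence criterion of Eq. (11.214) can be verified numerically. In the sketch below (illustrative names, not from the text), F{x,y} = x²y + y³ yields P = ∂F/∂x = 2xy and Q = ∂F/∂y = x² + 3y², so that ∂P/∂y = ∂Q/∂x = 2x; the line integral of P dx + Q dy between (0,0) and (1,1) should therefore equal F{1,1} − F{0,0} = 2 along any path:

```python
def path_integral(P, Q, x, y, dxdt, dydt, t1, t2, n=200000):
    # midpoint-rule evaluation of the line integral of P dx + Q dy
    # along the parametric path (x(t), y(t))
    h = (t2 - t1) / n
    s = 0.0
    for i in range(n):
        t = t1 + (i + 0.5) * h
        s += (P(x(t), y(t)) * dxdt(t) + Q(x(t), y(t)) * dydt(t)) * h
    return s

P = lambda x, y: 2 * x * y            # dF/dx for F = x^2 y + y^3
Q = lambda x, y: x * x + 3 * y * y    # dF/dy
straight = path_integral(P, Q, lambda t: t, lambda t: t,
                         lambda t: 1.0, lambda t: 1.0, 0.0, 1.0)
parabola = path_integral(P, Q, lambda t: t, lambda t: t * t,
                         lambda t: 1.0, lambda t: 2 * t, 0.0, 1.0)
print(straight, parabola)  # both close to 2
```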


11.2.1.2 Double Integral

A double integral may be geometrically defined in much the same way Riemann defined a single integral via Eq. (11.90), i.e.

$$\int_a^b\!\!\int_{g_1\{x\}}^{g_2\{x\}} f\{x,y\}\,dy\,dx \equiv \lim_{\substack{\Delta x\to 0\\ \Delta y\to 0}} \sum_{i=1}^{N}\sum_{j=1}^{M} f\{x_i,y_j\}\,\Delta y\,\Delta x \\ = \lim_{\substack{N\to\infty\\ M\to\infty}} \sum_{i=1}^{N}\sum_{j=1}^{M} f\!\left\{a+i\frac{b-a}{N},\; g_1\!\left\{a+i\frac{b-a}{N}\right\} + j\,\frac{g_2\!\left\{a+i\frac{b-a}{N}\right\}-g_1\!\left\{a+i\frac{b-a}{N}\right\}}{M}\right\} \frac{g_2\!\left\{a+i\frac{b-a}{N}\right\}-g_1\!\left\{a+i\frac{b-a}{N}\right\}}{M}\,\frac{b-a}{N}; \qquad (11.216)$$

a graphical interpretation is provided in Fig. 11.4a. Inspection of this diagram unfolds two basic ways of spanning the integration region, Dx,y – either having x vary between a and b and then y between g1{x} and g2{x} at every such x, or else having y as independent variable and then x span the range [g1⁻¹{y}, g2⁻¹{y}]. Hence, calculation of a double integral must proceed stepwise – with the inner integral being computed before the outer integral;


Figure 11.4 Graphical representation of functions g1{x} and g2{x} (solid lines) on the x0y plane, both departing from abscissa a but ordinates g1{a} or g2{a}, respectively, and attaining abscissa b and ordinates g1{b} or g2{b}, respectively, serving as lower and upper boundaries for the integration region – with indication of (a) elementary rectangle with width dx (for a ≤ x ≤ b) and height dy = g2{x} − g1{x}, and elementary rectangle with height dy and width g1⁻¹{y} − g2⁻¹{y}; and (b) partition of integration domain Dx,y between two subdomains, D1,x,y and D2,x,y.


a similar situation occurred with the second-order derivative, calculated as the derivative of another derivative – see Eq. (10.49). In view of the underlying definition of a double integral resorting to summations – as done in Eq. (11.216) coupled with Eq. (9.73), one realizes that

$$\int_a^b\!\!\int_{g_1\{x\}}^{g_2\{x\}} \left(c_1 f\{x,y\} + c_2 g\{x,y\}\right) dy\,dx \equiv \lim_{\substack{\Delta x\to 0\\ \Delta y\to 0}} \sum_{i=1}^{N}\sum_{j=1}^{M} \left(c_1 f\{x_i,y_j\} + c_2 g\{x_i,y_j\}\right)\Delta y\,\Delta x \\ = c_1 \lim_{\substack{\Delta x\to 0\\ \Delta y\to 0}} \sum_{i=1}^{N}\sum_{j=1}^{M} f\{x_i,y_j\}\,\Delta y\,\Delta x + c_2 \lim_{\substack{\Delta x\to 0\\ \Delta y\to 0}} \sum_{i=1}^{N}\sum_{j=1}^{M} g\{x_i,y_j\}\,\Delta y\,\Delta x \qquad (11.217)$$

with c1 and c2 denoting constants – and, consequently,

$$\iint_{D_{x,y}} \left(c_1 f\{x,y\} + c_2 g\{x,y\}\right) dx\,dy = c_1 \iint_{D_{x,y}} f\{x,y\}\,dx\,dy + c_2 \iint_{D_{x,y}} g\{x,y\}\,dx\,dy; \qquad (11.218)$$

Eq. (11.218) mimics Eq. (11.102) pertaining to a single integral. Furthermore, if the integration domain Dx,y is exactly partitioned in D1,x,y and D2,x,y as illustrated in Fig. 11.4b, then

$$\int_a^b\!\!\int_{g_1\{x\}}^{g_2\{x\}} f\{x,y\}\,dy\,dx \equiv \lim_{\substack{\Delta x\to 0\\ \Delta y\to 0}} \sum_{i=1}^{N} \left(\sum_{j=1}^{P} f\{x_i,y_j\} + \sum_{j=P+1}^{M} f\{x_i,y_j\}\right)\Delta y\,\Delta x \\ = \lim_{\substack{\Delta x\to 0\\ \Delta y\to 0}} \sum_{i=1}^{N}\sum_{j=1}^{P} f\{x_i,y_j\}\,\Delta y\,\Delta x + \lim_{\substack{\Delta x\to 0\\ \Delta y\to 0}} \sum_{i=1}^{N}\sum_{j=P+1}^{M} f\{x_i,y_j\}\,\Delta y\,\Delta x \qquad (11.219)$$

again at the expense of Eq. (9.73) – with (integer) P lying somewhere between 1 and M (exclusive); after taking limits as indicated, Eq. (11.219) becomes

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy = \iint_{D_{1,x,y}} f\{x,y\}\,dx\,dy + \iint_{D_{2,x,y}} f\{x,y\}\,dx\,dy, \quad D_{1,x,y}\cup D_{2,x,y}=D_{x,y},\; D_{1,x,y}\cap D_{2,x,y}=\varnothing \qquad (11.220)$$

with the aid of Eq. (11.216) – provided that D1,x,y and D2,x,y are mutually exclusive and jointly inclusive.
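The nested-sum definition in Eq. (11.216) can be exercised directly; the sketch below (illustrative names, with midpoint sampling rather than left endpoints, purely for faster convergence) approximates ∫₀¹∫₀ˣ (x + y) dy dx, whose exact value is 1/2:

```python
def double_riemann(f, a, b, g1, g2, N=400, M=400):
    # finite version of the double sum in Eq. (11.216):
    # outer partition in x, inner partition between g1(x) and g2(x)
    s = 0.0
    dx = (b - a) / N
    for i in range(N):
        x = a + (i + 0.5) * dx
        dy = (g2(x) - g1(x)) / M
        for j in range(M):
            y = g1(x) + (j + 0.5) * dy
            s += f(x, y) * dy * dx
    return s

approx = double_riemann(lambda x, y: x + y, 0.0, 1.0,
                        lambda x: 0.0, lambda x: x)
print(approx)  # close to 1/2
```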

11.2.2 Basic Theorems

11.2.2.1 Fubini's Theorem

The definition of double integral, $\iint_{D_{x,y}} f\{x,y\}\,dx\,dy$, as conveyed by Eq. (11.216), assumes implicitly that it is expressible as an integral (in, say, the x-direction) of another


integral (in, say, the y-direction); the issue remains, however, of which direction to use for the first (or the second) integral. Consider first the case where the limits of integration in both directions are constant or, equivalently, with an integration domain of rectangular shape; this means that g1{x} = c and g2{x} = d in Eq. (11.216), with c and d denoting constants. The associated double integral will accordingly read

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy \equiv \int_a^b \left(\int_c^d f\{x,y\}\,dy\right) dx \qquad (11.221)$$

– in which case integration in y may proceed via Eq. (11.160) as

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy = \int_a^b \left(F_y\{x,y\}\Big|_{y=d} - F_y\{x,y\}\Big|_{y=c}\right) dx = \int_a^b \left(F_y\{x,d\} - F_y\{x,c\}\right) dx; \qquad (11.222)$$

note that Fy{x,y} is such that

$$\frac{\partial F_y\{x,y\}}{\partial y} \equiv f\{x,y\}. \qquad (11.223)$$

A further integration in x transforms Eq. (11.222) to

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy = \int_a^b F_y\{x,d\}\,dx - \int_a^b F_y\{x,c\}\,dx = F_{yx}\{x,d\}\Big|_{x=a}^{x=b} - F_{yx}\{x,c\}\Big|_{x=a}^{x=b} \\ = F_{yx}\{b,d\} - F_{yx}\{a,d\} - F_{yx}\{b,c\} + F_{yx}\{a,c\} \qquad (11.224)$$

at the expense of Eqs. (11.102) and (11.160) – as long as Fyx{x,y} satisfies

$$F_y\{x,y\} \equiv \frac{\partial F_{yx}\{x,y\}}{\partial x}. \qquad (11.225)$$

Consider now the reverse order of integration, i.e.

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy \equiv \int_c^d \left(\int_a^b f\{x,y\}\,dx\right) dy \qquad (11.226)$$

en lieu of Eq. (11.221) – where integration in x within the inner kernel gives rise to

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy = \int_c^d \left(F_x\{x,y\}\Big|_{x=b} - F_x\{x,y\}\Big|_{x=a}\right) dy = \int_c^d \left(F_x\{b,y\} - F_x\{a,y\}\right) dy, \qquad (11.227)$$


again at the expense of Eq. (11.160); here Fx{x,y} abides to

$$\frac{\partial F_x\{x,y\}}{\partial x} \equiv f\{x,y\}. \qquad (11.228)$$

Application again of Eqs. (11.102) and (11.160) to Eq. (11.227) unfolds

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy = \int_c^d F_x\{b,y\}\,dy - \int_c^d F_x\{a,y\}\,dy = F_{xy}\{b,y\}\Big|_{y=c}^{y=d} - F_{xy}\{a,y\}\Big|_{y=c}^{y=d} \\ = F_{xy}\{b,d\} - F_{xy}\{b,c\} - F_{xy}\{a,d\} + F_{xy}\{a,c\}, \qquad (11.229)$$

with Fxy{x,y} satisfying

$$F_x\{x,y\} \equiv \frac{\partial F_{xy}\{x,y\}}{\partial y}. \qquad (11.230)$$

One may now revisit Eq. (11.223) with the aid of Eq. (11.225), viz.

$$f\{x,y\} = \frac{\partial}{\partial y}\frac{\partial F_{yx}\{x,y\}}{\partial x} = \frac{\partial^{2} F_{yx}\{x,y\}}{\partial y\,\partial x}, \qquad (11.231)$$

and likewise Eq. (11.228) following insertion of Eq. (11.230), i.e.

$$f\{x,y\} = \frac{\partial}{\partial x}\frac{\partial F_{xy}\{x,y\}}{\partial y} = \frac{\partial^{2} F_{xy}\{x,y\}}{\partial x\,\partial y}; \qquad (11.232)$$

elimination of f{x,y} between Eqs. (11.231) and (11.232) gives rise to

$$\frac{\partial^{2} F_{yx}\{x,y\}}{\partial y\,\partial x} = \frac{\partial^{2} F_{xy}\{x,y\}}{\partial x\,\partial y}. \qquad (11.233)$$

According to Young's, or Schwarz's theorem, the order of differentiation in second-order cross derivatives is irrelevant, provided that the function under scrutiny is continuous and differentiable up to second order; hence, Eq. (11.233) may be rewritten as

$$\frac{\partial}{\partial x}\frac{\partial F_{yx}\{x,y\}}{\partial y} = \frac{\partial}{\partial x}\frac{\partial F_{xy}\{x,y\}}{\partial y} \qquad (11.234)$$

using ∂/∂x as outer derivative in both sides, or instead as

$$\frac{\partial}{\partial y}\frac{\partial F_{yx}\{x,y\}}{\partial x} = \frac{\partial}{\partial y}\frac{\partial F_{xy}\{x,y\}}{\partial x} \qquad (11.235)$$

via highlighting differentiation with regard to y, following differentiation with regard to x. After getting rid of dx in both sides, Eq. (11.234) can be integrated as

$$\int d\left(\frac{\partial F_{yx}\{x,y\}}{\partial y}\right)\bigg|_y = \int d\left(\frac{\partial F_{xy}\{x,y\}}{\partial y}\right)\bigg|_y \qquad (11.236)$$

under a constant y as implicit in ∂/∂x in the first place. Equation (11.236) gives rise to

$$\frac{\partial F_{yx}\{x,y\}}{\partial y} = \frac{\partial F_{xy}\{x,y\}}{\partial y} + h_1\{y\} \qquad (11.237)$$

at the expense of Eqs. (11.16) and (11.21) – where h1{y} denotes a univariate (arbitrary) function of y, playing the role of constant (dependent on the chosen y), since integration took place in x; a second integration, now in y, allows transformation of Eq. (11.237) to

$$dF_{yx}\{x,y\} = dF_{xy}\{x,y\} + h_1\{y\}\,dy \qquad (11.238)$$

after multiplying both sides by dy and recalling Eq. (11.22) – or else

$$F_{yx}\{x,y\} = F_{xy}\{x,y\} + H_1\{y\} + k_1 \qquad (11.239)$$

with the aid of Eqs. (11.1), (11.16), and (11.21), where H1{y} denotes the integral (obviously in y) of h1{y} and k1 denotes a (true) constant. A similar reasoning may now depart from Eq. (11.235) as

$$\int d\left(\frac{\partial F_{yx}\{x,y\}}{\partial x}\right)\bigg|_x = \int d\left(\frac{\partial F_{xy}\{x,y\}}{\partial x}\right)\bigg|_x, \qquad (11.240)$$

obtained after multiplying both sides by dy and integrating in y along constant x; direct application of Eqs. (11.16) and (11.21) gives rise to

$$\frac{\partial F_{yx}\{x,y\}}{\partial x} = \frac{\partial F_{xy}\{x,y\}}{\partial x} + h_2\{x\} \qquad (11.241)$$

– where h2{x} plays the role of constant (yet dependent on the x chosen), since integration took place along y. Both sides of Eq. (11.241) may now be integrated in x to produce

$$dF_{yx}\{x,y\} = dF_{xy}\{x,y\} + h_2\{x\}\,dx, \qquad (11.242)$$

following multiplication by dx and recalling Eq. (11.22); an extra application of Eqs. (11.1), (11.16), and (11.21) yields

$$F_{yx}\{x,y\} = F_{xy}\{x,y\} + H_2\{x\} + k_2, \qquad (11.243)$$

such that dH2{x}/dx = h2{x} and k2 stands for another (true) constant. Ordered subtraction of Eq. (11.243) from Eq. (11.239) generates

$$F_{yx}\{x,y\} - F_{yx}\{x,y\} = F_{xy}\{x,y\} - F_{xy}\{x,y\} + H_1\{y\} - H_2\{x\} + k_1 - k_2 \qquad (11.244)$$

that breaks down to

$$H_1\{y\} + k_1 = H_2\{x\} + k_2 = \kappa \qquad (11.245)$$


after dropping symmetrical terms – where κ denotes an arbitrary constant; this is in fact the only possibility compatible with Eq. (11.244), since no restriction whatsoever has been imposed on either x or y – and the left-hand side depends only on y, while the right-hand side depends only on x. Therefore, one concludes that

$$F_{yx}\{x,y\} = F_{xy}\{x,y\} + \kappa \qquad (11.246)$$

from insertion of Eq. (11.245) in either Eq. (11.239) or Eq. (11.243). Once in possession of Eq. (11.246), one may reformulate Eq. (11.224) to

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy = \left(F_{xy}\{b,d\}+\kappa\right) - \left(F_{xy}\{a,d\}+\kappa\right) - \left(F_{xy}\{b,c\}+\kappa\right) + \left(F_{xy}\{a,c\}+\kappa\right), \qquad (11.247)$$

where elimination of parentheses followed by cancellation of each κ with its negative supports simplification to

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy = F_{xy}\{b,d\} - F_{xy}\{a,d\} - F_{xy}\{b,c\} + F_{xy}\{a,c\}. \qquad (11.248)$$

One finally concludes that

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy = \int_a^b \left(\int_c^d f\{x,y\}\,dy\right) dx = \int_c^d \left(\int_a^b f\{x,y\}\,dx\right) dy \qquad (11.249)$$

with the aid of Eqs. (11.221), (11.226), and (11.229) – which, in essence, constitutes the weak version of Fubini's theorem, introduced by Guido Fubini, an Italian mathematician, in 1907. Equation (11.249) basically indicates that the order of integration of a double integral is immaterial, at least when the region of integration is a rectangle (with sides of length b − a and d − c); in a sense, this is the integral analogue of Young's (or Schwarz's) theorem conveyed by Eq. (10.65), pertaining to the equivalence of cross derivatives of a given (continuous) function. A stronger version of Fubini's theorem can be formulated, encompassing an integration domain Dx,y not restricted to a rectangular shape, according to

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy = \int_a^b \left(\int_{g_1\{x\}}^{g_2\{x\}} f\{x,y\}\,dy\right) dx = \int_c^d \left(\int_{h_1\{y\}}^{h_2\{y\}} f\{x,y\}\,dx\right) dy; \qquad (11.250)$$

in either case, the outer integral is evaluated between (constant) lower and upper boundaries, say, a (or c) and b (or d), respectively – whereas the inner integral is, for each value of the outer integration variable, evaluated between (variable) lower and upper boundaries, say, g1{x} (or h1{y}) and g2{x} (or h2{y}), respectively. The proof of Eq. (11.250) comes at the expense of Eq. (11.220), applied separately to any number of rectangular integration domains – after realizing that a domain Dx,y, of any shape, can be decomposed into n (sufficiently) smaller rectangular domains D1,x,y, D2,x,y, …, Di,x,y, …, Dj,x,y, …, Dn,x,y, such that D1,x,y ∪ D2,x,y ∪ … ∪ Dn,x,y = Dx,y, and Di,x,y ∩ Dj,x,y = Ø for every i = 1, 2, …, n and j = 1, 2, …, i − 1, i + 1, …, n.
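The stronger version of Fubini's theorem, Eq. (11.250), can be checked on a non-rectangular domain. In the sketch below (illustrative names, midpoint rule), f{x,y} = xy is integrated over the triangle 0 ≤ y ≤ x ≤ 1 in both orders, namely ∫₀¹∫₀ˣ xy dy dx and ∫₀¹∫ᵧ¹ xy dx dy, each of which equals 1/8:

```python
def iterated_dydx(f, a, b, g1, g2, n=500):
    # outer in x from a to b, inner in y from g1(x) to g2(x), midpoint rule
    dx = (b - a) / n
    s = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        dy = (g2(x) - g1(x)) / n
        s += sum(f(x, g1(x) + (j + 0.5) * dy) for j in range(n)) * dy * dx
    return s

def iterated_dxdy(f, c, d, h1, h2, n=500):
    # outer in y from c to d, inner in x from h1(y) to h2(y), midpoint rule
    dy = (d - c) / n
    s = 0.0
    for j in range(n):
        y = c + (j + 0.5) * dy
        dx = (h2(y) - h1(y)) / n
        s += sum(f(h1(y) + (i + 0.5) * dx, y) for i in range(n)) * dx * dy
    return s

f = lambda x, y: x * y
lhs = iterated_dydx(f, 0.0, 1.0, lambda x: 0.0, lambda x: x)
rhs = iterated_dxdy(f, 0.0, 1.0, lambda y: y, lambda y: 1.0)
print(lhs, rhs)  # both close to 1/8
```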


11.2.2.2 Green's Theorem

Suppose P{x,y} and Q{x,y} denote two bivariate functions, which take finite values and are continuous inside some domain Dx,y and on the boundary Cx,y of said region on the x0y plane – lower bounded by curve y1 ≡ y1{x} within [a,b] and upper bounded by y2 ≡ y2{x} within interval [b,a]. If the first partial derivatives of P{x,y} and Q{x,y} exist and are continuous in Dx,y and on Cx,y, then

$$\iint_{D_{x,y}} \left(\frac{\partial P\{x,y\}}{\partial y} - \frac{\partial Q\{x,y\}}{\partial x}\right) dx\,dy = -\oint_{C_{x,y}} \left(P\{x,y\}\,dx + Q\{x,y\}\,dy\right); \qquad (11.251)$$

this is known as Green's theorem, in honor of George Green, a British mathematical physicist of the nineteenth century – and applies to essentially all functions of practical interest in process engineering, for which Cx,y represents a simple, smooth curve. Note that validity of Eq. (11.214) transforms Eq. (11.251) into Eq. (11.215); therefore, Eq. (11.251) consubstantiates an intrinsically more general statement. To prove Eq. (11.251), one may depart from Fubini's theorem, i.e. Eq. (11.250), coupled to Eqs. (11.21) and (11.160) to express the double integral of ∂P{x,y}/∂y within Dx,y, defined by a ≤ x ≤ b, as

$$\iint_{D_{x,y}} \frac{\partial P\{x,y\}}{\partial y}\,dx\,dy = \int_a^b \left(\int_{y_1\{x\}}^{y_2\{x\}} \frac{\partial P\{x,y\}}{\partial y}\,dy\right) dx = \int_a^b \left(\int_{y_1\{x\}}^{y_2\{x\}} dP\{x,y\}\right) dx \\ = \int_a^b P\{x,y\}\Big|_{y_1\{x\}}^{y_2\{x\}}\,dx = \int_a^b \left(P\{x,y\}\Big|_{y_2\{x\}} - P\{x,y\}\Big|_{y_1\{x\}}\right) dx \qquad (11.252)$$

– along with convenient dropping of dy between numerator and denominator. Equation (11.102) pertains to decomposition of a definite integral, coupled to the definition of a cycle integral as per Eq. (11.210) – i.e. an integral evaluated along a forward trajectory defined by y1{x} followed by a reverse trajectory defined by y2{x} until reaching again the departure point; it may then be used to write

$$\iint_{D_{x,y}} \frac{\partial P\{x,y\}}{\partial y}\,dx\,dy = -\left(\int_a^b P\{x,y\}\Big|_{y_1\{x\}}\,dx - \int_a^b P\{x,y\}\Big|_{y_2\{x\}}\,dx\right) \\ = -\left(\int_a^b P\{x,y\}\Big|_{y_1\{x\}}\,dx + \int_b^a P\{x,y\}\Big|_{y_2\{x\}}\,dx\right) = -\oint_{C_{x,y}} P\{x,y\}\,dx, \qquad (11.253)$$

where Eq. (11.104) was meanwhile employed for convenience. By the same token, the double integral having ∂Q{x,y}/∂x as integrand function, with Dx,y spanning c ≤ y ≤ d, and with forward and reverse trajectories described by x2{y} and x1{y}, respectively, may be equated as


$$\iint_{D_{x,y}} \frac{\partial Q\{x,y\}}{\partial x}\,dx\,dy = \int_c^d \left(\int_{x_1\{y\}}^{x_2\{y\}} \frac{\partial Q\{x,y\}}{\partial x}\,dx\right) dy = \int_c^d \left(\int_{x_1\{y\}}^{x_2\{y\}} dQ\{x,y\}\right) dy \\ = \int_c^d \left(Q\{x,y\}\Big|_{x_2\{y\}} - Q\{x,y\}\Big|_{x_1\{y\}}\right) dy, \qquad (11.254)$$

with the aid again of Eqs. (11.21), (11.160), and (11.250); one then obtains

$$\iint_{D_{x,y}} \frac{\partial Q\{x,y\}}{\partial x}\,dx\,dy = \int_c^d Q\{x,y\}\Big|_{x_2\{y\}}\,dy - \int_c^d Q\{x,y\}\Big|_{x_1\{y\}}\,dy \\ = \int_c^d Q\{x,y\}\Big|_{x_2\{y\}}\,dy + \int_d^c Q\{x,y\}\Big|_{x_1\{y\}}\,dy = \oint_{C_{x,y}} Q\{x,y\}\,dy \qquad (11.255)$$

from Eq. (11.254), upon application of Eqs. (11.102), (11.104), and (11.210).

from Eq. (11.254), upon application of Eqs. (11.102), (11.104), and (11.210). Ordered subtraction of Eq. (11.255) from Eq. (11.253) gives rise to ∂P x, y dxdy − ∂y Dx, y

∂Q x, y dxdy = ∂x Dx, y

∂P x, y ∂Q x, y − ∂y ∂x

Dx, y

= −

P x, y dx − Cx, y

= −

dxdy

Q x, y dy , Cx, y

P x, y dx + Q x, y dy Cx, y

11 256 also at the expense of Eqs. (11.102) and (11.218) – which retrieves Eq. (11.251). One particular case of Green’s theorem corresponds to 1 − −1 dxdy = 2 Dx , y

$$\iint_{D_{x,y}} \left(1-(-1)\right) dx\,dy = 2\iint_{D_{x,y}} dx\,dy = -\oint_{C_{x,y}} \left(y\,dx - x\,dy\right) \qquad (11.257)$$

– stemming directly from Eq. (11.251), after setting P{x,y} equal to y and Q{x,y} equal to −x; division of all sides by 2 eventually yields

$$A_{C_{x,y}} = \iint_{D_{x,y}} dx\,dy = \frac{1}{2}\oint_{C_{x,y}} \left(x\,dy - y\,dx\right), \qquad (11.258)$$

where $A_{C_{x,y}}$ denotes the area of plane enclosed by contour Cx,y.
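Equation (11.258) underlies the classical "shoelace" rule for the area of a polygon: along each straight edge from (x_k, y_k) to (x_{k+1}, y_{k+1}), the contribution to ½∮(x dy − y dx) evaluates exactly to ½(x_k Δy − y_k Δx). A minimal sketch (illustrative function name), assuming the vertices are listed counterclockwise:

```python
def contour_area(vertices):
    # area via (1/2) * cycle integral of (x dy - y dx), Eq. (11.258),
    # evaluated edge by edge for a polygonal contour
    s = 0.0
    n = len(vertices)
    for k in range(n):
        xk, yk = vertices[k]
        xn, yn = vertices[(k + 1) % n]
        s += xk * (yn - yk) - yk * (xn - xk)
    return 0.5 * s

square = contour_area([(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)])
triangle = contour_area([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
print(square, triangle)  # 2.0 and 0.5
```

Clockwise traversal would flip the sign, in line with the orientation discussion around Eq. (11.253).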


11.2.3 Change of Variables

Remember that Eqs. (11.48) and (11.178) justify change of integration variable x ≡ ϕ{u} to u in a simple integral as

$$\int_{D_x} f\{x\}\,dx = \int_{D_u} f\{x\{u\}\}\,\frac{dx}{du}\,du \qquad (11.259)$$

– where multiplication and division of the kernel by du as differential of the new variable u automatically took place, besides explicitation of the original variable x in terms of u as argument of the integrand function f; the integration domains in x and u, denoted by Dx and Du, respectively, abide to

$$D_x \equiv [a,b] \;\Rightarrow\; D_u \equiv \left[\phi^{-1}\{a\},\,\phi^{-1}\{b\}\right]. \qquad (11.260)$$

A similar rule may be applied to double integrals – from variables x ≡ ϕ{u, v} and y ≡ ψ{u, v}, to variables u and v, according to

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy = \iint_{D_{u,v}} f\{x\{u,v\},y\{u,v\}\}\,\left|\frac{\partial(x,y)}{\partial(u,v)}\right|\,du\,dv \qquad (11.261)$$

– where dx/du in Eq. (11.259) has been replaced in Eq. (11.261) by the modulus of the Jacobian determinant of (x,y) relative to (u,v), i.e. $\left|\frac{\partial(x,y)}{\partial(u,v)}\right|$, stemming from Eq. (10.483). For a typical domain of integration, say,

$$D_x \equiv [a,b] \qquad (11.262)$$

and

$$D_y \equiv \left[y_1\{x\},\,y_2\{x\}\right] \qquad (11.263)$$

– or else

$$D_y \equiv [c,d] \qquad (11.264)$$

and

$$D_x \equiv \left[x_1\{y\},\,x_2\{y\}\right], \qquad (11.265)$$

the domain of integration on the u0v plane, Du,v, corresponding to the original domain of integration Dx,y on the x0y plane, will have to be found based on

$$u \equiv \Phi\{x,y\} \qquad (11.266)$$

and

$$v \equiv \Psi\{x,y\}, \qquad (11.267)$$

respectively; here Φ and Ψ are obtained via solution, for u and v, of the set of equations x ≡ ϕ{u, v} and y ≡ ψ{u, v}.
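The workhorse application of Eq. (11.261) is the polar map x = r cos θ, y = r sin θ, whose Jacobian determinant ∂(x,y)/∂(r,θ) equals r; the area of the unit disk then becomes ∫₀^{2π}∫₀¹ r dr dθ = π. A numerical sketch (illustrative names, midpoint sampling on the (r,θ) plane):

```python
import math

def disk_integral(f, r_max, n_r=400, n_t=400):
    # double integral of f(x,y) over a disk of radius r_max, computed on the
    # (r, theta) plane with the Jacobian |d(x,y)/d(r,theta)| = r, per Eq. (11.261)
    dr = r_max / n_r
    dt = 2.0 * math.pi / n_t
    s = 0.0
    for i in range(n_r):
        r = (i + 0.5) * dr
        for j in range(n_t):
            t = (j + 0.5) * dt
            s += f(r * math.cos(t), r * math.sin(t)) * r * dr * dt
    return s

area = disk_integral(lambda x, y: 1.0, 1.0)
print(area, math.pi)  # unit-disk area, close to pi
```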


To derive Eq. (11.261) with a stronger mathematical basis, one should first define F{x,y} via

$$\frac{\partial F\{x,y\}}{\partial x} \equiv f\{x,y\} \qquad (11.268)$$

– where F{x,y} applies in, and on the boundary of Dx,y; the left-hand side of Eq. (11.261) may then appear as

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy = \iint_{D_{x,y}} \frac{\partial F\{x,y\}}{\partial x}\,dx\,dy = \oint_{C_{x,y}} F\{x,y\}\,dy, \qquad (11.269)$$

after inserting Eq. (11.268) and recalling Green's theorem as per Eq. (11.251) – with P{x,y} = 0 and thus ∂P{x,y}/∂y = 0, where Cx,y denotes the contour (or boundary) line of Dx,y. The cycle integral in Eq. (11.269), pertaining to the x0y plane, may be rewritten as a cycle integral on the u0v plane as

$$\oint_{C_{x,y}} F\{x,y\}\,dy = \pm\oint_{C_{u,v}} F\{x\{u,v\},y\{u,v\}\}\left(\frac{\partial y}{\partial u}\,du + \frac{\partial y}{\partial v}\,dv\right) \qquad (11.270)$$

– after making x and y explicit in u and v as per their original definition, and recalling Eq. (10.6) to express the differential of y as a function of the differentials of u and v, via the corresponding partial derivatives, ∂y/∂u and ∂y/∂v. The ambiguity in sign apparent in Eq. (11.270) arises from the fact that the transformation consubstantiated in x ≡ x{u, v} and y ≡ y{u, v} may be such that Cx,y is traversed in the positive sense, while Cu,v may be traversed in either the positive or the negative sense. Equation (11.270) may be rewritten as

$$\oint_{C_{x,y}} F\{x,y\}\,dy = \pm\oint_{C_{u,v}} \left(P\{u,v\}\,du + Q\{u,v\}\,dv\right), \qquad (11.271)$$

as long as auxiliary functions P{u,v} and Q{u,v} abide to

$$P\{u,v\} \equiv F\{x\{u,v\},y\{u,v\}\}\,\frac{\partial y}{\partial u} \qquad (11.272)$$

and

$$Q\{u,v\} \equiv F\{x\{u,v\},y\{u,v\}\}\,\frac{\partial y}{\partial v}, \qquad (11.273)$$

respectively; Green's theorem as per Eq. (11.251) can now be applied reversewise to generate

$$\oint_{C_{x,y}} F\{x,y\}\,dy = \pm\iint_{D_{u,v}} \left(\frac{\partial Q\{u,v\}}{\partial u} - \frac{\partial P\{u,v\}}{\partial v}\right) du\,dv, \qquad (11.274)$$

starting from Eq. (11.271). In view of Eq. (11.273), one finds that

$$\frac{\partial Q\{u,v\}}{\partial u} = \frac{\partial}{\partial u}\left(F\{x,y\}\frac{\partial y}{\partial v}\right) = \frac{\partial F\{x,y\}}{\partial u}\frac{\partial y}{\partial v} + F\{x,y\}\frac{\partial}{\partial u}\frac{\partial y}{\partial v} \\ = \left(\frac{\partial F\{x,y\}}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial F\{x,y\}}{\partial y}\frac{\partial y}{\partial u}\right)\frac{\partial y}{\partial v} + F\{x,y\}\frac{\partial^{2} y}{\partial u\,\partial v} \qquad (11.275)$$


with the aid of Eqs. (10.119) and (10.205) applied to partial differentiation – and Eq. (11.272) similarly gives rise to

$$\frac{\partial P\{u,v\}}{\partial v} = \frac{\partial}{\partial v}\left(F\{x,y\}\frac{\partial y}{\partial u}\right) = \frac{\partial F\{x,y\}}{\partial v}\frac{\partial y}{\partial u} + F\{x,y\}\frac{\partial}{\partial v}\frac{\partial y}{\partial u} \\ = \left(\frac{\partial F\{x,y\}}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial F\{x,y\}}{\partial y}\frac{\partial y}{\partial v}\right)\frac{\partial y}{\partial u} + F\{x,y\}\frac{\partial^{2} y}{\partial v\,\partial u}; \qquad (11.276)$$

ordered subtraction of Eq. (11.276) from Eq. (11.275) produces

$$\frac{\partial Q\{u,v\}}{\partial u} - \frac{\partial P\{u,v\}}{\partial v} = \left(\frac{\partial F\{x,y\}}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial F\{x,y\}}{\partial y}\frac{\partial y}{\partial u}\right)\frac{\partial y}{\partial v} + F\{x,y\}\frac{\partial^{2} y}{\partial u\,\partial v} \\ - \left(\frac{\partial F\{x,y\}}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial F\{x,y\}}{\partial y}\frac{\partial y}{\partial v}\right)\frac{\partial y}{\partial u} - F\{x,y\}\frac{\partial^{2} y}{\partial v\,\partial u} \qquad (11.277)$$

Equation (11.277) degenerates to

$$\frac{\partial Q\{u,v\}}{\partial u} - \frac{\partial P\{u,v\}}{\partial v} = \left(\frac{\partial F\{x,y\}}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial F\{x,y\}}{\partial y}\frac{\partial y}{\partial u}\right)\frac{\partial y}{\partial v} - \left(\frac{\partial F\{x,y\}}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial F\{x,y\}}{\partial y}\frac{\partial y}{\partial v}\right)\frac{\partial y}{\partial u} + F\{x,y\}\left(\frac{\partial^{2} y}{\partial u\,\partial v} - \frac{\partial^{2} y}{\partial v\,\partial u}\right) \\ = \left(\frac{\partial F\{x,y\}}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial F\{x,y\}}{\partial y}\frac{\partial y}{\partial u}\right)\frac{\partial y}{\partial v} - \left(\frac{\partial F\{x,y\}}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial F\{x,y\}}{\partial y}\frac{\partial y}{\partial v}\right)\frac{\partial y}{\partial u} \qquad (11.278)$$

with F{x,y} factored out – and since Young's (or Schwarz's) theorem guarantees redundance of the order of differentiation of cross derivatives, and thus ∂²y/∂u∂v − ∂²y/∂v∂u = 0; after factoring out ∂F/∂x or ∂F/∂y (as appropriate), cancelling out symmetrical terms, and recalling the definition of a second-order determinant as per Eq. (1.10), it is possible to redo Eq. (11.278) as

$$\frac{\partial Q\{u,v\}}{\partial u} - \frac{\partial P\{u,v\}}{\partial v} = \frac{\partial F\{x,y\}}{\partial x}\left(\frac{\partial x}{\partial u}\frac{\partial y}{\partial v} - \frac{\partial x}{\partial v}\frac{\partial y}{\partial u}\right) + \frac{\partial F\{x,y\}}{\partial y}\left(\frac{\partial y}{\partial u}\frac{\partial y}{\partial v} - \frac{\partial y}{\partial v}\frac{\partial y}{\partial u}\right) \\ = \frac{\partial F\{x,y\}}{\partial x}\begin{vmatrix}\dfrac{\partial x}{\partial u} & \dfrac{\partial y}{\partial u}\\[4pt] \dfrac{\partial x}{\partial v} & \dfrac{\partial y}{\partial v}\end{vmatrix} \qquad (11.279)$$


Once in possession of Eq. (11.279), one may insert it to transform Eq. (11.274) to

$$\oint_{C_{x,y}} F\{x,y\}\,dy = \pm\iint_{D_{u,v}} \frac{\partial F\{x,y\}}{\partial x}\begin{vmatrix}\dfrac{\partial x}{\partial u} & \dfrac{\partial y}{\partial u}\\[4pt] \dfrac{\partial x}{\partial v} & \dfrac{\partial y}{\partial v}\end{vmatrix}\,du\,dv, \qquad (11.280)$$

whereas combination with Eqs. (11.268) and (11.269), coupled with equivalence between Cx,y and Cu,v, gives rise to

$$\iint_{D_{x,y}} f\{x,y\}\,dx\,dy = \pm\iint_{D_{u,v}} f\{x,y\}\begin{vmatrix}\dfrac{\partial x}{\partial u} & \dfrac{\partial y}{\partial u}\\[4pt] \dfrac{\partial x}{\partial v} & \dfrac{\partial y}{\partial v}\end{vmatrix}\,du\,dv; \qquad (11.281)$$

if f{x,y} is deliberately set equal to unity, then the left-hand side of Eq. (11.281) represents the area of Dx,y that must be positive – so the right-hand side must exhibit a sign that turns it positive as well. This goal implies taking the absolute value of the Jacobian determinant, thus fully justifying the form of Eq. (11.261).

11.2.4 Differentiation of Integral

Consider a definite integral of the form

$$I\{y\} \equiv \int_{a\{y\}}^{b\{y\}} f\{x,y\}\,dx, \qquad (11.282)$$

where f{x,y} is an integrable function of x within the range a ≤ x ≤ b – and a{y} and b{y} are, in general, continuous and (at least once) differentiable functions of y, further to f also depending on y. Following application of Eq. (11.160), one may transform Eq. (11.282) to

$$I\{y\} = F\{b\{y\},y\} - F\{a\{y\},y\}, \qquad (11.283)$$

where F denotes, in turn, an indefinite integral of f, i.e.

$$F\{x,y\} \equiv \int f\{x,y\}\,dx; \qquad (11.284)$$

this is equivalent to saying

$$\frac{\partial F\{x,y\}}{\partial x} \equiv f\{x,y\}, \qquad (11.285)$$

as per Eq. (11.6). If F{x,y} is continuous with regard to both its argument variables, x and y, this constitutes a sufficient (although not necessary) condition for the commutative property of crossed partial derivatives to hold, viz.


$$\frac{\partial^{2} F\{x,y\}}{\partial x\,\partial y} = \frac{\partial^{2} F\{x,y\}}{\partial y\,\partial x}, \qquad (11.286)$$

in agreement with Eq. (10.65) – which is equivalent to writing

$$\frac{\partial}{\partial x}\frac{\partial F\{x,y\}}{\partial y} = \frac{\partial}{\partial y}\frac{\partial F\{x,y\}}{\partial x} \qquad (11.287)$$

as per the definition of second-order derivative; hence, combination with Eq. (11.285) allows transformation of Eq. (11.286) to

$$\frac{\partial}{\partial x}\frac{\partial F\{x,y\}}{\partial y} = \frac{\partial f\{x,y\}}{\partial y}. \qquad (11.288)$$

Integration of Eq. (11.288) in x, i.e.

$$\int \frac{\partial}{\partial x}\left(\frac{\partial F\{x,y\}}{\partial y}\right) dx = \int \frac{\partial f\{x,y\}}{\partial y}\,dx, \qquad (11.289)$$

leads – upon cancellation of dx between numerator and denominator of the kernel in the left-hand side, to

$$\int d\left(\frac{\partial F\{x,y\}}{\partial y}\right) = \int \frac{\partial f\{x,y\}}{\partial y}\,dx; \qquad (11.290)$$

in view of Eq. (11.21), one may redo Eq. (11.290) to

$$\frac{\partial F\{x,y\}}{\partial y} = \int \frac{\partial f\{x,y\}}{\partial y}\,dx. \qquad (11.291)$$

According to Eq. (11.291), one realizes that ∂F/∂y represents the indefinite integral, in x, of ∂f/∂y; Eq. (11.160) then supports

$$\int_a^b \frac{\partial f\{x,y\}}{\partial y}\,dx = \frac{\partial F\{x,y\}}{\partial y}\bigg|_{x=a}^{x=b} = \frac{\partial F\{b,y\}}{\partial y} - \frac{\partial F\{a,y\}}{\partial y}. \qquad (11.292)$$

One may now return to the initial definite integral, I, defined by Eq. (11.282); the total derivative of I with regard to y may, in view of Eq. (11.283), be calculated via the chain partial differentiation rule as

$$\frac{dI\{y\}}{dy} = \frac{\partial F\{b,y\}}{\partial y} + \frac{\partial F\{b,y\}}{\partial b}\frac{db}{dy} - \frac{\partial F\{a,y\}}{\partial y} - \frac{\partial F\{a,y\}}{\partial a}\frac{da}{dy} \qquad (11.293)$$

– because a ≡ a{y} and b ≡ b{y} by hypothesis, where the rule of differentiation of a sum of functions as per Eq. (10.106) was meanwhile considered. Upon application of Eq. (11.285) twice in Eq. (11.293), after relabeling x as b or a, one gets

$$\frac{dI\{y\}}{dy} = f\{b,y\}\frac{db}{dy} - f\{a,y\}\frac{da}{dy} + \frac{\partial F\{b,y\}}{\partial y} - \frac{\partial F\{a,y\}}{\partial y} \qquad (11.294)$$

– where combination with Eqs. (11.282) and (11.292) finally yields

$$\frac{d}{dy}\int_{a\{y\}}^{b\{y\}} f\{x,y\}\,dx = f\{b\{y\},y\}\frac{db\{y\}}{dy} - f\{a\{y\},y\}\frac{da\{y\}}{dy} + \int_{a\{y\}}^{b\{y\}} \frac{\partial f\{x,y\}}{\partial y}\,dx. \qquad (11.295)$$

If a and b are both constant, then $\dfrac{d}{dy}\displaystyle\int_a^b f\{x,y\}\,dx$ reduces to $\displaystyle\int_a^b \dfrac{\partial f\{x,y\}}{\partial y}\,dx$ as per Eq. (11.295) – i.e. the differential and integral operators become interchangeable.
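The differentiation rule of Eq. (11.295) can be checked numerically; in the sketch below (illustrative names, midpoint quadrature plus a central finite difference), I{y} = ∫₀ʸ xy dx = y³/2, so both the direct derivative and the right-hand side of Eq. (11.295) should equal 3y²/2:

```python
def midpoint(F, lo, hi, n=100000):
    # midpoint-rule quadrature of F over [lo, hi]
    h = (hi - lo) / n
    return sum(F(lo + (i + 0.5) * h) for i in range(n)) * h

def I(y):
    # I{y} = integral of f{x,y} = x*y from a{y} = 0 to b{y} = y
    return midpoint(lambda x: x * y, 0.0, y)

y0, eps = 1.3, 1e-5
numeric = (I(y0 + eps) - I(y0 - eps)) / (2.0 * eps)          # dI/dy by central difference
leibniz = (y0 * y0) * 1.0 + midpoint(lambda x: x, 0.0, y0)   # f{b,y} db/dy + integral of df/dy
print(numeric, leibniz, 1.5 * y0 ** 2)
```

Here db/dy = 1 and da/dy = 0, so only the upper-limit term and the integral of ∂f/∂y survive.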

11.3 Optimization of Single Integral

Oftentimes in engineering, the nuclear issue is not optimizing the value of y{x} to find, say, yopt, associated with a specific value, xopt, of independent variable x – but instead finding the form of y{x} that itself optimizes some criterion, while abiding to given boundary condition(s). One typical example encompasses finding the function y{x} such that the integral

$$I[y] \equiv \int_{x_1}^{x_2} f\left\{x, y, \frac{dy}{dx}\right\} dx \qquad (11.296)$$

attains an optimum – where f denotes some form of algebraic (and thus differentiable) functionality on x, y, and dy/dx. Furthermore,

$$y\{x_1\} = y_1 \qquad (11.297)$$

and

$$y\{x_2\} = y_2 \qquad (11.298)$$

are to serve as boundary conditions, where x1 and x2 are given (constant) limits of integration; and d²y/dx² is supposed to also exist, and be continuous over interval [x1,x2]. Since I is a function of (another) function y, the former is usually termed functional; one accordingly ends up with a problem of calculus of variations (or theory of functionals), where the general aim is to find the stationary values of an integral with regard to a function. Suppose now that y{x} is any curve passing through both points (x1,y1) and (x2,y2); an arbitrary, although minute variation in said function that still passes through those endpoints may be expressed as

$$y^{\#}\{x\} \equiv y\{x\} + \varepsilon\,\eta\{x\}, \qquad (11.299)$$

where ε denotes a parameter taking small values and η{x} denotes an arbitrary function satisfying

$$\eta\{x_1\} = 0 \qquad (11.300)$$

and

$$\eta\{x_2\} = 0 \qquad (11.301)$$

as constraints. It is apparent that the new expression for y{x}, conveyed by Eq. (11.299) and denoted as y#{x}, represents an infinity of curves passing through the aforementioned endpoints (x1,y1) and (x2,y2) – with each curve being characterized by a particular value of ε (besides a particular function η{x}); the associated increment in y{x}, defined as

$$dy\{x\} \equiv y^{\#}\{x\} - y\{x\}, \qquad (11.302)$$


will thus read

$$dy\{x\} = \varepsilon\,\eta\{x\}, \qquad (11.303)$$

in agreement with Eq. (11.299). An associated increment will be experienced by dy/dx, according to

$$d\left(\frac{dy\{x\}}{dx}\right) \equiv \frac{dy^{\#}\{x\}}{dx} - \frac{dy\{x\}}{dx}, \qquad (11.304)$$

using Eq. (11.302) as template – after replacing y{x} by dy{x}/dx (and thus y#{x} by dy#{x}/dx); Eq. (10.106) allows rephrasing of Eq. (11.304) to

$$d\left(\frac{dy\{x\}}{dx}\right) = \frac{d}{dx}\left(y^{\#}\{x\} - y\{x\}\right). \qquad (11.305)$$

Insertion of Eq. (11.299) supports transformation of Eq. (11.305) to

$$d\left(\frac{dy\{x\}}{dx}\right) = \frac{d\left(\varepsilon\,\eta\{x\}\right)}{dx} = \varepsilon\,\frac{d\eta\{x\}}{dx}, \qquad (11.306)$$

consistent with Eq. (11.303) – at the expense of Eq. (10.120), since ε is a constant; elimination of d(dy{x}/dx) between Eqs. (11.304) and (11.306) gives then rise to

$$\frac{dy^{\#}\{x\}}{dx} = \frac{dy\{x\}}{dx} + \varepsilon\,\frac{d\eta\{x\}}{dx}. \qquad (11.307)$$

Once in possession of Eqs. (11.299) and (11.307), one can use Eq. (11.296) as template to write

$$I^{\#}[y] = \int_{x_1}^{x_2} f\left\{x, y^{\#}, \frac{dy^{\#}}{dx}\right\} dx, \qquad (11.308)$$

describing the new value of I brought about by the small changes in y and dy/dx; the underlying small variation, dI, will thus read

$$dI[y] \equiv I^{\#}[y] - I[y], \qquad (11.309)$$

where insertion of Eqs. (11.296), (11.299), and (11.307) unfolds

$$dI[y] = \int_{x_1}^{x_2} f\left\{x, y + \varepsilon\eta\{x\}, \frac{dy}{dx} + \varepsilon\frac{d\eta\{x\}}{dx}\right\} dx - \int_{x_1}^{x_2} f\left\{x, y, \frac{dy}{dx}\right\} dx \qquad (11.310)$$

– or else

$$dI[y] = \int_{x_1}^{x_2} \left(f\left\{x, y + \varepsilon\eta\{x\}, \frac{dy}{dx} + \varepsilon\frac{d\eta\{x\}}{dx}\right\} - f\left\{x, y, \frac{dy}{dx}\right\}\right) dx, \qquad (11.311)$$

due to Eq. (11.101). Equation (11.311) may be seen as a bivariate function, i.e.

$$dI \equiv dI\left[y, \frac{dy}{dx}\right] \qquad (11.312)$$


– since variable x actually vanishes when the integral is calculated between two numerical limits, x1 and x2; therefore, dI may be expanded via Taylor's series around dI[y] = 0, viz.

$$dI[y] \approx dI[y]\Big|_{y,\frac{dy}{dx}} + \frac{\partial\,dI[y]}{\partial y}\bigg|_{y,\frac{dy}{dx}}\,dy + \frac{\partial\,dI[y]}{\partial\frac{dy}{dx}}\bigg|_{y,\frac{dy}{dx}}\,d\!\left(\frac{dy}{dx}\right), \qquad (11.313)$$

where terms of quadratic and higher orders were neglected because dI[y] is supposed to be sufficiently small – because dy as per Eq. (11.302) and d(dy/dx) as per Eq. (11.304) were also postulated as very small. Using Eq. (11.295) as template, and recalling Eqs. (11.296) and (11.311), one may redo Eq. (11.313) to

$$dI[y] = \int_{x_1}^{x_2} \frac{\partial f\left\{x,y,\frac{dy}{dx}\right\}}{\partial y}\,dy\,dx + \int_{x_1}^{x_2} \frac{\partial f\left\{x,y,\frac{dy}{dx}\right\}}{\partial\frac{dy}{dx}}\,d\!\left(\frac{dy}{dx}\right) dx, \qquad (11.314)$$

since dI[y] = 0 when both y and dy/dx do not undergo any variation, coupled with both x1 and x2 being independent of either y or dy/dx; Eqs. (11.303) and (11.306) may now be invoked to transform Eq. (11.314) to

$$dI[y] = \int_{x_1}^{x_2} \frac{\partial f\left\{x,y,\frac{dy}{dx}\right\}}{\partial y}\,\varepsilon\eta\{x\}\,dx + \int_{x_1}^{x_2} \frac{\partial f\left\{x,y,\frac{dy}{dx}\right\}}{\partial\frac{dy}{dx}}\,\varepsilon\frac{d\eta\{x\}}{dx}\,dx, \qquad (11.315)$$

where factoring out of ε dx, followed by taking ε off the kernel for being constant, gives rise to

$$dI[y] = \varepsilon\int_{x_1}^{x_2} \left(\eta\,\frac{\partial f}{\partial y} + \frac{d\eta}{dx}\,\frac{\partial f}{\partial\frac{dy}{dx}}\right) dx, \qquad (11.316)$$

along with Eq. (11.101). The sign of dI[y] will depend on the sign of ε; for the integral to have a true maximum or minimum, however, a stationary value is to be enforced. This can be brought about by requiring that

$$\int_{x_1}^{x_2} \left(\eta\,\frac{\partial f}{\partial y} + \frac{d\eta}{dx}\,\frac{\partial f}{\partial\frac{dy}{dx}}\right) dx = 0; \qquad (11.317)$$

Eq. (11.101) supports transformation of Eq. (11.317) to

$$\int_{x_1}^{x_2} \eta\,\frac{\partial f}{\partial y}\,dx + \int_{x_1}^{x_2} \frac{d\eta}{dx}\,\frac{\partial f}{\partial\frac{dy}{dx}}\,dx = 0, \qquad (11.318)$$


where the second term may in turn be integrated by parts as per Eq. (11.177) to get

$$\int_{x_1}^{x_2} \eta\,\frac{\partial f}{\partial y}\,dx + \eta\,\frac{\partial f}{\partial\frac{dy}{dx}}\Bigg|_{x_1}^{x_2} - \int_{x_1}^{x_2} \eta\,\frac{d}{dx}\left(\frac{\partial f}{\partial\frac{dy}{dx}}\right) dx = 0. \qquad (11.319)$$

Equations (11.300) and (11.301) allow simplification of Eq. (11.319) to

$$\int_{x_1}^{x_2} \eta\left(\frac{\partial f}{\partial y} - \frac{d}{dx}\frac{\partial f}{\partial\frac{dy}{dx}}\right) dx = 0 \qquad (11.320)$$

because of the nil η at both endpoints, as long as $\partial f/\partial\frac{dy}{dx}$ remains finite (as expected); here Eq. (11.101) was taken advantage of once more. Since η{x} was initially postulated to be an arbitrary function, Eq. (11.320) cannot be satisfied in general unless

$$\frac{\partial f}{\partial y} - \frac{d}{dx}\frac{\partial f}{\partial\frac{dy}{dx}} = 0 \qquad (11.321)$$

– usually known as Euler and Lagrange's equation; it provides the necessary condition for the integral in Eq. (11.296) to have a stationary value. Although it is comparatively easy to find the necessary condition for a functional optimum with regard to a function, it is a vastly more difficult problem to formulate a sufficient condition for said stationary point to be a maximum or a minimum – so this issue will not be exploited any further. Distinction may, in practice, be done by arbitrating a (usually simple) function passing through the initial and final points, and then checking whether its integral is larger or smaller than the integral of the stationary function at stake – thus conveying a minimum or maximum, respectively. There are several situations where Eq. (11.321) can be simplified. For instance, if f is explicitly independent of y, i.e.

$$f \equiv f\left\{x, \frac{dy}{dx}\right\}, \qquad (11.322)$$

then one promptly realizes that

$$\frac{\partial f}{\partial y} = 0 \qquad (11.323)$$

– so Eq. (11.321) reduces to

$$\frac{d}{dx}\left(\frac{\partial f}{\partial\frac{dy}{dx}}\right) = 0, \qquad (11.324)$$


which is equivalent to stating

$$\frac{\partial f}{\partial\frac{dy}{dx}} = \kappa, \qquad (11.325)$$

provided that κ denotes an arbitrary constant. Another simplification arises when f is explicitly independent of dy/dx, i.e.

$$f \equiv f\{x,y\}; \qquad (11.326)$$

Eq. (11.326) implies that

$$\frac{\partial f}{\partial\frac{dy}{dx}} = 0, \qquad (11.327)$$

and thus Eq. (11.321) simplifies to

$$\frac{\partial f}{\partial y} = 0. \qquad (11.328)$$

Finally, consider that f is explicitly independent of x, according to

$$f \equiv f\left\{y, \frac{dy}{dx}\right\}; \qquad (11.329)$$

11 329

the general rule of differentiation of f as a composite function gives, in this particular case,

$$\frac{df}{dx} = \frac{\partial f}{\partial y}\frac{dy}{dx} + \frac{\partial f}{\partial\frac{dy}{dx}}\,\frac{d}{dx}\frac{dy}{dx} = \frac{\partial f}{\partial y}\frac{dy}{dx} + \frac{\partial f}{\partial\frac{dy}{dx}}\,\frac{d^{2}y}{dx^{2}}, \qquad (11.330)$$

which can be rearranged to read

$$\frac{dy}{dx}\frac{\partial f}{\partial y} = \frac{df}{dx} - \frac{\partial f}{\partial\frac{dy}{dx}}\,\frac{d^{2}y}{dx^{2}}. \qquad (11.331)$$

If both sides of Eq. (11.321) are multiplied by dy/dx, one gets

$$\frac{dy}{dx}\frac{\partial f}{\partial y} - \frac{dy}{dx}\frac{d}{dx}\left(\frac{\partial f}{\partial\frac{dy}{dx}}\right) = 0, \qquad (11.332)$$

where insertion of Eq. (11.331) justifies transformation to

$$\frac{df}{dx} - \frac{\partial f}{\partial\frac{dy}{dx}}\,\frac{d^{2}y}{dx^{2}} - \frac{dy}{dx}\,\frac{d}{dx}\left(\frac{\partial f}{\partial\frac{dy}{dx}}\right) = 0 \qquad (11.333)$$

– or else

$$\frac{df}{dx} - \left(\frac{\partial f}{\partial\frac{dy}{dx}}\,\frac{d}{dx}\frac{dy}{dx} + \frac{dy}{dx}\,\frac{d}{dx}\frac{\partial f}{\partial\frac{dy}{dx}}\right) = 0 \qquad (11.334)$$

after recalling the definition of second-order derivative; algebraic rearrangement based on Eq. (10.119) yields

$$\frac{df}{dx} - \frac{d}{dx}\left(\frac{dy}{dx}\,\frac{\partial f}{\partial\frac{dy}{dx}}\right) = 0, \qquad (11.335)$$

which may be further condensed to

$$\frac{d}{dx}\left(f - \frac{dy}{dx}\,\frac{\partial f}{\partial\frac{dy}{dx}}\right) = 0 \qquad (11.336)$$

with the aid of Eq. (10.106). Using the same rationale as before, Eq. (11.336) can finally appear as

$$f - \frac{dy}{dx}\,\frac{\partial f}{\partial\frac{dy}{dx}} = \kappa \qquad (11.337)$$

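The conservation law in Eq. (11.337) can be checked numerically; the sketch below (an illustrative example, not from the book) takes f{y, dy/dx} = (dy/dx)² − y², whose stationary functions satisfy y″ + y = 0 by Eq. (11.321), and verifies that f − (dy/dx) ∂f/∂(dy/dx) stays constant along y = sin x.

```python
import math

# Illustrative check of Eq. (11.337): f has no explicit x-dependence, so
# f - (dy/dx) * df/d(dy/dx) should equal the same constant kappa for all x
# along a stationary function.  For f = (dy/dx)**2 - y**2, the stationary
# function y = sin x satisfies the Euler-Lagrange equation y'' + y = 0.
def beltrami(x):
    y, yp = math.sin(x), math.cos(x)
    f = yp**2 - y**2
    df_dyp = 2.0 * yp                  # partial of f with regard to dy/dx
    return f - yp * df_dyp             # = -(yp**2 + y**2), i.e. kappa = -1

values = [beltrami(x / 10.0) for x in range(1, 30)]
kappa = values[0]
assert all(abs(v - kappa) < 1e-12 for v in values)   # constant along the path
```

The value of κ depends on the particular f and stationary function chosen; here it is −1, since sin²x + cos²x = 1.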
at the expense of an (arbitrary) constant κ.

Should restrictions apply, two cases ought to be considered: global restrictions, which limit the solutions over the whole range; and local restrictions, to be imposed at particular points along the path of integration. The former are often integral functions of the type

\int_{x_1}^{x_2} g\left\{x, y, \frac{dy}{dx}\right\} dx = \kappa,    (11.338)

where g denotes some algebraic functionality of x, y, and dy/dx distinct from that entertained by f, and κ denotes a constant. Local restrictions look typically like

\frac{dy}{dx} = h\{y, z\{x\}\},    (11.339)

where h is a function of y and z – and z is, in turn, a function of x. In the case of global restrictions, definition of a new function, ι, in Lagrangian form as per Eq. (10.512) is in order – according to

\iota\{y, \lambda\} \equiv I\{y\} - \lambda\kappa,    (11.340)

where λ denotes an arbitrary constant; one may thus proceed via insertion of Eqs. (11.296) and (11.338) to obtain

\iota\{y, \lambda\} \equiv \int_{x_1}^{x_2} f\left\{x, y, \frac{dy}{dx}\right\} dx - \lambda \int_{x_1}^{x_2} g\left\{x, y, \frac{dy}{dx}\right\} dx    (11.341)

from Eq. (11.340), which may be rewritten as

\iota\{y, \lambda\} \equiv \int_{x_1}^{x_2} \varphi\left\{x, y, \frac{dy}{dx}, \lambda\right\} dx    (11.342)

– as long as auxiliary function ϕ abides to

\varphi\left\{x, y, \frac{dy}{dx}, \lambda\right\} \equiv f\left\{x, y, \frac{dy}{dx}\right\} - \lambda\, g\left\{x, y, \frac{dy}{dx}\right\}    (11.343)

Except for (parameter) λ, Eq. (11.342) is analogous in functional form to Eq. (11.296) – so Euler and Lagrange's equation will take a form similar to that of Eq. (11.321), viz.

\frac{\partial\varphi}{\partial y} - \frac{d}{dx}\,\frac{\partial\varphi}{\partial(dy/dx)} = 0;    (11.344)

Eq. (11.344) generates

\left(\frac{\partial f}{\partial y} - \frac{d}{dx}\,\frac{\partial f}{\partial(dy/dx)}\right) - \lambda\left(\frac{\partial g}{\partial y} - \frac{d}{dx}\,\frac{\partial g}{\partial(dy/dx)}\right) = 0    (11.345)

after taking Eq. (11.343) and its derivative, dϕ/dx = df/dx − λ dg/dx, into account, and factoring λ out. For M global restrictions of the form

\int_{x_1}^{x_2} g_j\left\{x, y, \frac{dy}{dx}\right\} dx = \kappa_j;\quad j = 1, 2, \ldots, M,    (11.346)

Eq. (11.340) becomes

\iota\{y, \lambda_1, \lambda_2, \ldots, \lambda_M\} \equiv I\{y\} - \sum_{j=1}^{M} \lambda_j \kappa_j    (11.347)

– so Eq. (11.345) may be used as template to write

\frac{\partial f}{\partial y} - \frac{d}{dx}\,\frac{\partial f}{\partial(dy/dx)} - \sum_{j=1}^{M} \lambda_j\left(\frac{\partial g_j}{\partial y} - \frac{d}{dx}\,\frac{\partial g_j}{\partial(dy/dx)}\right) = 0,    (11.348)

which obviously reduces to Eq. (11.345) when M = 1.

Local restrictions constrain only the shape of the solution function that depends on the independent variable x; a Lagrangian function can still be defined inspired on Eq. (10.512), yet Lagrangian multipliers are now (adjoint) functions of x – abiding to the boundary condition

\lambda\{x\}\big|_{x_2} = 0    (11.349)

For an objective function looking like

I\{y\} \equiv \int_{x_1}^{x_2} f\{y, z\}\, dx    (11.350)

in parallel to Eq. (11.296), one may rewrite the local restriction conveyed by Eq. (11.339) as

\frac{dy}{dx} - h\{y, z\} = 0;    (11.351)

hence, Eq. (11.350) may be reformulated to

I\{y\} = \int_{x_1}^{x_2} \left(f\{y, z\} - \lambda\{x\}\left(\frac{dy}{dx} - h\{y, z\}\right)\right) dx    (11.352)

due to the nil value of λ{x}(dy/dx − h{y,z}) arising, in turn, from the nil right-hand side of Eq. (11.351) – or, equivalently,

I\{y\} = \int_{x_1}^{x_2} \psi\left\{x, y, \frac{dy}{dx}\right\} dx,    (11.353)

provided that auxiliary function ψ{x,y,dy/dx} is defined by

\psi\left\{x, y, \frac{dy}{dx}\right\} \equiv f\{y, z\{x\}\} + \lambda\{x\}\, h\{y, z\{x\}\} - \lambda\{x\}\frac{dy}{dx}    (11.354)

Therefore, I as per Eq. (11.353) will be optimized when the analogue to Eq. (11.321) applies to ψ rather than f, using two independent variables, y and z, rather than just one, i.e.

\frac{\partial\psi}{\partial y} - \frac{d}{dx}\,\frac{\partial\psi}{\partial(dy/dx)} = 0    (11.355)

and

\frac{\partial\psi}{\partial z} - \frac{d}{dx}\,\frac{\partial\psi}{\partial(dz/dx)} = 0;    (11.356)

in view of Eq. (11.354), one may convert Eq. (11.355) to

\frac{\partial f}{\partial y} + \lambda\frac{\partial h}{\partial y} - \frac{d(-\lambda)}{dx} = 0    (11.357)

that breaks down to

\frac{\partial f}{\partial y} + \lambda\frac{\partial h}{\partial y} + \frac{d\lambda}{dx} = 0    (11.358)

with Eq. (11.349) associated to dλ/dx, and likewise Eq. (11.356) yields

\frac{\partial f}{\partial z} + \lambda\frac{\partial h}{\partial z} = 0    (11.359)

due to lack of explicit dependence of ψ on dz/dx – subjected to

y\big|_{x_1} = y_0    (11.360)

as boundary condition associated with Eq. (11.339).

11.4 Optimization of Set of Derivatives

A more general strategy to optimize continuous functions – which is unfortunately more cumbersome to implement – may be devised that applies to a system described by N state variables y_i{x} (i = 1, 2, …, N), with the independent variable, x, ranging from x_1 to x_2. The evolution of said system is influenced by M decision parameters z_j{x} (j = 1, 2, …, M), together with N differential equations of the form

\frac{dy_i}{dx} = f_i\{y_1, y_2, \ldots, y_N, z_1, z_2, \ldots, z_M\};\quad i = 1, 2, \ldots, N,    (11.361)

satisfying initial conditions of the type

y_i\big|_{x_1} = y_{i,0};\quad i = 1, 2, \ldots, N    (11.362)

– which represent local restrictions similar to that labeled as Eq. (11.339); under these circumstances, the criterion to be optimized is expressed via a linear form of the type

I\{y_1, y_2, \ldots, y_N\} = \sum_{i=1}^{N} \kappa_i\, y_i\big|_{x_2},    (11.363)

where the κ_i's denote constants – which is but a discretized form of Eq. (11.352). To solve problems satisfying the above conditions, an additional N Lagrangian functions λ_i{x} (i = 1, 2, …, N) are to be introduced, as well as a Hamiltonian H satisfying

H\{\lambda_1, \ldots, \lambda_N, y_1, \ldots, y_N, z_1, \ldots, z_M\} \equiv \sum_{i=1}^{N} \lambda_i\, f_i\{y_1, y_2, \ldots, y_N, z_1, z_2, \ldots, z_M\}    (11.364)

– where the λ_i's are, in turn, the solutions of differential equations

\frac{d\lambda_i}{dx} = -\frac{\partial H}{\partial y_i};\quad i = 1, 2, \ldots, N,    (11.365)

subjected to boundary conditions

\lambda_i\big|_{x_2} = \kappa_i;\quad i = 1, 2, \ldots, N;    (11.366)

Pontryagin's optimum principle states that the optimal decision functions z_j's – for which I has an optimum – are solutions of

\frac{\partial H}{\partial z_j} = 0;\quad j = 1, 2, \ldots, M    (11.367)
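A toy instance of Eqs. (11.361)–(11.367) may help fix ideas; the sketch below is an illustrative example (not from the book), with N = 1, M = 1, dy/dx = f₁{y, z} = z − z², y(0) = 0, and criterion I = y|_{x₂} (so κ₁ = 1). Here H = λ(z − z²), dλ/dx = −∂H/∂y = 0 with λ(x₂) = 1, hence λ ≡ 1 and ∂H/∂z = 1 − 2z = 0 predicts the optimal constant decision z = 1/2 – which a brute-force scan confirms.

```python
# Illustrative Pontryagin example (assumed problem, not the book's):
# maximize I = y(x2) subject to dy/dx = z - z**2, y(0) = 0.
# The principle gives dH/dz = lambda*(1 - 2z) = 0, i.e. z = 1/2.
def objective(z, x2=1.0, steps=1000):
    """Integrate dy/dx = z - z**2 with constant decision z (explicit Euler)."""
    y, dx = 0.0, x2 / steps
    for _ in range(steps):
        y += (z - z * z) * dx
    return y

candidates = [i / 100.0 for i in range(101)]
best = max(candidates, key=objective)
assert abs(best - 0.5) < 1e-9            # matches dH/dz = 0 from Eq. (11.367)
assert abs(objective(0.5) - 0.25) < 1e-9
```

The scan over constant decisions suffices here because the adjoint λ turns out constant; in general z{x} varies along the path and Eqs. (11.365)–(11.367) must be solved jointly.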

This rationale was originally developed in 1956 by Russian mathematician Lev S. Pontryagin; it essentially entails an algebraic criterion, subjected to differential restrictions, by imposing a constant value to the corresponding Hamiltonian along the best path between bounding states. Euler and Lagrange's method supporting Eq. (11.321) is a particular case of Pontryagin's optimum principle, materialized in Eqs. (11.364)–(11.367); to support this claim in the cases of chief physicochemical interest – i.e. when the kernel of the objective function is not explicit on x – one should revisit the typical objective criterion of variational calculus, set forth by Eqs. (11.296) and (11.297), in the equivalent form

I\{y\} \equiv \int_{x_1}^{x_2} f_2\{y_1, z\}\, dx    (11.368)

and

y_1\big|_{x = x_1} = y_{1,0},    (11.369)

respectively – where y_1 denotes a state variable and z denotes a decision parameter abiding to

z \equiv \frac{dy_1}{dx};    (11.370)

this may be equivalently stated as

\frac{dy_1}{dx} = f_1\{z\},    (11.371)

so f_1{z} will actually read

f_1\{z\} = z    (11.372)

after elimination of dy_1/dx between Eqs. (11.370) and (11.371). A second state variable, y_2, may now be introduced, viz.

y_2 \equiv \int_{x_1}^{x} f_2\{y_1, z\{x\}\}\, dx,    (11.373)

which resembles Eq. (11.368); differentiation thereof with regard to x yields

\frac{dy_2}{dx} = f_2\{y_1, z\{x\}\},    (11.374)

in agreement with Eq. (11.159) – where Eqs. (11.133) and (11.373) also imply

y_2\big|_{x = x_1} = y_{2,0} = 0,    (11.375)

to serve hereafter as boundary condition. Equation (11.373), together with

y_2\big|_{x = x_2} = \int_{x_1}^{x_2} f_2\, dx,    (11.376)

convey obviously the same information as Eq. (11.368) does. Equations (11.370) and (11.369), on the one hand, and Eqs. (11.374) and (11.375), on the other, have the form of Eqs. (11.361) and (11.362), respectively. If one arbitrarily sets

\kappa_1 = 0    (11.377)

and

\kappa_2 = 1,    (11.378)

then Eq. (11.363) will simply look like

I\{y_1, y_2\} = \kappa_2\, y_2\big|_{x = x_2};    (11.379)

hence, the objective criterion as per Eq. (11.368), suitable for variational calculus, can be reformulated to the typical form of Pontryagin's objective criterion as per Eq. (11.363) – so the reasoning will hereafter proceed using the latter optimization method. The corresponding Hamiltonian should read

H = \lambda_1 f_1 + \lambda_2 f_2    (11.380)

as per Eq. (11.364) – although, for convenience, it may be recoined as

H = \lambda_1\{x\}\, z + \lambda_2\{x\}\, f_2\{y_1, z\}    (11.381)

with the aid of Eq. (11.372); the (adjoint) Lagrangian functions, λ_1{x} and λ_2{x}, are then to be found from Eq. (11.365), i.e.

\frac{d\lambda_1}{dx} = -\frac{\partial H}{\partial y_1}    (11.382)

and

\frac{d\lambda_2}{dx} = -\frac{\partial H}{\partial y_2}    (11.383)

Given the functionalities made explicit in Eq. (11.381), one easily concludes that

\frac{d\lambda_2}{dx} = 0    (11.384)

from Eq. (11.383), which may be coupled to Eq. (11.366) to yield

\lambda_2\big|_{x_2} = \kappa_2;    (11.385)

recalling Eq. (11.378), one finds

\lambda_2\big|_{x_2} = 1    (11.386)

from Eq. (11.385). Integration by separation of variables transforms Eq. (11.384) to

\int_{\lambda_2}^{1} d\lambda_2 = \int_{x}^{x_2} 0\, d\bar{x}    (11.387)

with the aid of Eq. (11.386); this is equivalent to

\lambda_2\big|_{\lambda_2}^{1} = 1 - \lambda_2 = k\big|_{x}^{x_2} = k - k = 0    (11.388)

on the grounds laid by Eq. (11.160), with k denoting an arbitrary constant – which degenerates to

\lambda_2 = 1,    (11.389)

so Eq. (11.381) becomes

H = \lambda_1\{x\}\, z + f_2\{y_1, z\}    (11.390)

Combination of Eqs. (11.382) and (11.390) unfolds

\frac{d\lambda_1}{dx} = -\frac{\partial f_2}{\partial y_1},    (11.391)

where integration by separation of variables, with the aid of Eqs. (11.366) and (11.377), i.e.

\lambda_1\big|_{x_2} = \kappa_1 = 0,    (11.392)

permits one to obtain

\int_{0}^{\lambda_1} d\lambda_1 = -\int_{x_2}^{x} \frac{\partial f_2}{\partial y_1}\, d\bar{x}    (11.393)

Note that ∂f_2/∂y_1 is a function solely of x as per Eq. (11.391), coupled with the original hypothesis that λ_1 ≡ λ_1{x} – yet

\int_{x_2}^{x} \frac{\partial f_2}{\partial y_1}\, d\bar{x} \equiv \int_{x_2}^{x} \frac{\partial f_2}{\partial y_1}\{\bar{x}\}\, d\bar{x}    (11.394)

is to be satisfied, with x̄ serving as (dummy) variable of integration – distinct from x, which plays the role of (variable) upper limit of integration. Equation (11.393) may be rephrased as

\lambda_1\big|_{\lambda_1}^{0} = 0 - \lambda_1 = \int_{x_2}^{x} \frac{\partial f_2}{\partial y_1}\, d\bar{x},    (11.395)

at the expense of Eqs. (11.104) and (11.160), or else

\lambda_1\{x\} = -\int_{x_2}^{x} \frac{\partial f_2}{\partial y_1}\, d\bar{x};    (11.396)

hence, the Hamiltonian turns into

H = f_2\{y_1, z\} - z \int_{x_2}^{x} \frac{\partial f_2}{\partial y_1}\, d\bar{x}    (11.397)

as per Eq. (11.390), upon combination with Eq. (11.396). Finally, the necessary condition for an optimum conveyed by Eq. (11.367) may be applied to Eq. (11.397) to obtain

\frac{\partial f_2}{\partial z} - \int_{x_2}^{x} \frac{\partial f_2}{\partial y_1}\, d\bar{x} = 0    (11.398)

– where a further differentiation of both sides with regard to x leads to

\frac{d}{dx}\,\frac{\partial f_2}{\partial z} - \frac{\partial f_2}{\partial y_1} = 0,    (11.399)

at the expense of Eq. (11.295); in view of Eq. (11.370), one may transform Eq. (11.399) finally to

\frac{\partial f_2}{\partial y_1} - \frac{d}{dx}\,\frac{\partial f_2}{\partial(dy_1/dx)} = 0    (11.400)

Equation (11.400) mimics Euler and Lagrange's equation, see Eq. (11.321), as anticipated from coincidence in functional form between Eqs. (11.296) and (11.368); therefore, Pontryagin's maximum principle resembles variational calculus subjected to local restrictions.


12 Infinite Series and Integrals

Due to their algebraic, analytical, and numerical simplicity, polynomials are often elected for description of (more) involved analytical functions; this is notably the case of Taylor’s series – which (when convergent) provides the exact value of the associated function, should the number of terms tend to infinity. Related concepts encompass Euler’s infinite product, as well as Euler’s original definition of factorial of an integer number – a concept that has been extended to any real number, at the expense of gamma (integral) function.

12.1 Definition and Criteria of Convergence

Recalling the concept of infinite series, it is more or less intuitive that \sum_{i=1}^{\infty} a_i cannot converge unless \lim_{i\to\infty} a_i = 0. In fact, if the n-term series, S_n, is considered, viz.

S_n \equiv \sum_{i=1}^{n} a_i,    (12.1)

as well as the corresponding (n − 1)-term series, S_{n−1}, i.e.

S_{n-1} \equiv \sum_{i=1}^{n-1} a_i,    (12.2)

then ordered subtraction of Eq. (12.2) from Eq. (12.1) unfolds

S_n - S_{n-1} = \sum_{i=1}^{n} a_i - \sum_{i=1}^{n-1} a_i = \sum_{i=1}^{n-1} a_i + a_n - \sum_{i=1}^{n-1} a_i = a_n;    (12.3)

if the series converges to sum S as per Eq. (2.73), then

\lim_{n\to\infty} S_n = \lim_{n-1\to\infty} S_{n-1} = \lim_{n\to\infty} S_{n-1} = S,    (12.4)

which readily implies

\lim_{n\to\infty}\left(S_n - S_{n-1}\right) = \lim_{n\to\infty} S_n - \lim_{n\to\infty} S_{n-1} = S - S = 0    (12.5)

Mathematics for Enzyme Reaction Kinetics and Reactor Performance, First Edition. F. Xavier Malcata. © 2019 John Wiley & Sons Ltd. Published 2019 by John Wiley & Sons Ltd.

at the expense of Eq. (9.73). On the other hand, application of limits to both sides of Eq. (12.3) gives rise to

\lim_{n\to\infty}\left(S_n - S_{n-1}\right) = \lim_{n\to\infty} a_n,    (12.6)

so elimination of \lim_{n\to\infty}(S_n - S_{n-1}) between Eqs. (12.5) and (12.6) finally yields

\lim_{n\to\infty} a_n = 0    (12.7)

Equation (12.7) is a necessary, but not sufficient condition for convergence; for instance, the harmonic series – defined as \sum_{i=1}^{\infty} \frac{1}{i}, leads to an infinite sum, even though

\lim_{n\to\infty} a_n = \lim_{n\to\infty} \frac{1}{n} = \frac{1}{\lim_{n\to\infty} n} = 0    (12.8)

after setting a_n ≡ 1/n in Eq. (12.7), and recalling Eq. (9.108).

When a series consists of only positive terms (i.e. a_i ≥ 0 for all i), then it must either converge or diverge to +∞; it clearly cannot oscillate, as might happen when both positive and negative terms are present. Several tests for convergence have accordingly been developed; the most noteworthy examples include comparison and ratio comparison tests, d'Alembert's ratio test, and Cauchy's integral test for monotonic series, complemented with Leibnitz's test for alternating series – all of which will be reviewed next.
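The harmonic counterexample to sufficiency of Eq. (12.7) is easy to probe numerically; the sketch below (an aside, not from the book) shows the partial sums S_n growing without bound – they track ln n, a classical result.

```python
import math

# Eq. (12.7) is necessary but not sufficient: a_n = 1/n tends to 0, yet the
# partial sums S_n of the harmonic series diverge -- S_n grows like ln n
# (S_n - ln n approaches the Euler-Mascheroni constant, about 0.5772).
def partial_sum(n):
    return sum(1.0 / i for i in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    print(n, partial_sum(n), partial_sum(n) - math.log(n))
# S_n keeps increasing with n, while S_n - ln n levels off
```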

12.1.1 Comparison Test

If \sum_{i=1}^{n} a_i and \sum_{i=1}^{n} b_i denote series of positive terms, and the latter is known to converge to some finite limit S_b, then

0 \le a_i \le b_i \;\wedge\; \lim_{n\to\infty} \sum_{i=1}^{n} b_i = S_b\ (\text{finite}) \;\Rightarrow\; \lim_{n\to\infty} \sum_{i=1}^{n} a_i = S_a\ (\text{finite})    (12.9)

– so convergence of \sum_{i=1}^{n} a_i toward some finite limit S_a is guaranteed at the expense of convergence of \sum_{i=1}^{n} b_i, as long as the generic term of the former, a_i, never exceeds that of the latter, b_i. To prove the above comparison test, one should depart from the very hypothesis

0 \le a_i \le b_i,    (12.10)

and then proceed to ordered addition of Eq. (12.10) for i = 1, 2, …, n – according to

\sum_{i=1}^{n} a_i \le \sum_{i=1}^{n} b_i;    (12.11)

a corollary of Eq. (9.121) guarantees that

\lim_{n\to\infty} \sum_{i=1}^{n} a_i \le \lim_{n\to\infty} \sum_{i=1}^{n} b_i = S_b\ (\text{finite}),    (12.12)

based on Eq. (12.11), together with the hypothesis that \sum_{i=1}^{n} b_i converges to (finite) S_b – so \sum_{i=1}^{\infty} a_i is upper bounded by S_b. On the other hand, the first inequality in Eq. (12.10) implies that \sum_{i=1}^{n} a_i cannot decrease when n increases, so

\lim_{n\to\infty} \sum_{i=1}^{n} a_i = S_a    (12.13)

is the only compatible situation, thus retrieving Eq. (12.9); whereas combination of Eqs. (12.12) and (12.13) further supports

S_a \le S_b\ (\text{finite})    (12.14)

– which indicates that not only \sum_{i=1}^{n} a_i converges, but also the limit at stake does not exceed S_b, so it must be finite as stated in Eq. (12.9).
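The comparison test as per Eq. (12.9) can be illustrated with a concrete dominated series; the sketch below (an aside, not from the book) takes a_i = 1/(i² + i) ≤ b_i = 1/i², whose series converges to π²/6 – a classical result – and exploits the fact that the a-series telescopes to 1 − 1/(n+1).

```python
import math

# Sketch of Eqs. (12.9)-(12.14): a_i = 1/(i**2 + i) is dominated by
# b_i = 1/i**2, so the increasing partial sums of a_i are bounded by
# S_b = pi**2/6 and hence convergent; here they telescope to 1 - 1/(n+1).
def S(term, n):
    return sum(term(i) for i in range(1, n + 1))

a = lambda i: 1.0 / (i * i + i)
b = lambda i: 1.0 / (i * i)

n = 100000
Sa, Sb = S(a, n), S(b, n)
assert all(a(i) <= b(i) for i in range(1, 1000))    # 0 <= a_i <= b_i
assert Sa <= Sb <= math.pi**2 / 6 + 1e-9            # Eq. (12.14): S_a <= S_b
assert abs(Sa - (1 - 1 / (n + 1))) < 1e-9           # telescoping check
```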

12.1.2 Ratio Test

Consider again two series of positive terms, \sum_{i=1}^{n} a_i and \sum_{i=1}^{n} b_i, such that the latter converges to finite S_b – besides

\frac{a_{i+1}}{a_i} \le \frac{b_{i+1}}{b_i}    (12.15)

applying to every i; it can be proven that

a_i \ge 0,\; b_i \ge 0 \;\wedge\; \frac{a_{i+1}}{a_i} \le \frac{b_{i+1}}{b_i} \;\wedge\; \lim_{n\to\infty} \sum_{i=1}^{n} b_i = S_b\ (\text{finite}) \;\Rightarrow\; \lim_{n\to\infty} \sum_{i=1}^{n} a_i = S_a\ (\text{finite}),    (12.16)

known as ratio test. In attempts to derive the above theorem, one may first rewrite a_i and b_i as products of consecutive ratios, according to

a_i = \frac{a_i}{a_{i-1}}\,\frac{a_{i-1}}{a_{i-2}}\,\frac{a_{i-2}}{a_{i-3}} \cdots \frac{a_2}{a_1}\, a_1    (12.17)

and

b_i = \frac{b_i}{b_{i-1}}\,\frac{b_{i-1}}{b_{i-2}}\,\frac{b_{i-2}}{b_{i-3}} \cdots \frac{b_2}{b_1}\, b_1,    (12.18)

respectively; in view of Eq. (12.15), a_i/a_{i−1} ≤ b_i/b_{i−1}, a_{i−1}/a_{i−2} ≤ b_{i−1}/b_{i−2}, …, a_2/a_1 ≤ b_2/b_1, so one realizes that

\frac{a_i}{a_1} = \frac{a_i}{a_{i-1}}\,\frac{a_{i-1}}{a_{i-2}}\,\frac{a_{i-2}}{a_{i-3}} \cdots \frac{a_2}{a_1} \le \frac{b_i}{b_{i-1}}\,\frac{b_{i-1}}{b_{i-2}}\,\frac{b_{i-2}}{b_{i-3}} \cdots \frac{b_2}{b_1} = \frac{b_i}{b_1}    (12.19)

upon combination of Eqs. (12.17) and (12.18) – or, equivalently,

a_i \le \frac{a_1}{b_1}\, b_i    (12.20)

after multiplying both sides by a_1 > 0. Since Eq. (12.20) consubstantiates one of the hypotheses set forth by the comparison test, Eq. (12.9) will assure that \sum_{i=1}^{n} a_i converges when \sum_{i=1}^{n} \frac{a_1}{b_1} b_i converges; in view of Eq. (9.92), coupled with \lim_{n\to\infty} \sum_{i=1}^{n} b_i = S_b also utilized as hypothesis here, one indeed realizes that

\lim_{n\to\infty} \sum_{i=1}^{n} \frac{a_1}{b_1}\, b_i = \frac{a_1}{b_1} \lim_{n\to\infty} \sum_{i=1}^{n} b_i = \frac{a_1}{b_1}\, S_b\ (\text{finite})    (12.21)

– so \sum_{i=1}^{n} a_i must tend to a finite S_a. Therefore, Eq. (12.16) becomes proven in full – while Eqs. (12.13) and (12.14) support

S_a \le \frac{a_1}{b_1}\, S_b    (12.22)

when combined with Eq. (12.21); this further indicates that the limit of \sum_{i=1}^{n} a_i cannot exceed a_1 S_b / b_1.

12.1.3 D'Alembert's Test

D'Alembert's criterion claims that the series of positive terms \sum_{i=1}^{n} a_i converges whenever the limit of the ratio a_{i+1}/a_i exists and lies below unity, i.e.

a_i \ge 0 \;\wedge\; \lim_{i\to\infty} \frac{a_{i+1}}{a_i} < 1 \;\Rightarrow\; \lim_{n\to\infty} \sum_{i=1}^{n} a_i = S\ (\text{finite})

… |a_{m+1}| – as per Eq. (12.50).
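D'Alembert's criterion can be tried out on familiar series; the sketch below (an aside, not from the book) estimates the limiting ratio a_{i+1}/a_i at a large index, using the exponential series (ratio → 0, hence convergent) and the harmonic series (ratio → 1, hence inconclusive).

```python
from fractions import Fraction
from math import factorial

# Sketch of d'Alembert's ratio test: approximate lim a_{i+1}/a_i by
# evaluating the ratio at a large index i.  A limit below 1 signals
# convergence; a limit of exactly 1 leaves the test inconclusive.
def dalembert_ratio(term, i=200):
    return term(i + 1) / term(i)

exp_term = lambda i: Fraction(2) ** i / factorial(i)   # terms of e**2
harmonic_term = lambda i: Fraction(1, i)

assert dalembert_ratio(exp_term) < 1                   # ratio 2/(i+1) -> 0: converges
assert abs(dalembert_ratio(harmonic_term) - 1) < 1e-2  # ratio i/(i+1) -> 1: inconclusive
```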

12.2 Taylor's Series

If f{x} is a continuous, single-valued function of variable x, with continuous derivatives df{x}/dx, d²f{x}/dx², …, dⁱf{x}/dxⁱ, …, dⁿf{x}/dxⁿ in a given interval [a,b], and if d^{n+1}f{x}/dx^{n+1} exists in ]a,b[, then Taylor's theorem states that

f\{x\} = f\{x\}\big|_a + \sum_{i=1}^{n} \frac{d^i f\{x\}}{dx^i}\bigg|_a \frac{(x-a)^i}{i!} + \frac{d^{n+1} f\{x\}}{dx^{n+1}}\bigg|_{\zeta} \frac{(x-a)^{n+1}}{(n+1)!};\quad a < \zeta < x \;\vee\; x < \zeta < a;    (12.57)

the last term in the right-hand side represents Lagrange's remainder. The first version of Eq. (12.57) was stated in 1712 by Brook Taylor, an English mathematician – although the expression for the associated error was not provided until much later by Lagrange. Note that the left-hand side and the first term in the right-hand side of Eq. (12.57) are to be identical at x = a – so all n + 1 differential terms must be nil at x = a (owing to the constitutive, positive integer power of x − a). Furthermore, all derivatives of both sides, up to order n, are equal at x = a, so

\frac{df}{dx} = \frac{df}{dx}\bigg|_a + \sum_{i=2}^{n} \frac{d^i f}{dx^i}\bigg|_a \frac{(x-a)^{i-1}}{(i-1)!}
\frac{d^2 f}{dx^2} = \frac{d^2 f}{dx^2}\bigg|_a + \sum_{i=3}^{n} \frac{d^i f}{dx^i}\bigg|_a \frac{(x-a)^{i-2}}{(i-2)!}
\cdots
\frac{d^n f}{dx^n} = \frac{d^n f}{dx^n}\bigg|_a    (12.58)

all lead to universal conditions – since x − a = 0 (when satisfied) turns every summation to zero; therefore, it is expected that the right-hand side describes the behavior of f{x} reasonably well in the vicinity of x = a. Taylor's polynomials provide a convenient way to describe the local pattern of a function, by encapsulating its first several derivatives at a given point – while taking advantage of the fact that the derivatives of a function at some point dictate its behavior at nearby points; this is illustrated in Fig. 12.2 for the sine function. Upon inspection of these plots, one finds that Taylor's polynomial becomes a better and better approximation to the original function as the number of terms, n, increases; and that convergence to the true function requires fewer terms when the argument is closer to the expansion point.

Figure 12.2 Variation of sine function, f{x} (solid line), and Taylor's polynomial approximants, Pn{x} (dashed line) – expanded around zero, and characterized by degree (a) 1, (b) 2, (c) 3, (d) 4, (e) 5, (f) 6, (g) 7, and (h) 8, as a function of its argument, x.
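The behavior shown in Fig. 12.2 can be reproduced in a few lines; the sketch below (an aside, not from the book) builds the degree-n MacLaurin approximants of sin x from Eq. (12.57) without the remainder, and checks that the error shrinks as n grows.

```python
import math

# MacLaurin (a = 0) polynomials P_n for sin x, per Eq. (12.57) with the
# remainder dropped.  The derivatives of sin at 0 repeat with period 4:
# sin 0 = 0, cos 0 = 1, -sin 0 = 0, -cos 0 = -1.
def taylor_sin(x, n):
    cycle = [0.0, 1.0, 0.0, -1.0]
    return sum(cycle[i % 4] * x**i / math.factorial(i) for i in range(n + 1))

x = 1.0
errors = [abs(taylor_sin(x, n) - math.sin(x)) for n in (1, 3, 5, 7)]
assert errors == sorted(errors, reverse=True)   # more terms, smaller error
assert errors[-1] < 1e-3                        # P_7 already close at x = 1
```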


Lagrange's remainder in Eq. (12.57) may be rewritten as

R_n\{x\} \equiv \frac{d^{n+1} f\{x\}}{dx^{n+1}}\bigg|_{\zeta} \frac{(x-a)^{n+1}}{(n+1)!} = \frac{d^{n+1} f\{x\}}{dx^{n+1}}\bigg|_{\zeta} \frac{\prod_{i=1}^{n+1}(x-a)}{(n+1)!} = \left.\frac{d^{n+1} f\{x\}}{dx^{n+1}}\bigg|_{\zeta} \frac{\prod_{i=1}^{n+1}(x-a_i)}{(n+1)!}\right|_{a_1 = a_2 = \cdots = a_{n+1} = a},    (12.59)

knowing that \frac{d^{n+1} f\{x\}}{dx^{n+1}}\big|_{\zeta} \frac{\prod_{i=1}^{n+1}(x-a_i)}{(n+1)!} is the expression for the error of interpolation between function f{x} and an nth-degree interpolation polynomial with nodes a_i (to be derived later); a special case thereof was worked out in Eq. (7.210) for n = 1 – in which case one would get

\frac{d^2 f\{x\}}{dx^2}\bigg|_{\zeta} \frac{(x-a)^2}{2!} = \left.\frac{1}{2}(x-a_1)(x-a_2)\, \frac{d^2 f\{x\}}{dx^2}\bigg|_{\zeta}\right|_{a_1 = a_2 = a}    (12.60)

after setting a_1 = x_i and a_2 = b_0; the maximum value for said remainder (and thus for \frac{d^2 f\{x\}}{dx^2}\big|_{\zeta}) will occur at some point located between x and a_1 = a_2.

To prove Eq. (12.57), often referred to as the fundamental theorem of differential calculus, one should resort to Rolle's theorem and apply it to a function f{x} that is differentiable n + 1 times, such that

f\{x\}\big|_a = \frac{df\{x\}}{dx}\bigg|_a = \cdots = \frac{d^n f\{x\}}{dx^n}\bigg|_a = 0    (12.61)

besides

f\{x\}\big|_{x_0} = 0;    (12.62)

Eq. (10.269) guarantees indeed that there is some b_1, comprised between a and x_0, satisfying

\frac{df\{x\}}{dx}\bigg|_{b_1} = 0,    (12.63)

owing to f\{x\}\big|_a = f\{x\}\big|_{x_0} as per Eqs. (12.61) and (12.62) by hypothesis. Since \frac{df\{x\}}{dx}\big|_a = 0 after Eq. (12.61) and \frac{df\{x\}}{dx}\big|_{b_1} = 0 as per Eq. (12.63), the very same theorem assures that there is some b_2 ∈ ]a, b_1[ such that

\frac{d^2 f\{x\}}{dx^2}\bigg|_{b_2} = 0;    (12.64)

this reasoning may be carried out sequentially, thus allowing a sequence of values b_1, b_2, b_3, …, b_n be constructed – so that a value for b ≡ b_{n+1}, abiding to a < b < b_n < x_0, will eventually be found that satisfies

\frac{d^{n+1} f\{x\}}{dx^{n+1}}\bigg|_{b} = 0,    (12.65)

irrespective of the actual value of n. Denote now as P_n{x} the nth degree Taylor's polynomial, centered at x = a, that approximates f{x} and abides to

\frac{d^i P_n\{x\}}{dx^i}\bigg|_a = \frac{d^i f\{x\}}{dx^i}\bigg|_a;\quad i = 0, 1, \ldots, n,    (12.66)

in agreement with Eq. (12.58); hence, auxiliary function g{x}, defined as

g\{x\} \equiv f\{x\} - P_n\{x\},    (12.67)

satisfies

g\{x\}\big|_a = f\{x\}\big|_a - P_n\{x\}\big|_a = 0    (12.68)

– and sequential differentiation of both sides of Eq. (12.67) further leads to

\frac{dg\{x\}}{dx}\bigg|_a = \cdots = \frac{d^n g\{x\}}{dx^n}\bigg|_a = 0,    (12.69)

in parallel to Eq. (12.58) and based on the result conveyed by Eq. (12.66). Consider now an arbitrary value ξ of x, other than a; the error of approximation of P_n{ξ} relative to f{ξ} will accordingly be given by g{ξ}, in agreement with Eq. (12.67). Denote now as β the unique constant described by

g\{\xi\} = -\beta\,(\xi - a)^{n+1}    (12.70)

or, equivalently,

\beta = -\frac{g\{\xi\}}{(\xi - a)^{n+1}}    (12.71)

after isolating β. If a second auxiliary function h{x} is defined as

h\{x\} \equiv g\{x\} + \beta\,(x - a)^{n+1},    (12.72)

then it satisfies conditions analogous to those described by Eq. (12.61), i.e.

h\{x\}\big|_a = \frac{dh\{x\}}{dx}\bigg|_a = \cdots = \frac{d^n h\{x\}}{dx^n}\bigg|_a = 0,    (12.73)

following differentiation of both sides of Eq. (12.72) and setting of x = a afterward; one also finds

h\{\xi\} \equiv g\{\xi\} + \beta\,(\xi - a)^{n+1} = g\{\xi\} - \frac{g\{\xi\}}{(\xi - a)^{n+1}}\,(\xi - a)^{n+1} = g\{\xi\} - g\{\xi\} = 0 = h\{x\}\big|_a    (12.74)

Infinite Series and Integrals

based on Eqs. (12.71) and (12.72) upon setting x = ξ. Under these circumstances, Eq. (12.65) guarantees that d n + 1h x dx n + 1

=0

12 75

ζ ξ

with a < ζ < ξ – because the behavior of h{x} and derivatives up to order n, at x = a, is analogous to the behavior of f {x} and derivatives up to order n, also at x = a. On the other hand, insertion of Eq. (12.67) in Eq. (12.72) unfolds h x = f x − Pn x + β x− a

n+1

,

12 76

where sequential differentiation of both sides with regard to x gives rise to dh x df x dP n x = − + n + 1 β x− a dx dx dx =

dg x + n + 1 β x− a dx

n

n

d2 h x d 2 f x d 2 Pn x = − + n + 1 nβ x −a 2 dx dx2 dx2 =

d2 g x + n + 1 nβ x− a dx2

n −1

,

d nh x d n f x d n Pn x = − + n + 1 n n− 1 n dx dx n dx n =

n−1

d ng x + n + 1 n n− 1 dx n

12 77

2β x− a

2β x− a

d n + 1h x d n + 1 f x d n + 1 Pn x = − + n + 1 n n− 1 n+1 dx dx n + 1 dx n + 1



with the aid of Eq. (12.67) and the rule of differentiation of an algebraic sum of functions; in view of Eq. (12.69), one may revisit Eq. (12.77) as dh x dx d2 h x dx2 d nh x dx n

= n + 1 β x−a

n

a

= n + 1 nβ x− a a

… = n + 1 n n− 1

a

=0

n− 1 a

=0 ,

2β x−a

a

d n + 1h x d n + 1f x = + n + 1 n n −1 dx n + 1 dx n + 1

a

=0 1β

12 78

441

442

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

d n + 1 Pn x = 0 since there is no (n + 1)th term in Pn{x} by hypothesis. The last dx n + 1 result in Eq. (12.78) is of particular relevance, and may be recoined as

where

d n + 1h x d n + 1f x = +β n+1 ; n+1 dx dx n + 1

12 79

specifically at x = ζ, Eq. (12.79) looks like d n + 1h x dx n + 1

= ζ

d n + 1f x dx n + 1

ζ

+β n+1 ,

12 80

or else d n + 1f x dx n + 1 β= − n+1

ζ

12 81

– once Eq. (12.75) is taken into account, and β isolated afterward. Elimination of β between Eqs. (12.71) and (12.81) produces d n + 1f x dx n + 1 − n+1

ζ

=−

g ξ , ξ −a n + 1

12 82

and further combination with Eq. (12.67) yields d n + 1f x dx n + 1 n+1

ζ

=

f ξ −Pn ξ ξ−a n+1

12 83

after taking negatives of both sides; isolation of f {ξ} leads finally to d n + 1f x dx n + 1 f ξ = Pn ξ + n+1

ζ

ξ− a

n+1

,

12 84

which retrieves Eq. (12.57) following replacement of (dummy) variable ξ by x, and consequently enforcing a < ζ < x – besides realizing that P_n{x} is, by definition, given by \sum_{i=0}^{n} \frac{d^i f\{x\}}{dx^i}\big|_a \frac{(x-a)^i}{i!}. In view of the above, one concludes that Taylor's theorem is but a consequence of Rolle's theorem. When said remainder satisfies the condition

\lim_{n\to\infty} \frac{d^{n+1} f\{x\}}{dx^{n+1}}\bigg|_{\zeta} \frac{(x-a)^{n+1}}{(n+1)!} = 0,    (12.85)

then Eq. (12.57) may alternatively be formulated as

f\{x\} = f\{x\}\big|_a + \sum_{i=1}^{\infty} \frac{d^i f\{x\}}{dx^i}\bigg|_a \frac{(x-a)^i}{i!};    (12.86)

Eq. (12.86) is known as Taylor's series – although it is often referred to as MacLaurin's series when a = 0. On the other hand, if n is deliberately set equal to zero, then Eq. (12.57) simplifies to

f\{x\} = f\{x\}\big|_a + \frac{x-a}{1!}\,\frac{df\{x\}}{dx}\bigg|_{\zeta}    (12.87)

as the summation itself vanishes for i = 1 > 0 = n; this is equivalent to writing

\frac{f\{x\} - f\{x\}\big|_a}{x - a} = \frac{df\{x\}}{dx}\bigg|_{\zeta}    (12.88)

If x ≡ b and ζ ≡ c, then Eq. (12.88) becomes identical to Lagrange's theorem, labeled as Eq. (10.274) – thus justifying the designation of said remainder. Taylor's series only represents f{x} within [a,b] if this interval is included in the overall interval of convergence of the former; hence, one should previously investigate which such intervals are compatible with Eq. (12.85), before claiming validity of Taylor's approximations. If all terms of said series are positive, then one may resort to the ratio test using a geometric series of ratio 0 < h < 1 as reference; Eq. (2.93) permits one to write

\lim_{n\to\infty} \sum_{i=0}^{n} a_0 h^i = a_0 \lim_{n\to\infty} \frac{1 - h^{n+1}}{1 - h} = \frac{a_0}{1-h} \lim_{n\to\infty} \left(1 - h^{n+1}\right),    (12.89)

where h < 1 allows simplification to

\lim_{n\to\infty} \sum_{i=0}^{n} a_0 h^i = \frac{a_0}{1-h}    (12.90)

It is instructive, at this stage, to recall the mode of calculation of the combinatorial coefficients of Newton's expansion; in view of the definition of factorial, one may rewrite Eq. (2.240) as

\binom{n}{k} = \frac{n!}{k!\,(n-k)!} = \frac{\prod_{i=0}^{k-1}(n-i)\,\prod_{i=k}^{n-1}(n-i)}{k!\,\prod_{i=k}^{n-1}(n-i)}    (12.155)

after splitting the extended product in the numerator, whereas cancellation of common factors between numerator and denominator permits simplification to

\binom{n}{k} = \frac{\prod_{i=0}^{k-1}(n-i)}{k!}    (12.156)

For large values of n, one finds that

\lim_{n\to\infty} \frac{1}{n^k}\binom{n}{k} = \lim_{n\to\infty} \frac{\prod_{i=0}^{k-1}(n-i)}{k!\, n^k} = \frac{1}{k!} \lim_{n\to\infty} \frac{n^k}{n^k} = \frac{1}{k!}    (12.157)

based on Eq. (12.156) – because i ≤ k − 1, so n ≫ k − 1 ≥ i as n grows unbounded; hence, comparative inspection of Eqs. (12.156) and (12.157) unfolds

\lim_{n\to\infty} \binom{n}{k}\frac{1}{n^k} = \frac{1}{k!}    (12.158)

The result conveyed by Eq. (12.158) may be used to transform Eq. (12.150) to

e = \sum_{i=0}^{\infty} \frac{1}{i!} = \sum_{i=0}^{\infty} \lim_{n\to\infty} \binom{n}{i}\frac{1}{n^i} = \lim_{n\to\infty} \sum_{i=0}^{n} \binom{n}{i}\, 1^{n-i} \left(\frac{1}{n}\right)^{i},    (12.159)

along with exchange of limit and summation signs due to their linearity; according to Newton's binomial formula as per Eq. (2.236), one may reformulate Eq. (12.159) to

e = \lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^{n}    (12.160)

– thus providing the classical definition of Neper's constant, e; it is so-called in honor to John Napier, a Scottish mathematician and physicist, who first published its value in 1618.

In the case of a general (positive real) number x ≥ 1, one readily realizes that

1 \le n \le x \le n + 1,    (12.161)
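As a numerical aside (not part of the derivation), the limit in Eq. (12.160) is easy to observe directly – the sequence (1 + 1/n)ⁿ climbs toward e from below, which is also consistent with the real-argument extension pursued next:

```python
import math

# Eq. (12.160): (1 + 1/n)**n approaches Neper's constant e from below;
# the error behaves like e/(2n), so n = 10**6 gives about six digits.
approx = [(1 + 1 / n) ** n for n in (10, 100, 1000, 10**6)]
assert approx == sorted(approx)            # monotone increase
assert all(v < math.e for v in approx)     # always below e
assert abs(approx[-1] - math.e) < 1e-5
```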

i.e. x is always comprised between two consecutive integer numbers, n and n + 1; after taking reciprocals of both sides, Eq. (12.161) becomes

\frac{1}{n} \ge \frac{1}{x} \ge \frac{1}{n+1},    (12.162)

whereas addition of unity to all sides unfolds

1 + \frac{1}{n} \ge 1 + \frac{1}{x} \ge 1 + \frac{1}{n+1}    (12.163)

Due to the relative positioning enforced by Eq. (12.161), one also concludes from Eq. (12.163) that

\left(1 + \frac{1}{n}\right)^{n+1} \ge \left(1 + \frac{1}{x}\right)^{n+1} \ge \left(1 + \frac{1}{x}\right)^{x} \ge \left(1 + \frac{1}{n+1}\right)^{x} \ge \left(1 + \frac{1}{n+1}\right)^{n},    (12.164)

or else

\left(1 + \frac{1}{n}\right)^{n+1} \ge \left(1 + \frac{1}{x}\right)^{x} \ge \frac{\left(1 + \frac{1}{n+1}\right)^{n+1}}{1 + \frac{1}{n+1}};    (12.165)

when limits are taken encompassing very large n, one necessarily obtains

\lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^{n+1} \ge \lim_{x\to\infty} \left(1 + \frac{1}{x}\right)^{x} \ge \lim_{n+1\to\infty} \frac{\left(1 + \frac{1}{n+1}\right)^{n+1}}{1 + \frac{1}{n+1}}    (12.166)

Straight application of Eqs. (9.87) and (9.94) supports transformation of Eq. (12.166) to

\lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^{n} \lim_{n\to\infty} \left(1 + \frac{1}{n}\right) \ge \lim_{x\to\infty} \left(1 + \frac{1}{x}\right)^{x} \ge \frac{\lim_{n+1\to\infty} \left(1 + \frac{1}{n+1}\right)^{n+1}}{\lim_{n+1\to\infty} \left(1 + \frac{1}{n+1}\right)},    (12.167)

while 1/n → 0 and 1/(n+1) → 0 when n → ∞ and n + 1 → ∞ allow application of Eq. (9.47) to get

\lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^{n} \ge \lim_{x\to\infty} \left(1 + \frac{1}{x}\right)^{x} \ge \lim_{n+1\to\infty} \left(1 + \frac{1}{n+1}\right)^{n+1};    (12.168)

Eq. (12.160) may now be retrieved, for both n → ∞ and n + 1 → ∞, thus generating

e \ge \lim_{x\to\infty} \left(1 + \frac{1}{x}\right)^{x} \ge e    (12.169)

In view of Eq. (9.121), one finally concludes that

\lim_{x\to\infty} \left(1 + \frac{1}{x}\right)^{x} = e,    (12.170)

so the definition of Neper's number may be obtained using real numbers, well beyond integers as (misleadingly) suggested by Eq. (12.160) – where x ≥ 1 as departing hypothesis proves immaterial, because only large values of x are of interest. One may as well calculate a related limit as

\lim_{x\to\infty} \left(1 + \frac{\alpha}{x}\right)^{x} = \lim_{x\to\infty} \left(\left(1 + \frac{\alpha}{x}\right)^{\frac{x}{\alpha}}\right)^{\alpha}    (12.171)

for constant α, via composing a power with the corresponding root, or else

\lim_{x\to\infty} \left(1 + \frac{\alpha}{x}\right)^{x} = \left(\lim_{x/\alpha\to\infty} \left(1 + \frac{1}{x/\alpha}\right)^{x/\alpha}\right)^{\alpha},    (12.172)

upon taking the reciprocal of the reciprocal, while recalling Eq. (9.108) – and since x/α → ∞ when x → ∞; the result conveyed by Eq. (12.170) then supports

\lim_{x\to\infty} \left(1 + \frac{\alpha}{x}\right)^{x} = e^{\alpha},    (12.173)

meaning that replacement of x by x/α in Eq. (12.170) causes a change in the exponent of e from 1 to α.

A function that resembles an exponential involves its integration – and is materialized in Gauss' error function, defined as

\mathrm{erf}\{x\} \equiv \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-\zeta^2}\, d\zeta;    (12.174)

in view of Eq. (12.86), one may expand erf{x} about x = 0 as

\mathrm{erf}\{x\} = \mathrm{erf}\{x\}\big|_{x=0} + \sum_{i=1}^{\infty} \frac{d^i\, \mathrm{erf}\{x\}}{dx^i}\bigg|_{x=0} \frac{x^i}{i!},    (12.175)

further to realization that

\mathrm{erf}\{x\}\big|_{x=0} = \frac{2}{\sqrt{\pi}} \int_{0}^{0} e^{-\zeta^2}\, d\zeta = \frac{2}{\sqrt{\pi}}\, 0 = 0    (12.176)

– stemming from Eq. (12.174), with the aid of Eq. (11.133). One may calculate the first derivative in Eq. (12.175) as

– stemming from Eq. (12.174), with the aid of Eq. (11.133). One may calculate the first derivative in Eq. (12.175) as d erf x dx

2 d π dx

x

e − ζ dζ = 2

0

2 − ζ2 e π

dx 2 − x2 = e ζ = x dx π

12 177

at the expense of Eqs. (11.295) and (12.174); for x = 0, Eq. (12.177) breaks down to 1 d erf x 1 dx

= x=0

2 − x2 e π

x=0

=

2 0 2 1 e = π π 01

12 178

The second derivative ensues from Eqs. (10.120), (10.169), (10.205), and (12.177), viz. d 2 erf x dx2

d derf x dx dx

2 de − x 2 2 = − 2x e − x , π dx π 2

=

12 179

or else 1 d 2 erf x 2 dx2

= x=0

1 2 2 −2x e − x 2 π

x=0

=

2 1 0 0e = 0 π2

12 180

after setting x = 0 and dividing both sides by 2!. For the third derivative, one may resort to Eq. (12.179) to write d 3 erf x dx3

d d 2 erf x dx dx2

=

2 2 2 − 2e − x −2xe − x − 2x π

=

2 2 − 2 1 − 2x2 e − x π 12 181

– with the further aid of Eq. (10.119), together with factoring out of −2e − x , thus implying 2

455

456

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

1 d 3 erf x 3 dx3

= x=0

1 2 2 − 2 1 −2x2 e − x 3 π

x=0

2 1 2 1 1 − 0 e0 = − π3 π 13

=−

12 182 after division of both sides by 3! and setting x = 0. The fourth derivative looks like d 4 erf x dx4

d d3 erf x dx dx3

=

2 2 2 − 2 −4xe − x + 1 − 2x2 e − x − 2x π

2 2 2 2 −2 − 4x− 2x + 4x3 e − x = − 4 x − 3 + 2x2 e − x π π

=

12 183 based on Eq. (12.181), after condensing terms alike in parenthesis and factoring out 2 2xe − x – where division of both sides by 4! and setting of x = 0 permit simplification to 1 d 4 erf x 4 dx4

= x=0

1 2 2 −4 x −3 + 2x2 e − x 4 π

x=0

=

2 1 − 4 0 −3 + 0 e0 = 0 π4 12 184

In the case of the fifth derivative, one obtains

$$\begin{aligned}\frac{d^{5}\operatorname{erf}x}{dx^{5}}&=\frac{d}{dx}\frac{d^{4}\operatorname{erf}x}{dx^{4}}=-\frac{2}{\sqrt{\pi}}\,4\left(\left(-3+2x^{2}\right)e^{-x^{2}}+x\cdot 4x\,e^{-x^{2}}+x\left(-3+2x^{2}\right)e^{-x^{2}}\left(-2x\right)\right)\\&=-\frac{2}{\sqrt{\pi}}\,4\left(-3+6x^{2}+6x^{2}-4x^{4}\right)e^{-x^{2}}=-\frac{2}{\sqrt{\pi}}\,4\left(-3+12x^{2}-4x^{4}\right)e^{-x^{2}}\end{aligned} \tag{12.185}$$

based on Eq. (12.183), and following similar steps of algebraic manipulation – where setting of x = 0, dividing both sides by 5!, and multiplying and dividing by 2! support reduction to

$$\frac{1}{5!}\,\frac{d^{5}\operatorname{erf}x}{dx^{5}}\bigg|_{x=0}=\frac{1}{5!}\left(-\frac{2}{\sqrt{\pi}}\,4\left(-3+12x^{2}-4x^{4}\right)e^{-x^{2}}\right)\bigg|_{x=0}=-\frac{2}{\sqrt{\pi}}\,\frac{4\left(-3+0-0\right)e^{0}}{5!}=\frac{2}{\sqrt{\pi}}\,\frac{4\cdot 3}{5\cdot 4\cdot 3\cdot 2}=\frac{2}{\sqrt{\pi}}\,\frac{1}{2!\,5}, \tag{12.186}$$

with the aid of the definition of factorial. Departing from Eq. (12.185), one can obtain the sixth derivative as

$$\begin{aligned}\frac{d^{6}\operatorname{erf}x}{dx^{6}}&=\frac{d}{dx}\frac{d^{5}\operatorname{erf}x}{dx^{5}}=-\frac{2}{\sqrt{\pi}}\,4\left(\left(24x-16x^{3}\right)e^{-x^{2}}+\left(-3+12x^{2}-4x^{4}\right)e^{-x^{2}}\left(-2x\right)\right)\\&=-\frac{2}{\sqrt{\pi}}\,4\left(24x-16x^{3}+6x-24x^{3}+8x^{5}\right)e^{-x^{2}}=-\frac{2}{\sqrt{\pi}}\,4\cdot 2x\left(15-20x^{2}+4x^{4}\right)e^{-x^{2}},\end{aligned} \tag{12.187}$$

again resorting to the classical rules of differentiation, lumping terms alike, and factoring out whenever possible.

Infinite Series and Integrals

This eventually leads to

$$\frac{1}{6!}\,\frac{d^{6}\operatorname{erf}x}{dx^{6}}\bigg|_{x=0}=\frac{1}{6!}\left(-\frac{2}{\sqrt{\pi}}\,4\cdot 2x\left(15-20x^{2}+4x^{4}\right)e^{-x^{2}}\right)\bigg|_{x=0}=-\frac{2}{\sqrt{\pi}}\,\frac{8\cdot 0\left(15-0+0\right)e^{0}}{6!}=0 \tag{12.188}$$

upon dividing both sides by 6! and setting x = 0. Finally, the seventh derivative may be reached starting from Eq. (12.187), viz.

$$\begin{aligned}\frac{d^{7}\operatorname{erf}x}{dx^{7}}&=\frac{d}{dx}\frac{d^{6}\operatorname{erf}x}{dx^{6}}=-\frac{2}{\sqrt{\pi}}\,4\left(2\left(15-20x^{2}+4x^{4}\right)e^{-x^{2}}+2x\left(-40x+16x^{3}\right)e^{-x^{2}}+2x\left(15-20x^{2}+4x^{4}\right)e^{-x^{2}}\left(-2x\right)\right)\\&=-\frac{2}{\sqrt{\pi}}\,4\cdot 2\left(15-20x^{2}+4x^{4}-40x^{2}+16x^{4}-30x^{2}+40x^{4}-8x^{6}\right)e^{-x^{2}}\\&=-\frac{2}{\sqrt{\pi}}\,4\cdot 2\left(15-90x^{2}+60x^{4}-8x^{6}\right)e^{-x^{2}}\end{aligned} \tag{12.189}$$

with calculation of the indicated derivatives via Eqs. (10.119), (10.169), and (10.205), complemented by straightforward algebraic rearrangement – where replacement of x by 0, and division of both sides by 7!, eventually yield

$$\frac{1}{7!}\,\frac{d^{7}\operatorname{erf}x}{dx^{7}}\bigg|_{x=0}=\frac{1}{7!}\left(-\frac{2}{\sqrt{\pi}}\,4\cdot 2\left(15-90x^{2}+60x^{4}-8x^{6}\right)e^{-x^{2}}\right)\bigg|_{x=0}=-\frac{2}{\sqrt{\pi}}\,\frac{8\left(15-0+0-0\right)e^{0}}{7!}=-\frac{2}{\sqrt{\pi}}\,\frac{8\cdot 5\cdot 3}{7!}=-\frac{2}{\sqrt{\pi}}\,\frac{1}{3!\,7} \tag{12.190}$$

with 15 split as the product of 5 by 3, and numerator and denominator rearranged via the definition of factorial (8 · 15 = 5!, and 5!/7! = 1/(6 · 7) = 1/(3! 7)) for convenience. The above reasoning may be carried out over and over again – so that insertion of all results conveyed by Eqs. (12.176), (12.178), (12.180), (12.182), (12.184), (12.186), (12.188), and (12.190) will transform Eq. (12.175) to

$$\operatorname{erf}x=0+\sum_{i=0}^{\infty}\frac{2}{\sqrt{\pi}}\left(-1\right)^{i}\frac{x^{2i+1}}{\left(2i+1\right)i!}; \tag{12.191}$$

Eq. (12.191) breaks down to

$$\operatorname{erf}x=\frac{2}{\sqrt{\pi}}\sum_{i=0}^{\infty}\left(-1\right)^{i}\frac{x^{2i+1}}{\left(2i+1\right)i!}, \tag{12.192}$$

after factoring $2/\sqrt{\pi}$ out. Convergence of the series in Eq. (12.192) tends to be faster than that of the exponential function itself – due to the cumulative sum underlying the integral as per Eq. (12.174); it should be stressed that Eq. (12.192) is the form used in practice to ascertain Gauss' error function, in the first place.
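The series in Eq. (12.192) is straightforward to check numerically; the sketch below (an illustration added here, not part of the original text) compares its partial sums against the `math.erf` routine of Python's standard library:

```python
import math

def erf_series(x, n_terms=30):
    """Partial sum of the MacLaurin expansion of Gauss' error function,
    Eq. (12.192): erf x = (2/sqrt(pi)) * sum_i (-1)**i x**(2i+1)/((2i+1) i!)."""
    s = sum((-1) ** i * x ** (2 * i + 1) / ((2 * i + 1) * math.factorial(i))
            for i in range(n_terms))
    return 2.0 / math.sqrt(math.pi) * s

# the factorial in the denominator dominates for large i, so a few dozen
# terms suffice even for moderately large arguments
for x in (0.1, 0.5, 1.0, 2.0):
    assert abs(erf_series(x) - math.erf(x)) < 1e-12
```

The factorial in the denominator is what makes the convergence fast, in line with the remark above on the series outpacing the exponential expansion itself.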


12.2.1.2 Hyperbolic Functions

Note that Eq. (12.149) may be rewritten as

$$e^{x}=\sum_{i=0}^{\infty}\frac{x^{2i}}{\left(2i\right)!}+\sum_{i=0}^{\infty}\frac{x^{2i+1}}{\left(2i+1\right)!}, \tag{12.193}$$

after splitting the original summation as sets of even- or odd-powered terms of x. On the other hand, Eq. (12.149) is valid for every x in view of Eqs. (12.151) and (12.152) – irrespective of sign or magnitude, so one may exchange x by −x in Eq. (12.149) to get

$$e^{-x}=\sum_{i=0}^{\infty}\frac{\left(-x\right)^{i}}{i!}=\sum_{i=0}^{\infty}\frac{\left(-1\right)^{i}x^{i}}{i!}; \tag{12.194}$$

a similar splitting of terms then unfolds

$$e^{-x}=\sum_{i=0}^{\infty}\frac{\left(-1\right)^{2i}x^{2i}}{\left(2i\right)!}+\sum_{i=0}^{\infty}\frac{\left(-1\right)^{2i+1}x^{2i+1}}{\left(2i+1\right)!}=\sum_{i=0}^{\infty}\frac{x^{2i}}{\left(2i\right)!}-\sum_{i=0}^{\infty}\frac{x^{2i+1}}{\left(2i+1\right)!} \tag{12.195}$$

– after realizing that $\left(-1\right)^{2i}=\left(\left(-1\right)^{2}\right)^{i}=1$ and $\left(-1\right)^{2i+1}=\left(-1\right)\left(-1\right)^{2i}=-1$, for every i. Ordered subtraction of Eq. (12.195) from Eq. (12.193) produces

$$e^{x}-e^{-x}=\left(\sum_{i=0}^{\infty}\frac{x^{2i}}{\left(2i\right)!}+\sum_{i=0}^{\infty}\frac{x^{2i+1}}{\left(2i+1\right)!}\right)-\left(\sum_{i=0}^{\infty}\frac{x^{2i}}{\left(2i\right)!}-\sum_{i=0}^{\infty}\frac{x^{2i+1}}{\left(2i+1\right)!}\right), \tag{12.196}$$

and summations of the same type may be further grouped to generate

$$e^{x}-e^{-x}=2\sum_{i=0}^{\infty}\frac{x^{2i+1}}{\left(2i+1\right)!}; \tag{12.197}$$

Eq. (12.197) finally becomes

$$\sinh x\equiv\frac{e^{x}-e^{-x}}{2}=\sum_{i=0}^{\infty}\frac{x^{2i+1}}{\left(2i+1\right)!}, \tag{12.198}$$

upon division of both sides by 2 and recalling Eq. (2.472) – where the odd nature of $x^{2i+1}$ accounts for the odd nature of sinh x highlighted in Eq. (2.475). By the same token, ordered addition of Eqs. (12.193) and (12.195) gives rise to

$$e^{x}+e^{-x}=\sum_{i=0}^{\infty}\frac{x^{2i}}{\left(2i\right)!}+\sum_{i=0}^{\infty}\frac{x^{2i+1}}{\left(2i+1\right)!}+\sum_{i=0}^{\infty}\frac{x^{2i}}{\left(2i\right)!}-\sum_{i=0}^{\infty}\frac{x^{2i+1}}{\left(2i+1\right)!}, \tag{12.199}$$

where the second and last terms in the right-hand side cancel out to yield

$$e^{x}+e^{-x}=2\sum_{i=0}^{\infty}\frac{x^{2i}}{\left(2i\right)!}, \tag{12.200}$$

along with lumping of the remaining summations; a final division of both sides by 2 gives rise to

$$\cosh x\equiv\frac{e^{x}+e^{-x}}{2}=\sum_{i=0}^{\infty}\frac{x^{2i}}{\left(2i\right)!} \tag{12.201}$$

– in full agreement with the definition conveyed by Eq. (2.473), and where $x^{2i}$ being even guarantees that cosh x is also an even function as per Eq. (2.474). Since expansion of $e^{x}$ via Taylor's series exhibits an infinite radius of convergence, the same can be claimed for $e^{-x}$ and for any linear combination thereof – as is the case of either hyperbolic sine or cosine. With regard to Lagrange's remainder, upon truncation after n terms following Eq. (12.57), one finds that

$$R_{n}\left\{x\right\}\equiv\frac{d^{\,2\left(n+1\right)+1}\sinh x}{dx^{\,2\left(n+1\right)+1}}\bigg|_{x=\zeta}\frac{\left(x-0\right)^{2\left(n+1\right)+1}}{\left(2\left(n+1\right)+1\right)!}=\cosh\zeta\,\frac{x^{2n+3}}{\left(2n+3\right)!}\le\cosh x\,\frac{\left|x\right|^{2n+3}}{\left(2n+3\right)!} \tag{12.202}$$

for the hyperbolic sine conveyed by Eq. (12.198) with n even – obtained with the aid of Eq. (10.109), coupled to the realization that cosh x > cosh ζ > cosh 0 = 1 for every ζ ∈ [x, 0] or ζ ∈ [0, x], irrespective of x, as per Fig. 2.14a. When n is odd, Eqs. (10.111) and (12.57) support

$$R_{n}\left\{x\right\}=\sinh\zeta\,\frac{x^{2n+3}}{\left(2n+3\right)!}\le\sinh x\,\frac{\left|x\right|^{2n+3}}{\left(2n+3\right)!} \tag{12.203}$$

in view of sinh x > sinh ζ > sinh 0 = 0 for every ζ ∈ [x, 0] or ζ ∈ [0, x], whatever x is chosen – again apparent in Fig. 2.14a. One similarly obtains

$$R_{n}\left\{x\right\}\equiv\frac{d^{\,2\left(n+1\right)}\cosh x}{dx^{\,2\left(n+1\right)}}\bigg|_{x=\zeta}\frac{\left(x-0\right)^{2\left(n+1\right)}}{\left(2\left(n+1\right)\right)!}=\sinh\zeta\,\frac{x^{2n+2}}{\left(2n+2\right)!}\le\sinh x\,\frac{\left|x\right|^{2n+2}}{\left(2n+2\right)!} \tag{12.204}$$

pertaining to the hyperbolic cosine with n even, and likewise

$$R_{n}\left\{x\right\}=\cosh\zeta\,\frac{x^{2n+2}}{\left(2n+2\right)!}\le\cosh x\,\frac{\left|x\right|^{2n+2}}{\left(2n+2\right)!} \tag{12.205}$$

when n is odd – once again for every value of x, irrespective of magnitude or sign, and following similar arguments with regard to absolute value and location of ζ.

12.2.1.3 Logarithmic Function

A MacLaurin’s series for the logarithm does not exist, since ln 0+ = −∞; therefore, one normally resorts to Taylor’s expansion around unity, viz. ∞

ln x = ln x

x=1

+

d i ln x i i = 1 dx

x=1

x− 1 i

i

12 206

stemming from Eq. (12.86), where a = 1 and f {x} = ln x. Calculation of the derivatives in Eq. (12.206) unfolds ∞

ln x = ln 1 + i=1

−1

i+1

xi

i− 1

x− 1 i , x = 1 i i− 1

12 207

459

460

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

with the aid of Eqs. (10.40) and (10.126), coupled with splitting of i! as product of i by (i−1)!; replacement of x by unity, followed by cancellation of (i − 1)! between numerator and denominator under the summation sign support formal simplification to ∞

ln x = i=1

−1 i+1 x− 1 i

i

12 208

The major drawback of Eq. (12.208) is the need to calculate powers of a binomial containing x, rather than of x itself. An alternative (and much more frequent) approach entails expansion of logarithm of x + 1 via MacLaurin’s series, according to ∞

ln x + 1 = ln x + 1

x=0

+ i=1

d i ln x + 1 dx i

xi , x=0 i

12 209

xi x = 0 i i −1

12 210

which may be transformed to ∞

−1

ln x + 1 = ln 0 + 1 +

i+1

i− 1

x+1

i=1

i

after applying the rule of differentiation of a logarithm, and then of the reciprocal of a function; algebraic simplification becomes possible after replacing x by 0 where indicated, i.e. ∞

ln x + 1 = ln 1 + i=1

− 1 i + 1 xi , i 1i

12 211

or else ∞

ln x + 1 = i=1

−1 i+1 i x i

12 212

When x > 0, the terms of the series in Eq. (12.212) are alternately positive and negative due (only) to the sign effect of (−1)i+1 – so one should resort to Eq. (12.95) to write

lim

d i + 1 ln x + 1 dx i + 1



i

x −0

−1

i+1

x=0

= lim

i+1

i

x+1

= lim



i

i+1



xi + 1 x=0

i+1 −1

i

i+2

i+2

i

1i + 1 i+1

x

,

12 213

i+1

xi + 1 ∞i+1

= lim i

based on Eq. (2.3) and the definition of factorial; inspection of the functional form of Eq. (12.213) indicates that convergence can only be assured when x < 1 – in which case xi + 1 0 and 11 i 0 as i ∞, i.e.

lim

i



d i + 1 ln x + 1 dx i + 1

x−0 x=0

i+1

i+1

,

=0 0 0, or b < n + 1 and x < 0, because (1 + x)b−n−1 > (1 + 0)b−n−1 = 1 under such circumstances – or by 0 when b < n + 1 and x > 0, or b > n + 1 and x < 0, since 1 = 1b−n−1 = (1 + 0)b−n−1 > (1 + x)b−n−1 in that case.
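As a numerical illustration of Eq. (12.212) and of its convergence condition |x| < 1, a minimal sketch (not part of the original text):

```python
import math

def ln1p_series(x, n_terms=200):
    """Partial sum of the MacLaurin expansion of ln(x + 1), Eq. (12.212):
    ln(x + 1) = sum_{i=1}^inf (-1)**(i+1) x**i / i, convergent for |x| < 1."""
    return sum((-1) ** (i + 1) * x ** i / i for i in range(1, n_terms + 1))

# inside the radius of convergence the partial sums approach math.log(1 + x)
for x in (-0.5, 0.25, 0.9):
    assert abs(ln1p_series(x) - math.log(1.0 + x)) < 1e-6
```

Near |x| = 1 the terms decay only as 1/i, so convergence becomes very slow, in line with the remainder estimate of Eq. (12.213).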

12.2.2 Euler's Infinite Product

Remember Eq. (2.565) relating sine to hyperbolic sine, i.e.

$$\sin x=\frac{e^{\iota x}-e^{-\iota x}}{2\iota}, \tag{12.327}$$

revisited with the aid of Eq. (2.472); in view of Neper's definition of exponential as per Eqs. (12.160) and (12.173), one may transform Eq. (12.327) to

$$\sin x=\frac{1}{2\iota}\lim_{n\to\infty}\left(\left(1+\frac{\iota x}{n}\right)^{n}-\left(1-\frac{\iota x}{n}\right)^{n}\right) \tag{12.328}$$

or, upon multiplication of both numerator and denominator by ι,

$$\sin x=-\frac{\iota}{2}\lim_{n\to\infty}\left(\left(1+\frac{\iota x}{n}\right)^{n}-\left(1-\frac{\iota x}{n}\right)^{n}\right) \tag{12.329}$$

– since ι² = −1. Newton's expansion of the nth power of each binomial in Eq. (12.329) is now in order, according to

$$\sin x=-\frac{\iota}{2}\lim_{n\to\infty}\left(\sum_{k=0}^{n}\binom{n}{k}\left(\frac{\iota x}{n}\right)^{k}-\sum_{k=0}^{n}\binom{n}{k}\left(-\frac{\iota x}{n}\right)^{k}\right)=-\frac{\iota}{2}\lim_{n\to\infty}\left(\sum_{k=0}^{n}\binom{n}{k}\left(\frac{\iota x}{n}\right)^{k}-\sum_{k=0}^{n}\left(-1\right)^{k}\binom{n}{k}\left(\frac{\iota x}{n}\right)^{k}\right) \tag{12.330}$$

in view of Eq. (12.318) – where the summations may be collapsed to give

$$\sin x=-\frac{\iota}{2}\lim_{n\to\infty}\sum_{k=0}^{n}\left(1-\left(-1\right)^{k}\right)\binom{n}{k}\iota^{k}\left(\frac{x}{n}\right)^{k} \tag{12.331}$$

Inspection of Eq. (12.331) indicates that the terms in parenthesis under the summation, corresponding to even powers of −1, cancel with each other, while they add up to 2 for odd k; under such circumstances, $\iota^{k}=\iota$ or $\iota^{k}=-\iota$. After factoring −2ι out, Eq. (12.331) becomes

$$\sin x=-\frac{\iota}{2}\left(-2\iota\right)\lim_{n\to\infty}\sum_{k=0}^{n}\left(-1\right)^{k-1}\binom{n}{2k+1}\left(\frac{x}{n}\right)^{2k+1}, \tag{12.332}$$

or else

$$\sin x=-\lim_{n\to\infty}\sum_{k=0}^{n}\left(-1\right)^{k+1}\binom{n}{2k+1}\left(\frac{x}{n}\right)^{2k+1} \tag{12.333}$$

again at the expense of ι² = −1 and $\left(-1\right)^{k+1}=\left(-1\right)^{k-1}\left(-1\right)^{2}=\left(-1\right)^{k-1}\cdot 1=\left(-1\right)^{k-1}$; for convenience, −1 and x/n may finally be factored out as

$$\sin x=x\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n}\left(-1\right)^{k}\binom{n}{2k+1}\left(\frac{x}{n}\right)^{2k} \tag{12.334}$$

or, equivalently,

$$\sin x=x\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n}\left(-1\right)^{k}\binom{n}{2k+1}\left(\left(\frac{x}{n}\right)^{2}\right)^{k} \tag{12.335}$$

– thus leaving an (endless) polynomial in (x/n)² in the right-hand side. The 2n + 1 roots of the left-hand side of Eq. (12.335), i.e. abiding to

$$\sin x=0, \tag{12.336}$$

are given by

$$x=k\pi;\quad k=-n,-n+1,\ldots,-2,-1,0,1,2,\ldots,n-1,n \tag{12.337}$$

owing to the periodic nature of the sine function as per Eq. (2.292) – which may be rewritten as

$$\frac{x}{n}=\frac{k\pi}{n}, \tag{12.338}$$

after dividing both sides by n; the polynomial in the right-hand side of Eq. (12.335) can, in turn, be factored out as

$$\frac{x}{n}\sum_{k=0}^{n}\left(-1\right)^{k}\binom{n}{2k+1}\left(\left(\frac{x}{n}\right)^{2}\right)^{k}=C\left(\frac{x}{n}-0\right)\prod_{k=1}^{n}\left(\frac{x}{n}-\left(-\frac{k\pi}{n}\right)\right)\prod_{k=1}^{n}\left(\frac{x}{n}-\frac{k\pi}{n}\right)=C\left(\frac{x}{n}-0\right)\prod_{k=1}^{n}\left(\frac{x}{n}+\frac{k\pi}{n}\right)\prod_{k=1}^{n}\left(\frac{x}{n}-\frac{k\pi}{n}\right) \tag{12.339}$$

in general agreement with Eq. (2.182) – with C denoting a constant, and where x/n preceding the two extended products was rewritten as x/n − 0 to highlight the root corresponding to k = 0 in Eq. (12.337). Each set of (significant) positive and negative values of k in Eq. (12.339) may (for convenience) be lumped as

$$\begin{aligned}\frac{x}{n}\sum_{k=0}^{n}\left(-1\right)^{k}\binom{n}{2k+1}\left(\left(\frac{x}{n}\right)^{2}\right)^{k}&=C\,\frac{x}{n}\prod_{k=1}^{n}\left(\frac{k\pi}{n}+\frac{x}{n}\right)\left(-1\right)^{n}\prod_{k=1}^{n}\left(\frac{k\pi}{n}-\frac{x}{n}\right)\\&=A\,\frac{x}{n}\prod_{k=1}^{n}\left(\frac{k\pi}{n}+\frac{x}{n}\right)\left(\frac{k\pi}{n}-\frac{x}{n}\right), \end{aligned} \tag{12.340}$$

with the aid of a new constant A ≡ (−1)ⁿC; Eq. (12.340) degenerates to

$$\frac{x}{n}\sum_{k=0}^{n}\left(-1\right)^{k}\binom{n}{2k+1}\left(\left(\frac{x}{n}\right)^{2}\right)^{k}=A\,\frac{x}{n}\prod_{k=1}^{n}\left(\left(\frac{k\pi}{n}\right)^{2}-\left(\frac{x}{n}\right)^{2}\right) \tag{12.341}$$

in view of the product of two conjugate binomials – with dependence on (x/n)² now appearing consistently in both sides. In the eighteenth century, Euler proposed an elegant (yet intuitive) replacement of (kπ/n)² by an alternative expression, viz.

$$\left(\frac{k\pi}{n}\right)^{2}\approx\frac{1-\cos\dfrac{2k\pi}{n}}{1+\cos\dfrac{2k\pi}{n}}, \tag{12.342}$$

or else

$$\xi^{2}\approx\frac{1-\cos 2\xi}{1+\cos 2\xi}, \tag{12.343}$$

as long as

$$\xi\equiv\frac{k\pi}{n}; \tag{12.344}$$

despite his lack of justification to do so – except that exact validity thereof requires a large n; the introduction of the concept of limit by Cauchy in the nineteenth century eventually accounted for such a step. In fact, after realizing that

$$\lim_{n\to\infty}\xi=\lim_{n\to\infty}\frac{k\pi}{n}=0 \tag{12.345}$$

as per Eq. (12.344), one finds that

$$\lim_{\xi\to 0}\xi^{2}=\lim_{n\to\infty}\xi^{2}=\left(\lim_{n\to\infty}\xi\right)^{2}=0^{2}=0 \tag{12.346}$$

based on Eqs. (9.93) and (12.345). On the other hand,

$$\lim_{\xi\to 0}\frac{1-\cos 2\xi}{1+\cos 2\xi}=\frac{1-\cos 0}{1+\cos 0}=\frac{1-1}{1+1}=\frac{0}{2}=0 \tag{12.347}$$

following Eq. (12.343); comparative inspection of Eqs. (12.346) and (12.347) then supports

$$\lim_{\xi\to 0}\xi^{2}=\lim_{\xi\to 0}\frac{1-\cos 2\xi}{1+\cos 2\xi} \tag{12.348}$$

With regard to the first-order derivative of ξ², one realizes that

$$\frac{d\xi^{2}}{d\xi}=2\xi \tag{12.349}$$

based on Eq. (10.29), as well as

$$\frac{d}{d\xi}\frac{1-\cos 2\xi}{1+\cos 2\xi}=\frac{2\sin 2\xi}{1+\cos 2\xi}-\frac{\left(1-\cos 2\xi\right)\left(-2\sin 2\xi\right)}{\left(1+\cos 2\xi\right)^{2}}, \tag{12.350}$$

in view of Eqs. (10.48) and (10.119), as well as Eq. (10.205), and then also of Eq. (10.139); after factoring 2 sin 2ξ/(1 + cos 2ξ) out, Eq. (12.350) turns to

$$\frac{d}{d\xi}\frac{1-\cos 2\xi}{1+\cos 2\xi}=\frac{2\sin 2\xi}{1+\cos 2\xi}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right) \tag{12.351}$$


In the neighborhood of ξ = 0, Eq. (12.349) is driven by

$$\lim_{\xi\to 0}\frac{d\xi^{2}}{d\xi}=2\cdot 0=0, \tag{12.352}$$

as enforced by Eq. (9.108); by the same token, Eq. (12.351) becomes

$$\lim_{\xi\to 0}\frac{d}{d\xi}\frac{1-\cos 2\xi}{1+\cos 2\xi}=\frac{2\sin 0}{1+\cos 0}\left(1+\frac{1-\cos 0}{1+\cos 0}\right)=\frac{0}{2}\left(1+\frac{0}{2}\right)=0, \tag{12.353}$$

and consequently

$$\lim_{\xi\to 0}\frac{d\xi^{2}}{d\xi}=\lim_{\xi\to 0}\frac{d}{d\xi}\frac{1-\cos 2\xi}{1+\cos 2\xi} \tag{12.354}$$

from Eqs. (12.352) and (12.353). For the second-order derivative, one has

$$\frac{d^{2}\xi^{2}}{d\xi^{2}}=\frac{d\,2\xi}{d\xi}=2 \tag{12.355}$$

stemming from Eq. (12.349) and the definition proper, whereas Eq. (12.351) supports

$$\frac{d^{2}}{d\xi^{2}}\frac{1-\cos 2\xi}{1+\cos 2\xi}=\frac{d}{d\xi}\left(\frac{2\sin 2\xi}{1+\cos 2\xi}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\right)=\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\frac{d}{d\xi}\frac{2\sin 2\xi}{1+\cos 2\xi}+\frac{2\sin 2\xi}{1+\cos 2\xi}\,\frac{d}{d\xi}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right) \tag{12.356}$$

at the expense of the rule of differentiation of a product; Eq. (12.356) degenerates to

$$\frac{d^{2}}{d\xi^{2}}\frac{1-\cos 2\xi}{1+\cos 2\xi}=\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\left(\frac{2\cdot 2\cos 2\xi}{1+\cos 2\xi}-\frac{2\sin 2\xi\left(-2\sin 2\xi\right)}{\left(1+\cos 2\xi\right)^{2}}\right)+\frac{2\sin 2\xi}{1+\cos 2\xi}\,\frac{2\sin 2\xi}{1+\cos 2\xi}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right) \tag{12.357}$$

with the aid of Eq. (12.351) – which breaks down to

$$\frac{d^{2}}{d\xi^{2}}\frac{1-\cos 2\xi}{1+\cos 2\xi}=\frac{4}{1+\cos 2\xi}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\frac{\cos 2\xi+\cos^{2}2\xi+\sin^{2}2\xi}{1+\cos 2\xi}+\left(\frac{2\sin 2\xi}{1+\cos 2\xi}\right)^{2}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right) \tag{12.358}$$

upon elimination of the second parenthesis, complemented by factoring out of 4/(1 + cos 2ξ) and 1/(1 + cos 2ξ). Equation (12.358) may be further transformed to

$$\frac{d^{2}}{d\xi^{2}}\frac{1-\cos 2\xi}{1+\cos 2\xi}=\frac{4}{1+\cos 2\xi}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\frac{1+\cos 2\xi}{1+\cos 2\xi}+\left(\frac{2\sin 2\xi}{1+\cos 2\xi}\right)^{2}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right) \tag{12.359}$$

in view of Eq. (2.442), which simplifies to

$$\frac{d^{2}}{d\xi^{2}}\frac{1-\cos 2\xi}{1+\cos 2\xi}=\left(\frac{4}{1+\cos 2\xi}+\frac{4\sin^{2}2\xi}{\left(1+\cos 2\xi\right)^{2}}\right)\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right) \tag{12.360}$$

after cancelling out common factors; a final factoring out of 4/(1 + cos 2ξ) and 1 + (1 − cos 2ξ)/(1 + cos 2ξ) permits simplification to

$$\frac{d^{2}}{d\xi^{2}}\frac{1-\cos 2\xi}{1+\cos 2\xi}=\frac{4}{1+\cos 2\xi}\left(1+\frac{\sin^{2}2\xi}{1+\cos 2\xi}\right)\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right) \tag{12.361}$$

When ξ approaches zero, Eq. (12.361) reduces to

$$\lim_{\xi\to 0}\frac{d^{2}}{d\xi^{2}}\frac{1-\cos 2\xi}{1+\cos 2\xi}=\frac{4}{1+\cos 0}\left(1+\frac{\sin^{2}0}{1+\cos 0}\right)\left(1+\frac{1-\cos 0}{1+\cos 0}\right)=\frac{4}{1+1}\left(1+\frac{0^{2}}{1+1}\right)\left(1+\frac{1-1}{1+1}\right)=2 \tag{12.362}$$

as per Eq. (9.108) – thus supporting

$$\lim_{\xi\to 0}\frac{d^{2}\xi^{2}}{d\xi^{2}}=\lim_{\xi\to 0}\frac{d^{2}}{d\xi^{2}}\frac{1-\cos 2\xi}{1+\cos 2\xi}, \tag{12.363}$$

upon comparative inspection of Eqs. (12.355) and (12.362). A final exercise of differentiation of Eq. (12.355) yields merely

$$\frac{d^{3}\xi^{2}}{d\xi^{3}}=\frac{d\,2}{d\xi}=0, \tag{12.364}$$

while a similar rationale applied to Eq. (12.361) produces

$$\begin{aligned}\frac{d^{3}}{d\xi^{3}}\frac{1-\cos 2\xi}{1+\cos 2\xi}&=\frac{d}{d\xi}\left(\frac{4}{1+\cos 2\xi}\left(1+\frac{\sin^{2}2\xi}{1+\cos 2\xi}\right)\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\right)\\&=\left(1+\frac{\sin^{2}2\xi}{1+\cos 2\xi}\right)\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\frac{d}{d\xi}\frac{4}{1+\cos 2\xi}\\&\quad+\frac{4}{1+\cos 2\xi}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\frac{d}{d\xi}\left(1+\frac{\sin^{2}2\xi}{1+\cos 2\xi}\right)\\&\quad+\frac{4}{1+\cos 2\xi}\left(1+\frac{\sin^{2}2\xi}{1+\cos 2\xi}\right)\frac{d}{d\xi}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\end{aligned} \tag{12.365}$$


stemming from Eq. (10.124); use of the rules of differentiation of a sum, a product, a quotient, and a power, together with insertion of Eq. (12.351), transform Eq. (12.365) to

$$\begin{aligned}\frac{d^{3}}{d\xi^{3}}\frac{1-\cos 2\xi}{1+\cos 2\xi}&=\left(1+\frac{\sin^{2}2\xi}{1+\cos 2\xi}\right)\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\left(-\frac{4\left(-2\sin 2\xi\right)}{\left(1+\cos 2\xi\right)^{2}}\right)\\&\quad+\frac{4}{1+\cos 2\xi}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\left(\frac{2\sin 2\xi\,2\cos 2\xi}{1+\cos 2\xi}-\frac{\sin^{2}2\xi\left(-2\sin 2\xi\right)}{\left(1+\cos 2\xi\right)^{2}}\right)\\&\quad+\frac{4}{1+\cos 2\xi}\left(1+\frac{\sin^{2}2\xi}{1+\cos 2\xi}\right)\frac{2\sin 2\xi}{1+\cos 2\xi}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\end{aligned} \tag{12.366}$$

Condensation of factors alike, and factoring out of $\frac{2\sin 2\xi}{1+\cos 2\xi}$ in the middle term, support transformation of Eq. (12.366) to

$$\begin{aligned}\frac{d^{3}}{d\xi^{3}}\frac{1-\cos 2\xi}{1+\cos 2\xi}&=\frac{8\sin 2\xi}{\left(1+\cos 2\xi\right)^{2}}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\left(1+\frac{\sin^{2}2\xi}{1+\cos 2\xi}\right)\\&\quad+\frac{8\sin 2\xi}{\left(1+\cos 2\xi\right)^{2}}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\left(2\cos 2\xi+\frac{\sin^{2}2\xi}{1+\cos 2\xi}\right)\\&\quad+\frac{8\sin 2\xi}{\left(1+\cos 2\xi\right)^{2}}\left(1+\frac{\sin^{2}2\xi}{1+\cos 2\xi}\right)\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right);\end{aligned} \tag{12.367}$$

a final factoring out of $\frac{8\sin 2\xi}{\left(1+\cos 2\xi\right)^{2}}$ and $1+\frac{1-\cos 2\xi}{1+\cos 2\xi}$, coupled with lumping of terms alike, permits simplification to

$$\frac{d^{3}}{d\xi^{3}}\frac{1-\cos 2\xi}{1+\cos 2\xi}=\frac{8\sin 2\xi}{\left(1+\cos 2\xi\right)^{2}}\left(1+\frac{1-\cos 2\xi}{1+\cos 2\xi}\right)\left(2\left(1+\cos 2\xi\right)+\frac{3\sin^{2}2\xi}{1+\cos 2\xi}\right) \tag{12.368}$$

When ξ tends to zero, Eq. (12.368) tends to

$$\lim_{\xi\to 0}\frac{d^{3}}{d\xi^{3}}\frac{1-\cos 2\xi}{1+\cos 2\xi}=\frac{8\sin 0}{\left(1+\cos 0\right)^{2}}\left(1+\frac{1-\cos 0}{1+\cos 0}\right)\left(2\left(1+\cos 0\right)+\frac{3\sin^{2}0}{1+\cos 0}\right)=\frac{8\cdot 0}{\left(1+1\right)^{2}}\left(1+\frac{1-1}{1+1}\right)\left(2\left(1+1\right)+\frac{3\cdot 0^{2}}{1+1}\right)=0; \tag{12.369}$$

one again concludes that

$$\lim_{\xi\to 0}\frac{d^{3}\xi^{2}}{d\xi^{3}}=\lim_{\xi\to 0}\frac{d^{3}}{d\xi^{3}}\frac{1-\cos 2\xi}{1+\cos 2\xi}, \tag{12.370}$$

after inspecting Eq. (12.369) vis-à-vis with Eq. (12.364). Since (1 − cos 2ξ)/(1 + cos 2ξ) and ξ² coincide up to the third-order derivative (at least) – as specifically per Eqs. (12.354), (12.363), and (12.370) – and there is no reason (besides cumbersomeness) why this process cannot be carried out over and over again with identical findings, Taylor's theorem as conveyed by Eq. (12.86) may be invoked to corroborate Eq. (12.343) based on coincidence of differential coefficients – in which case Eq. (12.341) will appear as

$$\frac{x}{n}\sum_{k=0}^{n}\left(-1\right)^{k}\binom{n}{2k+1}\left(\left(\frac{x}{n}\right)^{2}\right)^{k}=A\,\frac{x}{n}\prod_{k=1}^{n}\left(\frac{1-\cos\dfrac{2k\pi}{n}}{1+\cos\dfrac{2k\pi}{n}}-\left(\frac{x}{n}\right)^{2}\right), \tag{12.371}$$

with the aid of Eq. (12.342); multiplication and division of every factor in the right-hand side by $\dfrac{1-\cos\frac{2k\pi}{n}}{1+\cos\frac{2k\pi}{n}}$ (k = 1, 2, …) allows final transformation of Eq. (12.371) to

$$\frac{x}{n}\sum_{k=0}^{n}\left(-1\right)^{k}\binom{n}{2k+1}\left(\left(\frac{x}{n}\right)^{2}\right)^{k}=B\,\frac{x}{n}\prod_{k=1}^{n}\left(1-\frac{1+\cos\dfrac{2k\pi}{n}}{1-\cos\dfrac{2k\pi}{n}}\left(\frac{x}{n}\right)^{2}\right) \tag{12.372}$$

– as long as constant B abides to

$$B\equiv A\prod_{k=1}^{n}\frac{1-\cos\dfrac{2k\pi}{n}}{1+\cos\dfrac{2k\pi}{n}} \tag{12.373}$$

Upon insertion of Eq. (12.372), one obtains

$$\sin x=B\lim_{n\to\infty}\frac{x}{n}\prod_{k=1}^{n}\left(1-\frac{1+\cos\dfrac{2k\pi}{n}}{1-\cos\dfrac{2k\pi}{n}}\left(\frac{x}{n}\right)^{2}\right) \tag{12.374}$$

from Eq. (12.335) – where division of both sides by x unfolds

$$\frac{\sin x}{x}=\frac{B}{n}\lim_{n\to\infty}\prod_{k=1}^{n}\left(1-\frac{1+\cos\dfrac{2k\pi}{n}}{1-\cos\dfrac{2k\pi}{n}}\left(\frac{x}{n}\right)^{2}\right); \tag{12.375}$$

when x tends to zero in particular, Eq. (12.375) becomes

$$\lim_{x\to 0}\frac{\sin x}{x}=\frac{B}{n}\lim_{n\to\infty}\prod_{k=1}^{n}\lim_{x\to 0}\left(1-\frac{1+\cos\dfrac{2k\pi}{n}}{1-\cos\dfrac{2k\pi}{n}}\left(\frac{x}{n}\right)^{2}\right), \tag{12.376}$$

where combination with Eq. (9.134) gives rise to

$$1=\frac{B}{n}\lim_{n\to\infty}\prod_{k=1}^{n}1=\frac{B}{n}\lim_{n\to\infty}1^{n}=\frac{B}{n} \tag{12.377}$$

– thus implying that Eq. (12.374) should eventually take the form

$$\sin x=x\lim_{n\to\infty}\prod_{k=1}^{n}\left(1-\frac{1+\cos\dfrac{2k\pi}{n}}{1-\cos\dfrac{2k\pi}{n}}\left(\frac{x}{n}\right)^{2}\right) \tag{12.378}$$

At this stage, it is useful to revisit Eq. (2.427) for n = 1, viz.

$$\cos 2x=\left(-1\right)^{0}\binom{2}{0}\sin^{0}x\cos^{2}x+\left(-1\right)^{1}\binom{2}{2}\sin^{2}x\cos^{0}x, \tag{12.379}$$

or else

$$\cos 2x=\cos^{2}x-\sin^{2}x \tag{12.380}$$

after recalling that $\sin^{0}x=\cos^{0}x=1$ and $\binom{2}{0}=\binom{2}{2}=1$; insertion of Eq. (2.442), previously solved for cos²x, accordingly produces

$$\cos 2x=1-\sin^{2}x-\sin^{2}x=1-2\sin^{2}x, \tag{12.381}$$

thus implying

$$1-\cos 2x=2\sin^{2}x \tag{12.382}$$

– or, in alternative,

$$\cos 2x=\cos^{2}x-\left(1-\cos^{2}x\right)=2\cos^{2}x-1 \tag{12.383}$$

should the fundamental theorem of trigonometry be solved for sin²x and inserted in Eq. (12.380) afterward, thus yielding

$$1+\cos 2x=2\cos^{2}x \tag{12.384}$$

Equations (12.382) and (12.384) support transformation of Eq. (12.378) to

$$\sin x=x\lim_{n\to\infty}\prod_{k=1}^{n}\left(1-\frac{2\cos^{2}\dfrac{k\pi}{n}}{2\sin^{2}\dfrac{k\pi}{n}}\,\frac{x^{2}}{n^{2}}\right), \tag{12.385}$$

where cancellation of common constants between numerator and denominator, and division thereof by $\cos^{2}\frac{k\pi}{n}$, complemented with the definition of tangent as per Eq. (2.299), give rise to

$$\sin x=x\lim_{n\to\infty}\prod_{k=1}^{n}\left(1-\frac{x^{2}}{n^{2}\tan^{2}\dfrac{k\pi}{n}}\right); \tag{12.386}$$

one may now to advantage multiply and divide the second term in parenthesis by k²π², i.e.

$$\sin x=x\lim_{n\to\infty}\prod_{k=1}^{n}\left(1-\frac{x^{2}}{k^{2}\pi^{2}}\,\frac{k^{2}\pi^{2}}{n^{2}\tan^{2}\dfrac{k\pi}{n}}\right), \tag{12.387}$$

which prompts algebraic rearrangement to

$$\sin x=x\lim_{n\to\infty}\prod_{k=1}^{n}\left(1-\frac{\dfrac{x^{2}}{k^{2}\pi^{2}}}{\left(\dfrac{\tan\dfrac{k\pi}{n}}{\dfrac{k\pi}{n}}\right)^{2}}\right) \tag{12.388}$$

Since only the asymptotic behavior at large n is of interest here, one may emphasize this point by rewriting Eq. (12.388) as

$$\sin x=x\prod_{k=1}^{\infty}\left(1-\frac{\dfrac{x^{2}}{k^{2}\pi^{2}}}{\left(\displaystyle\lim_{\frac{k\pi}{n}\to 0}\dfrac{\tan\dfrac{k\pi}{n}}{\dfrac{k\pi}{n}}\right)^{2}}\right), \tag{12.389}$$

since $\frac{k\pi}{n}\to 0$ when $n\to\infty$; furthermore, one realizes that

$$\lim_{x\to 0}\frac{\tan x}{x}=\lim_{x\to 0}\frac{\sin x}{x}\,\frac{1}{\cos x}=\lim_{x\to 0}\frac{\sin x}{x}\,\lim_{x\to 0}\frac{1}{\cos x}=1\cdot\frac{1}{\cos 0}=1\cdot\frac{1}{1}=1 \tag{12.390}$$

with the aid of Eqs. (2.299), (9.87), and (9.137). In view of Eq. (12.390), one can simplify Eq. (12.389) to

$$\sin x=x\prod_{k=1}^{\infty}\left(1-\left(\frac{x}{k\pi}\right)^{2}\right) \tag{12.391}$$

after setting x = kπ/n in Eq. (12.390); if x is, in turn, replaced by πx, then Eq. (12.391) acquires the form

$$\sin\pi x=\pi x\prod_{k=1}^{\infty}\left(1-\left(\frac{\pi x}{k\pi}\right)^{2}\right), \tag{12.392}$$

or else

$$\sin\pi x=\pi x\prod_{k=1}^{\infty}\left(1-\left(\frac{x}{k}\right)^{2}\right) \tag{12.393}$$

after cancelling π² between numerator and denominator of every factor. Another quite useful application of Eq. (12.391) is toward the formal calculation of the (irrational) number π – after setting x = π/2, viz.

$$\sin\frac{\pi}{2}=\frac{\pi}{2}\prod_{k=1}^{\infty}\left(1-\left(\frac{\pi/2}{k\pi}\right)^{2}\right); \tag{12.394}$$

after recalling the value of the sine in the left-hand side, one obtains

$$1=\frac{\pi}{2}\prod_{k=1}^{\infty}\left(1-\left(\frac{1}{2k}\right)^{2}\right)=\frac{\pi}{2}\prod_{k=1}^{\infty}\left(1-\frac{1}{4k^{2}}\right)=\frac{\pi}{2}\prod_{k=1}^{\infty}\frac{4k^{2}-1}{4k^{2}}=\frac{\pi}{2}\prod_{k=1}^{\infty}\frac{\left(2k-1\right)\left(2k+1\right)}{2k\cdot 2k}=\frac{\pi}{2}\prod_{k=1}^{\infty}\frac{2k-1}{2k}\,\frac{2k+1}{2k} \tag{12.395}$$

following performance of a sequence of elementary algebraic manipulations – where isolation of π finally unfolds

$$\pi=2\lim_{n\to\infty}\prod_{k=1}^{n}\frac{2k}{2k-1}\prod_{k=1}^{n}\frac{2k}{2k+1}=3.141\,592\,654\ldots \tag{12.396}$$

Although its exact value possesses an infinite number of (nonperiodic) decimals, due to its irrational nature, one may go as high in n, as upper limit of the extended products in Eq. (12.396), as necessary for an intended number of (fixed) significant digits.
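Both the infinite product for the sine, Eq. (12.391), and the resulting product for π, Eq. (12.396), can be evaluated by truncation; the sketch below (an illustration added here, not part of the original text) shows their slow but steady convergence:

```python
import math

def sin_product(x, n=100_000):
    """Truncated Euler product, Eq. (12.391): sin x = x prod_k (1 - (x/(k pi))**2)."""
    p = x
    for k in range(1, n + 1):
        p *= 1.0 - (x / (k * math.pi)) ** 2
    return p

def wallis_pi(n=100_000):
    """Truncated product of Eq. (12.396): pi = 2 prod_k (2k)**2 / ((2k-1)(2k+1))."""
    p = 1.0
    for k in range(1, n + 1):
        p *= (2.0 * k) ** 2 / ((2.0 * k - 1.0) * (2.0 * k + 1.0))
    return 2.0 * p

assert abs(sin_product(1.0) - math.sin(1.0)) < 1e-4
assert abs(wallis_pi() - math.pi) < 1e-4   # error decays only as O(1/n)
```

The O(1/n) decay of the truncation error illustrates the closing remark above: each extra significant digit of π requires roughly a tenfold increase in the upper limit n.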

12.3 Gamma Function and Factorial

Remember that the factorial of a positive integer, n, is, by definition, the product of all positive integers up to n, according to

$$n!=\prod_{k=1}^{n}k;\quad n=1,2,\ldots; \tag{12.397}$$

for mathematical convenience, the factorial of zero is set as

$$0!\equiv 1 \tag{12.398}$$

The problem of extending the factorial concept to a noninteger argument, say, x, was first considered by Daniel Bernoulli and Christian Goldbach in the 1720s, and solved by the end of said decade by Leonhard Euler – who originally proposed an infinite product, viz.

$$x!=\prod_{k=1}^{\infty}\frac{\left(1+\dfrac{1}{k}\right)^{x}}{1+\dfrac{x}{k}}, \tag{12.399}$$

with x! eventually labeled as gamma function of x. It finds application in various fields of process engineering – namely, in description of cumulative processes (i.e. requiring integration) that evolve with x as per a law of the type f{x} exp{−g{x}} – besides constituting the basis for the gamma statistical distribution.
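Euler's product in Eq. (12.399) can be probed numerically; the sketch below (not from the original text) compares a truncated product with `math.gamma`, which returns Γ{x + 1} = x! for the arguments used:

```python
import math

def euler_factorial(x, n=500_000):
    """Truncated form of Euler's infinite product for x!, Eq. (12.399):
    x! = prod_{k>=1} (1 + 1/k)**x / (1 + x/k)."""
    p = 1.0
    for k in range(1, n + 1):
        p *= (1.0 + 1.0 / k) ** x / (1.0 + x / k)
    return p

# the product reproduces the ordinary factorial at integer x ...
assert abs(euler_factorial(3.0) - 6.0) < 1e-4
# ... and interpolates it smoothly in between, since x! = Gamma(x + 1)
assert abs(euler_factorial(0.5) - math.gamma(1.5)) < 1e-4
```

The k-th factor departs from unity only as (x² − x)/(2k²), so the truncated product converges, albeit slowly, for every x other than a negative integer.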

12.3.1 Integral Definition and Major Features

In mathematics, the gamma function, Γ{x}, is an extension of the factorial function to real numbers – with its argument shifted by unity, according to

$$\Gamma\left\{n\right\}\equiv\left(n-1\right)!, \tag{12.400}$$

complemented with

$$\Gamma\left\{x\right\}\equiv\int_{0}^{\infty}\xi^{x-1}e^{-\xi}\,d\xi, \tag{12.401}$$

so as to cover all (remaining) noninteger values of x; for convergence of said integral, x must lie above zero. A plot of said function is available as Fig. 12.4. Besides the exact match between the gamma function of x and the factorial of x − 1 when x is a (positive) integer as per Eq. (12.400), one should outline the existence of a vertical asymptote at x = 0 – complemented with a monotonically increasing behavior at large values of x. Upon integration by parts as dictated by Eq. (11.177), one obtains

$$\Gamma\left\{x+1\right\}\equiv\int_{0}^{\infty}\xi^{x}e^{-\xi}\,d\xi=\left[\xi^{x}\left(-e^{-\xi}\right)\right]_{0}^{\infty}-\int_{0}^{\infty}x\,\xi^{x-1}\left(-e^{-\xi}\right)d\xi \tag{12.402}$$

from Eq. (12.401), which degenerates to

$$\Gamma\left\{x+1\right\}=\left(\frac{\xi^{x}}{e^{\xi}}\bigg|_{\xi=0}-\frac{\xi^{x}}{e^{\xi}}\bigg|_{\xi\to\infty}\right)+x\int_{0}^{\infty}\xi^{x-1}e^{-\xi}\,d\xi \tag{12.403}$$

Figure 12.4 Variation of gamma function, Γ (—), with its continuous argument, x, and of factorial (○) with its integer argument, n − 1.

also with the aid of Eq. (11.104); since

$$\frac{\xi^{x}}{e^{\xi}}\bigg|_{\xi\to\infty}=\frac{\infty^{x}}{e^{\infty}}=\frac{\infty}{\infty} \tag{12.404}$$

is obtained following direct use of the theorems on limits, some way should be found to circumvent formal appearance of the said unknown quantity. Toward this deed, one should first realize that m ≤ x ≤ m + 1, valid for any real x (with m denoting a convenient integer), implies

$$\xi^{m}\le\xi^{x}\le\xi^{m+1} \tag{12.405}$$

after taking exponentials of all sides – as long as ξ > 1; consequently,

$$\frac{\xi^{m}}{\displaystyle\sum_{i=0}^{\infty}\frac{\xi^{i}}{i!}}\le\frac{\xi^{x}}{\displaystyle\sum_{i=0}^{\infty}\frac{\xi^{i}}{i!}}\le\frac{\xi^{m+1}}{\displaystyle\sum_{i=0}^{\infty}\frac{\xi^{i}}{i!}} \tag{12.406}$$

once all sides have been divided by $\sum_{i=0}^{\infty}\xi^{i}/i!>0$. In view of Eq. (12.149), one may rewrite Eq. (12.406) as

$$\frac{\xi^{m}}{\displaystyle\sum_{i=0}^{\infty}\frac{\xi^{i}}{i!}}\le\frac{\xi^{x}}{e^{\xi}}\le\frac{\xi^{m+1}}{\displaystyle\sum_{i=0}^{\infty}\frac{\xi^{i}}{i!}}, \tag{12.407}$$

which implies, in turn,

$$\lim_{\xi\to\infty}\frac{\xi^{m}}{\displaystyle\sum_{i=0}^{\infty}\frac{\xi^{i}}{i!}}\le\lim_{\xi\to\infty}\frac{\xi^{x}}{e^{\xi}}\le\lim_{\xi\to\infty}\frac{\xi^{m+1}}{\displaystyle\sum_{i=0}^{\infty}\frac{\xi^{i}}{i!}} \tag{12.408}$$

in agreement with Eq. (9.121); since the degree of the polynomial in denominator (i.e. ∞) lies obviously above m + 1 > m, Eq. (12.122) may be invoked to write

$$0\le\lim_{\xi\to\infty}\frac{\xi^{x}}{e^{\xi}}\le 0, \tag{12.409}$$

compatible only with

$$\lim_{\xi\to\infty}\frac{\xi^{x}}{e^{\xi}}=0 \tag{12.410}$$

Given Eq. (12.410), one may redo Eq. (12.403) to

$$\Gamma\left\{x+1\right\}=\left(\frac{0^{x}}{e^{0}}-0\right)+x\int_{0}^{\infty}\xi^{x-1}e^{-\xi}\,d\xi=x\int_{0}^{\infty}\xi^{x-1}e^{-\xi}\,d\xi \tag{12.411}$$

because x is finite (and positive, so that $0^{x}=0$) – or else

$$\Gamma\left\{x+1\right\}=x\,\Gamma\left\{x\right\} \tag{12.412}$$

with the aid of Eq. (12.401). When x equals n, the recurrence relation labeled as Eq. (12.412) sequentially produces

$$\Gamma\left\{x+1\right\}\big|_{x=n}\equiv\Gamma\left\{n+1\right\}=n\,\Gamma\left\{n\right\}=n\left(n-1\right)\Gamma\left\{n-1\right\}=\cdots=n!\,\Gamma\left\{1\right\}; \tag{12.413}$$

on the other hand, Eq. (12.401) has it that

$$\Gamma\left\{x\right\}\big|_{x=1}=\int_{0}^{\infty}\xi^{1-1}e^{-\xi}\,d\xi=\int_{0}^{\infty}\xi^{0}e^{-\xi}\,d\xi=\int_{0}^{\infty}e^{-\xi}\,d\xi, \tag{12.414}$$

which integrates to

$$\Gamma\left\{1\right\}\equiv\Gamma\left\{x\right\}\big|_{x=1}=\left[-e^{-\xi}\right]_{0}^{\infty}=\left[e^{-\xi}\right]_{\infty}^{0}=e^{0}-e^{-\infty}=1-0=1 \tag{12.415}$$

with the aid of Eqs. (11.104) and (11.160) – so combination of Eqs. (12.413) and (12.415) retrieves Eq. (12.400), and accordingly confirms its validity as a particular case of Γ{x}. The property conveyed by Eq. (12.412) may also be taken advantage of to expand and define an increasing Pochhammer's operator, (a)ₖ – not to be confounded with that conveyed by Eq. (2.267); after setting

$$\left(a\right)_{k}\equiv\prod_{i=0}^{k-1}\left(a+i\right)=a\left(a+1\right)\left(a+2\right)\cdots\left(a+k-1\right), \tag{12.416}$$

one may proceed to multiplication and division of the right-hand side by Γ{a} according to

$$\left(a\right)_{k}=\frac{\left(a+k-1\right)\cdots\left(a+2\right)\left(a+1\right)a\,\Gamma\left\{a\right\}}{\Gamma\left\{a\right\}}=\frac{\Gamma\left\{a+k\right\}}{\Gamma\left\{a\right\}} \tag{12.417}$$

– where Eq. (12.412) was iteratively applied to the numerator of the right-hand side, Γ{a + k}, to first generate (a + k − 1)Γ{a + k − 1}, to next convert Γ{a + k − 1} into (a + k − 2)Γ{a + k − 2}, and so on, until converting Γ{a + 1} into aΓ{a}. The value of Γ{½} is often required, and may be obtained straight from Eq. (12.401) as

$$\Gamma\left\{x\right\}\big|_{x=\frac{1}{2}}=\int_{0}^{\infty}\xi^{\frac{1}{2}-1}e^{-\xi}\,d\xi=\int_{0}^{\infty}\xi^{-\frac{1}{2}}e^{-\xi}\,d\xi; \tag{12.418}$$

after defining the auxiliary variable ζ as

$$\zeta\equiv\sqrt{\xi}, \tag{12.419}$$

and thus

$$d\zeta=\frac{d\xi}{2\sqrt{\xi}}=\frac{1}{2}\,\xi^{-\frac{1}{2}}\,d\xi \tag{12.420}$$

– besides realizing that

$$\lim_{\xi\to\infty}\zeta=\infty, \tag{12.421}$$

one may transform Eq. (12.418) to

$$\Gamma\left\{x\right\}\big|_{x=\frac{1}{2}}=2\int_{0}^{\infty}e^{-\zeta^{2}}\,d\zeta \tag{12.422}$$

The integral in Eq. (12.422) relates directly to Gauss's error function of infinity, see Eq. (12.174), and is equal to $\sqrt{\pi}/2$ (to be proven later), so one may redo Eq. (12.422) to

$$\Gamma\left\{x\right\}\big|_{x=\frac{1}{2}}=\sqrt{\pi}; \tag{12.423}$$

the values of the gamma function for all other positive half-integer values may then be generated from Eq. (12.423), at the expense of Eq. (12.412). The functional relationship labeled as Eq. (12.412) is particularly useful to compute the derivative of the gamma function with regard to its argument – namely, via calculation of the derivatives of both sides, viz.

$$\frac{d\Gamma\left\{x+1\right\}}{dx}=\Gamma\left\{x\right\}+x\,\frac{d\Gamma\left\{x\right\}}{dx} \tag{12.424}$$

at the expense of Eq. (10.119); division of both sides by xΓ{x} unfolds

$$\frac{\dfrac{d\Gamma\left\{x+1\right\}}{dx}}{x\,\Gamma\left\{x\right\}}=\frac{1}{x}+\frac{\dfrac{d\Gamma\left\{x\right\}}{dx}}{\Gamma\left\{x\right\}} \tag{12.425}$$

or, after combination again with Eq. (12.412),

$$\frac{\dfrac{d\Gamma\left\{x+1\right\}}{dx}}{\Gamma\left\{x+1\right\}}=\frac{1}{x}+\frac{\dfrac{d\Gamma\left\{x\right\}}{dx}}{\Gamma\left\{x\right\}} \tag{12.426}$$

– which may, in turn, be condensed to

$$\frac{d\ln\Gamma\left\{x+1\right\}}{dx}=\frac{1}{x}+\frac{d\ln\Gamma\left\{x\right\}}{dx} \tag{12.427}$$

due to the rule of differentiation of a logarithm, see Eq. (10.40). Therefore, Eq. (12.427) consubstantiates a recursive relationship, which can be sequentially applied an extra i times to generate

$$\frac{d\ln\Gamma\left\{x+1\right\}}{dx}=\frac{1}{x}+\frac{1}{x-1}+\cdots+\frac{1}{x-i}+\frac{d\ln\Gamma\left\{x-i\right\}}{dx}; \tag{12.428}$$

when x is an integer, said expansion entails (at most) a finite number of terms, i.e.

$$\frac{d\ln\Gamma\left\{n+1\right\}}{dn}=\frac{1}{n}+\frac{1}{n-1}+\cdots+\frac{1}{2}+\frac{1}{1}+\frac{d\ln\Gamma\left\{1\right\}}{dn}, \tag{12.429}$$

or else

$$\frac{d\ln\Gamma\left\{n+1\right\}}{dn}=\sum_{k=1}^{n}\frac{1}{k}+\frac{\dfrac{d\Gamma\left\{1\right\}}{dn}}{\Gamma\left\{1\right\}} \tag{12.430}$$

– since d ln Γ{1} = dΓ{1}/Γ{1}, which breaks down to

$$\frac{d\ln\Gamma\left\{n+1\right\}}{dn}=\sum_{k=1}^{n}\frac{1}{k}+\frac{d\Gamma\left\{1\right\}}{dn} \tag{12.431}$$

in view of Eq. (12.415). The negative of the last term in Eq. (12.431) is known as Euler and Mascheroni's constant, γ – in honor of Swiss mathematician Leonhard Euler and Italian mathematician Lorenzo Mascheroni, both of the eighteenth century; in other words,

$$\gamma\equiv-\lim_{x\to 1}\frac{d\Gamma\left\{x\right\}}{dx} \tag{12.432}$$

In view of Eq. (12.432), one ends up with

$$\frac{\dfrac{d\Gamma\left\{n+1\right\}}{dn}}{\Gamma\left\{n+1\right\}}=\sum_{k=1}^{n}\frac{1}{k}-\gamma \tag{12.433}$$

stemming from Eq. (12.431); after recalling Eq. (12.400), one obtains

$$\frac{\dfrac{d\Gamma\left\{n+1\right\}}{dn}}{n!}=\sum_{k=1}^{n}\frac{1}{k}-\gamma \tag{12.434}$$

or, equivalently,

$$\frac{d\Gamma\left\{n+1\right\}}{dn}=\left(\sum_{k=1}^{n}\frac{1}{k}-\gamma\right)n! \tag{12.435}$$

upon multiplication of both sides by the factorial of n. On the other hand, Gauss' multiplication theorem (to be derived later) states that

$$\prod_{k=0}^{m-1}\Gamma\left\{x+\frac{k}{m}\right\}=\left(2\pi\right)^{\frac{m-1}{2}}m^{\frac{1}{2}-mx}\,\Gamma\left\{mx\right\}, \tag{12.436}$$

as long as x ≠ −k/m; if, in particular, m = 2, then Eq. (12.436) produces

$$\Gamma\left\{x\right\}\Gamma\left\{x+\frac{1}{2}\right\}=\left(2\pi\right)^{\frac{1}{2}}\,2^{\frac{1}{2}-2x}\,\Gamma\left\{2x\right\}, \tag{12.437}$$

where Γ{2x} can be isolated and factors algebraically rearranged to give

$$\Gamma\left\{2x\right\}=\frac{\Gamma\left\{x\right\}\Gamma\left\{x+\dfrac{1}{2}\right\}}{\sqrt{\pi}\,2^{1-2x}} \tag{12.438}$$

– oftentimes referred to as duplication formula. Should x be set equal to ½, Eq. (12.438) will degenerate to

$$\Gamma\left\{2\cdot\frac{1}{2}\right\}=\Gamma\left\{1\right\}=\frac{\Gamma\left\{\dfrac{1}{2}\right\}\Gamma\left\{\dfrac{1}{2}+\dfrac{1}{2}\right\}}{\sqrt{\pi}\,2^{1-2\cdot\frac{1}{2}}}=\frac{\Gamma\left\{\dfrac{1}{2}\right\}\Gamma\left\{1\right\}}{\sqrt{\pi}}, \tag{12.439}$$

or else

$$\frac{\Gamma\left\{\dfrac{1}{2}\right\}}{\sqrt{\pi}}=1 \tag{12.440}$$

after dropping Γ{1} from both sides – a result fully consistent with Eq. (12.423), as expected. The values of the gamma function for all other positive half-integral values may as well be generated from Eq. (12.438), after solving for Γ{x + ½} and setting x = n, viz.

$$\Gamma\left\{n+\frac{1}{2}\right\}=\sqrt{\pi}\,2^{1-2n}\,\frac{\Gamma\left\{2n\right\}}{\Gamma\left\{n\right\}}; \tag{12.441}$$

since n and 2n are both integers (by hypothesis), one may resort to Eq. (12.400) to obtain

$$\Gamma\left\{n+\frac{1}{2}\right\}=\sqrt{\pi}\,2^{1-2n}\,\frac{\left(2n-1\right)!}{\left(n-1\right)!} \tag{12.442}$$

from Eq. (12.441). For instance, n = 1 converts Eq. (12.442) to

$$\Gamma\left\{1+\frac{1}{2}\right\}=\Gamma\left\{\frac{3}{2}\right\}=\sqrt{\pi}\,2^{1-2}\,\frac{\left(2-1\right)!}{\left(1-1\right)!}=\sqrt{\pi}\,2^{-1}\,\frac{1!}{0!}=\frac{\sqrt{\pi}}{2} \tag{12.443}$$

– while Γ{5/2}, Γ{7/2}, … may be likewise generated with n = 2, 3, …, respectively.
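The closed form of Eq. (12.442) is easily cross-checked against the gamma function itself; a minimal sketch (added illustration, not in the original text):

```python
import math

def gamma_half(n):
    """Gamma at positive half-integers, Eq. (12.442):
    Gamma(n + 1/2) = sqrt(pi) * 2**(1 - 2n) * (2n - 1)! / (n - 1)!,  n = 1, 2, ..."""
    return (math.sqrt(math.pi) * 2.0 ** (1 - 2 * n)
            * math.factorial(2 * n - 1) / math.factorial(n - 1))

assert abs(gamma_half(1) - math.sqrt(math.pi) / 2.0) < 1e-12   # Eq. (12.443)
for n in (1, 2, 3, 4):
    assert abs(gamma_half(n) - math.gamma(n + 0.5)) < 1e-9
```

The same values follow, of course, from repeated application of the recurrence Γ{x + 1} = xΓ{x} starting at Γ{½} = √π.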

Euler’s Definition

Recall the basic equivalence n nm n nm = n+m n + m n + m−1

n+1 n

,

12 444

arising directly from the definition of factorial – where n! may be dropped from both numerator and denominator, and nm splitted afterward to get n nm n n = n + m n + m −1 n+m

n ; n+1

12 445

when m is a finite number, one realizes that lim

n



n nm nn = lim n ∞n n n+m

n = 1m = 1 n

12 446

Upon multiplication of both sides by m!, Eq. (12.444) may instead appear as m = m lim n



n nm n nmm = lim n ∞ n + m n + m − 1 n + m −2 n+m

m+1 m

, 12 447

which may be rearranged to read m = lim n



n nmm m + n m + n−1

m + n−2

;

12 448

m+1 m

upon cancellation of m! between numerator and denominator, Eq. (12.448) simplifies to m = lim n



n nm m + n m + n−1

m + n−2

12 449 m+1

Infinite Series and Integrals

In view of Eq. (12.400), one may redo Eq. (12.449) to n nm

Γ m + 1 = lim n



m + n m + n−1

m + n− 2

,

12 450

m+1

where Eq. (12.412) and the commutativeness of multiplication in turn support n nm

mΓ m = lim n



m + n m + n−1

m + n −2

n nm = lim n ∞ m+1 m+2 … m+n

m+1

12 451

Despite the derivation above encompassing (positive) integer m, Eq. (12.451) is actually valid for any real number x – which was the major point for defining the gamma function, in the first place; hence, one may rewrite it as n nx

xΓ x = lim n

∞ n

12 452

x+i

i=1

so as to emphasize its general applicability – classically known as Euler's definition of the gamma function. Equation (12.452) may appear in a slightly different form, viz.

$$\Gamma\{x\}=\frac{1}{x}\lim_{n\to\infty}n^{x}\prod_{k=1}^{n}\frac{k}{x+k}, \tag{12.453}$$

which gives rise to

$$\Gamma\{x\}=\frac{1}{x}\lim_{n\to\infty}n^{x}\prod_{k=1}^{n}\left(1+\frac{x}{k}\right)^{-1} \tag{12.454}$$

once the reciprocal of the general factor under the extended product has been taken, and the terms split afterward; reciprocals may now be taken of both sides to convert Eq. (12.454) to

$$\frac{1}{\Gamma\{x\}}=x\lim_{n\to\infty}\frac{\prod_{k=1}^{n}\left(1+\frac{x}{k}\right)}{n^{x}}. \tag{12.455}$$

After realizing that

$$\prod_{k=1}^{n}\exp\left\{-\frac{x}{k}\right\}=\frac{1}{\prod_{k=1}^{n}\exp\left\{\frac{x}{k}\right\}} \tag{12.456}$$

in agreement with Eq. (2.17), as well as


Mathematics for Enzyme Reaction Kinetics and Reactor Performance

$$n^{x}=\exp\{\ln n^{x}\}=\exp\{x\ln n\} \tag{12.457}$$

based on Eq. (2.25), coupled with exponential and logarithm being inverse functions of each other, one may transform Eq. (12.455) to

$$\frac{1}{\Gamma\{x\}}=x\lim_{n\to\infty}\frac{\prod_{k=1}^{n}\left(1+\frac{x}{k}\right)}{\exp\{x\ln n\}}=x\lim_{n\to\infty}\frac{\exp\left\{\sum_{k=1}^{n}\frac{x}{k}\right\}}{\exp\{x\ln n\}}\prod_{k=1}^{n}\left(1+\frac{x}{k}\right)\exp\left\{-\frac{x}{k}\right\} \tag{12.458}$$

– where multiplication and division by $\exp\left\{\sum_{k=1}^{n}\frac{x}{k}\right\}=\prod_{k=1}^{n}\exp\left\{\frac{x}{k}\right\}$ meanwhile took place. Lumping of exponentials between numerator and denominator, and collapsing of extended products in numerator then generate

$$\frac{1}{\Gamma\{x\}}=x\lim_{n\to\infty}\exp\left\{\sum_{k=1}^{n}\frac{x}{k}-x\ln n\right\}\prod_{k=1}^{n}\left(1+\frac{x}{k}\right)\exp\left\{-\frac{x}{k}\right\} \tag{12.459}$$

from Eq. (12.458) – where x may be factored out in the argument of the first exponential as

$$\frac{1}{\Gamma\{x\}}=x\exp\left\{x\lim_{n\to\infty}\left(\sum_{k=1}^{n}\frac{1}{k}-\ln n\right)\right\}\lim_{n\to\infty}\prod_{k=1}^{n}\left(1+\frac{x}{k}\right)\exp\left\{-\frac{x}{k}\right\}, \tag{12.460}$$

with the aid of Eq. (9.108). At this stage, it is instructive to revisit Eq. (12.34) – after noting that $\sum_{k=1}^{n}\frac{1}{k}$ is a series of positive decreasing terms, and setting $f\{x\}\equiv\frac{1}{x}$; one is indeed led to
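Equations (12.453) and (12.460) lend themselves to direct numerical verification – the limit appearing in the first exponential of Eq. (12.460) being the Euler–Mascheroni constant. A minimal sketch follows (the function names, truncation point n, and comparison against `math.gamma` are illustrative assumptions, not part of the text):

```python
import math

EULER_MASCHERONI = 0.5772156649015329  # limit of sum(1/k) - ln n in Eq. (12.460)

def gamma_euler(x, n=100_000):
    """Euler's definition, Eq. (12.453): Γ{x} ≈ (1/x) n^x Π k/(x+k)."""
    log_prod = sum(math.log(k / (x + k)) for k in range(1, n + 1))
    return math.exp(x * math.log(n) + log_prod) / x

def gamma_weierstrass(x, n=100_000):
    """Weierstrass form implied by Eq. (12.460):
    1/Γ{x} = x exp(γx) Π (1 + x/k) exp(-x/k)."""
    log_prod = sum(math.log1p(x / k) - x / k for k in range(1, n + 1))
    return 1.0 / (x * math.exp(EULER_MASCHERONI * x + log_prod))

for x in (0.5, 1.5, 3.2):
    assert abs(gamma_euler(x) - math.gamma(x)) < 1e-3
    assert abs(gamma_weierstrass(x) - math.gamma(x)) < 1e-3
```

Both truncated forms converge only as O(1/n), which is why the gamma function is evaluated in practice via other expansions.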
2a > 0 (with a being usually termed major axis); this is the same as writing

$$\overline{F_{1}P}+\overline{F_{2}P}=2a, \tag{13.38}$$



Figure 13.5 Graphical representation of (a) a circumference of radius R, centered at C(0,0) with generic point P described by coordinates (x,y), and (b) an ellipse with generic point P(x,y), foci F1(−c,0) and F2(c,0), minor axis b equal to distance between C and either closest point, Peq,1 or Peq,2, and major axis a equal to distance between C and either farthest point, Pmd,1 or Pmd,2.
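The defining property in Eq. (13.38) is easy to verify numerically for the ellipse of Fig. 13.5b; the sketch below samples points via the parametric form and uses c² = a² − b², both derived later in this section (the numeric values of a and b are arbitrary choices):

```python
import math

a, b = 5.0, 3.0                      # major and minor semi-axes (arbitrary)
c = math.sqrt(a**2 - b**2)           # focal abscissa, anticipating Eq. (13.52)

# sample points P(x, y) on (x/a)^2 + (y/b)^2 = 1
for theta in (0.0, 0.3, 1.0, 2.5, math.pi):
    x, y = a * math.cos(theta), b * math.sin(theta)
    d1 = math.hypot(x + c, y)        # distance from P to F1(-c, 0)
    d2 = math.hypot(x - c, y)        # distance from P to F2(c, 0)
    assert abs((d1 + d2) - 2 * a) < 1e-12   # Eq. (13.38)
```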

as illustrated in Fig. 13.5b. Recalling again the definition of distance along the xy-plane, Eq. (13.38) may be reformulated to

$$\sqrt{(x-(-c))^{2}+(y-0)^{2}}+\sqrt{(x-c)^{2}+(y-0)^{2}}=\sqrt{(x+c)^{2}+y^{2}}+\sqrt{(x-c)^{2}+y^{2}}=2a \tag{13.39}$$

– where $\sqrt{(x-c)^{2}+y^{2}}$ may be moved to the right-hand side, and then squares taken of both sides to get

$$(x+c)^{2}+y^{2}=\left(2a-\sqrt{(x-c)^{2}+y^{2}}\right)^{2}; \tag{13.40}$$

upon application of Eq. (2.237), one can convert Eq. (13.40) to

$$(x+c)^{2}+y^{2}=4a^{2}-4a\sqrt{(x-c)^{2}+y^{2}}+(x-c)^{2}+y^{2}, \tag{13.41}$$

whereas further expansion of (x + c)² and (x − c)² along the same line of thought yields

$$x^{2}+2cx+c^{2}+y^{2}=4a^{2}-4a\sqrt{(x-c)^{2}+y^{2}}+x^{2}-2cx+c^{2}+y^{2}. \tag{13.42}$$

Cancellation of x² + y² + c² between sides reduces Eq. (13.42) to

$$2cx=4a^{2}-4a\sqrt{(x-c)^{2}+y^{2}}-2cx, \tag{13.43}$$

whereas isolation of the square root gives rise to

$$\sqrt{(x-c)^{2}+y^{2}}=-\frac{4cx-4a^{2}}{4a}; \tag{13.44}$$

Analytical Geometry

after subdividing the right-hand side as two fractions, Eq. (13.44) becomes

$$\sqrt{(x-c)^{2}+y^{2}}=a-\frac{c}{a}x. \tag{13.45}$$

Once both sides of Eq. (13.45) are squared, one obtains

$$(x-c)^{2}+y^{2}=\left(a-\frac{c}{a}x\right)^{2}=a^{2}-2cx+\left(\frac{c}{a}\right)^{2}x^{2} \tag{13.46}$$

– again at the expense of Newton's binomial formula – and a further expansion of the square of x − c in the left-hand side unfolds

$$x^{2}-2cx+c^{2}+y^{2}=a^{2}-2cx+\left(\frac{c}{a}\right)^{2}x^{2}; \tag{13.47}$$

Eq. (13.47) degenerates to

$$x^{2}+y^{2}+c^{2}=a^{2}+\left(\frac{c}{a}\right)^{2}x^{2} \tag{13.48}$$

following cancellation of −2cx between sides, where terms in x² and independent terms may be grouped as

$$x^{2}\left(1-\left(\frac{c}{a}\right)^{2}\right)+y^{2}=a^{2}-c^{2}. \tag{13.49}$$

Upon elimination of the inner parenthesis in the left-hand side of Eq. (13.49), viz.

$$x^{2}\,\frac{a^{2}-c^{2}}{a^{2}}+y^{2}=a^{2}-c^{2}, \tag{13.50}$$

one may divide both sides by a² − c² and accordingly obtain

$$\frac{x^{2}}{a^{2}}+\frac{y^{2}}{a^{2}-c^{2}}=1; \tag{13.51}$$

definition of a new constant b via

$$b^{2}\equiv a^{2}-c^{2} \tag{13.52}$$

allows final reformulation of Eq. (13.51) to

$$\left(\frac{x}{a}\right)^{2}+\left(\frac{y}{b}\right)^{2}=1, \tag{13.53}$$

usually known as the canonic equation of an ellipse. If Eq. (13.37) is rewritten as

$$\left(\frac{x}{R}\right)^{2}+\left(\frac{y}{R}\right)^{2}=1 \tag{13.54}$$

following division of both sides by R², then one confirms – upon comparison with Eq. (13.53) – that a circumference is a particular case of an ellipse, corresponding specifically to a = b = R. Obviously, Eq. (13.53) may be solved for y² as

$$y^{2}=b^{2}\left(1-\left(\frac{x}{a}\right)^{2}\right) \tag{13.55}$$


or, after taking square roots of both sides,

$$y=\pm b\sqrt{1-\left(\frac{x}{a}\right)^{2}}; \tag{13.56}$$

the plus sign corresponds to the upper portion (Pmd,1Peq,1Pmd,2) of the ellipse, and the minus sign to its lower counterpart (Pmd,1Peq,2Pmd,2) in Fig. 13.5b. In the particular case of point Peq being equidistant from F1 and F2 as outlined in Fig. 13.5b, one may resort to Eq. (13.38) to write

$$\overline{F_{1}P_{eq}}+\overline{F_{2}P_{eq}}=2a, \tag{13.57}$$

whereas point (x,y) being equidistant from F1 and F2 enforces

$$\sqrt{(x-(-c))^{2}+(y-0)^{2}}=\sqrt{(x-c)^{2}+(y-0)^{2}}; \tag{13.58}$$

one may then take squares of both sides to get

$$(x+c)^{2}+y^{2}=(x-c)^{2}+y^{2}, \tag{13.59}$$

while dropping y² from both sides yields

$$(x+c)^{2}=(x-c)^{2}. \tag{13.60}$$

Square roots are now to be taken of both sides, thus leaving Eq. (13.60) as

$$x+c=\pm(x-c), \tag{13.61}$$

which is equivalent to

$$x+c=x-c\;\;\vee\;\;x+c=c-x; \tag{13.62}$$

the first condition in Eq. (13.62) can never be fulfilled unless c = 0 (in which case a circumference with coincident foci at its center, rather than an ellipse, would be at stake). Conversely, the second condition leads to

$$x=-x \tag{13.63}$$

following cancellation of terms alike between sides, or merely

$$x=0. \tag{13.64}$$

Therefore, the points on the plane equidistant from the two foci lie on a vertical straight line passing through the origin, so combination of Eqs. (13.56) and (13.64) produces

$$y=\pm b; \tag{13.65}$$

this means that constant b, defined by Eq. (13.52), is but the distance between C(0,0) and either Peq,1 or Peq,2 on the ellipse, i.e.

$$\overline{CP_{eq,1}}=\overline{CP_{eq,2}}=b\equiv\overline{CP_{eq}}, \tag{13.66}$$

as highlighted in Fig. 13.5b – so the distance between Peq,1 and Peq,2 should read

$$\overline{P_{eq,1}P_{eq,2}}=2\,\overline{CP_{eq}} \tag{13.67}$$

or, in view of Eq. (13.66),


$$\overline{P_{eq,1}P_{eq,2}}=2b; \tag{13.68}$$

Eq. (13.68) justifies why b is usually termed minor axis of an ellipse. With regard to the points on the ellipse lying on the straight line defined by F1 and F2 – say, Pmd,1 for the one closer to F1 and Pmd,2 for that closer to F2 – one realizes that their ordinates must be nil, in agreement with the ordinates of F1 and F2 themselves (see Fig. 13.5b); hence, the corresponding abscissae may be obtained from Eq. (13.53) after setting y = 0, viz.

$$\left(\frac{x}{a}\right)^{2}=1 \tag{13.69}$$

or, once solved for x,

$$x=\pm a. \tag{13.70}$$

Therefore, the overall distance between Pmd,1 and Pmd,2, geometrically given by

$$\overline{P_{md,1}P_{md,2}}=\overline{CP_{md,1}}+\overline{CP_{md,2}}, \tag{13.71}$$

should look like

$$\overline{P_{md,1}P_{md,2}}=|-a|+|a|, \tag{13.72}$$

due to Eq. (13.70), coupled to the definition of distance. Transformation of Eq. (13.72) with the aid of Eq. (2.2) leads directly to

$$\overline{P_{md,1}P_{md,2}}=2a \tag{13.73}$$

– so a has classically been labeled as major axis (as referred to above). There are several distinguishing features exhibited by conic sections in the Euclidean plane – and many of them may be used as a basis to classify such figures; one example is eccentricity, ε, defined here as the ratio of the distance between either F1 or F2 and C, to the distance of either F1 or F2 to either Peq,1 or Peq,2, i.e.

$$\varepsilon\equiv\frac{\overline{F_{1}C}}{\overline{F_{1}P_{eq,1}}}=\frac{\overline{F_{1}C}}{\overline{F_{1}P_{eq,2}}}=\frac{\overline{F_{2}C}}{\overline{F_{2}P_{eq,1}}}=\frac{\overline{F_{2}C}}{\overline{F_{2}P_{eq,2}}}, \tag{13.74}$$

after Fig. 13.5b. Taking as case study the first possibility – and remembering that F1(−c,0), Peq,1(0,b), and C(0,0) – one attains

$$\varepsilon=\frac{\sqrt{(0-(-c))^{2}+(0-0)^{2}}}{\sqrt{(0-(-c))^{2}+(b-0)^{2}}}=\sqrt{\frac{c^{2}}{c^{2}+b^{2}}}, \tag{13.75}$$

given the quantitative definition of distance between two given points via their Cartesian coordinates; after revisiting Eq. (13.52) as

$$c^{2}=a^{2}-b^{2}, \tag{13.76}$$

Eq. (13.75) may be redone to

$$\varepsilon=\sqrt{\frac{a^{2}-b^{2}}{a^{2}-b^{2}+b^{2}}}=\sqrt{\frac{a^{2}-b^{2}}{a^{2}}}=\sqrt{\frac{a^{2}}{a^{2}}-\frac{b^{2}}{a^{2}}} \tag{13.77}$$


to eventually reach

$$\varepsilon=\sqrt{1-\left(\frac{b}{a}\right)^{2}}. \tag{13.78}$$

Since b < a due to the definition of a and b as major and minor axes, respectively, 0 < ε < 1 in the case of an ellipse – while ε = 0 is found for a circle, since a = b = R in this case (as plotted above); the eccentricity may accordingly be viewed as a measure of how far the ellipse deviates from being circular. Conversely, a parabola is characterized by ε = 1, and a hyperbola by ε > 1. The ellipse was first studied by Menaechmus (who died 320 BCE), investigated much later by Euclid, but named by Apollonius (who died c. 190 BCE); the concept of foci was introduced by Pappus of Alexandria (290–350 BCE). All conics abide by

$$Ax^{2}+Bxy+Cy^{2}+Dx+Ey+F=0 \tag{13.79}$$

as general Cartesian form – with coefficients being real numbers, and not all A, B, and C being equal to zero (otherwise a straight line would result); which type of conic is at stake depends on the sign of discriminant Δ ≡ B² − 4AC. In fact, Δ < 0 stands for an ellipse, since Eq. (13.53) becomes

$$b^{2}x^{2}+a^{2}y^{2}-a^{2}b^{2}=0 \tag{13.80}$$

after multiplying both sides by a²b² and moving a²b² to the left-hand side; this leaves A = b², B = D = E = 0, C = a², and F = −a²b², as well as Δ = 0² − 4b²a² = −4a²b² < 0. If A = C = 1, B = D = E = 0, and F = −R², then a circle of radius R results – see Eq. (13.79) vis-à-vis with Eq. (13.37) – again with Δ = 0² − 4 = −4 < 0; a parabola is associated with Δ = 0, and a hyperbola with Δ > 0.
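The classification just outlined translates directly into code; the sketch below (the function names are mine) checks the sign of Δ for the quadratic coefficients of Eq. (13.79), and evaluates the eccentricity of Eq. (13.78):

```python
import math

def classify_conic(A, B, C):
    """Classify Eq. (13.79) by the sign of the discriminant Δ = B² − 4AC."""
    delta = B**2 - 4 * A * C
    if delta < 0:
        return "ellipse"
    return "parabola" if delta == 0 else "hyperbola"

def eccentricity(a, b):
    """Eq. (13.78), for an ellipse of major axis a and minor axis b (b <= a)."""
    return math.sqrt(1 - (b / a)**2)

# ellipse (x/5)^2 + (y/3)^2 = 1 in the form of Eq. (13.80): A = b², C = a²
assert classify_conic(3**2, 0, 5**2) == "ellipse"
# circle x² + y² − R² = 0: A = C = 1, B = 0
assert classify_conic(1, 0, 1) == "ellipse"
assert eccentricity(5, 5) == 0.0              # circle as the limiting case
assert abs(eccentricity(5, 3) - 0.8) < 1e-12  # genuine ellipse
```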

13.4 Length of Line

Consider a generic curve, y ≡ y{x}, laid on the x0y plane, as depicted in Fig. 13.6. For a relatively small increment Δx in x, the variation Δy expected along the curve in the vicinity of x1, should the tendency at said point remain constant, may be approximated by

$$\Delta y\approx\left.\frac{dy}{dx}\right|_{x_{1}}\Delta x, \tag{13.81}$$

in general agreement with the concept of differential as per Eq. (10.1); the corresponding length Δs of the arc laid on curve y{x} may be approximated by Δσ, i.e.

$$\Delta s\approx\Delta\sigma \tag{13.82}$$

– equal, in turn, to length $\overline{AC}$. Since [ABC] represents a right triangle, one may resort to Pythagoras' theorem as per Eq. (2.431) to write

$$\overline{AC}^{\,2}=\overline{AB}^{\,2}+\overline{BC}^{\,2} \tag{13.83}$$

or, equivalently,


Figure 13.6 Length, Δs, of plane curve, y ≡ y{x}, corresponding to change Δx in horizontal direction and change Δy in vertical direction, with tangent thereto at abscissa x1 characterized by slope dy/dx|x1, and approximate estimate Δσ of Δs extrapolated from (x1,y1) with the aid of said slope.


Δσ 2 = Δx 2 + Δy

2

13 84

in agreement with Fig. 13.6; insertion of Eqs. (13.81) and (13.82) turns Eq. (13.84) to dy Δs ≈ Δx + dx 2

2

2

Δx ,

13 85

x1

which may be algebraically rearranged as Δs 2 ≈ 1 + dy dx

2

Δx

2

13 86

x1

after factoring (Δx)2 out, and further to dy 1+ dx

Δs ≈

2

Δx

13 87

x1

upon taking square roots of both sides. When increment Δx is sufficiently small, Eq. (13.81) degenerates to Eq. (10.1) and thus becomes exact, according to

ds

lim Δs = lim Δσ = lim

Δx

0

Δx

0

Δx

0

1+

dy 2 Δx dx

1+

dy 2 dx dx

13 88

written at the expense of Eqs. (13.82) and (13.87); if dx is taken inside the square root, then Eq. (13.88) degenerates to ds =

dx 2 + dy

2

13 89

517

518

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

– particularly useful when the equation describing the curve is given in parametric form. The length AP of a generic arc (AP), defined by point A of coordinates (x1,y1) and point P of coordinates (x,y) as in Fig. 13.6, will then be obtained via integration of Eq. (13.88) between x1 and x, viz. x

AP

dy 2 dx; dx

1+

ds =

AP

x1

13 90

if the curve is given in parametric form, say, x ≡ ϕ1{t} and y ≡ ϕ2{t}, then dx dy

dϕ2 dt – in which case Eq. (13.89) becomes dt t

AP = t1

2

dϕ1 dt

dϕ2 2 dt, dt

+

dϕ1 dt and dt

13 91

provided that x t = t1 = x1 and after factoring (dt)2 out. The above concept of arc length may be applied to any polygon; in the case of a triangle, one should start by computing the slopes of its two defining straight lines, viz. dytrg , 1 h1 = , 13 92 dx x1 based on Eq. (13.31) and applicable within 0 ≤ x ≤ x1 as per Fig. 13.3a; and similarly dytrg , 2 h1 =− , 13 93 dx h2 − x1 stemming from Eq. (13.34) and suitable for x spanning interval [x1,h2]. The perimeter eventually becomes accessible via insertion of Eqs. (13.92) and (13.93) in Eq. (13.90), viz. x1

1+

Ltrg =

dytrg , 1 dx

0

h2

2

dx +

1+

dytrg , 2

2

dx

x1

h2

dx +

1 + 02 dx

13 94

0

– where the overall integral was splitted in three portions corresponding to AB, BC , and 0C, respectively; and the rightward direction of integration was taken in the third segment, for physical consistency with (a positive) length along with a nil slope. Insertion of Eqs. (13.92) and (13.93) transforms Eq. (13.94) to x1

Ltrg = 0 x1

=

1+ − x1

h1 2 dx + h2 −x1

h1 dx + x1

h2

1dx 0

;

h2

2

1+ 0

h2

h1 2 1+ dx + x1

2

1+ x1

h1 dx + h2 − x1

13 95

h2

dx 0

following application of Eq. (11.160), one obtains Ltrg =

1+

h1 2 x x1

x1 0

+

1+

2

=

1+

h1 x1 + x1

1+

h1 2 x h2 − x1 h1 h2 − x1

h2 x1

+x

h2 0

2

h2 − x1 + h2

13 96

Analytical Geometry

due to the constancy of h1, h2, and x1. After factoring x1 and h2 − x1 in (as appropriate), Eq. (13.96) becomes merely x1 2 + h1 2 +

Ltrg =

h2 − x1 2 + h1 2 + h2 ;

13 97

reformulation of Eq. (13.97) to x1 − 0 2 + h1 −0 2 +

Ltrg =

h2 −x1

2

+ 0 −h1

2

h2 −0 2 + 0 − 0

+

2

13 98 stresses that the perimeter of a triangle is but the sum of the distances between corners, i.e. A(0,0), B(x1,h1), and C(0,h2) as per Fig. 13.3a – as expected. For a rectangle, one should to advantage refer to Fig. 13.3b, and realize that side [AB] may be seen as the limit of side [AB] of a triangle with the same height h1 when x1 approaches zero; and that side [CD] may likewise be viewed as the limit of side [BC] of said triangle when x1 approaches h2, while the slope of the defining straight line is trivial, i.e. dyrec 13 99 =0 dx as per Eq. (13.29). A four-way splitting of the integral in Eq. (13.90) is now in order along this rationale, viz. x1

1+

Lrec = lim

0

x1

0

h2

2

dytrg , 1 dx

0

h2

+ lim x1

h2

dytrg , 2

1+

dx

x1

dyrec 2 dx dx

1+

dx +

13 100

,

2

h2

1 + 02 dx

dx + 0

to be used en lieu of Eq (13.94), corresponding to AB, BC , CD, and DA, respectively; Eqs. (13.92), (13.93), and (13.99) may be retrieved to attain x1

h1 2 1+ dx + x1

Lrec = lim x1

0 0

h2

h2

1+ −

2

1 + 0 dx + lim x1

0

h2

x1

h1 2 dx + h2 − x1

h2

dx 0

13 101 0

Hence, the unknown quantity 0 1 + ∞ 2 dx, which would arise twice if direct computation of AB and CD were attempted, is to be somehow circumvented – namely by decoupling calculation of limit and integral. The fundamental theorem of integral calculus may, in fact, be invoked to transform Eq. (13.101) to Lrec = lim x1

0

= lim x1

0

1+

h1 2 x x1

x1 0

+x

h2 0

+ lim x1

h1 2 1+ x1 + h2 + lim x 1 h2 x1

h2

1+

h1 2 x h2 − x1

h1 1+ h2 −x1

h2 x1

+x

h2 0

2

if x1 and h1 − x1 are factored in, then Eq. (13.102) becomes

h2 −x1 + h2

;

13 102

519

520

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

0

x1

h2 − x1

x1 2 + h1 2 + h2 + lim

Lrec = lim

x1

h2

2

+ h1 2 + h2 ,

13 103

while application of Eq. (9.108) supports simplification to Lrec =

h2 − h2

02 + h1 2 + h2 +

2

+ h1 2 + h2 =

h1 2 + h2 +

h1 2 + h2 = h1 + h2 + h1 + h2 13 104

Equation (13.104) finally yields Lrec = 2 h1 + h2

13 105

upon lumping of similar terms – and is independent of the actual coordinates of any corner, but dependent solely on the two characteristic lengths h1 and h2; should h2 = h1 = h, then Eq. (13.105) degenerates to Lsq = 4h

13 106

pertaining to a square. Remember now the equation of a circumference of radius R, and center coinciding with the origin of coordinates (for simplicity), see Eq. (13.37); after isolation of y, one gets y = ± R2 − x2 ,

13 107

corresponding to the upper and lower portions of the circumference – bearing identical lengths due to its intrinsically circular symmetry, with dy −2 x x =± =± dx 2 R2 − x2 R2 − x2 as associated derivative. Insertion of Eq. (13.108) transforms Eq. (13.90) to R

1+ ±

Lcf = 4 0

13 108

2

x

13 109

dx,

R2 − x 2

where one fourth of the full circumference perimeter, Lcf, was considered as per the integral in the right-hand side, spanning [0, R] in agreement with Fig. 13.5a; upon algebraic manipulation via lumping of terms, followed by cancellation of symmetrical terms, x division of numerator and denominator by R2, and replacement of x by as dummy R variable of integration, Eq. (13.109) turns to R

R

x2 1 + 2 2 dx = 4 R −x

Lcf = 4 0

0

R2 − x 2 + x 2 dx R2 − x 2

R

R

R2 dx = 4 R2 − x 2

=4 0

1 0

1

1

= 4R 0

x 1− R

2

d

x R

1−

x R

2

dx

13 110

Analytical Geometry

Once in possession of Eq. (10.177), one can transform Eq. (13.110) to Lcf = 4R sin −1

x 1 π = 4R sin− 1 1 − sin− 1 0 = 4R − 0 R 0 2

13 111

that breaks down to merely Lcf = 2πR

13 112

– often labeled as perimeter of a (full) circumference. Based on trigonometric concepts, namely Eqs. (2.288) and (2.290), one may rewrite Eq. (13.37) as x = R cos θ

13 113

coupled with y = R sin θ

13 114

– where θ denotes angle made by the vector connecting the origin to the point of coordinates (x,y), with the horizontal axis; Eqs. (13.113) and (13.114) represent indeed the parametric equations of a circumference of radius R and centered at the origin. Equation (13.91) may accordingly be revisited as 2π

dx dθ

Lcf = 0

2

+

dy dθ

2

13 115

dθ,

with the germane derivatives looking like dx = −R sin θ dθ

13 116

coupled with dy = R cos θ, dθ

13 117

based on Eqs. (13.113) and (13.114), respectively, coupled with Eqs. (10.44) and (10.48); insertion of Eqs. (13.116) and (13.117) transforms Eq. (13.115) to 2π

− R sin θ 2 + R cos θ 2 dθ =

Lcf = 0

R2 sin2 θ + R2 cos2 θ dθ

0



=



R

sin2 θ + cos2 θ dθ = R

0



dθ = Rθ 0

, 2π 0

13 118

= R 2π − 0 = 2πR

where factoring out of R2 took place, complemented by utilization of the fundamental theorem of trigonometry – and which retrieves Eq. (13.112), as expected. It is instructive to revisit Eq. (2.442) – after multiplying the terms by b2/b2 = 1 and 2 2 a /a = 1, viz. 1 = sin2 θ + cos2 θ =

b2 2 a2 a cos θ sin θ + 2 cos2 θ = 2 a b a

2

+

b sin θ 2 ; b

13 119

521

522

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

comparative inspection of the left- and right-hand sides of Eq. (13.119) with those of Eq. (13.53) indicates that x a cosθ coupled with

13 120

b sin θ

13 121

y

represent an ellipse in parametric form – i.e. via parameter θ, such that 0 ≤ θ ≤ 2π rad. Therefore, the perimeter of an ellipse may be approached following insertion of Eqs. (13.120) and (13.121) in Eq. (13.115), according to 2π

dx dθ

Lell = 0

2

+

dy dθ

2

13 122

dθ,

where the derivatives at stake read dx = −a sin θ dθ

13 123

based on Eq. (13.120), and likewise dy = b cos θ dθ

13 124

based on Eq. (13.121); upon combination of Eqs. (13.122)–(13.124), one obtains 2π

− a sin θ 2 + b cos θ 2 dθ = 4

Lell =

π 2

− a 2 sin2 θ + b2 cos2 θ dθ,

0

0

13 125 after realizing that the pattern is repeated in every quadrant – which breaks down to Lell = 4

π 2

a2 sin2 θ + b2 cos2 θ dθ

13 126

0

Equation (13.126) may be reformulated as Lell = 4

π 2

a2

1 − cos2 θ

π 2 2 2 + b cos θ dθ = 4

0

a2 − a2 −b2 cos2 θ dθ

0

π 2

= 4a 0

13 127 b 1− 1− a

2

cos2 θ dθ

at the expense of the fundamental theorem of trigonometry, followed by factoring out of cos2 θ and taking a2 off the kernel; Eq. (13.78) may then be invoked to get Lell = 4a

π 2

1 − ε2 cos2 θ dθ

13 128

0

The integral in Eq. (13.128) is called an elliptic integral of the second kind, and cannot be evaluated at the expense of a finite number of elementary functions. In fact, Eq. (12.315), after setting b = ½, reads

Analytical Geometry i−1

1 −j 2



1 2

j=0

1+x =1+

i

i=1



x x =1+ + 2

i−1

1 −2j

j=1

i

xi

i−1

2

i

i=2

,

j=0

13 129

i− 1

−1



x =1+ + 2

i−1

2j − 1 j=1 i

xi

2i

i=2

where the first term of the summation was made explicit, and both numerator and denominator were first multiplied i times by 2 and then i − 1 times by −1; if x is now set equal to −ε2cos2θ, after multiplying and dividing the summation by −1, then Eq. (13.129) becomes i −1

2j− 1



− ε2 cos2 θ 1 1 − ε2 cos2 θ = 1 + + 2 −1

j=1

−1

2i i

i=2

i

− ε2 cos2 θ

13 130

i −1

ε2 cos2 θ = 1− − 2

i

2j− 1

∞ j=1

ε2i cos2i θ

2i i

i=2

that converges for all θ in agreement with Eqs. (12.323) and (12.325) – since 0 ≤ ε2cos2 θ < 1, due to 0 < ε < 1 as per Eq. (13.78), coupled with 0 ≤ cos θ ≤ 1 per Fig. 2.10b and (−1)i(−1)i = (−1)2i = ((−1)2)i = 1i = 1. Termwise integration of Eq. (13.128), after combination with Eq. (13.130), produces π 2

i −1

ε2 cos2 θ − 1− 2

Lell = 4a 0

2j − 1

∞ j=1

ε2i cos2i θ dθ

2i i

i=2

13 131 i −1

= 4a

π 2

π 2

ε2 dθ − 2 0



cos θ dθ −

0

2j − 1

j=1

2

i=2

2i i

π 2i 2

ε

cos2i θ dθ

0

with the aid of Eq. (11.22); recalling Wallis’ formula, i.e. i π 2

i

π j=1 cos θ dθ = 2 0

2j −1

2i

i

2j j=1

2j −1 π j=1 = 2 2i i

13 132

523

524

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

as per Eq. (11.192) for a generic i and specifically for i = 1, and combining the result with the fundamental theorem of integral calculus, one can transform Eq. (13.131) to π ε2 π 2 −1 − 1 2 2 2 2 221 1− ε 2 2 π i ∞ = 4a Lell = 4a 2 i 2j −1 2 ∞ 2j− 1 ε2i − π j=1 2j 2i− 1 − ε2i j=1 2 i i=2 2 2i 2i −1 i=2

13 133 – with convenient multiplication and division by 2i − 1, lumping of factors alike and π π 2 π 2 factoring out of π/2, besides realization that dθ = θ = . Equation (13.133) eventually 2 0 0 yields ∞

Lell = 2πa 1 − i=1

i

2

2j− 1 ε2i , 2j 2i− 1 j=1

13 134

which confirms that an infinite number of elementary functions are required to calculate the integral in Eq. (13.128); unfortunately, convergence is rather slow, thus constraining practical applicability of Eq. (13.134). In attempts to find a good approximation to Eq. (13.134), yet bearing a simpler functional form, one should first realize that π π 1 13 135 sin2 = cos2 = 4 4 2 supports π π 13 136 b2 − a2 sin2 = b2 − a2 cos2 4 4 – following multiplication of both left and middle sides by b2 − a2; upon elimination of parentheses and shuffling between sides, Eq. (13.136) becomes π π π π 13 137 b2 sin2 + a2 cos2 = a2 sin2 + b2 cos2 4 4 4 4 π π or, after addition of a2 sin2 and b2 cos2 to both sides, 4 4 π π π 2 2π 2 2π 2 2π 2 a sin + b sin + a cos + b cos2 = 2a2 sin2 + 2b2 cos2 13 138 4 4 4 4 4 4 π π If sin2 and cos2 are factored out in the left-hand side, then Eq. (13.138) becomes 4 4 π π π π 2 2 a + b sin2 + a2 + b2 cos2 = 2a2 sin2 + 2b2 cos2 , 13 139 4 4 4 4 which is equivalent to a 2 + b2 π π π π sin2 + cos2 = a2 sin2 + b2 cos2 4 4 4 4 2

13 140

Analytical Geometry

upon division of both sides by 2 and factoring out of (a2+ b2)/2 afterward; Eq. (13.140) points at a2 + b2 sin2 θ + cos2 θ 13 141 2 – which is obviously exact for θ = π/4, but likely to provide a good approximation in its vicinity. The largest deviations occur, in fact, at θ = 0 – in which case a deviation given by a2 sin2 θ + b2 cos2 θ ≈

a2 sin2 0 + b2 cos2 0 −

a 2 + b2 a2 + b2 2b2 − a2 −b2 a 2 − b2 sin2 0 + cos2 0 = b2 − = =− 2 2 2 2 13 142

will be observed, as well as at θ = π/2 – for which the deviation reads π π a2 + b2 π π a2 + b2 2a2 − a2 − b2 a2 − b2 a2 sin2 + b2 cos2 − sin2 + cos2 = a2 − = = 2 2 2 2 2 2 2 2 13 143

Since the errors conveyed by Eqs. (13.142) and (13.143) are the exact negative of each other, one expects the error in the vicinity of θ = 0 be algebraically compensated by the error in the vicinity of θ = π/2 – so Eq. (13.141) may prove a particularly good approximation when the whole perimeter of the ellipse is sought. One may accordingly proceed with insertion of Eq. (13.141) so as to transform Eq. (13.126) to π 2

Lell = 4 0

a 2 + b2 sin2 θ + cos2 θ dθ, 2

13 144

where Eq (2.442) permits prompt simplification to π 2

Lell = 4 0

a 2 + b2 dθ = 4 2

a 2 + b2 2

π 2

π

dθ = 4

0

a 2 + b2 2 θ =4 2 0

a2 + b2 π ; 2 2 13 145

the final expression looks like Lell ≈ π

2 a 2 + b2

13 146

Note that Eq. (13.146) entertains lim Lell = π

b a

2 a2 + a2 = π 4a2 = 2πa = Lcf

R=a

13 147

as asymptotic pattern – in agreement with Eq. (13.112), pertaining to a circumference of radius R equal to a, which serves as limiting form of an ellipse with a for major axis.

13.5 Curvature of Line Curvature is defined by the variation in direction of angle θ, associated with two successive tangents to a curve, relative to the x-axis – as depicted in Fig. 13.7. Therefore, the

525

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

Figure 13.7 Curvature of plane curve, y ≡ y{x}, with tangent thereto at abscissa x1 characterized by angle θ1, and tangent at abscissa x2 characterized by angle θ2 with x-axis, and difference Δθ (defined as θ2 − θ1) measured as angle formed by the normals to said curve at x2 and x1, respectively.

∆θ

∆s

y

526

θ2

θ1 x1

x

x2

smaller the increment in arc length Δs required for a given variation Δθ in angle direction, the smaller the curvature κ – according to κ

lim

Δs 0

Δθ dθ = , Δs ds

13 148

in essence consistent with Eq. (10.21). Recall that angle θ is defined as dy dx at each abscissa, so one may also write θ

tan−1

d dy dθ dx dx = dx dy 1+ dx

2

13 149

13 150

at the expense of Eqs. (10.192) and (10.205) – or else d2 y dθ dx2 = dx dy 1+ dx

2;

13 151

on the other hand, (further) application of the chain differentiation rule has it that dθ dθ dx = ds dx ds

13 152

or, equivalently, dθ dθ dx = ds ds dx

13 153

Analytical Geometry

as per Eq. (10.247). The derivative of length of curve y ≡ y{x} with regard to its independent variable, x, is given by ds = dx

2

dy dx

1+

13 154

consistent with Eq. (13.88), so insertion of Eqs. (13.151) and (13.154) in Eq. (13.153) produces

dθ = ds

d2 y dx2 dy 1+ dx

2

dy 1+ dx

13 155

2

– or, upon straightforward simplification, d2 y dx2

κ= 1+

dy dx

13 156

3 2 2

with the aid of Eq. (13.148). In the case of a circumference, one may retrieve Eq. (13.108) and calculate the secondorder derivative via d2 y dx2

d ± dx

R2 −x2 − x

x R2 −x2

−2 x

2 R2 − x2 = ± 2 R − x2



1 R2 − x2

1 2

+

x2 R2 −x2

3 2

;

13 157 insertion of Eqs. (13.108) and (13.157) then transforms Eq. (13.156) to

± κ cf =

1 1 R2 − x2 2

1+ ±

+

x2

1

3 R2 − x2 2

1 R2 − x2 2

3 2 2

x R2 − x2

=

x2 1+ 2 2 R −x

1+ 1 2

x2 R2 − x2

1

= x2 1+ 2 2 R −x

x2 1+ 2 2 R −x

R2 − x2

,

13 158 3

x2 2 was meanwhile factored out, 1 + 2 2 splitted as where the reciprocal of R −x the product of 1 + x2/(R2 − x2) by its square root, and 1 + x2/(R2 − x2) finally dropped off R2 − x2

527

528

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

both numerator and denominator. After lumping the two outstanding square roots, Eq. (13.158) turns to 1

κcf =

R2 −x2

+ x2

=

1

13 159

R2

upon cancellation of symmetrical terms, which is equivalent to κcf =

1 R

13 160

since square root is the inverse of square power; as anticipated, the curvature of a circumference is constant, irrespective of position along it – and equal to the reciprocal of its radius. If function y ≡ y {x} appears in implicit form, say, Φ ≡ Φ{x, y} = 0 as per Eq. (10.233), then one should compute dy/dx as ∂Φ ∂x dy =− ∂Φ dx ∂y

y

13 161

x

in agreement with Eq. (10.235) – thus leading to dy dx

2

∂Φ ∂x = ∂Φ ∂y

2

13 162

2

after taking squares of both sides; the second-order derivative of y with regard to x is accessible upon retrieval of Eq. (10.244), i.e. ∂Φ 2 ∂Φ ∂ Φ ∂Φ ∂ Φ ∂x ∂ 2 Φ + −2 ∂y ∂x2 ∂x ∂x∂y ∂Φ ∂y2 2 d y ∂y =− 2 dx2 ∂Φ ∂y 2

2

13 163

Combination of Eqs. (10.65), (13.155), (13.162), and (13.163) unfolds ∂Φ 2 ∂Φ ∂ Φ ∂Φ ∂ Φ ∂x ∂ 2 Φ + −2 ∂Φ ∂y2 ∂y ∂x2 ∂x ∂y∂x ∂y 2

2

∂Φ ∂y

dθ =− ds 1+

2

∂Φ ∂x

2

∂Φ ∂y

2

3 2

,

13 164

Analytical Geometry

which readily simplifies to ∂Φ ∂y

κ=

2 2

∂ Φ ∂Φ ∂Φ ∂ 2 Φ ∂Φ + −2 2 ∂x ∂x ∂y ∂y∂x ∂x ∂Φ ∂x

2

∂Φ + ∂y

2 2

∂ Φ ∂y2

13 165

3 2 2

upon multiplication of both numerator and denominator by (∂Φ/∂y)3, and combination with Eq. (13.148) afterward; Eq. (13.165) is useful whenever the curve is defined in implicit form – or mathematical handling of the implicit form is easier than of its explicit counterpart. If Eq. (13.37) is retrieved as x2 + y2 − R2 = 0,

Φ x,y

13 166

the circumference will be described in implicit form – so one may proceed to calculation of the germane partial derivatives as ∂Φ = 2x ∂x

13 167

∂Φ = 2y ∂y

13 168

and

with regard to first-order ones, and ∂2Φ ∂x2

∂ ∂Φ ∂ 2x = 2, = ∂x ∂x ∂x

13 169

∂2 Φ ∂x∂y

∂ ∂Φ ∂ ∂ ∂ ∂Φ ∂2 Φ 2y = 0 = 2x = , = = ∂x ∂y ∂x ∂y ∂y ∂x ∂y∂x

13 170

∂2Φ ∂y2

∂ ∂Φ ∂ 2y = 2 = ∂y ∂y ∂y

13 171

and

encompassing second-order derivatives; insertion of Eqs. (13.167)–(13.171), followed by algebraic rearrangement transform Eq. (13.165) to κ cf =

2y 2 2 −2 2x 2y 0 + 2x 2 2 2x 2 + 2y 2

=

3 2 2

2

8 x +y 23 x2 + y2

1 x2 + y2 2

=

8 x2 + y2

=

3

4 2 x2 + y 2 1 x2 + y2

1 2

3 2

,

13 172

529

Mathematics for Enzyme Reaction Kinetics and Reactor Performance 3

3

3

since 42 = 22 2 = 23 and x2 + y2 2 equals the product of x2 + y2 by its square root – where combination with Eq. (13.37) permits simplification to 1 κcf = 2 R

1 2

1 = , R

13 173

consistent with Eq. (13.160) as expected.

13.6

Area of Plane Surface

As emphasized previously, the most important and useful application of (definite) line or double integrals is calculation of the area of plane surfaces – as illustrated in Fig. 13.8a. The domain Dx,y of interest on the x0y plane is indeed defined by a closed surface – lower bounded by curve described by g1{x} and upper bounded by curve described by g2{x}, such that g1 a = g2 a

13 174

g1 b = g2 b ;

13 175

and

here a and b denote the lowest and highest abscissae, respectively, and Cx,y denotes the whole contour. The defining relationships, y ≡ g1{x} and y ≡ g2{x}, may be trivially formulated in parametric form as x

13 176

t,

(a)

(b) g2{b} g1{b}

g1{b} = g2{b} Cx, y

g2{x}

y

Dx, y

y

530

Dx, y

g2{x} g1{x}

0

g1{x}

g2{a} g1{a}

g1{a} = g2{a} a

b x

0

a

b x

Figure 13.8 Graphical representation of functions g1{x} and g2{x} on the x0y plane, forming (a) contour Cx,y as a whole – both departing from abscissa a and ordinates g1{a} or g2{a}, respectively, and attaining abscissa b and ordinates g1{b} or g2{b}, respectively, with (a) g1{a} = g2{a} and g1{b} = g2{b} or (b) g1{a} g2{a} and g1{b} g2{b}, and serving as lower and upper boundaries for Dx,y as integration domain.

Analytical Geometry

coupled with

$$ y = g_1\{t\} \tag{13.177} $$

and

$$ y = g_2\{t\}, \tag{13.178} $$

respectively; this implies

$$ dx = dt, \tag{13.179} $$

stemming immediately from Eq. (13.176), and likewise

$$ dy = \frac{dg_1\{t\}}{dt}\,dt \tag{13.180} $$

and

$$ dy = \frac{dg_2\{t\}}{dt}\,dt, \tag{13.181} $$

based on Eqs. (13.177) and (13.178), respectively, besides Eq. (10.1). The area of the plane region enclosed by the set of curves g1{x} and g2{x} will then become accessible via the particular case of Green's theorem labeled as Eq. (11.258), i.e.

$$ A_{C_{x,y}} = \frac{1}{2}\int_{x=a}^{b}\left(x\,dy - y\,dx\right) + \frac{1}{2}\int_{x=b}^{a}\left(x\,dy - y\,dx\right), \tag{13.182} $$

where the first integral describes the forward path (i.e. when x increases from a up to b) and the second integral describes the reverse path (i.e. when x decreases from b down to a); insertion of Eqs. (13.176)–(13.181) then gives rise to

$$ A_{C_{x,y}} = \frac{1}{2}\int_{t=a}^{b}\left(t\,\frac{dg_1\{t\}}{dt} - g_1\{t\}\right)dt + \frac{1}{2}\int_{t=b}^{a}\left(t\,\frac{dg_2\{t\}}{dt} - g_2\{t\}\right)dt \tag{13.183} $$

– where dt was factored out in the kernels for convenience, and Eq. (13.176) was taken into account to set the (new) limits of integration in t. Calculation of the integrals in Eq. (13.183) may proceed by first applying Eq. (11.102) as

$$ A_{C_{x,y}} = \frac{1}{2}\int_{a}^{b} t\,\frac{dg_1\{t\}}{dt}\,dt - \frac{1}{2}\int_{a}^{b} g_1\{t\}\,dt + \frac{1}{2}\int_{b}^{a} t\,\frac{dg_2\{t\}}{dt}\,dt - \frac{1}{2}\int_{b}^{a} g_2\{t\}\,dt, \tag{13.184} $$

and then proceeding to integration by parts of the first and third integrals – according to

$$ A_{C_{x,y}} = \frac{1}{2}\left(t\,g_1\{t\}\Big|_a^b - \int_a^b g_1\{t\}\,dt\right) - \frac{1}{2}\int_a^b g_1\{t\}\,dt + \frac{1}{2}\left(t\,g_2\{t\}\Big|_b^a - \int_b^a g_2\{t\}\,dt\right) - \frac{1}{2}\int_b^a g_2\{t\}\,dt, \tag{13.185} $$

and consistent with Eq. (11.177); elimination of parentheses converts Eq. (13.185) to

$$ A_{C_{x,y}} = \frac{1}{2}b\,g_1\{b\} - \frac{1}{2}a\,g_1\{a\} - \frac{1}{2}\int_a^b g_1\{t\}\,dt - \frac{1}{2}\int_a^b g_1\{t\}\,dt + \frac{1}{2}a\,g_2\{a\} - \frac{1}{2}b\,g_2\{b\} - \frac{1}{2}\int_b^a g_2\{t\}\,dt - \frac{1}{2}\int_b^a g_2\{t\}\,dt, \tag{13.186} $$

whereas condensation of terms alike, and factoring out of a/2 or b/2 (as appropriate) afterward, allow simplification to

$$ A_{C_{x,y}} = \frac{1}{2}a\left(g_2\{a\} - g_1\{a\}\right) - \frac{1}{2}b\left(g_2\{b\} - g_1\{b\}\right) - \int_a^b g_1\{t\}\,dt - \int_b^a g_2\{t\}\,dt. \tag{13.187} $$

Equations (13.174) and (13.175) support simplification of Eq. (13.187) to

$$ A_{C_{x,y}} = -\int_a^b g_1\{t\}\,dt - \int_b^a g_2\{t\}\,dt, \tag{13.188} $$

with reversal of integration limits in the second integral, at the expense of the preceding minus sign, unfolding

$$ A_{C_{x,y}} = \int_a^b g_2\{t\}\,dt - \int_a^b g_1\{t\}\,dt \tag{13.189} $$

as per Eq. (11.104) – or else

$$ A_{C_{x,y}} = \int_a^b \left(g_2\{x\} - g_1\{x\}\right)dx \tag{13.190} $$

following condensation of the two outstanding integrals, and retrieval of the original notation as per Eq. (13.176). Therefore, the area of the surface bounded by contour Cx,y may be calculated via a line integral as conveyed by Eq. (11.258) or, alternatively, via the integral of the difference between the functions describing the upper and lower bounds of said surface. It should be emphasized here that validity of the said Eq. (11.258) requires that the partial derivatives of P{x,y} and Q{x,y} with regard to y and x as per Eq. (11.251) – or, equivalently, the derivatives of g1{x} and g2{x} with regard to x – exist and are finite; this is not the case portrayed in Fig. 13.8b, where vertical lines (characterized by infinite slope) appear as descriptors of g2{x} at x = a and of g1{x} at x = b (or vice versa, for that matter). In that case, one must resort to Eq. (13.190) when attempting to calculate the surface area under scrutiny.
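The equivalence between the line-integral route, Eq. (13.183), and the upper-minus-lower route, Eq. (13.190), is easy to confirm numerically; the sketch below (not from the book) uses the hypothetical region bounded by g1{x} = x² and g2{x} = x on [0,1] – which, like Fig. 13.8a, satisfies g1{0} = g2{0} and g1{1} = g2{1} – with known area 1/6.

```python
def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals; handles b < a as a signed integral."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

g1, dg1 = lambda x: x * x, lambda x: 2 * x   # lower boundary g1{x} and its slope
g2, dg2 = lambda x: x,     lambda x: 1.0     # upper boundary g2{x} and its slope

# Eq. (13.183): half line integral of (x dy - y dx), forward along g1, back along g2.
area_green = 0.5 * simpson(lambda t: t * dg1(t) - g1(t), 0.0, 1.0) \
           + 0.5 * simpson(lambda t: t * dg2(t) - g2(t), 1.0, 0.0)

# Eq. (13.190): plain integral of the difference between upper and lower bounds.
area_diff = simpson(lambda x: g2(x) - g1(x), 0.0, 1.0)

print(area_green, area_diff)
```

Both routes reproduce the exact area 1/6 (Simpson's rule is exact for quadratics).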

In view of the above considerations, the area of a triangle may be calculated as

$$ A_{trg} = \int_0^{x_1} y_{trg,1}\{x\}\,dx + \int_{x_1}^{h_2} y_{trg,2}\{x\}\,dx = \int_0^{x_1} \frac{h_1}{x_1}x\,dx + \int_{x_1}^{h_2}\left(-\frac{h_1}{h_2-x_1}x + \frac{h_1 h_2}{h_2-x_1}\right)dx, \tag{13.191} $$

based on Eq. (13.190) upon combination with Eqs. (13.31) and (13.34) as side descriptors of said polygon (see Fig. 13.3a) – after splitting the original integration range [0,h2] as [0,x1] and [x1,h2], with g1{x} = 0, and g2{x} = ytrg,1{x} or g2{x} = ytrg,2{x}, respectively, and resorting to Eq. (11.124) as tool for stepwise integration. The fundamental theorem of integral calculus may then be invoked to write

$$ A_{trg} = \frac{h_1}{x_1}\frac{x^2}{2}\Bigg|_0^{x_1} - \frac{h_1}{h_2-x_1}\frac{x^2}{2}\Bigg|_{x_1}^{h_2} + \frac{h_1 h_2}{h_2-x_1}x\Bigg|_{x_1}^{h_2} = \frac{h_1 x_1}{2} - \frac{h_1}{2}\frac{h_2^2-x_1^2}{h_2-x_1} + \frac{h_1 h_2\left(h_2-x_1\right)}{h_2-x_1}, \tag{13.192} $$

stemming from Eq. (13.191); replacement of the difference of squares in the numerator of the second term by the product of the conjugates of its bases leads to

$$ A_{trg} = \frac{h_1 x_1}{2} - \frac{h_1}{2}\frac{\left(h_2-x_1\right)\left(h_2+x_1\right)}{h_2-x_1} + h_1 h_2 = \frac{h_1 x_1}{2} - \frac{h_1 h_2}{2} - \frac{h_1 x_1}{2} + h_1 h_2 = h_1 h_2 - \frac{h_1 h_2}{2}, \tag{13.193} $$

where common factors were meanwhile dropped from both numerator and denominator, the distributive property applied, and terms cancelled out with their negatives. Condensation of the outstanding terms in Eq. (13.193) finally gives

$$ A_{trg} = \frac{1}{2}h_1 h_2 \tag{13.194} $$

– so the area of a triangle depends only on its width, h2, and height, h1, but not on any particular coordinates of its vertices. A simpler rationale (yet similar in terms of approach) can be applied to the descriptor yrec ≡ yrec{x}, as given by Eq. (13.29), toward calculation of the area of a rectangle – according to

$$ A_{rec} = \int_0^{h_2} y_{rec}\,dx = \int_0^{h_2} h_1\,dx = h_1\int_0^{h_2}dx = h_1\,x\Big|_0^{h_2}, \tag{13.195} $$

following Fig. 13.3b; the full original integration range, [0,h2], was now utilized without interruption, while still considering g1{x} = 0 – and realizing that g2{x} = yrec = h1. Equation (13.195) is (trivially) equivalent to

$$ A_{rec} = h_1 h_2 \tag{13.196} $$

– i.e. it suffices to multiply the width by the height of a rectangle to ascertain its area, irrespective of the actual coordinates of its corners. In the case of a square, Eq. (13.196) reduces to

$$ A_{sq} = h^2, \tag{13.197} $$

since h2 = h1 = h (say). The symmetry of a circle centered at the origin, with regard to both the x- and the y-axes, permits calculation of its area as four times the area of the corresponding quarter-circle, i.e.

$$ A_{cr} = 4\int_0^R\left(\sqrt{R^2-x^2}-0\right)dx = 4\int_0^R\sqrt{R^2-x^2}\,dx = 4R^2\int_0^1\sqrt{1-\left(\frac{x}{R}\right)^2}\,d\frac{x}{R}, \tag{13.198} $$

based on Eq. (13.190) and Fig. 13.5a; here g2{x} takes the positive version of Eq. (13.107), and g1{x} obviously coincides with the horizontal axis – while R² was taken off the kernel, and integration variable x swapped for x/R as a matter of convenience. Definition of an auxiliary variable ξ via

$$ \frac{x}{R} \equiv \cos\xi, \tag{13.199} $$

inspired in Eq. (13.113) – and, consequently,

$$ \xi\Big|_{\frac{x}{R}=0} = \cos^{-1}\frac{x}{R}\Big|_{\frac{x}{R}=0} = \cos^{-1}0 = \frac{\pi}{2} \tag{13.200} $$

and

$$ \xi\Big|_{\frac{x}{R}=1} = \cos^{-1}\frac{x}{R}\Big|_{\frac{x}{R}=1} = \cos^{-1}1 = 0 \tag{13.201} $$

– allows calculation of its differential as

$$ d\frac{x}{R} = -\sin\xi\,d\xi \tag{13.202} $$

as per Eqs. (10.1) and (10.48); hence, Eq. (13.198) transforms to

$$ A_{cr} = -4R^2\int_{\pi/2}^{0}\sqrt{1-\cos^2\xi}\,\sin\xi\,d\xi = 4R^2\int_{0}^{\pi/2}\sqrt{1-\cos^2\xi}\,\sin\xi\,d\xi, \tag{13.203} $$

with the aid of Eq. (11.104). In view of the fundamental theorem of trigonometry, 1 − cos²ξ may be replaced by sin²ξ, so Eq. (13.203) becomes

$$ A_{cr} = 4R^2\int_0^{\pi/2}\sqrt{\sin^2\xi}\,\sin\xi\,d\xi = 4R^2\int_0^{\pi/2}\sin\xi\,\sin\xi\,d\xi = 4R^2\int_0^{\pi/2}\sin^2\xi\,d\xi; \tag{13.204} $$

Eq. (2.414) may now be retrieved, written for n = 1, as

$$ \sin^2\theta = \frac{2-2\cos 2\theta}{4} = \frac{1-\cos 2\theta}{2}, \tag{13.205} $$

thus supporting transformation of Eq. (13.204) to

$$ A_{cr} = 4R^2\int_0^{\pi/2}\frac{1-\cos 2\xi}{2}\,d\xi = 2R^2\left(\int_0^{\pi/2}d\xi - \int_0^{\pi/2}\cos 2\xi\,d\xi\right) = 2R^2\left(\int_0^{\pi/2}d\xi - \frac{1}{2}\int_0^{\pi}\cos 2\xi\,d(2\xi)\right) \tag{13.206} $$

upon straightforward algebraic manipulation with the aid of Eq. (11.102), and including change in (dummy) variable from ξ to 2ξ in the second integral. Application of Eq. (11.160) justifies conversion of Eq. (13.206) to

$$ A_{cr} = 2R^2\left(\xi\Big|_0^{\pi/2} - \frac{1}{2}\sin 2\xi\Big|_{2\xi=0}^{2\xi=\pi}\right) = 2R^2\left(\frac{\pi}{2} - 0 - \frac{1}{2}\sin\pi + \frac{1}{2}\sin 0\right) = 2R^2\,\frac{\pi}{2}, \tag{13.207} $$

or else

$$ A_{cr} = \pi R^2. \tag{13.208} $$
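Equations (13.198), (13.204), and (13.208) lend themselves to a quick numerical sanity check; a minimal Python sketch (with an arbitrarily chosen radius), integrating both the raw quarter-circle kernel and the post-substitution kernel:

```python
from math import pi, sqrt, sin

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

R = 2.5  # example radius

# Eq. (13.198): four times the quarter-circle integral (kernel has infinite slope at x = R,
# so a fine grid is used to keep the quadrature error small).
quarter = 4 * simpson(lambda x: sqrt(R * R - x * x), 0.0, R, n=20000)

# Eq. (13.204): same area after the substitution x/R = cos(xi) -- a smooth kernel.
subst = 4 * R * R * simpson(lambda xi: sin(xi) ** 2, 0.0, pi / 2)

print(quarter, subst, pi * R * R)
```

Both estimates agree with πR² as per Eq. (13.208), the smooth substituted kernel doing so to machine precision.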

One may instead recall the parametric equations that define a circumference, i.e. Eqs. (13.113) and (13.114), and obtain

$$ dx = -R\sin\theta\,d\theta \tag{13.209} $$

and

$$ dy = R\cos\theta\,d\theta, \tag{13.210} $$

respectively, after taking differentials thereof; combination with Eqs. (13.113), (13.114), (13.209), and (13.210) transforms Eq. (11.258) to

$$ A_{cr} = \frac{1}{2}\int_0^{2\pi}\left[R\cos\theta\left(R\cos\theta\,d\theta\right) - R\sin\theta\left(-R\sin\theta\,d\theta\right)\right] = \frac{1}{2}\int_0^{2\pi}\left(R^2\cos^2\theta + R^2\sin^2\theta\right)d\theta = \frac{1}{2}R^2\int_0^{2\pi}\left(\sin^2\theta+\cos^2\theta\right)d\theta \tag{13.211} $$

– since [0,2π] accounts for one full rotation (or cycle), as per polar coordinate θ. The fundamental theorem of trigonometry permits final simplification of Eq. (13.211) to

$$ A_{cr} = \frac{1}{2}R^2\int_0^{2\pi}d\theta = \frac{1}{2}R^2\,\theta\Big|_0^{2\pi} = \frac{1}{2}R^2\,2\pi = \pi R^2 \tag{13.212} $$

that mimics Eq. (13.208), as expected. In the particular case of a portion of a circle – also known as circular sector, of amplitude ϕ – its surface area, Asc, may likewise be calculated as

$$ A_{sc} = \frac{1}{2}R^2\int_0^{\phi}d\theta = \frac{1}{2}R^2\,\theta\Big|_0^{\phi}, \tag{13.213} $$

based on Eq. (13.212) after exchanging 2π by ϕ as upper limit of integration; one promptly obtains

$$ A_{sc} = \frac{1}{2}R^2\phi, \tag{13.214} $$

with 0 ≤ ϕ ≤ 2π – so Eq. (13.212) turns out a particular case of Eq. (13.214), described by ϕ = 2π. In the case of an ellipse, one may to advantage recall its parametric formulae as per Eqs. (13.120) and (13.121), as well as the corresponding derivatives conveyed by Eqs. (13.123) and (13.124) – and accordingly transform Eq. (11.258) to

$$ A_{ell} = \frac{1}{2}\int_0^{2\pi}\left[a\cos\theta\left(b\cos\theta\right) - b\sin\theta\left(-a\sin\theta\right)\right]d\theta = \frac{1}{2}ab\int_0^{2\pi}\left(\cos^2\theta+\sin^2\theta\right)d\theta, \tag{13.215} $$

with ab taken off the kernel; Eq. (2.442) may again be claimed, to simplify the above integral to

$$ A_{ell} = \frac{1}{2}ab\int_0^{2\pi}d\theta = \frac{1}{2}ab\,\theta\Big|_0^{2\pi} = \frac{1}{2}ab\,2\pi \tag{13.216} $$

that breaks down to

$$ A_{ell} = \pi ab. \tag{13.217} $$

As anticipated,

$$ \lim_{a\to b} A_{ell} = \pi b\,b = \pi b^2 = \pi R^2\Big|_{R=b} = A_{cr}\Big|_{R=b} \tag{13.218} $$

stems from Eqs. (13.208) and (13.217) – meaning that the area of an ellipse tends to the area of a circle when its major axis tends to its minor axis, characterized by a radius equal to either one.
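Equation (13.215) can be verified directly against Eq. (13.217); the sketch below (example semi-axes chosen arbitrarily) evaluates the half line integral over the parametrized ellipse:

```python
from math import pi, sin, cos

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

a_ax, b_ax = 3.0, 1.5  # semi-axes (example values)

# Eq. (13.215): kernel of (1/2)(x dy - y dx) with x = a cos(t), y = b sin(t).
kernel = lambda t: a_ax * cos(t) * (b_ax * cos(t)) - b_ax * sin(t) * (-a_ax * sin(t))
area = 0.5 * simpson(kernel, 0.0, 2 * pi)

print(area, pi * a_ax * b_ax)
```

The kernel collapses to the constant ab, so the quadrature reproduces πab essentially exactly.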

13.7 Outer Area of Revolution Solid

When the length of a curve lying on a (bidimensional) plane was tackled, one concluded that

$$ L = \int_L ds = \int_a^b\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx, \tag{13.219} $$

as per Eq. (13.90) – as a consequence of a single degree of freedom, with ds consubstantiating the simultaneous contributions of dx and dy as per Eq. (13.89); in the case of the side outer area, As,out, of a revolution solid, a similar situation occurs, i.e.

$$ A_{s,out} = \int_{A_{s,out}} dA = \int_{\theta,s} y\,d\theta\,ds = \int_0^L\int_0^{2\pi} y\,d\theta\,ds, \tag{13.220} $$

where y dθ represents variation in length along polar coordinate θ – with radial distance denoted by y – and ds represents variation in length along axial coordinate x and radial coordinate y simultaneously, as depicted in Fig. 13.9. Insertion of Eq. (13.89) and realization that y ≡ y{x} transform Eq. (13.220) to

$$ A_{s,out} = \int_{x,y}\int_0^{2\pi} y\,d\theta\,\sqrt{\left(dx\right)^2+\left(dy\right)^2} = \int_{x=a}^{b}\int_{\theta=0}^{2\pi} y\{x\}\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,d\theta\,dx, \tag{13.221} $$

where (dx)² was meanwhile factored off the root; independence of y on θ permits the double integral be expressed as a product of two single integrals, i.e.

$$ A_{s,out} = \int_0^{2\pi}d\theta\int_a^b y\{x\}\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx, \tag{13.222} $$

consistent with (the weak version of) Fubini's theorem. Direct application of the fundamental theorem of integral calculus allows simplification of Eq. (13.222) to

$$ A_{s,out} = \theta\Big|_0^{2\pi}\int_a^b y\{x\}\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx, \tag{13.223} $$

and finally

$$ A_{s,out} = 2\pi\int_a^b y\{x\}\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx. \tag{13.224} $$

Figure 13.9 Graphical representation, (a) sidewise and (b) frontwise, of plane curve with overall length L, described by radial coordinate y ≡ y{x} when axial coordinate x varies from a to b, (b) undergoing rotation along angular coordinate θ, of amplitude dθ spanning [0,2π].

When either y{x}|x=a ≠ 0 or y{x}|x=b ≠ 0, extra outer area, Ab,out, arises that accounts for the base(s) of the revolution solid; this corresponds to

$$ A_{b,out} = \pi\left(y\{x\}\Big|_{x=a}\right)^2 + \pi\left(y\{x\}\Big|_{x=b}\right)^2, \tag{13.225} $$

obtained with the aid of Eq. (13.208) – because circles would be at stake. Equation (13.225) is equivalent to

$$ A_{b,out} = \pi\left(y^2\{a\} + y^2\{b\}\right), \tag{13.226} $$

after factoring π out and simplifying notation; the total outer area, Aout, will obviously look like

$$ A_{out} = A_{s,out} + A_{b,out}, \tag{13.227} $$

following combination of Eqs. (13.224) and (13.226).
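Equations (13.224)–(13.227) translate directly into a small numerical routine; the sketch below (helper names and dimensions are illustrative only) checks it against a cylinder, whose side and base areas, 2πRh2 and 2πR², are derived next in the text:

```python
from math import pi, sqrt

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def outer_area(y, dy, a, b):
    """Total outer area per Eqs. (13.224)-(13.227): lateral surface plus base disks."""
    side = 2 * pi * simpson(lambda x: y(x) * sqrt(1 + dy(x) ** 2), a, b)
    bases = pi * (y(a) ** 2 + y(b) ** 2)
    return side + bases

# Cylinder of radius 1.5 and length 4 (example numbers): y{x} is constant.
R, h = 1.5, 4.0
A = outer_area(lambda x: R, lambda x: 0.0, 0.0, h)
print(A, 2 * pi * R * h + 2 * pi * R * R)
```

The constant profile makes the quadrature exact, so the routine reproduces 2πRh + 2πR² to machine precision.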

The simplest situation of a revolution solid pertains to a cylinder – seen as the outcome of rotating a rectangle, with width h2 and height h1 as per Fig. 13.3b, by 2π rad around the horizontal axis; Eqs. (13.29) and (13.99) may accordingly be inserted in Eq. (13.224) to produce

$$ A_{s,out,cyl} = 2\pi\int_0^{h_2} y_{rec}\sqrt{1+\left(\frac{dy_{rec}}{dx}\right)^2}\,dx = 2\pi\int_0^{h_2} h_1\sqrt{1+0^2}\,dx = 2\pi h_1\int_0^{h_2}dx = 2\pi h_1\,x\Big|_0^{h_2}; \tag{13.228} $$

trivial rearrangement simplifies Eq. (13.228) to

$$ A_{s,out,cyl} = 2\pi h_1 h_2 \tag{13.229} $$

– which normally appears as

$$ A_{s,out,cyl} = 2\pi R h_2, \tag{13.230} $$

since h1 is termed cylinder radius, R. With regard to the area of the bases, one should resort to Eqs. (13.29) and (13.226) to write

$$ A_{b,out,cyl} = \pi\left(h_1^2 + h_1^2\right), \tag{13.231} $$

because y{0} coincides, in this case, with y{h2} – which, in turn, equals h1; Eq. (13.231) turns to merely

$$ A_{b,out,cyl} = 2\pi h_1^2, \tag{13.232} $$

or else

$$ A_{b,out,cyl} = 2\pi R^2 \tag{13.233} $$

after recalling the aforementioned (more usual) designation of h1. In what concerns a cone (see Fig. 13.4), it may be generated via half rotation of an isosceles triangle around its height – characterized by x1 = h2/2 (see Fig. 13.3a, with x1 = x2/2); this calls for simplified versions of Eqs. (13.31) and (13.34), viz.

$$ y_{trg,1}\Big|_{x_1=\frac{h_2}{2}} = \frac{h_1}{\frac{h_2}{2}}x = 2\frac{h_1}{h_2}x \tag{13.234} $$

and

$$ y_{trg,2}\Big|_{x_1=\frac{h_2}{2}} = -\frac{h_1}{h_2-\frac{h_2}{2}}x + \frac{h_1 h_2}{h_2-\frac{h_2}{2}} = -\frac{h_1}{\frac{h_2}{2}}x + \frac{h_1 h_2}{\frac{h_2}{2}} = -2\frac{h_1}{h_2}x + 2h_1, \tag{13.235} $$

respectively. Replacement of x by h2 − x in Eq. (13.234) produces

$$ y_{trg,1}\Big|_{x_1=\frac{h_2}{2}}\Bigg|_{x\,\to\,h_2-x} = 2\frac{h_1}{h_2}\left(h_2-x\right) = 2h_1 - 2\frac{h_1}{h_2}x = y_{trg,2}\Big|_{x_1=\frac{h_2}{2}} \tag{13.236} $$

with the aid of Eq. (13.235); hence, the triangle under scrutiny is symmetrical relative to the vertical line of equation x = h2/2, as expected. Such a symmetry justifies why the associated cone can be generated via full rotation of half the original triangle around its vertical axis of symmetry, i.e. the triangle with corners defined by points of coordinates (0,0), (h2/2,0), and (0,h1) referring to Fig. 13.3a – and with a single descriptor side (for being a right triangle), of equation

$$ Y_{trg} = -\frac{2h_1}{h_2}X + h_1; \tag{13.237} $$

this is defined by the latter two points, and is to be used in lieu of Eq. (13.34), with X spanning the interval [0,h2/2]. Isolation of X in Eq. (13.237) leads to

$$ X = -\frac{h_2}{2h_1}\left(Y - h_1\right) = -\frac{h_2}{2h_1}Y + \frac{h_2}{2}, \tag{13.238} $$

meaning that Y should span the interval [0,h1] (to guarantee that X ≥ 0); since X and Y are but dummy variables, one may for convenience replace them by y and x, respectively, according to

$$ y = \frac{h_2}{2} - \frac{h_2}{2h_1}x \tag{13.239} $$

– which implies

$$ \frac{dy}{dx} = -\frac{h_2}{2h_1} \tag{13.240} $$

after differentiating both sides with regard to x. Insertion of Eqs. (13.239) and (13.240) transforms Eq. (13.224) to

$$ A_{s,out,con} = 2\pi\int_0^{h_1}\left(\frac{h_2}{2}-\frac{h_2}{2h_1}x\right)\sqrt{1+\left(-\frac{h_2}{2h_1}\right)^2}\,dx = 2\pi\sqrt{1+\left(\frac{h_2}{2h_1}\right)^2}\int_0^{h_1}\left(\frac{h_2}{2}-\frac{h_2}{2h_1}x\right)dx, \tag{13.241} $$

where rotation takes place around the horizontal axis, due to the change of variables when going from Eq. (13.238) – entailing rotation around the vertical axis – to Eq. (13.239) – entailing rotation around the horizontal axis; Eq. (13.241) may, in turn, be decomposed as

$$ A_{s,out,con} = 2\pi\sqrt{1+\left(\frac{h_2}{2h_1}\right)^2}\left(\frac{h_2}{2}\int_0^{h_1}dx - \frac{h_2}{2h_1}\int_0^{h_1}x\,dx\right). \tag{13.242} $$

Recalling Eq. (11.160), one may proceed to

$$ A_{s,out,con} = 2\pi\sqrt{1+\left(\frac{h_2}{2h_1}\right)^2}\left(\frac{h_2}{2}x\Big|_0^{h_1} - \frac{h_2}{2h_1}\frac{x^2}{2}\Big|_0^{h_1}\right) = 2\pi\sqrt{1+\left(\frac{h_2}{2h_1}\right)^2}\left(\frac{h_1 h_2}{2} - \frac{h_2}{2h_1}\frac{h_1^2}{2}\right) = \pi\frac{h_1 h_2}{2}\sqrt{1+\left(\frac{h_2}{2h_1}\right)^2}, \tag{13.243} $$

using Eq. (13.242) as departure point, coupled with self-explanatory algebraic steps – to finally attain

$$ A_{s,out,con} = \pi\frac{h_2}{2}\sqrt{h_1^2+\left(\frac{h_2}{2}\right)^2} \tag{13.244} $$

once h1 is factored in. The half-width of the generating triangle, h2/2, is oftentimes termed cone radius, R – in which case Eq. (13.244) may be recoined as

$$ A_{s,out,con} = \pi R\sqrt{R^2+h_1^2}; \tag{13.245} $$

on the other hand, Eq. (13.239) yields

$$ y\Big|_{x=0} = \frac{h_2}{2} - \frac{h_2}{2h_1}\,0 = \frac{h_2}{2} \tag{13.246} $$

and

$$ y\Big|_{x=h_1} = \frac{h_2}{2} - \frac{h_2}{2h_1}h_1 = \frac{h_2}{2} - \frac{h_2}{2} = 0, \tag{13.247} $$

so the area of the (single) base of the cone – associated to y|x=0 – ensues from Eq. (13.226) as

$$ A_{b,out,con} = \pi\left(\left(\frac{h_2}{2}\right)^2 + 0^2\right) = \pi\left(\frac{h_2}{2}\right)^2 \tag{13.248} $$

or, equivalently,

$$ A_{b,out,con} = \pi R^2, \tag{13.249} $$

again as expected for a circular base, due to Eq. (13.208).
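As a numerical cross-check of Eq. (13.245), one may integrate the generating line of Eq. (13.239) directly via Eq. (13.224); a minimal sketch with example dimensions:

```python
from math import pi, sqrt

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

h1, h2 = 3.0, 4.0          # cone height and generating-triangle width (example values)
R = h2 / 2                 # cone radius, in the notation of Eq. (13.245)

y = lambda x: R - (R / h1) * x                   # Eq. (13.239)
slope = R / h1                                   # |dy/dx|, Eq. (13.240)
side = 2 * pi * simpson(lambda x: y(x) * sqrt(1 + slope ** 2), 0.0, h1)  # Eq. (13.224)

print(side, pi * R * sqrt(R * R + h1 * h1))      # Eq. (13.245)
```

Both evaluate to 2π√13 for these dimensions, since the linear profile is integrated exactly.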

In attempts to ascertain the outer area of a sphere, one may retrieve the positive parts of both Eqs. (13.107) and (13.108), and insert them in Eq. (13.224) to get

$$ A_{s,out,sph} = 2\cdot 2\pi\int_0^R\sqrt{R^2-x^2}\,\sqrt{1+\frac{x^2}{R^2-x^2}}\,dx = 4\pi\int_0^R\sqrt{R^2-x^2}\,\frac{\sqrt{R^2-x^2+x^2}}{\sqrt{R^2-x^2}}\,dx = 4\pi\int_0^R\sqrt{R^2}\,dx = 4\pi\int_0^R R\,dx = 4\pi R\,x\Big|_0^R = 4\pi R\cdot R \tag{13.250} $$

– together with lumping of square roots, dropping of R² − x² between numerator and denominator under the root, and cancellation of symmetrical terms afterward; the extra factor 2 introduced prior to the integral accounts for the fact that [0,R] as range of integration produces only half the total outer area of a sphere, for a rotation of 2π rad. Equation (13.250) is equivalent to

$$ A_{s,out,sph} = 4\pi R^2, \tag{13.251} $$

after having lumped factors alike.
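Equation (13.251) can be cross-checked by feeding Eqs. (13.107) and (13.224) to a numerical integrator; since dy/dx blows up at x = R, the sketch below (example radius) stops a hair short of that endpoint – the kernel itself stays finite there, equaling R throughout:

```python
from math import pi, sqrt

def simpson(f, a, b, n=20000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

R = 2.0
y  = lambda x: sqrt(R * R - x * x)           # Eq. (13.107), positive branch
dy = lambda x: -x / sqrt(R * R - x * x)      # its derivative (singular at x = R)

# Integrate on [0, R(1 - 1e-9)] to dodge the endpoint singularity of dy/dx.
side = 4 * pi * simpson(lambda x: y(x) * sqrt(1 + dy(x) ** 2), 0.0, R * (1 - 1e-9))

print(side, 4 * pi * R * R)
```

The truncation costs only about 4πR²·10⁻⁹ of area, so the estimate matches Eq. (13.251) to high accuracy.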

Although y{a} ≠ 0, the existence of an adjacent half-sphere precludes existence of any extra outer surface area (in the form of a hypothetical base) – so Eq. (13.226) turns out irrelevant in this case. When an ellipsoid is obtained by rotating an ellipse in full around one of its axes, one should resort to either function labeled as Eq. (13.56) – say, the one with the plus sign – to yield

$$ \frac{dy}{dx} = b\,\frac{-\frac{x}{a}\frac{1}{a}}{\sqrt{1-\left(\frac{x}{a}\right)^2}} = -\frac{b}{a}\,\frac{\frac{x}{a}}{\sqrt{1-\left(\frac{x}{a}\right)^2}} \tag{13.252} $$

as corresponding derivative with regard to x; see Fig. 13.5b. Insertion of Eqs. (13.56) and (13.252) transforms Eq. (13.224) to

$$ A_{s,out,ell} = 2\cdot 2\pi\int_0^a b\sqrt{1-\left(\frac{x}{a}\right)^2}\,\sqrt{1+\left(\frac{b}{a}\right)^2\frac{\left(\frac{x}{a}\right)^2}{1-\left(\frac{x}{a}\right)^2}}\,dx = 4\pi b\int_0^a\sqrt{1-\left(\frac{x}{a}\right)^2+\left(\frac{b}{a}\right)^2\left(\frac{x}{a}\right)^2}\,dx = 4\pi ab\int_0^1\sqrt{1-\left(1-\left(\frac{b}{a}\right)^2\right)\left(\frac{x}{a}\right)^2}\,d\frac{x}{a} \tag{13.253} $$

– where again 2 was inserted as done for the sphere, since only the right half of the curve in the first quadrant was considered; and algebraic manipulation took place likewise, except for setting x/a as dummy variable of integration (upon multiplication and division by a) and factoring out of (x/a)² under the square root sign. In view of Eq. (13.78) defining eccentricity, one may redo Eq. (13.253) to

$$ A_{s,out,ell,pr} = 4\pi ab\int_0^1\sqrt{1-\varepsilon^2\left(\frac{x}{a}\right)^2}\,d\frac{x}{a} \tag{13.254} $$

– where 0 < ε < 1 will apply in the case of a prolate, i.e. with revolution around the major axis of the ellipse (since b/a < 1 implies 1 − (b/a)² > 0); one may therefore define an auxiliary variable, ξ, according to

$$ \varepsilon\frac{x}{a} \equiv \sin\xi, \tag{13.255} $$

or else

$$ \frac{x}{a} = \frac{\sin\xi}{\varepsilon}, \tag{13.256} $$

that implies

$$ d\frac{x}{a} = \frac{\cos\xi}{\varepsilon}\,d\xi \tag{13.257} $$

after applying differentials to both sides. Equation (13.255) also supports

$$ \sin\xi\Big|_{\frac{x}{a}=0} = 0, \tag{13.258} $$

and thus

$$ \xi\Big|_{\frac{x}{a}=0} = \sin^{-1}0 = 0; \tag{13.259} $$

and likewise

$$ \sin\xi\Big|_{\frac{x}{a}=1} = \varepsilon \tag{13.260} $$

that is equivalent to

$$ \xi\Big|_{\frac{x}{a}=1} = \sin^{-1}\varepsilon. \tag{13.261} $$

Insertion of Eqs. (13.255), (13.257), (13.259), and (13.261) converts Eq. (13.254) to

$$ A_{s,out,ell,pr} = 4\pi ab\int_0^{\sin^{-1}\varepsilon}\sqrt{1-\sin^2\xi}\,\frac{\cos\xi}{\varepsilon}\,d\xi, \tag{13.262} $$

whereas Eq. (2.442) allows further transformation to

$$ A_{s,out,ell,pr} = 4\pi ab\int_0^{\sin^{-1}\varepsilon}\sqrt{\cos^2\xi}\,\frac{\cos\xi}{\varepsilon}\,d\xi = 4\pi\frac{ab}{\varepsilon}\int_0^{\sin^{-1}\varepsilon}\cos\xi\,\cos\xi\,d\xi = 4\pi\frac{ab}{\varepsilon}\int_0^{\sin^{-1}\varepsilon}\cos^2\xi\,d\xi; \tag{13.263} $$

the reduction formula labeled as Eq. (11.184) may now be taken advantage of, for n = 1, to write

$$ \int\cos^2\xi\,d\xi = \frac{2-1}{2}\int\cos^{2-2}\xi\,d\xi + \frac{\sin\xi\cos^{2-1}\xi}{2} = \frac{1}{2}\int d\xi + \frac{\sin\xi\cos\xi}{2} = \frac{\xi + \sin\xi\cos\xi}{2}. \tag{13.264} $$

Combination with Eq. (13.264) transforms Eq. (13.263) to

$$ A_{s,out,ell,pr} = 4\pi\frac{ab}{\varepsilon}\,\frac{\xi+\sin\xi\cos\xi}{2}\Bigg|_0^{\sin^{-1}\varepsilon} = 2\pi\frac{ab}{\varepsilon}\left(\sin^{-1}\varepsilon + \sin\left(\sin^{-1}\varepsilon\right)\cos\left(\sin^{-1}\varepsilon\right) - 0 - \sin 0\cos 0\right) = 2\pi\frac{ab}{\varepsilon}\left(\sin^{-1}\varepsilon + \varepsilon\cos\left(\sin^{-1}\varepsilon\right)\right), \tag{13.265} $$

with the aid of the fundamental theorem of trigonometry – where advantage was gained from composition of a function with its inverse, leaving the original argument unchanged (besides sin 0 = 0). On the other hand,

$$ \zeta \equiv \sin^{-1}\varepsilon \tag{13.266} $$

implies

$$ \sin\zeta = \varepsilon \tag{13.267} $$

and, consequently,

$$ \sin^2\zeta + \cos^2\zeta = \varepsilon^2 + \cos^2\zeta = 1 \tag{13.268} $$

as per the fundamental theorem of trigonometry; isolation of cos²ζ unfolds

$$ \cos^2\zeta = 1-\varepsilon^2, \tag{13.269} $$

where square roots may be taken of both sides to obtain

$$ \cos\zeta = \sqrt{1-\varepsilon^2}. \tag{13.270} $$

Combination of Eqs. (13.266) and (13.270) gives rise to

$$ \cos\left(\sin^{-1}\varepsilon\right) = \sqrt{1-\varepsilon^2}, \tag{13.271} $$

a result useful to transform Eq. (13.265) to

$$ A_{s,out,ell,pr} = 2\pi\frac{ab}{\varepsilon}\left(\sin^{-1}\varepsilon + \varepsilon\sqrt{1-\varepsilon^2}\right) = 2\pi ab\left(\sqrt{1-\varepsilon^2} + \frac{\sin^{-1}\varepsilon}{\varepsilon}\right); \tag{13.272} $$

retrieval of the original notation via Eq. (13.78) – complemented by cancellation of symmetrical terms, and then by squaring of the square root – produces

$$ A_{s,out,ell,pr} = 2\pi ab\left(\sqrt{1-\left(1-\left(\frac{b}{a}\right)^2\right)} + \frac{\sin^{-1}\sqrt{1-\left(\frac{b}{a}\right)^2}}{\sqrt{1-\left(\frac{b}{a}\right)^2}}\right) = 2\pi ab\left(\frac{b}{a} + \frac{\sin^{-1}\sqrt{1-\left(\frac{b}{a}\right)^2}}{\sqrt{1-\left(\frac{b}{a}\right)^2}}\right), \tag{13.273} $$

or equivalently

$$ A_{s,out,ell,pr} = 2\pi b^2\left(1 + \frac{\sin^{-1}\sqrt{1-\left(\frac{b}{a}\right)^2}}{\frac{b}{a}\sqrt{1-\left(\frac{b}{a}\right)^2}}\right) \tag{13.274} $$

after final dropping of a between numerator and denominator, with eventual factoring out of b/a. It should be emphasized that

$$ \lim_{a\to b} A_{s,out,ell,pr} = \lim_{\frac{b}{a}\to 1} A_{s,out,ell,pr} = 2\pi b^2\left(1 + \frac{\sin^{-1}\sqrt{1-1^2}}{1\cdot\sqrt{1-1^2}}\right) = 2\pi b^2\left(1+\frac{\sin^{-1}0}{0}\right) = 2\pi b^2\left(1+\frac{0}{0}\right) \tag{13.275} $$

stems from Eq. (13.274) upon direct application of classical theorems on limits; in attempts to remove the unknown quantity found, one may resort to l'Hôpital's rule as per Eq. (10.309) to write

$$ \lim_{\frac{b}{a}\to 1}\frac{\sin^{-1}\sqrt{1-\left(\frac{b}{a}\right)^2}}{\frac{b}{a}\sqrt{1-\left(\frac{b}{a}\right)^2}} = \lim_{\frac{b}{a}\to 1}\frac{\dfrac{-\frac{b}{a}}{\sqrt{1-\left(1-\left(\frac{b}{a}\right)^2\right)}\,\sqrt{1-\left(\frac{b}{a}\right)^2}}}{\sqrt{1-\left(\frac{b}{a}\right)^2}-\dfrac{\left(\frac{b}{a}\right)^2}{\sqrt{1-\left(\frac{b}{a}\right)^2}}} \tag{13.276} $$

– also with the aid of Eqs. (10.33), (10.177), and (10.205), via separate differentiation of numerator and denominator with regard to b/a; cancellation of common factors between numerator and denominator then unfolds

$$ \lim_{\frac{b}{a}\to 1}\frac{\sin^{-1}\sqrt{1-\left(\frac{b}{a}\right)^2}}{\frac{b}{a}\sqrt{1-\left(\frac{b}{a}\right)^2}} = \lim_{\frac{b}{a}\to 1}\frac{-\dfrac{1}{\sqrt{1-\left(\frac{b}{a}\right)^2}}}{\dfrac{1-2\left(\frac{b}{a}\right)^2}{\sqrt{1-\left(\frac{b}{a}\right)^2}}} = \lim_{\frac{b}{a}\to 1}\frac{-1}{1-2\left(\frac{b}{a}\right)^2} = \frac{-1}{1-2} = 1. \tag{13.277} $$

In view of Eq. (13.277), one may redo Eq. (13.275) as

$$ \lim_{a\to b} A_{s,out,ell,pr} = 2\pi b^2\left(1 + \lim_{\frac{b}{a}\to 1}\frac{\sin^{-1}\sqrt{1-\left(\frac{b}{a}\right)^2}}{\frac{b}{a}\sqrt{1-\left(\frac{b}{a}\right)^2}}\right) = 2\pi b^2\left(1+1\right) \tag{13.278} $$

that finally yields

$$ \lim_{a\to b} A_{s,out,ell,pr} = 4\pi b^2 = 4\pi R^2\Big|_{R=b} = A_{s,out,sph}\Big|_{R=b}, \tag{13.279} $$

with the aid of Eq. (13.251); this means that the outer area of a prolate ellipsoid, when its major axis approaches its minor counterpart, will coincide with the outer area of a sphere characterized by that radius (as anticipated).
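The closed form of Eq. (13.274) can be checked against the integral of Eq. (13.254) numerically; a minimal sketch (example semi-axes, prolate case a > b):

```python
from math import pi, sqrt, asin

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

a_ax, b_ax = 2.0, 1.0                        # a > b: prolate ellipsoid
eps = sqrt(1 - (b_ax / a_ax) ** 2)           # eccentricity, as per Eq. (13.78)

# Eq. (13.254): 4*pi*a*b * integral of sqrt(1 - eps^2 u^2) over u = x/a in [0, 1].
integral = 4 * pi * a_ax * b_ax * simpson(lambda u: sqrt(1 - (eps * u) ** 2), 0.0, 1.0)

# Eq. (13.274): closed form in terms of b/a and arcsin of the eccentricity.
closed = 2 * pi * b_ax ** 2 * (1 + asin(eps) / ((b_ax / a_ax) * eps))

print(integral, closed)
```

Both agree to quadrature precision, confirming the substitution work between Eqs. (13.255) and (13.274).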

Should revolution of the elliptic curve take place around the minor axis, an oblate would be generated instead; this corresponds to switching the minor and major axes in advance, so that Eq. (13.253) still applies – but b/a > 1 in this case, so 1 − (b/a)² < 0. Hence, one should to advantage redo Eq. (13.253) to

$$ A_{s,out,ell,ob} = 4\pi ab\int_0^1\sqrt{1+\left(\left(\frac{b}{a}\right)^2-1\right)\left(\frac{x}{a}\right)^2}\,d\frac{x}{a}, \tag{13.280} $$

where the factor preceding (x/a)² being positive suggests a variable change of the type

$$ \sqrt{\left(\frac{b}{a}\right)^2-1}\,\frac{x}{a} = \eta\,\frac{x}{a} \equiv \tan\xi; \tag{13.281} $$

here η stands for an auxiliary constant, abiding to

$$ \eta \equiv \sqrt{\left(\frac{b}{a}\right)^2-1} \tag{13.282} $$

and inspired by the definition of ε as per Eq. (13.78). Upon isolation of x/a, Eq. (13.281) becomes

$$ \frac{x}{a} = \frac{\tan\xi}{\eta}, \tag{13.283} $$

and differentiation of both sides unfolds

$$ d\frac{x}{a} = \frac{1}{\eta}\sec^2\xi\,d\xi \tag{13.284} $$

with the aid of Eq. (10.143). If one isolates ξ instead, then Eq. (13.281) appears as

$$ \xi = \tan^{-1}\left(\eta\,\frac{x}{a}\right), \tag{13.285} $$

thus supporting

$$ \xi\Big|_{\frac{x}{a}=0} = \tan^{-1}0 = 0 \tag{13.286} $$

after setting x/a equal to zero – and, similarly,

$$ \xi\Big|_{\frac{x}{a}=1} = \tan^{-1}\eta \tag{13.287} $$

after setting x/a equal to unity; upon insertion of Eqs. (13.281), (13.284), (13.286), and (13.287), one gets

$$ A_{s,out,ell,ob} = 4\pi ab\int_0^{\tan^{-1}\eta}\sqrt{1+\tan^2\xi}\,\frac{1}{\eta}\sec^2\xi\,d\xi = 4\pi\frac{ab}{\eta}\int_0^{\tan^{-1}\eta}\sqrt{1+\tan^2\xi}\,\sec^2\xi\,d\xi \tag{13.288} $$

from Eq. (13.280). Recalling Eq. (2.471), one may redo Eq. (13.288) to

$$ A_{s,out,ell,ob} = 4\pi\frac{ab}{\eta}\int_0^{\tan^{-1}\eta}\sqrt{\sec^2\xi}\,\sec^2\xi\,d\xi = 4\pi\frac{ab}{\eta}\int_0^{\tan^{-1}\eta}\sec\xi\,\sec^2\xi\,d\xi = 4\pi\frac{ab}{\eta}\int_0^{\tan^{-1}\eta}\sec^3\xi\,d\xi. \tag{13.289} $$

Integration by parts following Eq. (11.39) allows transformation of the integral in the right-hand side of Eq. (13.289) to

$$ \int\sec^3\xi\,d\xi = \int\sec^2\xi\,\sec\xi\,d\xi = \tan\xi\sec\xi - \int\tan\xi\left(\tan\xi\sec\xi\right)d\xi = \tan\xi\sec\xi - \int\tan^2\xi\,\sec\xi\,d\xi, \tag{13.290} $$

where Eqs. (10.143) and (10.153) were taken on board; after revisiting Eq. (2.471) as

$$ \tan^2\xi = \sec^2\xi - 1, \tag{13.291} $$

Eq. (13.290) may be further transformed to

$$ \int\sec^3\xi\,d\xi = \tan\xi\sec\xi - \int\left(\sec^2\xi-1\right)\sec\xi\,d\xi = \tan\xi\sec\xi - \int\left(\sec^3\xi - \sec\xi\right)d\xi = \tan\xi\sec\xi - \int\sec^3\xi\,d\xi + \int\sec\xi\,d\xi, \tag{13.292} $$

after factoring in sec ξ at the kernel and splitting the integral as per Eq. (11.22). Once integrals alike are pooled together, Eq. (13.292) becomes

$$ 2\int\sec^3\xi\,d\xi = \tan\xi\sec\xi + \int\sec\xi\,d\xi, \tag{13.293} $$

or else

$$ \int\sec^3\xi\,d\xi = \frac{1}{2}\left(\tan\xi\sec\xi + \int\sec\xi\,d\xi\right) \tag{13.294} $$

upon division of both sides by 2; on the other hand, the integral in the right-hand side is given by Eq. (11.72), so one attains

$$ \int\sec^3\xi\,d\xi = \frac{1}{2}\left(\tan\xi\sec\xi + \ln\left|\tan\xi+\sec\xi\right|\right). \tag{13.295} $$

In view of Eqs. (11.160) and (13.295), one may redo Eq. (13.289) to

$$ A_{s,out,ell,ob} = 4\pi\frac{ab}{\eta}\,\frac{1}{2}\left(\tan\xi\sec\xi + \ln\left|\tan\xi+\sec\xi\right|\right)\Bigg|_0^{\tan^{-1}\eta} = 2\pi\frac{ab}{\eta}\left(\eta\sec\left(\tan^{-1}\eta\right) + \ln\left|\eta+\sec\left(\tan^{-1}\eta\right)\right| - \tan 0\sec 0 - \ln\left|\tan 0+\sec 0\right|\right) = 2\pi\frac{ab}{\eta}\left(\eta\sec\left(\tan^{-1}\eta\right) + \ln\left(\eta+\sec\left(\tan^{-1}\eta\right)\right)\right) \tag{13.296} $$

via cancellation of tangent through composition with its inverse, and realization that tan 0 = 0 and sec 0 = 1/cos 0 = 1/1 = 1. If an auxiliary variable, ω, is defined as

$$ \omega \equiv \tan^{-1}\eta, \tag{13.297} $$

one immediately concludes that

$$ \tan\omega = \eta; \tag{13.298} $$

recalling Eq. (2.471) again, one may combine it with Eq. (13.298) to write

$$ \sec^2\omega = 1 + \tan^2\omega = 1 + \eta^2, \tag{13.299} $$

so sec ω will eventually look like

$$ \sec\omega = \sqrt{1+\eta^2} \tag{13.300} $$

after taking square roots of both sides – or, after inserting Eq. (13.297),

$$ \sec\left(\tan^{-1}\eta\right) = \sqrt{1+\eta^2}. \tag{13.301} $$

Equation (13.301) allows transformation of Eq. (13.296) to

$$ A_{s,out,ell,ob} = 2\pi\frac{ab}{\eta}\left(\eta\sqrt{1+\eta^2} + \ln\left(\eta+\sqrt{1+\eta^2}\right)\right) = 2\pi ab\left(\sqrt{1+\eta^2} + \frac{\ln\left(\eta+\sqrt{1+\eta^2}\right)}{\eta}\right), \tag{13.302} $$

with 1/η meanwhile factored in – where Eq. (13.282) permits recovery of the original notation as

$$ A_{s,out,ell,ob} = 2\pi ab\left(\sqrt{1+\left(\frac{b}{a}\right)^2-1} + \frac{\ln\left(\sqrt{\left(\frac{b}{a}\right)^2-1}+\sqrt{1+\left(\frac{b}{a}\right)^2-1}\right)}{\sqrt{\left(\frac{b}{a}\right)^2-1}}\right) = 2\pi ab\left(\frac{b}{a} + \frac{\ln\left(\frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1}\right)}{\sqrt{\left(\frac{b}{a}\right)^2-1}}\right), \tag{13.303} $$

together with cancellation of unity with its negative under the square root signs, and composition of square power with square root; b/a may finally be factored out to get

$$ A_{s,out,ell,ob} = 2\pi b^2\left(1 + \frac{\ln\left(\frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1}\right)}{\frac{b}{a}\sqrt{\left(\frac{b}{a}\right)^2-1}}\right), \tag{13.304} $$

where b/a > 0, besides (b/a)² − 1 > 0 for an oblate ellipsoid, were taken advantage of to remove the modulus sign. When a gets closer and closer to b, Eq. (13.304) is driven by

$$ \lim_{a\to b} A_{s,out,ell,ob} = \lim_{\frac{b}{a}\to 1} A_{s,out,ell,ob} = 2\pi b^2\left(1 + \frac{\ln\left(1+\sqrt{1^2-1}\right)}{1\cdot\sqrt{1^2-1}}\right) = 2\pi b^2\left(1+\frac{\ln\left(1+0\right)}{0}\right) = 2\pi b^2\left(1+\frac{0}{0}\right) \tag{13.305} $$

– obtained after blindly resorting to Eq. (9.108); to avoid appearance of the unknown quantity 0/0, one should apply l'Hôpital's rule to the second term in parenthesis of Eq. (13.304) as

$$ \lim_{\frac{b}{a}\to 1}\frac{\ln\left(\frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1}\right)}{\frac{b}{a}\sqrt{\left(\frac{b}{a}\right)^2-1}} = \lim_{\frac{b}{a}\to 1}\frac{\dfrac{1+\dfrac{\frac{b}{a}}{\sqrt{\left(\frac{b}{a}\right)^2-1}}}{\frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1}}}{\sqrt{\left(\frac{b}{a}\right)^2-1}+\dfrac{\left(\frac{b}{a}\right)^2}{\sqrt{\left(\frac{b}{a}\right)^2-1}}} \tag{13.306} $$

via separate differentiation of numerator and denominator with regard to b/a; cancellation of common factors between numerator and denominator, followed by replacement of b/a by unity as per the classical theorems on limits, unfolds

$$ \lim_{\frac{b}{a}\to 1}\frac{\ln\left(\frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1}\right)}{\frac{b}{a}\sqrt{\left(\frac{b}{a}\right)^2-1}} = \lim_{\frac{b}{a}\to 1}\frac{\dfrac{1}{\sqrt{\left(\frac{b}{a}\right)^2-1}}}{\dfrac{2\left(\frac{b}{a}\right)^2-1}{\sqrt{\left(\frac{b}{a}\right)^2-1}}} = \lim_{\frac{b}{a}\to 1}\frac{1}{2\left(\frac{b}{a}\right)^2-1} = \frac{1}{2\cdot 1^2-1} = 1. \tag{13.307} $$

Upon insertion of Eq. (13.307) in Eq. (13.304), one may replace Eq. (13.305) by

$$ \lim_{a\to b} A_{s,out,ell,ob} = 2\pi b^2\left(1 + \lim_{\frac{b}{a}\to 1}\frac{\ln\left(\frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1}\right)}{\frac{b}{a}\sqrt{\left(\frac{b}{a}\right)^2-1}}\right) = 2\pi b^2\left(1+1\right), \tag{13.308} $$

which may be rephrased as simply

$$ \lim_{a\to b} A_{s,out,ell,ob} = 4\pi b^2 = 4\pi R^2\Big|_{R=b} = A_{s,out,sph}\Big|_{R=b}, \tag{13.309} $$

again with the aid of Eq. (13.251); as happened with a prolate ellipsoid, one concludes on the coincidence of the outer surface area of an oblate ellipsoid, when its minor and major axes tend to each other, with that of a sphere with radius equal to said common axis.

The result labeled as Eq. (13.304) has, however, been expressed in a number of alternative ways; for instance, the argument of its logarithmic function may be rewritten as

$$ \frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1} = \frac{\left(\frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1}\right)\left(\frac{b}{a}-\sqrt{\left(\frac{b}{a}\right)^2-1}\right)}{\frac{b}{a}-\sqrt{\left(\frac{b}{a}\right)^2-1}} = \frac{\left(\frac{b}{a}\right)^2-\left(\left(\frac{b}{a}\right)^2-1\right)}{\frac{b}{a}-\sqrt{\left(\frac{b}{a}\right)^2-1}} = \frac{1}{\frac{b}{a}-\sqrt{\left(\frac{b}{a}\right)^2-1}} = \sqrt{\frac{\frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1}}{\frac{b}{a}-\sqrt{\left(\frac{b}{a}\right)^2-1}}}, \tag{13.310} $$

where advantage was taken of the product of two conjugate binomials being equal to the difference of the squares of their terms, complemented by straightforward algebraic rearrangement. Equation (13.310) translates into the logarithmic domain as

$$ \ln\left(\frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1}\right) = \frac{1}{2}\ln\frac{\frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1}}{\frac{b}{a}-\sqrt{\left(\frac{b}{a}\right)^2-1}}; \tag{13.311} $$

reformulation of Eq. (13.304) is then possible to

$$ A_{s,out,ell,ob} = 2\pi b^2 + \frac{\pi ab}{\sqrt{\left(\frac{b}{a}\right)^2-1}}\ln\frac{\frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1}}{\frac{b}{a}-\sqrt{\left(\frac{b}{a}\right)^2-1}} = 2\pi b^2 + \frac{\pi a^2}{\sqrt{1-\left(\frac{a}{b}\right)^2}}\ln\frac{\frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1}}{\frac{b}{a}-\sqrt{\left(\frac{b}{a}\right)^2-1}}, \tag{13.312} $$

after lumping 2πb², ½, and the reciprocal of b/a, and multiplying both numerator and denominator of the second term by a/b. Another possibility comes at the expense of the relationship labeled as Eq. (2.592), which will read

$$ \tanh^{-1}\sqrt{1-\left(\frac{a}{b}\right)^2} = \frac{1}{2}\ln\frac{1+\sqrt{1-\left(\frac{a}{b}\right)^2}}{1-\sqrt{1-\left(\frac{a}{b}\right)^2}} = \frac{1}{2}\ln\frac{\frac{b}{a}+\frac{b}{a}\sqrt{1-\left(\frac{a}{b}\right)^2}}{\frac{b}{a}-\frac{b}{a}\sqrt{1-\left(\frac{a}{b}\right)^2}} = \frac{1}{2}\ln\frac{\frac{b}{a}+\sqrt{\left(\frac{b}{a}\right)^2-1}}{\frac{b}{a}-\sqrt{\left(\frac{b}{a}\right)^2-1}}, \tag{13.313} $$

after replacing x by √(1 − (a/b)²), multiplying both numerator and denominator under the logarithm sign by b/a, and finally taking b/a into the square root sign; insertion of Eq. (13.313) supports transformation of Eq. (13.312) to

$$ A_{s,out,ell,ob} = 2\pi b^2 + \frac{2\pi a^2}{\sqrt{1-\left(\frac{a}{b}\right)^2}}\tanh^{-1}\sqrt{1-\left(\frac{a}{b}\right)^2} = 2\pi b^2\left(1 + \frac{\left(\frac{a}{b}\right)^2}{\sqrt{1-\left(\frac{a}{b}\right)^2}}\tanh^{-1}\sqrt{1-\left(\frac{a}{b}\right)^2}\right), \tag{13.314} $$

where 2πb² was meanwhile factored out.
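Likewise, the hyperbolic-tangent form of Eq. (13.314) can be checked against the integral form, Eq. (13.280); a minimal sketch (example semi-axes, oblate case b > a):

```python
from math import pi, sqrt, atanh

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

a_ax, b_ax = 1.0, 2.0                         # b > a after swapping axes: oblate ellipsoid
eta = sqrt((b_ax / a_ax) ** 2 - 1)            # auxiliary constant, Eq. (13.282)

# Eq. (13.280): 4*pi*a*b * integral of sqrt(1 + eta^2 u^2) over u = x/a in [0, 1].
integral = 4 * pi * a_ax * b_ax * simpson(lambda u: sqrt(1 + (eta * u) ** 2), 0.0, 1.0)

# Eq. (13.314): closed form via the inverse hyperbolic tangent.
e = sqrt(1 - (a_ax / b_ax) ** 2)
closed = 2 * pi * b_ax ** 2 + 2 * pi * a_ax ** 2 * atanh(e) / e

print(integral, closed)
```

Agreement between the two confirms the chain of substitutions running from Eq. (13.281) through Eq. (13.313).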

For the sake of completeness, one should mention the outer area of a parallelepiped – for its practical relevance; its not being a solid of revolution precludes, however, use of Eqs. (13.224) and (13.226). The geometric nature of the said solid enforces existence of three sets of parallel faces, characterized by as many as possible combinations of the characteristic rectangle side lengths, i.e. a, b, and c; therefore, the outer surface reads

$$ A_{out,par} = 2ab + 2ac + 2bc, \tag{13.315} $$

as a result of calculating the area of a rectangle twice for every combination of subsets of two elements from {a,b,c}. Upon factoring 2 out, Eq. (13.315) becomes

$$ A_{out,par} = 2\left(ab+ac+bc\right); \tag{13.316} $$

for a cube – characterized by a = b = c – Eq. (13.316) becomes

$$ A_{out,cub} = 2\left(aa+aa+aa\right) = 2\left(a^2+a^2+a^2\right) = 2\left(3a^2\right), \tag{13.317} $$

which breaks down to

$$ A_{out,cub} = 6a^2 \tag{13.318} $$

as a particular case of a parallelepiped. Finally, it is instructive to revisit Eq. (13.224) as

$$ A_{s,out} = 2\pi\int_a^b y\{x\}\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx = 2\pi\,\frac{\displaystyle\int_a^b y\{x\}\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx}{\displaystyle\int_a^b\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx}\,\int_a^b\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx, \tag{13.319} $$

following multiplication and division by ∫ₐᵇ √(1 + (dy/dx)²) dx ≡ L; Eq. (13.319) may be rewritten as

$$ A_{s,out} = 2\pi\,\bar{y}\,L, \tag{13.320} $$

at the expense of Eq. (13.90) – as long as ȳ denotes the ordinate of the centroid of curve y{x}, i.e.

$$ \bar{y} \equiv \frac{\displaystyle\int_a^b y\{x\}\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx}{\displaystyle\int_a^b\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx}. \tag{13.321} $$

After recalling Eq. (13.88), one may reformulate Eq. (13.321) to

$$ \bar{y} = \frac{\displaystyle\int_L y\{s\}\,ds}{\displaystyle\int_L ds}, \tag{13.322} $$

thus indicating that ȳ is but the average ordinate calculated along the directrix; Eq. (13.320) has been classically known as first Pappus' centroid theorem – in honor of Pappus of Alexandria (c. 290–350 CE), one of the last great Alexandrian mathematicians of Antiquity.
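The first Pappus centroid theorem, Eq. (13.320), is conveniently tested on the semicircular arc that generates a sphere: the centroid ordinate of that arc is 2R/π and its length is πR, so 2π·ȳ·L should reproduce Eq. (13.251). A minimal sketch (example radius), computing ȳ via Eq. (13.322) with the parametrization x = R cos θ, y = R sin θ, ds = R dθ:

```python
from math import pi, sin

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

R = 1.5
L = pi * R                                                # arc length of the semicircle

# Eq. (13.322): average ordinate along the arc, with y = R sin(theta) and ds = R dtheta.
ybar = simpson(lambda t: R * sin(t) * R, 0.0, pi) / L

# Eq. (13.320): first Pappus centroid theorem.
A = 2 * pi * ybar * L

print(ybar, 2 * R / pi, A, 4 * pi * R * R)
```

The computed ȳ matches 2R/π, and the resulting area matches 4πR² of Eq. (13.251).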

13.8 Volume of Revolution Solid

In the case of a solid of revolution – i.e. generated by a given surface upon full rotation around the horizontal axis – cylindrical symmetry will eventually result; by definition, its volume abides to

$$ V = \int_V dV = \iiint_{x,y,z} dx\,dy\,dz = \iiint_{\theta,r,x} r\,d\theta\,dr\,dx, \tag{13.323} $$

where polar coordinates are normally preferred to rectangular coordinates due to the aforementioned symmetry. Here r dθ represents increment in angular coordinate (equivalent to dz), dr denotes increment in radial coordinate (equivalent to dy), and dx represents increment in axial coordinate; this construction is outlined in Fig. 13.10. If the aforementioned surface is bounded by a curve described by equation y ≡ y{x} on the x0y plane, with x spanning [a,b], then Eq. (13.323) may be rewritten as

$$ V = \int_{x=a}^{b}\int_{r=0}^{y\{x\}}\int_{\theta=0}^{2\pi} r\,d\theta\,dr\,dx. \tag{13.324} $$

The kernel functionality does not explicitly involve θ (besides its differential form dθ), so Eq. (13.324) may be simplified to

$$ V = \int_0^{2\pi}d\theta\int_a^b\int_0^{y\{x\}} r\,dr\,dx \tag{13.325} $$

with the aid of Eq. (11.249).

Figure 13.10 Graphical representation, (a) sidewise and (b) frontwise, of plane surface with overall area A, bounded by plane curve described by radial coordinate y ≡ y{x} when axial coordinate x varies from a to b, (b) undergoing rotation along angular coordinate θ, of amplitude dθ spanning [0,2π].

Application of the first fundamental theorem of integral calculus to the outer integral (in θ) and the inner integral (in r) of Eq. (13.325) produces

$$ V = \theta\Big|_0^{2\pi}\int_a^b\frac{r^2}{2}\Bigg|_0^{y\{x\}}dx = 2\pi\int_a^b\frac{1}{2}y^2\{x\}\,dx, \tag{13.326} $$

and eventually

$$ V = \pi\int_a^b y^2\{x\}\,dx \tag{13.327} $$

that entails a single integral along the x-coordinate. A possible application of the above concept pertains to a cone – obtained via full rotation of an isosceles triangle around the horizontal axis (as seen previously), with side descriptor conveyed by Eq. (13.239); insertion of this equation transforms Eq. (13.327) to

$$V_{con} = \pi\int_0^{h_1}\left(\frac{h_2}{2}-\frac{h_2}{2h_1}x\right)^2 dx \tag{13.328}$$

Recalling Eqs. (11.160) and (11.178), one may redo Eq. (13.328) to

$$V_{con} = -\pi\,\frac{2h_1}{3h_2}\left.\left(\frac{h_2}{2}-\frac{h_2}{2h_1}x\right)^3\right|_0^{h_1} = -\frac{2\pi}{3}\frac{h_1}{h_2}\left(\left(\frac{h_2}{2}-\frac{h_2}{2}\right)^3-\left(\frac{h_2}{2}-0\right)^3\right) = \frac{2\pi}{3}\frac{h_1}{h_2}\left(\frac{h_2}{2}\right)^3 = \frac{2\pi}{3}\frac{h_1}{h_2}\frac{h_2^3}{8}, \tag{13.329}$$

along a number of self-explanatory algebraic steps; one eventually ends up with

$$V_{con} = \frac{\pi}{12}h_1 h_2^2 = \frac{\pi}{3}h_1\frac{h_2^2}{4} = \frac{\pi}{3}\left(\frac{h_2}{2}\right)^2 h_1, \tag{13.330}$$

after lumping factors alike and performing some algebraic rearrangement. Remember that the half-width of the generating triangle, h₂/2, corresponds to the cone radius R, so Eq. (13.330) may be rewritten as

$$V_{con} = \frac{\pi}{3}R^2 h_1 \tag{13.331}$$

– a more widespread functional form for the cone volume. Consider now a sphere – bounded by a line described by Eq. (13.107), which supports transformation of Eq. (13.327) to

$$V_{sph} = 2\pi\int_0^R\left(\sqrt{R^2-x^2}\right)^2 dx = 2\pi\int_0^R\left(R^2-x^2\right)dx; \tag{13.332}$$

[0,R] covering only half of the full integration range accounts for 2 preceding the integral, in view of the underlying symmetry of y{x} with regard to the vertical axis. Decomposition of the integral in Eq. (13.332), in agreement with Eq. (11.102), unfolds

$$V_{sph} = 2\pi\left(\int_0^R R^2\,dx - \int_0^R x^2\,dx\right) = 2\pi\left(R^2\left.x\right|_0^R - \left.\frac{x^3}{3}\right|_0^R\right) = 2\pi\left(R^3-\frac{R^3}{3}\right) = \frac{2}{3}\,2\pi R^3 \tag{13.333}$$

together with the fundamental theorem of integral calculus and the first entry in Table 11.1; Eq. (13.333) finally gives

$$V_{sph} = \frac{4}{3}\pi R^3, \tag{13.334}$$

after having factored R³ out. To calculate the volume of a prolate ellipsoid, one should insert Eq. (13.56) to obtain

$$V_{ell,pr} = 2\pi\int_0^a\left(b\sqrt{1-\left(\frac{x}{a}\right)^2}\right)^2 dx = 2\pi b^2\int_0^a\left(1-\left(\frac{x}{a}\right)^2\right)dx \tag{13.335}$$

from Eq. (13.327) – where again the range of integration spanning from 0 up to the major axis a was taken twice, thus accounting for the whole integration range [−a,a], while taking advantage of the intrinsic symmetry of the ellipse about its vertical (minor) axis; the integral may be split again as per Eq. (11.102) to yield

$$V_{ell,pr} = 2\pi b^2\left(\int_0^a dx - \int_0^a\left(\frac{x}{a}\right)^2 dx\right) = 2\pi b^2\left(\int_0^a dx - a\int_0^1\left(\frac{x}{a}\right)^2 d\frac{x}{a}\right), \tag{13.336}$$

where multiplication and division of the kernel of the second integral by a allowed change of its integration variable from x to x/a – and, concomitantly, of integration limits from 0 and a to 0 and 1, respectively. The fundamental theorem of integral calculus may again be invoked to write

$$V_{ell,pr} = 2\pi b^2\left(\left.x\right|_0^a - a\left.\frac{(x/a)^3}{3}\right|_0^1\right) = 2\pi b^2\left(a - a\frac{1^3}{3}\right) = 2\pi b^2\left(a-\frac{a}{3}\right) = \frac{2}{3}\,2\pi b^2 a \tag{13.337}$$

based on Eq. (13.336), and eventually

$$V_{ell,pr} = \frac{4}{3}\pi a b^2 \tag{13.338}$$

once factors have been collapsed. As expected, Eq. (13.338) unfolds

$$\lim_{a\to b} V_{ell,pr} = \frac{4}{3}\pi b\,b^2 = \frac{4}{3}\pi b^3 = \left.\frac{4}{3}\pi R^3\right|_{R=b} = \left.V_{sph}\right|_{R=b} \tag{13.339}$$

in view of Eq. (13.334) – consistent with a prolate ellipsoid degenerating to a sphere when its major axis approaches its minor axis. If the ellipsoid is generated by rotation of an ellipse about its minor axis, an oblate ellipsoid will be at stake; one should accordingly revisit Eq. (13.56) after exchanging a and b, i.e.

$$y = a\sqrt{1-\left(\frac{x}{b}\right)^2} \tag{13.340}$$

Insertion of Eq. (13.340) prompts conversion of Eq. (13.327) to

$$V_{ell,ob} = 2\pi\int_0^b\left(a\sqrt{1-\left(\frac{x}{b}\right)^2}\right)^2 dx = 2\pi a^2\int_0^b\left(1-\left(\frac{x}{b}\right)^2\right)dx, \tag{13.341}$$

which resembles Eq. (13.335); splitting of the integral is then in order, viz.

$$V_{ell,ob} = 2\pi a^2\left(\int_0^b dx - \int_0^b\left(\frac{x}{b}\right)^2 dx\right) = 2\pi a^2\left(\int_0^b dx - b\int_0^1\left(\frac{x}{b}\right)^2 d\frac{x}{b}\right), \tag{13.342}$$

along with redefinition of the second integration variable to x/b. Upon calculation of the outstanding integrals, Eq. (13.342) yields

$$V_{ell,ob} = 2\pi a^2\left(\left.x\right|_0^b - b\left.\frac{(x/b)^3}{3}\right|_0^1\right) = 2\pi a^2\left(b - b\frac{1^3}{3}\right) = 2\pi a^2\left(b-\frac{b}{3}\right) = \frac{2}{3}\,2\pi a^2 b \tag{13.343}$$

that is equivalent to

$$V_{ell,ob} = \frac{4}{3}\pi a^2 b; \tag{13.344}$$

recalling (x/a)² + (y/b)² + (z/c)² = 1 as general descriptor of a (three-dimensional) ellipsoid, c = a justifies a appearing twice as factor and b once – in much the same way a appeared once and b twice as factors in Eq. (13.338) when a prolate ellipsoid was at stake, on account of c = b. Once more, Eq. (13.344) degenerates to

$$\lim_{a\to b} V_{ell,ob} = \frac{4}{3}\pi b^2\,b = \frac{4}{3}\pi b^3 = \left.\frac{4}{3}\pi R^3\right|_{R=b} = \left.V_{sph}\right|_{R=b} \tag{13.345}$$

with the aid of Eq. (13.334), so the limiting volume of an oblate ellipsoid is that of a sphere when its major and minor axes tend to each other; therefore, prolate and oblate shapes become indistinguishable in terms of volume when a → b – see Eq. (13.345) vis-à-vis Eq. (13.339). The simplest solid of revolution is, however, the cylinder; upon retrieval of Eq. (13.29), one obtains

$$V_{cyl} = \pi\int_0^{h_2} h_1^2\,dx = \pi h_1^2\left.x\right|_0^{h_2} \tag{13.346}$$

directly from Eq. (13.327) – provided that x spans [0,h₂] as per Fig. 13.3b. Equation (13.346) readily leads to

$$V_{cyl} = \pi h_1^2 h_2 \tag{13.347}$$

– where h₁ ≡ R unfolds

$$V_{cyl} = \pi R^2 h_2 \tag{13.348}$$

as (more frequently used) alternative formula.
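The disc-method formula of Eq. (13.327) underlies all of the volumes derived above, and lends itself to a quick numerical sanity check; the sketch below is a minimal illustration – the function name `volume_of_revolution`, the test values, and the midpoint-rule step count are illustrative assumptions, not from the text.

```python
import math

def volume_of_revolution(y, a, b, n=100_000):
    # Disc method, Eq. (13.327): V = pi * integral of y{x}^2 dx over [a, b],
    # approximated by a midpoint Riemann sum with n slices.
    h = (b - a) / n
    return math.pi * sum(y(a + (i + 0.5) * h) ** 2 for i in range(n)) * h

R, h1 = 1.7, 2.3   # radius and height: arbitrary test values
cone     = volume_of_revolution(lambda x: R * (1 - x / h1), 0.0, h1)
cylinder = volume_of_revolution(lambda x: R, 0.0, h1)
sphere   = volume_of_revolution(lambda x: math.sqrt(max(R**2 - x**2, 0.0)), -R, R)
a_, b_ = 2.0, 1.0  # semi-axes of the generating ellipse
prolate  = volume_of_revolution(lambda x: b_ * math.sqrt(max(1.0 - (x / a_)**2, 0.0)), -a_, a_)
```

Each value reproduces the corresponding closed form – Eqs. (13.331), (13.347), (13.334), and (13.338) – to within the quadrature error.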


To finalize derivation of the formulae aimed at calculating the volume of the set of solids with most applicability, one should address the parallelepiped – which, not being a solid of revolution, requires the most basic relationship, labeled as Eq. (13.323), viz.

$$V_{par} = \int_{z=0}^{c}\int_{y=0}^{b}\int_{x=0}^{a} dx\,dy\,dz; \tag{13.349}$$

here a, b, and c denote the lengths of the three distinct sides. The kernel reduces to unity, which allows the triple integral in Eq. (13.349) to be expressed as the product of three simple integrals with the aid of Fubini's (weak) theorem, according to

$$V_{par} = \int_0^a dx\int_0^b dy\int_0^c dz = \left.x\right|_0^a\,\left.y\right|_0^b\,\left.z\right|_0^c; \tag{13.350}$$

Eq. (13.350) becomes merely

$$V_{par} = abc \tag{13.351}$$

– and the special situation of all sides coinciding in length (i.e. c = b = a) degenerates to

$$V_{cub} = a^3, \tag{13.352}$$

pertaining to a cube. Note that, in the cases of practical interest, g₁{x} = 0 – whereas g₂{x} = y{x} is a preferred notation; hence, Eq. (13.190) may be reformulated as

$$A = \int_a^b y\{x\}\,dx \tag{13.353}$$

On the other hand, the ordinate of the centroid of the surface lower bounded by the x-axis and upper bounded by y{x} is, by definition, given by

$$\bar y = \frac{\displaystyle\int_a^b\int_0^{y\{x\}} y\,dy\,dx}{\displaystyle\int_a^b\int_0^{y\{x\}} dy\,dx} \tag{13.354}$$

where integration may proceed as

$$\bar y = \frac{\displaystyle\int_a^b\left.\frac{y^2}{2}\right|_0^{y\{x\}} dx}{\displaystyle\int_a^b\left.y\right|_0^{y\{x\}} dx} = \frac{1}{2}\,\frac{\displaystyle\int_a^b y^2\{x\}\,dx}{\displaystyle\int_a^b y\{x\}\,dx}; \tag{13.355}$$

the integral in the numerator may, in turn, be isolated as

$$\int_a^b y^2\{x\}\,dx = 2\bar y\int_a^b y\{x\}\,dx \tag{13.356}$$

Insertion of Eq. (13.356) supports transformation of Eq. (13.327) to


$$V = \pi\,2\bar y\int_a^b y\{x\}\,dx, \tag{13.357}$$

whereas Eq. (13.353) permits further simplification to

$$V = 2\pi\bar y A \tag{13.358}$$

Therefore, the volume of a solid of revolution is equal to the area of the generating surface, A, multiplied by the distance, 2πȳ, traveled by its center of gravity (or centroid). This constitutes, in essence, Pappus' second centroid theorem – applicable to volumes of solids of revolution; and with functional form identical to that of Eq. (13.320), following replacement of A_s,out by V, and L by A.
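Pappus' second theorem can itself be checked numerically for the half-disc that generates a sphere upon full rotation; the quadrature below is a minimal sketch, with the grid size and test radius as illustrative assumptions.

```python
import math

# Half-disc of radius R above the x-axis: y{x} = sqrt(R^2 - x^2) on [-R, R].
R, n = 1.3, 100_000
h = 2 * R / n
xs = [-R + (i + 0.5) * h for i in range(n)]

A     = sum(math.sqrt(R**2 - x**2) for x in xs) * h    # area, Eq. (13.353)
y_bar = sum((R**2 - x**2) / 2 for x in xs) * h / A     # centroid ordinate, Eq. (13.355)
V     = 2 * math.pi * y_bar * A                        # Pappus, Eq. (13.358)
```

The product 2πȳA reproduces the sphere volume of Eq. (13.334), 4πR³/3.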


14 Transforms

Integral transforms, namely Laplace's transforms, and differential transforms, specifically Legendre's transforms, offer a quite simple and elegant method of solving linear (or linearized) differential equations, and the corresponding initial and boundary value problems. The former can be used as a tool to transform a given ordinary differential equation into a simpler, subsidiary algebraic equation in an auxiliary domain, to solve said equation by purely algebraic manipulation – and, finally, to transform the said solution back to the initial physical domain. The latter has been utilized to handle differentials – as germane for a number of (related) thermodynamic state functions.

14.1 Laplace’s Transform 14.1.1

Definition

Laplace’s transform of a function in t is defined as ∞

f t

e −st f t dt

14 1

0

– and is often denoted by f s

14 2

f t

for simplicity; it transforms a function of real variable t into a function of s, a complex parameter. The reverse of said transform abides to −1

f s

f t ,

14 3

yet its calculation normally poses more problems than proceeding in the direct mode – as conveyed by Eq. (14.1) in the first place. Note, however, that manipulation in Laplace's domain is but an intermediate step in attempts to find a solution for a differential equation in the t-domain; time is a particularly suitable independent variable thereto, as it varies from 0 to ∞, and thus matches the integration limits in Eq. (14.1). Laplace's transform of unity reads

$$\mathcal{L}\{1\} \equiv \int_0^\infty e^{-st}\,1\,dt = \int_0^\infty e^{-st}\,dt = \left.\frac{e^{-st}}{-s}\right|_0^\infty = \frac{1}{s}\left(\left.e^{-st}\right|_0 - \left.e^{-st}\right|_\infty\right) = \frac{1}{s}\left(1-0\right) = \frac{1}{s}, \tag{14.4}$$

obtained via direct application of Eqs. (11.160) and (14.1); this result is plotted in the second entry of Table 14.1. Note the conversion of a step curve in the time domain to a hyperbola in Laplace's domain; the former jumps at time 0⁺, while the latter is not defined at all for s ≤ 0 – while tending to the vertical axis as s → 0⁺, and to the horizontal axis as s → ∞.

Mathematics for Enzyme Reaction Kinetics and Reactor Performance, First Edition. F. Xavier Malcata. © 2019 John Wiley & Sons Ltd. Published 2019 by John Wiley & Sons Ltd.
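Eq. (14.4) is easy to confirm by brute-force quadrature of Eq. (14.1); in the sketch below, the helper name `laplace_num` and the truncation parameters are illustrative assumptions rather than anything prescribed by the text.

```python
import math

def laplace_num(f, s, T=30.0, n=100_000):
    # Midpoint-rule approximation of the Laplace integral, Eq. (14.1);
    # the factor e^{-st} makes the tail beyond t = T negligible for s of order 1.
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h) for i in range(n)) * h

# Unit step: the transform should equal 1/s for any s > 0.
vals = {s: laplace_num(lambda t: 1.0, s) for s in (0.5, 1.0, 2.0)}
```

The same helper reappears in later checks of the remaining entries of Table 14.1.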

When a power is at stake, one obtains

$$\mathcal{L}\{t^n\} \equiv \int_0^\infty e^{-st} t^n\,dt = \left.\frac{e^{-st}}{-s}t^n\right|_0^\infty - \int_0^\infty \frac{e^{-st}}{-s}\,n t^{n-1}\,dt = -\frac{1}{s}\left(\lim_{t\to\infty}\frac{t^n}{e^{st}} - \frac{0^n}{e^0}\right) + \frac{n}{s}\int_0^\infty e^{-st} t^{n-1}\,dt \tag{14.5}$$

from Eq. (14.1), coupled with the method of integration by parts as conveyed by Eq. (11.39); one realizes that

$$\lim_{t\to\infty}\frac{t^n}{e^{st}} = \lim_{t\to\infty}\frac{n t^{n-1}}{s e^{st}} = \lim_{t\to\infty}\frac{n(n-1)t^{n-2}}{s^2 e^{st}} = \cdots = \lim_{t\to\infty}\frac{n!}{s^n e^{st}} = 0, \tag{14.6}$$

where the successive indeterminate quantities have been handled through consecutive utilization of l'Hôpital's rule, as deemed necessary for the current value of n – so Eq. (14.5) eventually reduces to

$$\mathcal{L}\{t^n\} = \frac{n}{s}\int_0^\infty e^{-st} t^{n-1}\,dt = \frac{n}{s}\,\mathcal{L}\{t^{n-1}\}, \tag{14.7}$$

after recalling Eq. (14.1) again. The recursive relationship labeled as Eq. (14.7) may be applied to t^{n−1}, thus giving rise to

$$\mathcal{L}\{t^{n-1}\} = \frac{n-1}{s}\int_0^\infty e^{-st} t^{n-2}\,dt = \frac{n-1}{s}\,\mathcal{L}\{t^{n-2}\} \tag{14.8}$$

upon replacement of n by n − 1; insertion of Eq. (14.8) transforms Eq. (14.7) to

$$\mathcal{L}\{t^n\} = \frac{n}{s}\frac{n-1}{s}\,\mathcal{L}\{t^{n-2}\} \tag{14.9}$$

This rationale can be iterated, until reaching

$$\mathcal{L}\{t\} = \frac{1}{s}\int_0^\infty e^{-st}\,dt = \frac{1}{s}\,\mathcal{L}\{1\} \tag{14.10}$$

Table 14.1 List of Laplace's transforms of functions obtained via definition.

  Type                           f{t}             f̄{s}
  Unit impulse                   δ{t}             1
  Unit step                      1                1/s
  Power                          tⁿ, n integer    n!/s^(n+1)
                                 tᵖ, p real       Γ{p+1}/s^(p+1)
  Trigonometric functions        sin kt           k/(k² + s²)
                                 cos kt           s/(k² + s²)
  Exponential function           e^(−kt)          1/(k + s)
  Complementary error function   erfc{k/√t}       e^(−2k√s)/s

or, after combining all previous equations,

$$\mathcal{L}\{t^n\} = \frac{n}{s}\frac{n-1}{s}\cdots\frac{2}{s}\,\mathcal{L}\{t\} = \frac{n}{s}\frac{n-1}{s}\cdots\frac{2}{s}\frac{1}{s}\,\mathcal{L}\{1\} = \frac{n!}{s^n}\,\mathcal{L}\{1\} = \frac{n!}{s^n}\frac{1}{s} = \frac{n!}{s^{n+1}} \tag{14.11}$$

– where Eq. (14.4) was meanwhile taken into account. The result conveyed by Eq. (14.11) is tabulated in Table 14.1 as third entry – where the sharp increase of the power in the time domain is accompanied by a much sharper decrease in Laplace's domain; note that Eq. (14.11) also applies to n = 0, with Eq. (14.4) being accordingly retrieved – after recalling that 0! = 1, as per Eq. (12.398).
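Eq. (14.11) can be cross-checked against direct quadrature of Eq. (14.1); the midpoint-rule helper and its truncation parameters below are illustrative assumptions.

```python
import math

def laplace_num(f, s, T=30.0, n=100_000):
    # Midpoint-rule approximation of the Laplace integral, Eq. (14.1).
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h) for i in range(n)) * h

s = 1.5
# L{t^k} for a few integer exponents, to be compared with k!/s^(k+1):
powers = {k: laplace_num(lambda t, k=k: t**k, s) for k in (0, 1, 2, 3)}
```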

In the specific case of t raised to a noninteger exponent, say ½, one may again resort to Eq. (14.1) to write

$$\mathcal{L}\{\sqrt t\} \equiv \int_0^\infty e^{-st}\sqrt t\,dt, \tag{14.12}$$

which will convert to

$$\mathcal{L}\{\sqrt t\} = \int_0^\infty e^{-s\xi^2}\,\xi\,2\xi\,d\xi = 2\int_0^\infty \xi^2 e^{-s\xi^2}\,d\xi; \tag{14.13}$$

said conversion relies on definition of auxiliary variable ξ as

$$t \equiv \xi^2, \tag{14.14}$$

and thus

$$dt = 2\xi\,d\xi \tag{14.15}$$

upon differentiation of both sides – besides

$$\lim_{t\to 0}\xi = 0 \tag{14.16}$$

and

$$\lim_{t\to\infty}\xi = \infty \tag{14.17}$$

After having rearranged the right-hand side of Eq. (14.13) as

$$\mathcal{L}\{\sqrt t\} = -\frac{1}{s}\int_0^\infty \xi\left(-2s\xi e^{-s\xi^2}\right)d\xi \tag{14.18}$$

via multiplication and division by −s, coupled with splitting of ξ², one may apply integration by parts to get

$$\mathcal{L}\{\sqrt t\} = -\frac{1}{s}\left(\left.\xi e^{-s\xi^2}\right|_0^\infty - \int_0^\infty e^{-s\xi^2}\,d\xi\right); \tag{14.19}$$

the outstanding integral can then be rewritten as

$$\mathcal{L}\{\sqrt t\} = -\frac{1}{s}\left(\left.\xi e^{-s\xi^2}\right|_0^\infty - \frac{1}{\sqrt s}\int_0^\infty e^{-\zeta^2}\,d\zeta\right), \tag{14.20}$$

following definition of a second auxiliary variable ζ as

$$\zeta \equiv \sqrt s\,\xi \tag{14.21}$$

– since

$$d\zeta = \sqrt s\,d\xi \tag{14.22}$$

due to Eq. (10.1), as well as

$$\lim_{\xi\to 0}\zeta = 0 \tag{14.23}$$

and

$$\lim_{\xi\to\infty}\zeta = \infty \tag{14.24}$$

Recalling the definition of Gauss' error function of x as per Eq. (12.174), and the basic relationship between such function and the gamma function conveyed by Eq. (12.422), one may write

$$\mathrm{erf}\{\infty\} \equiv \frac{2}{\sqrt\pi}\int_0^\infty e^{-\zeta^2}\,d\zeta = \frac{2}{\sqrt\pi}\,\frac{1}{2}\,\Gamma\!\left\{\frac{1}{2}\right\} = \frac{1}{\sqrt\pi}\sqrt\pi = 1 \tag{14.25}$$

– where Eq. (12.423) was taken into account; one may thus reformulate Eq. (14.20) to

$$\mathcal{L}\{\sqrt t\} = -\frac{1}{s}\left(\left.\xi e^{-s\xi^2}\right|_0^\infty - \frac{1}{\sqrt s}\frac{\sqrt\pi}{2}\,\mathrm{erf}\{\infty\}\right) = \frac{1}{s}\frac{\sqrt\pi}{2\sqrt s} - \frac{1}{s}\left.\xi e^{-s\xi^2}\right|_0^\infty, \tag{14.26}$$

with the aid of Eq. (14.25). In view of Eq. (14.6) taken for n = 1, one may further transform Eq. (14.26) to

$$\mathcal{L}\{\sqrt t\} = \frac{\sqrt\pi}{2s\sqrt s} - \frac{1}{s}\left(\lim_{\xi\to\infty}\frac{\xi^2}{\xi e^{s\xi^2}} + 0\,e^{-0}\right) = \frac{\sqrt\pi}{2s\sqrt s} - \frac{1}{s}\left(0+0\right) = \frac{\sqrt\pi}{2s^{3/2}}, \tag{14.27}$$

upon multiplication and division by ξ; Eq. (14.27) may be rewritten as

$$\mathcal{L}\{\sqrt t\} = \frac{\frac{1}{2}\Gamma\left\{\frac{1}{2}\right\}}{s^{3/2}} = \frac{\Gamma\left\{\frac{3}{2}\right\}}{s^{3/2}}, \tag{14.28}$$

after taking Eqs. (12.423) and (12.412) sequentially into account. The pattern entertained by Eq. (14.28) may actually be extended to every exponent of t, according to

$$\mathcal{L}\{t^p\} = \frac{\Gamma\{p+1\}}{s^{p+1}}, \tag{14.29}$$

as emphasized also in the third entry of Table 14.1; note that Eq. (14.29) degenerates to Eq. (14.11) when p denotes an integer, in full agreement with Eq. (12.400).

Of particular interest is an impulse in time, also known as Dirac's pulse – defined by

$$\left.\delta\{t\}\right|_{t=0} = \infty\,,\qquad \left.\delta\{t\}\right|_{t>0} = 0, \tag{14.30}$$

complemented with

$$\int_0^\infty \delta\{t\}\,dt = 1 \tag{14.31}$$

– which is necessarily equivalent to

$$\int_0^{0^+} \delta\{t\}\,dt = 1 \tag{14.32}$$

in view of Eq. (14.30), complemented by

$$\int_{0^+}^\infty \delta\{t\}\,dt = 0 \tag{14.33}$$

upon combination of Eqs. (11.124), (14.31), and (14.32); insertion of Eqs. (14.30) and (14.32) transforms Eq. (14.1) to

$$\mathcal{L}\{\delta\{t\}\} \equiv \int_0^\infty e^{-st}\delta\{t\}\,dt = \left.e^{-st}\right|_{t=0}\int_0^{0^+}\delta\{t\}\,dt + \int_{0^+}^\infty e^{-st}\delta\{t\}\,dt = e^0\cdot 1 + 0 = 1, \tag{14.34}$$

after convenient splitting of the germane integral consistent with Eq. (14.33) – and realization that e^{−st} remains essentially equal to e⁰ = 1 when t varies between 0 and 0⁺, and finite when t > 0. Equation (14.34) is tabulated as the first entry of Table 14.1; despite the (infinite) singularity at t = 0 associated with Dirac's function in the time domain as per Eq. (14.30), conversion to Laplace's domain brings along a constant, unit pattern. In the case of the sine function, one obtains

$$\mathcal{L}\{\sin kt\} \equiv \int_0^\infty e^{-st}\sin kt\,dt = \left.e^{-st}\left(-\frac{\cos kt}{k}\right)\right|_0^\infty - \int_0^\infty\left(-\frac{\cos kt}{k}\right)\left(-s\,e^{-st}\right)dt = \left(\left.\frac{\cos kt}{k e^{st}}\right|_0 - \lim_{t\to\infty}\frac{\cos kt}{k e^{st}}\right) - \frac{s}{k}\int_0^\infty e^{-st}\cos kt\,dt \tag{14.35}$$

0

following application of Eq. (14.1), reminding the rule of integration by parts, and further taking s/k off the kernel – where k denotes a constant. The cosine function is lower- and upper-bounded by −1 and 1, respectively – see Fig. 2.10b; hence, one realizes that −

1 cos kt 1 ≤ ≤ st , st st ke ke ke

14 36

thus implying 1 coskt 1 ≤ lim ≤ lim st = 0 st st ∞ ke t ∞ ke t ∞ ke

0 = − lim t

14 37

– which enforces, in turn, lim

t



coskt ke st

cos kt ke st

=0

14 38



with the aid of Eq. (9.121). In view of Eq. (14.38), one can simplify Eq. (14.35) to

$$\mathcal{L}\{\sin kt\} = \frac{\cos 0}{k e^0} - 0 - \frac{s}{k}\int_0^\infty e^{-st}\cos kt\,dt = \frac{1}{k} - \frac{s}{k}\int_0^\infty e^{-st}\cos kt\,dt; \tag{14.39}$$

a second integration by parts unfolds

$$\mathcal{L}\{\sin kt\} = \frac{1}{k} - \frac{s}{k}\left(\left.e^{-st}\frac{\sin kt}{k}\right|_0^\infty - \int_0^\infty \frac{\sin kt}{k}\left(-s\,e^{-st}\right)dt\right) = \frac{1}{k} - \frac{s}{k}\left(\left(\lim_{t\to\infty}\frac{\sin kt}{k e^{st}} - \left.\frac{\sin kt}{k e^{st}}\right|_0\right) + \frac{s}{k}\int_0^\infty e^{-st}\sin kt\,dt\right) \tag{14.40}$$

The fundamental law of trigonometry, as per Eq. (2.442), then allows one to write

$$\lim_{t\to\infty}\frac{\sin kt}{k e^{st}} = \lim_{t\to\infty}\frac{\pm\sqrt{1-\cos^2 kt}}{k e^{st}} = \pm\sqrt{\lim_{t\to\infty}\frac{1}{k^2 e^{2st}} - \left(\lim_{t\to\infty}\frac{\cos kt}{k e^{st}}\right)^2}, \tag{14.41}$$

together with Eq. (9.108) – where insertion of Eq. (14.38) gives rise to

$$\lim_{t\to\infty}\frac{\sin kt}{k e^{st}} = \pm\sqrt{\frac{1}{k^2 e^{\infty}} - 0^2} = \pm\sqrt{0-0} = 0; \tag{14.42}$$

Eq. (14.42) can then be used to simplify Eq. (14.40) to

$$\mathcal{L}\{\sin kt\} = \frac{1}{k} - \frac{s}{k}\left(0 - \frac{\sin 0}{k e^0} + \frac{s}{k}\int_0^\infty e^{-st}\sin kt\,dt\right) = \frac{1}{k} - \frac{s}{k}\frac{s}{k}\int_0^\infty e^{-st}\sin kt\,dt = \frac{1}{k} - \frac{s^2}{k^2}\,\mathcal{L}\{\sin kt\}, \tag{14.43}$$

at the expense again of Eq. (14.1). After lumping terms in L{sin kt} between left- and right-hand sides, Eq. (14.43) becomes

$$\left(1+\frac{s^2}{k^2}\right)\mathcal{L}\{\sin kt\} = \frac{1}{k} \tag{14.44}$$

– or, equivalently,

$$\mathcal{L}\{\sin kt\} = \frac{\dfrac{1}{k}}{1+\dfrac{s^2}{k^2}} = \frac{k}{k^2+s^2} \tag{14.45}$$

based on isolation of L{sin kt}, and multiplication of both numerator and denominator by k² afterward; this constitutes the fourth entry in Table 14.1. Conversion of an oscillatory function in the time domain to a hyperbolic-type function in the s-domain is now apparent from inspection of this table; furthermore, f̄{s} departs from k/(k²+s²)|₍s=0₎ = k/k² = 1/k, and decreases toward zero as s → ∞.
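Eq. (14.45) admits the same numerical cross-check; the helper below, with illustrative truncation parameters, compares the quadrature of e^{−st} sin kt against k/(k²+s²).

```python
import math

def laplace_num(f, s, T=30.0, n=100_000):
    # Midpoint-rule approximation of the Laplace integral, Eq. (14.1).
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h) for i in range(n)) * h

k, s = 3.0, 2.0
num = laplace_num(lambda t: math.sin(k * t), s)   # compare with k/(k^2 + s^2)
```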

Laplace's transform of cosine can be calculated in much the same way followed above for sine, viz.

$$\mathcal{L}\{\cos kt\} \equiv \int_0^\infty e^{-st}\cos kt\,dt = \left.e^{-st}\frac{\sin kt}{k}\right|_0^\infty - \int_0^\infty \frac{\sin kt}{k}\left(-s\,e^{-st}\right)dt = \left(\lim_{t\to\infty}\frac{\sin kt}{k e^{st}} - \left.\frac{\sin kt}{k e^{st}}\right|_0\right) + \frac{s}{k}\int_0^\infty e^{-st}\sin kt\,dt \tag{14.46}$$

– departing from Eq. (14.1), and proceeding via integration by parts; in view of Eq. (14.42), one may simplify Eq. (14.46) to

$$\mathcal{L}\{\cos kt\} = 0 - \frac{\sin 0}{k e^0} + \frac{s}{k}\int_0^\infty e^{-st}\sin kt\,dt = \frac{s}{k}\int_0^\infty e^{-st}\sin kt\,dt \tag{14.47}$$

A further step of integration by parts unfolds

$$\mathcal{L}\{\cos kt\} = \frac{s}{k}\left(\left.e^{-st}\left(-\frac{\cos kt}{k}\right)\right|_0^\infty - \int_0^\infty\left(-\frac{\cos kt}{k}\right)\left(-s\,e^{-st}\right)dt\right) = \frac{s}{k}\left(\left(\left.\frac{\cos kt}{k e^{st}}\right|_0 - \lim_{t\to\infty}\frac{\cos kt}{k e^{st}}\right) - \frac{s}{k}\int_0^\infty e^{-st}\cos kt\,dt\right); \tag{14.48}$$

in view of Eq. (14.38), one may simplify Eq. (14.48) to

$$\mathcal{L}\{\cos kt\} = \frac{s}{k}\left(\frac{\cos 0}{k e^0} - 0\right) - \frac{s^2}{k^2}\int_0^\infty e^{-st}\cos kt\,dt = \frac{s}{k^2} - \frac{s^2}{k^2}\,\mathcal{L}\{\cos kt\}, \tag{14.49}$$

with the aid again of Eq. (14.1). Solution of Eq. (14.49) for L{cos kt} gives now rise to

$$\left(1+\frac{s^2}{k^2}\right)\mathcal{L}\{\cos kt\} = \frac{s}{k^2}, \tag{14.50}$$

or else

$$\mathcal{L}\{\cos kt\} = \frac{\dfrac{s}{k^2}}{1+\dfrac{s^2}{k^2}} = \frac{s}{k^2+s^2} \tag{14.51}$$

– after isolating L{cos kt}, and then multiplying both numerator and denominator by k²; this result is listed as the fifth entry in Table 14.1. The oscillatory behavior in the time domain disappears again in Laplace's domain – while giving room to a curve that takes off from zero; goes through a maximum described by d/ds [s/(k²+s²)] = 0 or, equivalently, (k²+s²) − s(2s) = k² − s² = 0, which yields s = k and thus s/(k²+s²)|₍s=k₎ = k/(k²+k²) = 1/(2k); and then approaches the horizontal axis that will eventually serve as horizontal asymptote, i.e. s/(k²+s²) → 0 as s → ∞.
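Both Eq. (14.51) and the location of its maximum can be checked numerically; the helper name, truncation parameters, and search grid below are illustrative assumptions.

```python
import math

def laplace_num(f, s, T=30.0, n=100_000):
    # Midpoint-rule approximation of the Laplace integral, Eq. (14.1).
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h) for i in range(n)) * h

k = 2.0
num = laplace_num(lambda t: math.cos(k * t), 1.5)   # compare with s/(k^2 + s^2)
# Locate the maximum of s/(k^2 + s^2) on a coarse grid of s values:
peak = max((s / (k**2 + s**2), s) for s in [i * 0.01 for i in range(1, 1000)])
```

The maximum sits at s = k with height 1/(2k), as derived above.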

In Laplace's domain, Gauss' complementary error function – defined as

$$\mathrm{erfc}\{x\} \equiv 1 - \frac{2}{\sqrt\pi}\int_0^x e^{-\zeta^2}\,d\zeta = \frac{2}{\sqrt\pi}\int_x^\infty e^{-\zeta^2}\,d\zeta \tag{14.52}$$

and stemming directly from Eq. (12.174), coupled with Eqs. (11.124) and (14.25), may be equated as

$$\mathcal{L}\left\{\mathrm{erfc}\left\{\frac{k}{\sqrt t}\right\}\right\} \equiv \int_0^\infty e^{-st}\,\frac{2}{\sqrt\pi}\int_{k/\sqrt t}^\infty e^{-\zeta^2}\,d\zeta\,dt \tag{14.53}$$

with the aid of Eq. (14.1), when k/√t is chosen as argument; such an argument has been considered owing to its relevance for unsteady transport of mass in semi-infinite media (with k equal, namely, to x/(2√D), with D denoting diffusivity). Note that

$$\zeta = \frac{k}{\sqrt t} \tag{14.54}$$

appears as the lower limit of the inner integral in Eq. (14.53), and may be algebraically rearranged to

$$t = \left(\frac{k}{\zeta}\right)^2; \tag{14.55}$$

conversely, the upper limits of both integrals are coincident and infinite. The order of integration in Eq. (14.53) may be exchanged based on Eq. (11.250), provided that Eq. (14.55) is duly accounted for, according to

$$\mathcal{L}\left\{\mathrm{erfc}\left\{\frac{k}{\sqrt t}\right\}\right\} = \frac{2}{\sqrt\pi}\int_0^\infty e^{-\zeta^2}\int_{(k/\zeta)^2}^\infty e^{-st}\,dt\,d\zeta \tag{14.56}$$

– since the range of integration spanned by t, i.e. [0,∞[, in Eq. (14.53) superimposes exactly on that spanned by ζ as per Eq. (14.54) (although in reverse direction), and the upper limits of integration of t and ζ coincide. The inner integral in Eq. (14.56) is easily calculated as

$$\mathcal{L}\left\{\mathrm{erfc}\left\{\frac{k}{\sqrt t}\right\}\right\} = \frac{2}{\sqrt\pi}\int_0^\infty e^{-\zeta^2}\left.\frac{e^{-st}}{-s}\right|_{(k/\zeta)^2}^\infty d\zeta = \frac{2}{\sqrt\pi\,s}\int_0^\infty e^{-\zeta^2}\left(e^{-s(k/\zeta)^2} - e^{-\infty}\right)d\zeta = \frac{2}{\sqrt\pi\,s}\int_0^\infty \exp\left\{-\left(\zeta^2 + s\left(\frac{k}{\zeta}\right)^2\right)\right\}d\zeta \tag{14.57}$$

at the expense of the fundamental theorem of integral calculus and e^{−∞} being nil, complemented with lumping of the exponential functions. After adding and subtracting 2ζ√s·(k/ζ) in the argument of the outstanding exponential function, Eq. (14.57) becomes

$$\mathcal{L}\left\{\mathrm{erfc}\left\{\frac{k}{\sqrt t}\right\}\right\} = \frac{2}{\sqrt\pi\,s}\int_0^\infty \exp\left\{-\left(\zeta^2 - 2\zeta\sqrt s\,\frac{k}{\zeta} + s\left(\frac{k}{\zeta}\right)^2\right)\right\}\exp\left\{-2\zeta\sqrt s\,\frac{k}{\zeta}\right\}d\zeta = \frac{2e^{-2k\sqrt s}}{\sqrt\pi\,s}\int_0^\infty \exp\left\{-\left(\zeta-\sqrt s\,\frac{k}{\zeta}\right)^2\right\}d\zeta \tag{14.58}$$

– where the resulting exponential function was meanwhile split in two exponential functions, ζ was dropped off numerator and denominator of the argument of one of them (thus allowing its withdrawal from the kernel), and advantage was taken of Newton's binomial formula pertaining to the square of a difference. To calculate the remaining integral in Eq. (14.58), it is convenient to define an auxiliary integral I{s} as

$$I\{s\} \equiv \int_{-\infty}^\infty \exp\left\{-\left(\zeta-\sqrt s\,\frac{k}{\zeta}\right)^2\right\}d\zeta, \tag{14.59}$$

so that its square will read

$$I^2\{s\} = \int_{-\infty}^\infty \exp\left\{-\left(\zeta-\sqrt s\,\frac{k}{\zeta}\right)^2\right\}d\zeta\,\int_{-\infty}^\infty \exp\left\{-\left(\xi-\sqrt s\,\frac{k}{\xi}\right)^2\right\}d\xi = \int_{-\infty}^\infty\int_{-\infty}^\infty \exp\left\{-\left(\left(\zeta-\sqrt s\,\frac{k}{\zeta}\right)^2+\left(\xi-\sqrt s\,\frac{k}{\xi}\right)^2\right)\right\}d\zeta\,d\xi \tag{14.60}$$

– after resorting to Eq. (11.249); the last double integral in Eq. (14.60) is calculated over plane ζ0ξ, and such an integration domain stretches from −∞ to ∞ in both directions. In addition, said double integral possesses circular symmetry – arising from the interchangeability of ζ and ξ; in other words, for any circumference centered at the origin, the kernel exhibits the same value for every set of coordinates ζ − √s k/ζ and ξ − √s k/ξ. Therefore, if ρ² is defined as

$$\rho^2 \equiv \left(\zeta-\sqrt s\,\frac{k}{\zeta}\right)^2+\left(\xi-\sqrt s\,\frac{k}{\xi}\right)^2 \tag{14.61}$$

– corresponding to the square of the distance of point (ζ − √s k/ζ, ξ − √s k/ξ) to the origin, then Eq. (14.60) may be reformulated to

$$I^2\{s\} = \int_0^{2\pi}\int_0^\infty e^{-\rho^2}\rho\,d\rho\,d\theta \tag{14.62}$$

in circular coordinates; the plane at stake is again spanned in full, as elementary radial distance ρdρ is displaced along angular amplitude dθ. Since the kernel of I²{s} can be written as the product of a (univariate) function of ρ by constant 1, and the limits of integration do not depend on the integration variables themselves, one may redo Eq. (14.62) to

$$I^2\{s\} = \int_0^{2\pi}d\theta\int_0^\infty e^{-\rho^2}\rho\,d\rho \tag{14.63}$$

– in view of the (weak) version of Fubini's theorem. Calculation of the first integral transforms Eq. (14.63) to

$$I^2\{s\} = \left.\theta\right|_0^{2\pi}\int_0^\infty e^{-\rho^2}\rho\,d\rho \tag{14.64}$$

or else

$$I^2\{s\} = 2\pi\int_0^\infty e^{-\rho^2}\rho\,d\rho; \tag{14.65}$$

a change of integration variable will be useful to handle the remaining integral, namely

$$\omega \equiv \rho^2, \tag{14.66}$$

that implies

$$d\omega = 2\rho\,d\rho \tag{14.67}$$

– since it supports transformation of Eq. (14.65) to

$$I^2\{s\} = \pi\int_0^\infty e^{-\omega}\,d\omega \tag{14.68}$$

inasmuch as ω → 0 when ρ → 0, and likewise ω → ∞ when ρ → ∞. Application of Eq. (11.160) to Eq. (14.68) yields

$$I^2\{s\} = \pi\left.\frac{e^{-\omega}}{-1}\right|_0^\infty = \pi\left(e^0 - e^{-\infty}\right) = \pi\left(1-0\right) = \pi, \tag{14.69}$$

which is the same as stating

$$\int_{-\infty}^\infty \exp\left\{-\left(\zeta-\sqrt s\,\frac{k}{\zeta}\right)^2\right\}d\zeta = \sqrt{I^2\{s\}} = \sqrt\pi, \tag{14.70}$$

after taking square roots of both sides of Eq. (14.69) and recalling Eq. (14.59); since the kernel is an even function, the integral in Eq. (14.59), ranging from −∞ to ∞, may be split in two identical integrals, ranging from 0 to ∞, according to

$$2\int_0^\infty \exp\left\{-\left(\zeta-\sqrt s\,\frac{k}{\zeta}\right)^2\right\}d\zeta = \sqrt\pi, \tag{14.71}$$

with the aid of Eq. (11.124), which eventually yields

$$\int_0^\infty \exp\left\{-\left(\zeta-\sqrt s\,\frac{k}{\zeta}\right)^2\right\}d\zeta = \frac{\sqrt\pi}{2} \tag{14.72}$$

after division of both sides by 2. Insertion of Eq. (14.72) simplifies Eq. (14.58) to

$$\mathcal{L}\left\{\mathrm{erfc}\left\{\frac{k}{\sqrt t}\right\}\right\} = \frac{2e^{-2k\sqrt s}}{\sqrt\pi\,s}\,\frac{\sqrt\pi}{2}, \tag{14.73}$$

which readily breaks down to

$$\mathcal{L}\left\{\mathrm{erfc}\left\{\frac{k}{\sqrt t}\right\}\right\} = \frac{e^{-2k\sqrt s}}{s} \tag{14.74}$$

after cancelling out 2 and √π between numerator and denominator – which is outlined as the last entry in Table 14.1; in this case, a rectangular hyperbola in the time domain is replaced by a distorted hyperbola in Laplace's domain.
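Eq. (14.74) can be confirmed with the standard-library implementation of erfc; the helper name and truncation parameters below are illustrative assumptions.

```python
import math

def laplace_num(f, s, T=30.0, n=100_000):
    # Midpoint-rule approximation of the Laplace integral, Eq. (14.1).
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h) for i in range(n)) * h

k, s = 0.5, 1.0
num   = laplace_num(lambda t: math.erfc(k / math.sqrt(t)), s)
exact = math.exp(-2 * k * math.sqrt(s)) / s            # e^{-2k sqrt(s)}/s, Eq. (14.74)
```

The integrand vanishes smoothly as t → 0⁺ (since erfc{∞} = 0), so the quadrature is well behaved at the lower limit.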

A final Laplace's transform of relevance pertains to the exponential function – unique in that it mimics the weight function in the kernel of Eq. (14.1), i.e.

$$\mathcal{L}\{e^{-kt}\} \equiv \int_0^\infty e^{-st}e^{-kt}\,dt; \tag{14.75}$$

after lumping the two exponential functions, Eq. (14.75) becomes

$$\mathcal{L}\{e^{-kt}\} = \int_0^\infty e^{-(k+s)t}\,dt = \left.\frac{e^{-(k+s)t}}{-(k+s)}\right|_0^\infty = \frac{1}{k+s}\left(e^0 - e^{-\infty}\right) = \frac{1}{k+s}\left(1-0\right) = \frac{1}{k+s} \tag{14.76}$$

Equation (14.76) has been included as sixth entry in Table 14.1, and is valid only when k + s > 0 – otherwise an infinite function would result, and Laplace's transform would consequently not exist. Note that the exponential behavior of the source function in the time domain was hereby transformed again to a hyperbolic pattern in Laplace's domain.

14.1.2 Major Features

According to Eq. (14.1), Laplace's transform is an (integral) linear operator; hence, it may be appropriately applied to a linear combination of functions, f{t} and g{t}, via constants c₁ and c₂, viz.

$$\mathcal{L}\{c_1 f\{t\} + c_2 g\{t\}\} = \int_0^\infty \left(c_1 f\{t\} + c_2 g\{t\}\right)e^{-st}\,dt; \tag{14.77}$$

upon decomposition of the definite integral as per Eq. (11.102), one notices transformation to

$$\mathcal{L}\{c_1 f\{t\} + c_2 g\{t\}\} = c_1\int_0^\infty f\{t\}e^{-st}\,dt + c_2\int_0^\infty g\{t\}e^{-st}\,dt \tag{14.78}$$

– where the definitions conveyed by Eq. (14.1) itself and Eq. (14.2) allow simplification to

$$\mathcal{L}\{c_1 f\{t\} + c_2 g\{t\}\} = c_1\bar f\{s\} + c_2\bar g\{s\}, \tag{14.79}$$

extensible to any number of functions. Equation (14.77) may be applied to compute Laplace's transform of the hyperbolic sine of kt, according to

$$\mathcal{L}\{\sinh kt\} = \mathcal{L}\left\{\frac{e^{kt}-e^{-kt}}{2}\right\} = \frac{1}{2}\,\mathcal{L}\{e^{kt}\} - \frac{1}{2}\,\mathcal{L}\{e^{-kt}\} = \frac{1}{2}\,\mathcal{L}\{e^{-(-k)t}\} - \frac{1}{2}\,\mathcal{L}\{e^{-kt}\} \tag{14.80}$$

after recalling its definition as per Eq. (2.472); insertion of Eq. (14.76) then yields

$$\mathcal{L}\{\sinh kt\} = \frac{1}{2}\left(\frac{1}{-k+s}-\frac{1}{k+s}\right) = \frac{1}{2}\,\frac{(s+k)-(s-k)}{(s-k)(s+k)} = \frac{2k}{2(s-k)(s+k)} = \frac{k}{s^2-k^2}, \tag{14.81}$$

upon lumping of terms in parenthesis, cancellation of 2 between numerator and denominator, and replacement of the product of conjugated binomials by the difference between the squares of their terms – as tabulated in the first entry of Table 14.2.
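Eq. (14.81) can be cross-checked by quadrature, keeping in mind it only holds for s > k; the helper name and test values below are illustrative assumptions.

```python
import math

def laplace_num(f, s, T=30.0, n=100_000):
    # Midpoint-rule approximation of the Laplace integral, Eq. (14.1).
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h) for i in range(n)) * h

k, s = 1.0, 3.0   # the transform of sinh kt only exists for s > k
num = laplace_num(lambda t: math.sinh(k * t), s)   # compare with k/(s^2 - k^2)
```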

Table 14.2 List of Laplace's transforms of functions obtained via associated theorems.

  Type                                 f{t}              f̄{s}
  Hyperbolic functions                 sinh kt           k/(s² − k²)
                                       cosh kt           s/(s² − k²)
  Time-weighed hyperbolic functions    t sinh kt         2ks/(s² − k²)²
                                       t cosh kt         (s² + k²)/(s² − k²)²
  Square root-normalized exponential   exp{−k/t}/√t      √(π/s) e^(−2√(ks))

The minus sign preceding k² in the denominator of Eq. (14.81) implies definition only for s > k, thus resembling the constraint imposed upon k in Eq. (14.76); this justifies existence of a vertical asymptote of f̄{s} at s = k, as also apparent in Table 14.2. With regard to the derivative of a given function f{t}, one may write

$$\mathcal{L}\left\{\frac{df\{t\}}{dt}\right\} \equiv \int_0^\infty e^{-st}\,\frac{df\{t\}}{dt}\,dt \tag{14.82}$$

upon direct utilization of Eq. (14.1) – where integration by parts unfolds

$$\mathcal{L}\left\{\frac{df\{t\}}{dt}\right\} = \left.e^{-st}f\{t\}\right|_0^\infty - \int_0^\infty\left(-s\,e^{-st}\right)f\{t\}\,dt = \lim_{t\to\infty}\frac{f\{t\}}{e^{st}} - \left.\frac{f\{t\}}{e^{st}}\right|_0 + s\int_0^\infty e^{-st}f\{t\}\,dt; \tag{14.83}$$

assuming that f{t} grows slower than e^{st} when t becomes unbounded, i.e.

$$\lim_{t\to\infty}\frac{f\{t\}}{e^{st}} = 0 \tag{14.84}$$

(otherwise Laplace's transform would not exist, in the first place), Eq. (14.83) simplifies to

$$\mathcal{L}\left\{\frac{df\{t\}}{dt}\right\} = -\left.\frac{f\{t\}}{e^0}\right|_0 + s\int_0^\infty e^{-st}f\{t\}\,dt \tag{14.85}$$

– where Eqs. (14.1) and (14.2) may again be retrieved to write

$$\mathcal{L}\left\{\frac{df\{t\}}{dt}\right\} = s\bar f\{s\} - \left.f\{t\}\right|_0 \tag{14.86}$$

In other words, a derivative with regard to t in the time domain translates to a product by s, complemented by a (negative) additive correction in Laplace's domain – thus conveying the chief usefulness of this concept, i.e. the ability to eventually transform a differential equation to an algebraic one.
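The derivative rule of Eq. (14.86) can be verified for a concrete pair f{t} = cos kt, df/dt = −k sin kt; the helper and test values below are illustrative assumptions.

```python
import math

def laplace_num(f, s, T=30.0, n=100_000):
    # Midpoint-rule approximation of the Laplace integral, Eq. (14.1).
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h) for i in range(n)) * h

k, s = 2.0, 1.0
f  = lambda t: math.cos(k * t)          # f{t}
df = lambda t: -k * math.sin(k * t)     # df{t}/dt
lhs = laplace_num(df, s)                # L{df/dt}
rhs = s * laplace_num(f, s) - f(0.0)    # s*fbar{s} - f{t}|0, Eq. (14.86)
```

Both sides equal −k²/(k²+s²), consistent with Eqs. (14.45) and (14.51).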

Laplace's transform of hyperbolic cosine may be calculated after bearing Eq. (14.86) in mind, according to

$$\mathcal{L}\{\cosh kt\} = \mathcal{L}\left\{\frac{1}{k}\frac{d\sinh kt}{dt}\right\} = \frac{1}{k}\,\mathcal{L}\left\{\frac{d\sinh kt}{dt}\right\} = \frac{1}{k}\left(s\,\mathcal{L}\{\sinh kt\} - \left.\sinh kt\right|_0\right), \tag{14.87}$$

also with the aid of Eqs. (10.109), (10.205), (14.79), and (14.86); insertion of Eq. (14.81) permits transformation of Eq. (14.87) to

$$\mathcal{L}\{\cosh kt\} = \frac{1}{k}\left(\frac{sk}{s^2-k^2} - \sinh 0\right) = \frac{1}{k}\left(\frac{sk}{s^2-k^2} - 0\right) = \frac{1}{k}\,\frac{sk}{s^2-k^2} = \frac{s}{s^2-k^2} \tag{14.88}$$

The result labeled as Eq. (14.88) appears as second entry in Table 14.2; note the similarity to Laplace's transform of cos kt – except the requirement of s > k for existence of said transform, since otherwise e^{kt}e^{−st} = e^{(k−s)t} → ∞ as t → ∞. The above rationale may be extended to derivatives of higher order, viz.

$$\mathcal{L}\left\{\frac{d^if\{t\}}{dt^i}\right\} \equiv \int_0^\infty e^{-st}\,\frac{d^if\{t\}}{dt^i}\,dt, \tag{14.89}$$

once more at the expense of Eq. (14.1), which may be handled via integration by parts as

$$\mathcal{L}\left\{\frac{d^if\{t\}}{dt^i}\right\} = \left.e^{-st}\frac{d^{i-1}f\{t\}}{dt^{i-1}}\right|_0^\infty - \int_0^\infty\left(-s\,e^{-st}\right)\frac{d^{i-1}f\{t\}}{dt^{i-1}}\,dt = \lim_{t\to\infty}\frac{\dfrac{d^{i-1}f\{t\}}{dt^{i-1}}}{e^{st}} - \left.\frac{d^{i-1}f\{t\}}{dt^{i-1}}\right|_0 + s\int_0^\infty e^{-st}\,\frac{d^{i-1}f\{t\}}{dt^{i-1}}\,dt; \tag{14.90}$$

if one hypothesizes that

$$\lim_{t\to\infty}\frac{\dfrac{d^{i-1}f\{t\}}{dt^{i-1}}}{e^{st}} = 0, \tag{14.91}$$

in much the same way Eq. (14.84) was put forward, then Eq. (14.90) degenerates to

$$\mathcal{L}\left\{\frac{d^if\{t\}}{dt^i}\right\} = -\left.\frac{d^{i-1}f\{t\}}{dt^{i-1}}\right|_0 + s\int_0^\infty e^{-st}\,\frac{d^{i-1}f\{t\}}{dt^{i-1}}\,dt, \tag{14.92}$$

that is equivalent to

$$\mathcal{L}\left\{\frac{d^if\{t\}}{dt^i}\right\} = s\,\mathcal{L}\left\{\frac{d^{i-1}f\{t\}}{dt^{i-1}}\right\} - \left.\frac{d^{i-1}f\{t\}}{dt^{i-1}}\right|_0;\quad i = 1, 2, \dots, n, \tag{14.93}$$

on account of Eq. (14.1) once more. As expected, Eq. (14.86) is retrieved when i is set equal to unity in Eq. (14.93) – since d⁰f{t}/dt⁰ obviously coincides with f{t}; application of Eq. (14.93) to i = n produces, in turn,

$$\mathcal{L}\left\{\frac{d^nf\{t\}}{dt^n}\right\} = s\,\mathcal{L}\left\{\frac{d^{n-1}f\{t\}}{dt^{n-1}}\right\} - \left.\frac{d^{n-1}f\{t\}}{dt^{n-1}}\right|_0, \tag{14.94}$$

and likewise

$$\mathcal{L}\left\{\frac{d^{n-1}f\{t\}}{dt^{n-1}}\right\} = s\,\mathcal{L}\left\{\frac{d^{n-2}f\{t\}}{dt^{n-2}}\right\} - \left.\frac{d^{n-2}f\{t\}}{dt^{n-2}}\right|_0 \tag{14.95}$$

for i = n − 1 – which may be inserted in Eq. (14.94) to get

$$\mathcal{L}\left\{\frac{d^nf\{t\}}{dt^n}\right\} = s\left(s\,\mathcal{L}\left\{\frac{d^{n-2}f\{t\}}{dt^{n-2}}\right\} - \left.\frac{d^{n-2}f\{t\}}{dt^{n-2}}\right|_0\right) - \left.\frac{d^{n-1}f\{t\}}{dt^{n-1}}\right|_0 = s^2\,\mathcal{L}\left\{\frac{d^{n-2}f\{t\}}{dt^{n-2}}\right\} - s\left.\frac{d^{n-2}f\{t\}}{dt^{n-2}}\right|_0 - \left.\frac{d^{n-1}f\{t\}}{dt^{n-1}}\right|_0 \tag{14.96}$$

By the same token, Eq. (14.93) becomes

$$\mathcal{L}\left\{\frac{d^{n-2}f\{t\}}{dt^{n-2}}\right\} = s\,\mathcal{L}\left\{\frac{d^{n-3}f\{t\}}{dt^{n-3}}\right\} - \left.\frac{d^{n-3}f\{t\}}{dt^{n-3}}\right|_0 \tag{14.97}$$

for i = n − 2, which may be combined with Eq. (14.96) as

$$\mathcal{L}\left\{\frac{d^nf\{t\}}{dt^n}\right\} = s^2\left(s\,\mathcal{L}\left\{\frac{d^{n-3}f\{t\}}{dt^{n-3}}\right\} - \left.\frac{d^{n-3}f\{t\}}{dt^{n-3}}\right|_0\right) - s\left.\frac{d^{n-2}f\{t\}}{dt^{n-2}}\right|_0 - \left.\frac{d^{n-1}f\{t\}}{dt^{n-1}}\right|_0 = s^3\,\mathcal{L}\left\{\frac{d^{n-3}f\{t\}}{dt^{n-3}}\right\} - s^2\left.\frac{d^{n-3}f\{t\}}{dt^{n-3}}\right|_0 - s\left.\frac{d^{n-2}f\{t\}}{dt^{n-2}}\right|_0 - \left.\frac{d^{n-1}f\{t\}}{dt^{n-1}}\right|_0 \tag{14.98}$$

– and this process may be carried out for lower and lower i. One will eventually reach

$$\mathcal{L}\left\{\frac{d^nf\{t\}}{dt^n}\right\} = s^{n-1}\,\mathcal{L}\left\{\frac{df\{t\}}{dt}\right\} - \sum_{j=0}^{n-2}s^j\left.\frac{d^{n-j-1}f\{t\}}{dt^{n-j-1}}\right|_0 \tag{14.99}$$

as cumulative equivalent of Eq. (14.98) – where insertion of Eq. (14.86) gives rise to

$$\mathcal{L}\left\{\frac{d^nf\{t\}}{dt^n}\right\} = s^{n-1}\left(s\,\mathcal{L}\{f\{t\}\} - \left.f\{t\}\right|_0\right) - \sum_{j=0}^{n-2}s^j\left.\frac{d^{n-j-1}f\{t\}}{dt^{n-j-1}}\right|_0 = s^n\,\mathcal{L}\{f\{t\}\} - s^{n-1}\left.f\{t\}\right|_0 - \sum_{j=0}^{n-2}s^j\left.\frac{d^{n-j-1}f\{t\}}{dt^{n-j-1}}\right|_0; \tag{14.100}$$

in a more condensed notation, Eq. (14.100) will look like

$$\mathcal{L}\left\{\frac{d^nf\{t\}}{dt^n}\right\} = s^n\bar f\{s\} - \sum_{j=0}^{n-1}s^j\left.\frac{d^{n-j-1}f\{t\}}{dt^{n-j-1}}\right|_0, \tag{14.101}$$

with the aid again of Eqs. (14.1) and (14.2). Note that differentiation per se in the time domain implies loss of a constant each time it is performed – thus justifying the summation in Eq. (14.101) that accounts for all such n constants, i.e. f{t}|₀, (df{t}/dt)|₀, …, (d^{n−1}f{t}/dt^{n−1})|₀.
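The n-th order rule of Eq. (14.101) can be checked for f{t} = t³ with n = 2, where both initial-value terms vanish; the helper and sample s below are illustrative assumptions.

```python
import math

def laplace_num(f, s, T=30.0, n=100_000):
    # Midpoint-rule approximation of the Laplace integral, Eq. (14.1).
    h = T / n
    return sum(math.exp(-s * (i + 0.5) * h) * f((i + 0.5) * h) for i in range(n)) * h

s = 1.5
# f{t} = t^3, so d2f/dt2 = 6t, with f{t}|0 = 0 and (df/dt)|0 = 0:
lhs = laplace_num(lambda t: 6.0 * t, s)        # L{d2f/dt2}
rhs = s**2 * laplace_num(lambda t: t**3, s)    # s^2*fbar{s} - s*0 - 0, Eq. (14.101)
```

Both sides reduce to 6/s², in agreement with Eq. (14.11).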


It is instructive at this stage to remember that

$$\begin{aligned}\frac{dt^{n}}{dt} &= n t^{n-1}\\ \frac{d^{2}t^{n}}{dt^{2}} &= \frac{d}{dt}\frac{dt^{n}}{dt} = \frac{d\left(n t^{n-1}\right)}{dt} = n\left(n-1\right)t^{n-2}\\ \frac{d^{3}t^{n}}{dt^{3}} &= \frac{d}{dt}\frac{d^{2}t^{n}}{dt^{2}} = \frac{d\left(n\left(n-1\right)t^{n-2}\right)}{dt} = n\left(n-1\right)\left(n-2\right)t^{n-3}\\ &\;\;\vdots\\ \frac{d^{n}t^{n}}{dt^{n}} &= \frac{d}{dt}\frac{d^{n-1}t^{n}}{dt^{n-1}} = \frac{d\left(n\left(n-1\right)\left(n-2\right)\cdots 2\,t\right)}{dt} = n\left(n-1\right)\left(n-2\right)\cdots 2\cdot 1 = n!\end{aligned} \quad (14.102)$$

following iterative application of Eq. (10.29) – thus implying that

$$\begin{aligned}\left.\frac{dt^{n}}{dt}\right|_{t=0} &= n\,0^{n-1} = 0\\ \left.\frac{d^{2}t^{n}}{dt^{2}}\right|_{t=0} &= n\left(n-1\right)0^{n-2} = 0\\ \left.\frac{d^{3}t^{n}}{dt^{3}}\right|_{t=0} &= n\left(n-1\right)\left(n-2\right)0^{n-3} = 0\\ &\;\;\vdots\\ \left.\frac{d^{n}t^{n}}{dt^{n}}\right|_{t=0} &= n!,\end{aligned} \quad (14.103)$$

besides

$$\left.\frac{d^{0}t^{n}}{dt^{0}}\right|_{t=0} \equiv t^{n}\big|_{t=0} = 0^{n} = 0 \quad (14.104)$$

as zero-th order derivative evaluated at t = 0; the last expression in Eq. (14.102) supports

$$\mathcal{L}\left\{\frac{d^{n}t^{n}}{dt^{n}}\right\} = \mathcal{L}\{n!\} = n!\,\mathcal{L}\{1\} = \frac{n!}{s}, \quad (14.105)$$

in view of Eqs. (14.4) and (14.79). On the other hand, Eq. (14.101) has it that

$$\mathcal{L}\left\{\frac{d^{n}t^{n}}{dt^{n}}\right\} = s^{n}\,\mathcal{L}\{t^{n}\} - \sum_{j=0}^{n-1} s^{j}\left.\frac{d^{n-j-1}t^{n}}{dt^{n-j-1}}\right|_{0}, \quad (14.106)$$

where the summation vanishes due to Eqs. (14.103) and (14.104), leaving just

$$\mathcal{L}\left\{\frac{d^{n}t^{n}}{dt^{n}}\right\} = s^{n}\,\mathcal{L}\{t^{n}\}; \quad (14.107)$$

isolation of L{tⁿ} gives then rise to

$$\mathcal{L}\{t^{n}\} = \frac{\mathcal{L}\left\{\dfrac{d^{n}t^{n}}{dt^{n}}\right\}}{s^{n}}, \quad (14.108)$$
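The end point of this chain, L{tⁿ} = n!/sⁿ⁺¹ as anticipated by Eq. (14.11), can be spot-checked by quadrature – a Python sketch with the arbitrary choices n = 3 and s = 1.5 (the helper `laplace_num` is illustrative only):

```python
import math

def laplace_num(f, s, T=50.0, n=50000):
    """Trapezoidal estimate of the Laplace integral of f over [0, T]."""
    h = T / n
    acc = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        t = i * h
        acc += math.exp(-s * t) * f(t)
    return acc * h

s = 1.5
val = laplace_num(lambda t: t**3, s)          # numerical L{t**3}
exact = math.factorial(3) / s**4              # n!/s**(n+1) per Eq. (14.11)
```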

whereas insertion of Eq. (14.105) finally yields

$$\mathcal{L}\{t^{n}\} = \frac{\;\dfrac{n!}{s}\;}{s^{n}} = \frac{n!}{s^{n+1}} \quad (14.109)$$

– consistent with Eq. (14.11), as expected.

Integration of a function in the time domain corresponds to

$$\mathcal{L}\left\{\int_{0}^{t} f\{\tilde t\}\,d\tilde t\right\} \equiv \int_{0}^{\infty} e^{-st}\int_{0}^{t} f\{\tilde t\}\,d\tilde t\,dt \quad (14.110)$$

in Laplace's domain, using Eq. (14.1) as template; integration by parts may be formulated, in this case, as

$$\mathcal{L}\left\{\int_{0}^{t} f\{\tilde t\}\,d\tilde t\right\} = \left[\frac{e^{-st}}{-s}\int_{0}^{t} f\{\tilde t\}\,d\tilde t\right]_{0}^{\infty} + \frac{1}{s}\int_{0}^{\infty} e^{-st} f\{t\}\,dt = \left.\frac{\displaystyle\int_{0}^{t} f\{\tilde t\}\,d\tilde t}{s e^{st}}\right|_{0} - \left.\frac{\displaystyle\int_{0}^{t} f\{\tilde t\}\,d\tilde t}{s e^{st}}\right|_{\infty} + \frac{1}{s}\int_{0}^{\infty} e^{-st} f\{t\}\,dt \quad (14.111)$$

with the aid of Eq. (11.295). Note that

$$\left.\frac{\displaystyle\int_{0}^{t} f\{\tilde t\}\,d\tilde t}{s e^{st}}\right|_{0} = \lim_{t\to 0}\frac{\displaystyle\int_{0}^{t} f\{\tilde t\}\,d\tilde t}{s e^{st}} = \frac{0}{s e^{0}} = 0 \quad (14.112)$$

in view of Eq. (11.133), while

$$\left.\frac{\displaystyle\int_{0}^{t} f\{\tilde t\}\,d\tilde t}{s e^{st}}\right|_{\infty} = \lim_{t\to\infty}\frac{\displaystyle\int_{0}^{t} f\{\tilde t\}\,d\tilde t}{s e^{st}} = \frac{\displaystyle\int_{0}^{\infty} f\{\tilde t\}\,d\tilde t}{s e^{\infty}} = 0 \quad (14.113)$$

as long as ∫₀^∞ f{t} dt remains finite – an implicit requirement for existence of Laplace's transform, in much the same way Eq. (14.84) or Eq. (14.91) was set up. Based on Eqs. (14.112) and (14.113), one may redo Eq. (14.111) to

$$\mathcal{L}\left\{\int_{0}^{t} f\{\tilde t\}\,d\tilde t\right\} = \frac{\bar f\{s\}}{s}, \quad (14.114)$$

after recalling Eqs. (14.1) and (14.2). In other words, integration with regard to t in the time domain translates to division by s in Laplace's domain. It is interesting to realize that

$$\frac{k}{s^{2}+k^{2}} = \mathcal{L}\{\sin kt\} = \mathcal{L}\{\sin kt - 0\} = k\,\mathcal{L}\left\{\left[\frac{\sin k\tilde t}{k}\right]_{0}^{t}\right\} = k\,\mathcal{L}\left\{\int_{0}^{t}\cos k\tilde t\,d\tilde t\right\} \quad (14.115)$$

based on Eqs. (11.160), (14.45), and (14.79), coupled with Table 10.1, since sin kt|₀ = 0; Eqs. (14.51) and (14.114) may then be invoked to obtain

$$k\,\mathcal{L}\left\{\int_{0}^{t}\cos k\tilde t\,d\tilde t\right\} = k\,\frac{\mathcal{L}\{\cos kt\}}{s} = k\,\frac{\;\dfrac{s}{s^{2}+k^{2}}\;}{s} = \frac{k}{s^{2}+k^{2}} \quad (14.116)$$

that retrieves the original expression in the left-hand side of Eq. (14.115) – thus confirming consistency between definition of Laplace's transform as per Eq. (14.1), and Eq. (14.114).
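Equation (14.114) can be exercised numerically in the same spirit – a Python sketch with the arbitrary choice f{t} = cos t, whose running integral from 0 to t is sin t (the quadrature helper is illustrative only):

```python
import math

def laplace_num(f, s, T=50.0, n=50000):
    """Trapezoidal estimate of the Laplace integral of f over [0, T]."""
    h = T / n
    acc = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        t = i * h
        acc += math.exp(-s * t) * f(t)
    return acc * h

s = 2.0
lhs = laplace_num(math.sin, s)          # transform of the running integral of cos
rhs = laplace_num(math.cos, s) / s      # division by s, per Eq. (14.114)
```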

Another interesting property of Laplace's transform becomes apparent when the original function in the time domain is multiplied by t, viz.

$$\mathcal{L}\{t f\{t\}\} \equiv \int_{0}^{\infty} e^{-st}\,t f\{t\}\,dt \quad (14.117)$$

as per Eq. (14.1) – which may be rewritten as

$$\mathcal{L}\{t f\{t\}\} = -\int_{0}^{\infty}\left(-t e^{-st}\right) f\{t\}\,dt = -\int_{0}^{\infty}\frac{d e^{-st}}{ds}\,f\{t\}\,dt = -\frac{d}{ds}\int_{0}^{\infty} e^{-st} f\{t\}\,dt \quad (14.118)$$

based on realization that de^{−st}/ds = −t e^{−st}, and on the rule of differentiation of an integral with constant limits of integration, see Eq. (11.295); one may rewrite Eq. (14.118) as

$$\mathcal{L}\{t f\{t\}\} = -\frac{d\bar f\{s\}}{ds} \quad (14.119)$$

using a more condensed notation – which stresses that differentiating a function with regard to s in Laplace's domain coincides with multiplying its counterpart by −t in the time domain. The rule consubstantiated in Eq. (14.119) allows calculation of Laplace's transform of t sinh kt as

$$\mathcal{L}\{t \sinh kt\} = -\frac{d}{ds}\mathcal{L}\{\sinh kt\} = -\frac{d}{ds}\frac{k}{s^{2}-k^{2}} = -\left(-\frac{2ks}{\left(s^{2}-k^{2}\right)^{2}}\right) = \frac{2ks}{\left(s^{2}-k^{2}\right)^{2}} \quad (14.120)$$

based on Eqs. (10.139) and (14.81); as well as of t cosh kt, viz.

$$\mathcal{L}\{t \cosh kt\} = -\frac{d}{ds}\mathcal{L}\{\cosh kt\} = -\frac{d}{ds}\frac{s}{s^{2}-k^{2}} = -\frac{\left(s^{2}-k^{2}\right) - s\cdot 2s}{\left(s^{2}-k^{2}\right)^{2}} = \frac{s^{2}+k^{2}}{\left(s^{2}-k^{2}\right)^{2}}, \quad (14.121)$$

based instead on Eqs. (10.138) and (14.88) – included as third and fourth entries in Table 14.2. Note that t sinh kt and t cosh kt look alike in the time domain because factor t damps the differences between hyperbolic sine and cosine precisely where they are largest (i.e. at small t); whereas L{t sinh kt} and L{t cosh kt} are defined only for s > k, so as to guarantee that t e^{kt} e^{−st} → 0 when t → ∞, and evolve from a vertical asymptote at s = k down to the horizontal axis as asymptote when s → ∞. Any set of properties of Laplace's transform may be applied sequentially, and as many times as deemed necessary.
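Equations (14.120) and (14.121) admit the same style of numerical confirmation – a Python sketch with the arbitrary choices k = 1 and s = 2, so that s > k holds as required (the quadrature helper is illustrative only):

```python
import math

def laplace_num(f, s, T=50.0, n=50000):
    """Trapezoidal estimate of the Laplace integral of f over [0, T]."""
    h = T / n
    acc = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        t = i * h
        acc += math.exp(-s * t) * f(t)
    return acc * h

k, s = 1.0, 2.0
lhs_sinh = laplace_num(lambda t: t * math.sinh(k * t), s)
lhs_cosh = laplace_num(lambda t: t * math.cosh(k * t), s)
rhs_sinh = 2 * k * s / (s**2 - k**2) ** 2          # Eq. (14.120)
rhs_cosh = (s**2 + k**2) / (s**2 - k**2) ** 2      # Eq. (14.121)
```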

A good example starts with the realization that

$$\frac{d}{dt}\,\mathrm{erfc}\left\{\frac{k}{\sqrt{t}}\right\} = \frac{d}{dt}\left(1 - \frac{2}{\sqrt{\pi}}\int_{0}^{k/\sqrt{t}} e^{-\zeta^{2}}\,d\zeta\right) = -\frac{2}{\sqrt{\pi}}\,e^{-\zeta^{2}}\Big|_{\zeta = k/\sqrt{t}}\,\frac{d}{dt}\frac{k}{\sqrt{t}} = -\frac{2}{\sqrt{\pi}}\exp\left\{-\frac{k^{2}}{t}\right\}\left(-\frac{k}{2}\,t^{-3/2}\right) = \frac{k}{\sqrt{\pi}}\,\frac{1}{t\sqrt{t}}\exp\left\{-\frac{k^{2}}{t}\right\} \quad (14.122)$$

stemming from Eqs. (10.106), (11.295), and (14.52); multiplication of both sides by t converts Eq. (14.122) to

$$t\,\frac{d}{dt}\,\mathrm{erfc}\left\{\frac{k}{\sqrt{t}}\right\} = \frac{k}{\sqrt{\pi}}\,\frac{\exp\left\{-\dfrac{k^{2}}{t}\right\}}{\sqrt{t}}. \quad (14.123)$$

After taking Laplace's transforms of both sides, Eq. (14.123) becomes

$$\mathcal{L}\left\{t\,\frac{d}{dt}\,\mathrm{erfc}\left\{\frac{k}{\sqrt{t}}\right\}\right\} = \frac{k}{\sqrt{\pi}}\,\mathcal{L}\left\{\frac{\exp\left\{-\dfrac{k^{2}}{t}\right\}}{\sqrt{t}}\right\} \quad (14.124)$$

at the expense of Eq. (14.79); based on the property consubstantiated by Eq. (14.119), one obtains

$$-\frac{d}{ds}\,\mathcal{L}\left\{\frac{d}{dt}\,\mathrm{erfc}\left\{\frac{k}{\sqrt{t}}\right\}\right\} = \frac{k}{\sqrt{\pi}}\,\mathcal{L}\left\{\frac{\exp\left\{-\dfrac{k^{2}}{t}\right\}}{\sqrt{t}}\right\} \quad (14.125)$$

from Eq. (14.124), where Eq. (14.86) may be invoked to further convert Eq. (14.125) to

$$-\frac{d}{ds}\left(s\,\mathcal{L}\left\{\mathrm{erfc}\left\{\frac{k}{\sqrt{t}}\right\}\right\} - \mathrm{erfc}\left\{\frac{k}{\sqrt{t}}\right\}\bigg|_{t=0}\right) = \frac{k}{\sqrt{\pi}}\,\mathcal{L}\left\{\frac{\exp\left\{-\dfrac{k^{2}}{t}\right\}}{\sqrt{t}}\right\}. \quad (14.126)$$

In view of Eqs. (14.25) and (14.52), one finds that

$$\mathrm{erfc}\left\{\frac{k}{\sqrt{t}}\right\}\bigg|_{t=0} = \lim_{x\to\infty}\mathrm{erfc}\{x\} = \lim_{x\to\infty}\left(1 - \frac{2}{\sqrt{\pi}}\int_{0}^{x} e^{-\zeta^{2}}\,d\zeta\right) = 1 - \frac{2}{\sqrt{\pi}}\,\frac{\sqrt{\pi}}{2} = 1 - 1 = 0; \quad (14.127)$$

therefore, Eq. (14.126) breaks down to

$$-\frac{d}{ds}\left(s\,\mathcal{L}\left\{\mathrm{erfc}\left\{\frac{k}{\sqrt{t}}\right\}\right\}\right) = \frac{k}{\sqrt{\pi}}\,\mathcal{L}\left\{\frac{\exp\left\{-\dfrac{k^{2}}{t}\right\}}{\sqrt{t}}\right\}. \quad (14.128)$$

Insertion of Eq. (14.74) further transforms Eq. (14.128) to

$$-\frac{d}{ds}\left(s\,\frac{e^{-2k\sqrt{s}}}{s}\right) = \frac{k}{\sqrt{\pi}}\,\mathcal{L}\left\{\frac{\exp\left\{-\dfrac{k^{2}}{t}\right\}}{\sqrt{t}}\right\}, \quad (14.129)$$

where isolation of the outstanding transform and cancellation of s between numerator and denominator unfold

$$\mathcal{L}\left\{\frac{\exp\left\{-\dfrac{k^{2}}{t}\right\}}{\sqrt{t}}\right\} = -\frac{\sqrt{\pi}}{k}\,\frac{d}{ds}\,e^{-2k\sqrt{s}}; \quad (14.130)$$

upon application of Eqs. (10.33), (10.169), and (10.205), complemented with collapse of factors alike, one may redo Eq. (14.130) to

$$\mathcal{L}\left\{\frac{\exp\left\{-\dfrac{k^{2}}{t}\right\}}{\sqrt{t}}\right\} = -\frac{\sqrt{\pi}}{k}\left(-2k\,\frac{1}{2\sqrt{s}}\right)e^{-2k\sqrt{s}} = \sqrt{\frac{\pi}{s}}\,e^{-2k\sqrt{s}}. \quad (14.131)$$

Since k plays the role of dummy variable, one can replace it by √k – in which case Eq. (14.131) becomes

$$\mathcal{L}\left\{\frac{\exp\left\{-\dfrac{k}{t}\right\}}{\sqrt{t}}\right\} = \sqrt{\frac{\pi}{s}}\,e^{-2\sqrt{ks}}, \quad (14.132)$$

as per the last entry in Table 14.2; note the curve in time taking off from zero and attaining a maximum, followed by a decrease toward zero – which turns into an enhanced hyperbolic behavior in Laplace's domain, with the axes serving as vertical and horizontal asymptotes. Said maximum can be obtained via d(t^{−1/2}e^{−k/t})/dt = 0, where the rules of differentiation of a product, an exponential, and a square root enforce e^{−k/t}(k/t²)t^{−1/2} = e^{−k/t}/(2t√t); cancellation of e^{−k/t} and 1/√t between sides gives rise to k/t = 1/2, or else t = 2k as abscissa – which produces an ordinate equal to exp{−k/(2k)}/√(2k) = e^{−1/2}/√(2k), as also emphasized in Table 14.2.
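The location t = 2k of said maximum is easy to confirm by brute force – a Python sketch scanning g{t} = exp{−k/t}/√t over a fine grid, with k = 0.8 chosen arbitrarily:

```python
import math

k = 0.8
def g(t):
    """The time-domain curve of Eq. (14.132): exp(-k/t)/sqrt(t)."""
    return math.exp(-k / t) / math.sqrt(t)

ts = [0.01 * i for i in range(1, 2001)]   # grid on (0, 20]
t_star = max(ts, key=g)                   # abscissa of the maximum
```

The grid maximum sits at t = 2k = 1.6, with ordinate e^{−1/2}/√(2k), as derived above.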

If a delay occurs in the time domain – mathematically equivalent to replacing t by t − k in the argument of a function f{t}, with constant k > 0 – then Eq. (14.1) should appear as

$$\mathcal{L}\{f\{t-k\}\} \equiv \int_{0}^{\infty} e^{-st} f\{t-k\}\,dt; \quad (14.133)$$

Eq. (14.133) can be rewritten as

$$\mathcal{L}\{f\{t-k\}\} = \int_{0}^{\infty} e^{-st+ks-ks} f\{t-k\}\,dt = \int_{0}^{\infty} e^{-ks}\,e^{-s\left(t-k\right)} f\{t-k\}\,dt = e^{-ks}\int_{-k}^{\infty} e^{-s\left(t-k\right)} f\{t-k\}\,d\left(t-k\right), \quad (14.134)$$

after multiplying and dividing the kernel by e^{ks}, and splitting the result as a product of two exponential functions – besides realizing that d(t − k) coincides with dt, as per the definition of differential and the constancy of k, so t − k → −k when t → 0. Since the kernel is nil within −k ≤ t − k ≤ 0 – because f{t − k} is obviously defined only for t − k ≥ 0 – the integral is nil as well in that interval, so Eq. (14.134) simplifies to

$$\mathcal{L}\{f\{t-k\}\} = e^{-ks}\left(\int_{-k}^{0} e^{-s\left(t-k\right)}\,0\,d\left(t-k\right) + \int_{0}^{\infty} e^{-s\left(t-k\right)} f\{t-k\}\,d\left(t-k\right)\right) = e^{-ks}\int_{0}^{\infty} e^{-s\left(t-k\right)} f\{t-k\}\,d\left(t-k\right) \quad (14.135)$$

since f{t − k}|_{−k < t−k < 0} = 0; Eq. (14.135) is equivalent to

$$\mathcal{L}\{f\{t-k\}\} = e^{-ks}\int_{0}^{\infty} e^{-sT} f\{T\}\,dT, \quad (14.136)$$

provided that

$$T \equiv t - k \quad (14.137)$$

underlies a deviation time. The definition of Laplace's transform holds irrespective of the dummy variable of integration utilized, thus allowing transformation of Eq. (14.136) to

$$\mathcal{L}\{f\{t-k\}\} = e^{-ks}\,\bar f\{s\} \quad (14.138)$$

with the aid again of Eqs. (14.1) and (14.2) – known as theorem of translation in the time domain; in other words, a translation of a function f{t} by k in the time domain corresponds to a multiplicative correction of f̄{s} by e^{−ks} in Laplace's domain.
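The theorem of translation in the time domain is likewise amenable to a numerical check – a Python sketch with f{t} = e^{−t}, so that f̄{s} = 1/(s+1), delayed by the arbitrary amount k = 0.7 (the quadrature helper is illustrative only):

```python
import math

def laplace_num(f, s, T=50.0, n=50000):
    """Trapezoidal estimate of the Laplace integral of f over [0, T]."""
    h = T / n
    acc = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        t = i * h
        acc += math.exp(-s * t) * f(t)
    return acc * h

k, s = 0.7, 1.3
def f_delayed(t):
    # e^{-(t-k)} switched on at t = k; nil before the delay elapses
    return math.exp(-(t - k)) if t >= k else 0.0

lhs = laplace_num(f_delayed, s)
rhs = math.exp(-k * s) / (s + 1.0)   # e^{-ks} * f_bar{s}, per Eq. (14.138)
```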

It is instructive at this stage to revisit Eq. (14.86) when s grows unbounded, viz.

$$\lim_{s\to\infty}\mathcal{L}\left\{\frac{df\{t\}}{dt}\right\} = \lim_{s\to\infty}\left(s\bar f\{s\} - f\{t\}\big|_{0}\right) = \lim_{s\to\infty} s\bar f\{s\} - f\{t\}\big|_{0}, \quad (14.139)$$

in view of f{t}|₀ being independent of s, and recalling the results conveyed by Eqs. (9.30), (9.86), and (9.108); on the other hand, the left-hand side of Eq. (14.139) reads

$$\lim_{s\to\infty}\mathcal{L}\left\{\frac{df\{t\}}{dt}\right\} \equiv \lim_{s\to\infty}\int_{0}^{\infty} e^{-st}\,\frac{df\{t\}}{dt}\,dt = \int_{0}^{\infty}\left(\lim_{s\to\infty} e^{-st}\right)\frac{df\{t\}}{dt}\,dt = 0, \quad (14.140)$$

following plain application of Eq. (14.1) with t as integration variable – and retrieving the hypothesis that df{t}/dt grows slower with t than e^{st} (otherwise no Laplace's transform could be defined), together with realization that e^{−st}(df{t}/dt) → 0 when s → ∞. Elimination of lim_{s→∞} L{df{t}/dt} between Eqs. (14.139) and (14.140) generates

$$\lim_{s\to\infty} s\bar f\{s\} - \lim_{t\to 0} f\{t\} = 0, \quad (14.141)$$

or else

$$\lim_{t\to 0} f\{t\} = \lim_{s\to\infty} s\bar f\{s\}; \quad (14.142)$$

this is known as initial value theorem – and implies that the initial value of function f{t} coincides with the value of the product of s by its Laplace's transform when s grows unbounded. A similar argument may be used when s → 0, i.e.

$$\lim_{s\to 0}\mathcal{L}\left\{\frac{df\{t\}}{dt}\right\} = \lim_{s\to 0}\left(s\bar f\{s\} - f\{t\}\big|_{0}\right) = \lim_{s\to 0} s\bar f\{s\} - f\{t\}\big|_{0} \quad (14.143)$$

using Eq. (14.139) as template; the definition of Laplace's transform now gives rise to

$$\lim_{s\to 0}\mathcal{L}\left\{\frac{df\{t\}}{dt}\right\} \equiv \lim_{s\to 0}\int_{0}^{\infty} e^{-st}\,\frac{df\{t\}}{dt}\,dt = \int_{0}^{\infty}\left(\lim_{s\to 0} e^{-st}\right)\frac{df\{t\}}{dt}\,dt = \int_{0}^{\infty} e^{0}\,\frac{df\{t\}}{dt}\,dt = \int_{t=0}^{t=\infty} df\{t\} = \lim_{t\to\infty} f\{t\} - f\{t\}\big|_{0} \quad (14.144)$$

pertaining to the left-hand side, where the fundamental theorem of integral calculus was taken advantage of after dropping dt from both numerator and denominator. Elimination of lim_{s→0} L{df{t}/dt} between Eqs. (14.143) and (14.144) is now in order, viz.

$$\lim_{s\to 0} s\bar f\{s\} - f\{t\}\big|_{0} = \lim_{t\to\infty} f\{t\} - f\{t\}\big|_{0}, \quad (14.145)$$

where cancellation of terms alike between sides permits simplification to

$$\lim_{t\to\infty} f\{t\} = \lim_{s\to 0} s\bar f\{s\}; \quad (14.146)$$

Eq. (14.146) is usually designated by final value theorem – since it assures coincidence of the value to be eventually attained by f{t} in the time domain, with the initial value taken by s f̄{s} in Laplace's domain.
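Both the initial and the final value theorems can be illustrated at once – a Python sketch with the arbitrary choice f{t} = 1 − e^{−t}, for which f̄{s} = 1/s − 1/(s+1), hence s f̄{s} = 1/(s+1), while f{0} = 0 and lim_{t→∞} f{t} = 1:

```python
def s_f_bar(s):
    """s * f_bar{s} for f{t} = 1 - exp(-t), whose transform is 1/s - 1/(s+1)."""
    return s * (1.0 / s - 1.0 / (s + 1.0))

initial = s_f_bar(1.0e6)    # s unbounded: should approach f{0} = 0
final = s_f_bar(1.0e-6)     # s -> 0: should approach f{t -> infinity} = 1
```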

14.1.3 Inversion

Although translation of a given function from the time domain to Laplace's domain can be accomplished via definition of Laplace's transform, or with the aid of any of the theorems presented previously, solution of the original problem in time will not be complete unless transformation back to the time domain is effected – in which case Eq. (14.3) will be coined as

$$f\{t\} = \mathcal{L}^{-1}\left[\mathcal{L}\left\{f\{t\}\right\}\right] \equiv \mathcal{L}^{-1}\left[\bar f\{s\}\right]; \quad (14.147)$$

this is conceptually possible due to the unique correspondence between f{t} and f̄{s}, and vice versa, further to composition of one operator with its inverse. As happened with integration relative to differentiation, inversion of Laplace's transforms is usually much harder to achieve than the direct process – yet a number of tools have been made available to facilitate it, as briefly described below. A first strategy lies on the translation theorem in Laplace's domain; in fact, Laplace's transform of the product of any function f{t} by e^{−kt} reads

$$\mathcal{L}\left\{e^{-kt} f\{t\}\right\} \equiv \int_{0}^{\infty} e^{-st} e^{-kt} f\{t\}\,dt \quad (14.148)$$

in agreement with Eq. (14.1), where grouping of exponential factors gives

$$\mathcal{L}\left\{e^{-kt} f\{t\}\right\} = \int_{0}^{\infty} e^{-st-kt} f\{t\}\,dt, \quad (14.149)$$

or else

$$\mathcal{L}\left\{e^{-kt} f\{t\}\right\} = \int_{0}^{\infty} e^{-\left(s+k\right)t} f\{t\}\,dt. \quad (14.150)$$

Since s is a dummy variable accompanying definition of f̄{s} as per Eqs. (14.1) and (14.2), one has it that

$$\bar f\{s+k\} \equiv \int_{0}^{\infty} e^{-\left(s+k\right)t} f\{t\}\,dt \quad (14.151)$$

following exchange of s and s + k, so one may also write

$$\mathcal{L}\left\{e^{-kt} f\{t\}\right\} = \bar f\{s+k\} \quad (14.152)$$

– in which case Eq. (14.150) will turn to

$$\mathcal{L}^{-1}\left[\bar f\{s+k\}\right] = \mathcal{L}^{-1}\left[\mathcal{L}\left\{e^{-kt} f\{t\}\right\}\right] = e^{-kt} f\{t\} = e^{-kt}\,\mathcal{L}^{-1}\left[\bar f\{s\}\right] \quad (14.153)$$

with the aid of Eq. (14.147), and upon application of the inverse Laplace's operator to both sides; Eq. (14.153) indicates that a translation in Laplace's domain from s to s + k applying to f̄{s} implies a multiplicative correction of e^{−kt} to f{t} = L⁻¹[f̄{s}] in the time domain.

One obvious application of Eq. (14.153) involves (reciprocal) powers of s + k, i.e.

$$\mathcal{L}^{-1}\left[\frac{n!}{\left(s+k\right)^{n+1}}\right] = e^{-kt}\,\mathcal{L}^{-1}\left[\frac{n!}{s^{n+1}}\right]; \quad (14.154)$$

this is equivalent to

$$\mathcal{L}^{-1}\left[\frac{n!}{\left(s+k\right)^{n+1}}\right] = t^{n} e^{-kt};\; n = 0, 1, 2, \ldots \quad (14.155)$$

as per Eq. (14.11) – and accounts for the first entry in Table 14.3. The function in the time domain departs from zero and increases up to a maximum defined by d(tⁿe^{−kt})/dt = 0 or, equivalently, n t^{n−1}e^{−kt} − k tⁿe^{−kt} = t^{n−1}e^{−kt}(n − kt) = 0 – which produces t = n/k for abscissa; consequently, (n/k)ⁿ exp{−k(n/k)} = (n/k)ⁿe^{−n} = (n/ke)ⁿ will serve as ordinate – whereas a typical decreasing exponential results in Laplace's domain, departing from n!/k^{n+1} at s = 0.

as per Eq. (14.11) – and accounts for the first entry in Table 14.3. The function in the time domain departs from zero and increases up to a maximum defined by d(tne−kt)/dt = 0 or, equivalently, ntn−1e−kt − ktne−kt = tn−1e−kt(n − kt) = 0 – which produces t = n/k for abscissa; consequently, (n/k)nexp{−k(n/k)} = (n/k)ne−n = (n/ke)n will serve as ordinate – whereas a typical decreasing exponential results in Laplace’s domain, departing from n!/kn+1 at s = 0. A similar application of Eq. (14.153) pertains to trigonometric functions, viz. k1

−1

k12

+ s + k2

2

= e − k2 t

−1

k1 = e − k2 t sin k1 t, + s2

k12

14 156

with the aid of Eq. (14.45), associated with a decaying sine in the time domain – and similarly s + k2

−1

k12 + s + k2

2

= e − k2 t

−1

s = e − k2 t cos k1 t, k12 + s2

14 157

now at the expense of Eq. (14.51), describing a decaying cosine also in the time domain (both under the hypothesis that k1 > 0 and k2 > 0); these results are included in Table 14.3 as second and third entries, respectively. The oscillatory behavior (with decreasing amplitude) in the time domain corresponds to a damped pattern in Laplace’s domain. The latter, in the case of Eq. (14.157), goes through a maximum – obtained as d ds

s + k2 k12 + s + k2

2

= 0 that gives rise to k12 + (s + k2)2 − 2(s + k2)2 = 0; which, in turn, degen-

erates to s = k1 − k2 as abscissa, after lumping terms alike and taking square roots – while the ordinate reads 2 k1 − k2 + k2 2 , or else k 2 k+1 k 2 upon cancellation of symmetrical k1 +

k1 − k2 + k2

1

1

terms, which eventually degenerates to 2k11 upon collapsing terms alike, and dropping k1 from both numerator and denominator afterward. In the case of Eq. (14.156), the

Table 14.3 List of Laplace's inverse transforms of functions obtained via definition and associated theorems (schematic plots of f{t} and f̄{s} accompany each entry in the original).

Type | Function, f{t} (t-domain) | Function, f̄{s} (s-domain)
Exponential-weighed power | tⁿe^{−kt}, n integer (maximum (n/ke)ⁿ at t = n/k) | n!/(s+k)^{n+1} (departing from n!/k^{n+1} at s = 0)
Exponential-weighed trigonometric functions | e^{−k₂t} sin k₁t (crosses the time axis at π/k₁) | k₁/(k₁² + (s+k₂)²) (departing from k₁/(k₁²+k₂²) at s = 0)
 | e^{−k₂t} cos k₁t (crosses the time axis at π/2k₁) | (s+k₂)/(k₁² + (s+k₂)²) (maximum 1/2k₁ at s = k₁−k₂)
Combinations of exponentials | (e^{k₁t} − e^{k₂t})/(k₁ − k₂) | 1/((s−k₁)(s−k₂))
 | (e^{k₁t} − (1 + (k₁−k₂)t)e^{k₂t})/(k₁ − k₂)² | 1/((s−k₁)(s−k₂)²)

Another important feature pertains to the product of two functions in Laplace's domain – and resorts to the concept of Laplace's transform applied to convolution of two functions, f{t} and g{t}, within a given interval, [0, t], according to

$$\mathcal{L}\left\{\int_{0}^{t} f\{\tilde t\}\,g\{t-\tilde t\}\,d\tilde t\right\} \equiv \int_{0}^{\infty} e^{-st}\int_{0}^{t} f\{\tilde t\}\,g\{t-\tilde t\}\,d\tilde t\,dt \quad (14.158)$$

as per Eq. (14.1); after exchanging the order of integration, as allowed by Fubini's theorem, one obtains

$$\mathcal{L}\left\{\int_{0}^{t} f\{\tilde t\}\,g\{t-\tilde t\}\,d\tilde t\right\} = \int_{0}^{\infty}\int_{\tilde t}^{\infty} f\{\tilde t\}\,g\{t-\tilde t\}\,e^{-st}\,dt\,d\tilde t \quad (14.159)$$

– where equivalence between limits of integration is made apparent in Fig. 14.1. Upon inspection of Fig. 14.1a, one confirms that t̃ in Eq. (14.158) spans interval [0, t], while t varies freely between 0 and ∞; reversal of the order of integration, as done in Eq. (14.159), supports flipping of Fig. 14.1a around the t̃ = t line to make t̃ vary freely between 0 and ∞, while t spans from the current value of t̃ up to ∞ – as emphasized in Fig. 14.1b, with the (shaded) integration area remaining unchanged in nature and magnitude.

Figure 14.1 Representation of integration area of double integral on (a) the t̃–t plane and (b) the t–t̃ plane, with the shaded regions bounded by the line t̃ = t. (Plot panels omitted.)

Definition of an auxiliary (inner) integration variable τ is in order, viz.

$$\tau \equiv t - \tilde t, \quad (14.160)$$

which is equivalent to

$$t = \tilde t + \tau \quad (14.161)$$

upon isolation of t, and also implies

$$d\tau = dt \quad (14.162)$$

as per Eq. (10.1) – since t̃ remains constant as far as integration in t is concerned; Eq. (14.159) may thus be rewritten as

$$\mathcal{L}\left\{\int_{0}^{t} f\{\tilde t\}\,g\{t-\tilde t\}\,d\tilde t\right\} = \int_{0}^{\infty}\int_{0}^{\infty} f\{\tilde t\}\,g\{\tau\}\,e^{-s\left(\tilde t+\tau\right)}\,d\tau\,d\tilde t = \int_{0}^{\infty}\int_{0}^{\infty} f\{\tilde t\}\,e^{-s\tilde t}\,g\{\tau\}\,e^{-s\tau}\,d\tau\,d\tilde t \quad (14.163)$$

at the expense of Eqs. (14.160)–(14.162) and splitting of exponential functions – after realizing that τ → ∞ when t → ∞, and likewise τ → 0 when t → t̃, as per Eq. (14.160). The double integral in Eq. (14.163) may be expressed as the product of two integrals as

$$\mathcal{L}\left\{\int_{0}^{t} f\{\tilde t\}\,g\{t-\tilde t\}\,d\tilde t\right\} = \int_{0}^{\infty} f\{\tilde t\}\,e^{-s\tilde t}\,d\tilde t\;\int_{0}^{\infty} g\{\tau\}\,e^{-s\tau}\,d\tau, \quad (14.164)$$

because both f and g have meanwhile become univariate functions, and integration limits turned independent of each other; definition of Laplace's transforms of f{t̃} and g{τ} (with t̃ and τ playing the roles of dummy variables of integration), consistent with Eqs. (14.1) and (14.2), allows conversion of Eq. (14.164) to

$$\mathcal{L}\left\{\int_{0}^{t} f\{\tilde t\}\,g\{t-\tilde t\}\,d\tilde t\right\} = \bar f\{s\}\,\bar g\{s\}, \quad (14.165)$$

whereas application of inverse Laplace's operator as per Eq. (14.147) to both sides yields

$$\mathcal{L}^{-1}\left[\bar f\{s\}\,\bar g\{s\}\right] = \int_{0}^{t} f\{\tilde t\}\,g\{t-\tilde t\}\,d\tilde t \quad (14.166)$$

– usually known as convolution theorem. If g{t} ≡ 1, then Eq. (14.165) applies as

$$\mathcal{L}\left\{\int_{0}^{t} f\{\tilde t\}\,d\tilde t\right\} = \mathcal{L}\left\{\int_{0}^{t} f\{\tilde t\}\,1\,d\tilde t\right\} = \bar f\{s\}\,\frac{1}{s} = \frac{\bar f\{s\}}{s} \quad (14.167)$$

with the aid of Eq. (14.4) – thus retrieving Eq. (14.114) that describes the outcome, in Laplace's domain, of integration in the time domain.

with the aid of Eq. (14.4) – thus retrieving Eq. (14.114) that describes the outcome, in Laplace’s domain, of integration in the time domain. Once in possession of Eq. (14.166), one can handle composite translations in Laplace’s domain; for instance, one has it that −1

1 s − k1 s −k2

−1

=

1 1 = s − k1 s − k2

t

1 s− k1

−1

0

−1 t

t

1 s− k2

dt, t

t −t

14 168 where Eq. (14.155) may be invoked, for n = 0, to write −1

1 s −k1 s − k2

t

t

e k1 t

= 0

t

t

= e k2 t

e

t

e k2 t

k1 −k2 t

t

t −t

e k1 t e k2

dt =

t −t

t

dt =

0

e k1 t e k2 t e − k2 t dt

0

dt

0

14 169 – along with algebraic rearrangement of exponential functions, and realization that e k2 t not depending on t allows its being taken off the kernel; application of Eq. (11.160) then gives rise to −1

1 s −k1 s−k2

t

=e

k2 t

e k1 − k2 t e k2 t = e k1 −k2 k1 − k2 0

=

e k2 t e k1 −k2

k1 −k2 t

− e0 =

k1 −k2 t t

−e

k1 −k2 t 0

e k2 t e k1 t −e k2 t e k1 t e − k2 t − 1 = k1 −k2 k1 −k2

14 170

Transforms

Equation (14.170) has been conveyed as fourth entry in Table 14.3 – where the vertical asymptote at s = k₂ is apparent in Laplace's domain, as well as the typical exponential pattern of f{t} in the time domain; note that k₂ > k₁ > 0 was considered, which implies definition only for s > k₂ – so as to assure e^{k₂t}e^{−st} = e^{(k₂−s)t} → 0 as t → ∞, and likewise e^{k₁t}e^{−st} = e^{(k₁−s)t} → 0. A related possibility entails one of the factors in denominator raised to a power above unity, say,

$$\mathcal{L}^{-1}\left[\frac{1}{\left(s-k_{1}\right)\left(s-k_{2}\right)^{2}}\right] = \mathcal{L}^{-1}\left[\frac{1}{s-k_{1}}\,\frac{1}{\left(s-k_{2}\right)^{2}}\right] = \int_{0}^{t}\mathcal{L}^{-1}\left[\frac{1}{s-k_{1}}\right]\bigg|_{\tilde t}\,\mathcal{L}^{-1}\left[\frac{1}{\left(s-k_{2}\right)^{2}}\right]\bigg|_{t-\tilde t}\,d\tilde t \quad (14.171)$$

as per Eq. (14.166); upon insertion of Eq. (14.155) for n = 0 or n = 1 (as appropriate), one gets

$$\mathcal{L}^{-1}\left[\frac{1}{\left(s-k_{1}\right)\left(s-k_{2}\right)^{2}}\right] = \int_{0}^{t} e^{k_{1}\tilde t}\left(t-\tilde t\right)e^{k_{2}\left(t-\tilde t\right)}\,d\tilde t = \int_{0}^{t}\left(e^{k_{1}\tilde t}\,t\,e^{k_{2}t}e^{-k_{2}\tilde t} - e^{k_{1}\tilde t}\,\tilde t\,e^{k_{2}t}e^{-k_{2}\tilde t}\right)d\tilde t = t\,e^{k_{2}t}\int_{0}^{t} e^{\left(k_{1}-k_{2}\right)\tilde t}\,d\tilde t - e^{k_{2}t}\int_{0}^{t}\tilde t\,e^{\left(k_{1}-k_{2}\right)\tilde t}\,d\tilde t, \quad (14.172)$$

where algebraic rearrangement of the kernel meanwhile took place, followed by application of Eq. (11.102) to split the integral. The first integral of Eq. (14.172) may be directly calculated, yet the second one should undergo integration by parts according to

$$\mathcal{L}^{-1}\left[\frac{1}{\left(s-k_{1}\right)\left(s-k_{2}\right)^{2}}\right] = t\,e^{k_{2}t}\left[\frac{e^{\left(k_{1}-k_{2}\right)\tilde t}}{k_{1}-k_{2}}\right]_{0}^{t} - e^{k_{2}t}\left(\left[\tilde t\,\frac{e^{\left(k_{1}-k_{2}\right)\tilde t}}{k_{1}-k_{2}}\right]_{0}^{t} - \int_{0}^{t}\frac{e^{\left(k_{1}-k_{2}\right)\tilde t}}{k_{1}-k_{2}}\,d\tilde t\right) = \frac{t\,e^{k_{2}t}}{k_{1}-k_{2}}\left(e^{\left(k_{1}-k_{2}\right)t} - e^{0}\right) - \frac{e^{k_{2}t}}{k_{1}-k_{2}}\left(t\,e^{\left(k_{1}-k_{2}\right)t} - 0\,e^{0} - \left[\frac{e^{\left(k_{1}-k_{2}\right)\tilde t}}{k_{1}-k_{2}}\right]_{0}^{t}\right) \quad (14.173)$$

in agreement with Eq. (11.177), and after convenient factoring out of the reciprocal of k₁ − k₂; further algebraic manipulation yields

$$\mathcal{L}^{-1}\left[\frac{1}{\left(s-k_{1}\right)\left(s-k_{2}\right)^{2}}\right] = t\,\frac{e^{k_{1}t} - e^{k_{2}t}}{k_{1}-k_{2}} - t\,\frac{e^{k_{1}t}}{k_{1}-k_{2}} + \frac{e^{k_{1}t} - e^{k_{2}t}}{\left(k_{1}-k_{2}\right)^{2}} = \frac{e^{k_{1}t} - e^{k_{2}t} - \left(k_{1}-k_{2}\right)t\,e^{k_{2}t}}{\left(k_{1}-k_{2}\right)^{2}} = \frac{e^{k_{1}t} - \left(1 + \left(k_{1}-k_{2}\right)t\right)e^{k_{2}t}}{\left(k_{1}-k_{2}\right)^{2}} \quad (14.174)$$


– included as last entry of Table 14.3. Note the (slightly) higher distortion of the plot in the s-domain relative to the case where s − k₂ appeared only as such – despite the similar vertical asymptote, which corresponds to a rising exponential in the time domain, when k₁ and k₂ are taken as positive; once again, only s > k₂ > k₁ > 0 was considered, otherwise e^{k₂t}e^{−st} or e^{k₁t}e^{−st} would grow unbounded when t → ∞.

The most useful method of inversion of Laplace's transforms is, however, Heaviside's expansion in partial fractions – according to

$$\mathcal{L}^{-1}\left[\bar f\{s\}\right] = \mathcal{L}^{-1}\left[\frac{\displaystyle\sum_{i=0}^{M} a_{i} s^{i}}{\displaystyle\sum_{j=0}^{N} b_{j} s^{j}}\right] = \mathcal{L}^{-1}\left[\frac{\displaystyle\sum_{i=0}^{M} a_{i} s^{i}}{b_{N}\displaystyle\prod_{k=1}^{K}\left(s-s_{k}\right)^{m_{k}}}\right], \quad (14.175)$$

where degree M of the polynomial on s in numerator is typically lower than degree N of the polynomial on s in denominator – so Σ_{i=0}^{M} a_i sⁱ / Σ_{j=0}^{N} b_j s^j represents a regular rational fraction; here a_i and b_j denote coefficients of the polynomials in numerator and denominator, respectively, while s_k denotes each of the distinct K roots of the latter (also known as poles) and m_k denotes the multiplicity of said pole (with K ≤ N, obviously). Upon expansion in partial fractions, Eq. (14.175) becomes

$$\mathcal{L}^{-1}\left[\bar f\{s\}\right] = \mathcal{L}^{-1}\left[\sum_{k=1}^{K}\sum_{l=0}^{m_{k}-1}\frac{A_{k,l+1}}{\left(s-s_{k}\right)^{m_{k}-l}}\right], \quad (14.176)$$

using Eq. (2.205) as template; recalling the linearity of Laplace's operator as per Eq. (14.79), one obtains

$$\mathcal{L}^{-1}\left[\bar f\{s\}\right] = \sum_{k=1}^{K}\sum_{l=0}^{m_{k}-1} A_{k,l+1}\,\mathcal{L}^{-1}\left[\frac{1}{\left(s-s_{k}\right)^{m_{k}-l}}\right] = \sum_{k=1}^{K}\sum_{l=0}^{m_{k}-1}\frac{A_{k,l+1}}{\left(m_{k}-l-1\right)!}\,\mathcal{L}^{-1}\left[\frac{\left(m_{k}-l-1\right)!}{\left(s-s_{k}\right)^{m_{k}-l}}\right], \quad (14.177)$$

where multiplication and division by (m_k − l − 1)! meanwhile took place. Equation (14.153) may now be retrieved to get

$$\mathcal{L}^{-1}\left[\bar f\{s\}\right] = \sum_{k=1}^{K}\sum_{l=0}^{m_{k}-1}\frac{A_{k,l+1}\,e^{s_{k}t}}{\left(m_{k}-l-1\right)!}\,\mathcal{L}^{-1}\left[\frac{\left(m_{k}-l-1\right)!}{s^{m_{k}-l}}\right] \quad (14.178)$$

from Eq. (14.177), whereas Eq. (14.11) finally allows one to write

$$\mathcal{L}^{-1}\left[\bar f\{s\}\right] = \sum_{k=1}^{K}\sum_{l=0}^{m_{k}-1}\frac{A_{k,l+1}\,t^{m_{k}-l-1}\,e^{s_{k}t}}{\left(m_{k}-l-1\right)!}. \quad (14.179)$$
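For simple poles (all m_k = 1), the coefficients A_{k,1} in Eq. (14.176) reduce to residues, A_{k,1} = lim_{s→s_k}(s − s_k)f̄{s}; a Python sketch for the hypothetical transform f̄{s} = 1/((s+1)(s+2)), whose inverse per Eq. (14.179) should be e^{−t} − e^{−2t}:

```python
import math

def f_bar(s):
    """Hypothetical transform with simple poles at s = -1 and s = -2."""
    return 1.0 / ((s + 1.0) * (s + 2.0))

def residue(pole, eps=1e-7):
    """A_{k,1} for a simple pole, via (s - s_k) * f_bar{s} evaluated near the pole."""
    s = pole + eps
    return (s - pole) * f_bar(s)

A1 = residue(-1.0)   # expected +1, multiplying e^{-t}
A2 = residue(-2.0)   # expected -1, multiplying e^{-2t}

def f(t):
    """Inverse transform assembled as per Eq. (14.179)."""
    return A1 * math.exp(-t) + A2 * math.exp(-2.0 * t)
```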

When s_k is a complex number, of the form a + ιb, then a − ιb will also be a pole in practice (otherwise the coefficients of Σ_{j=0}^{N} b_j s^j would not all be real numbers); hence, terms in e^{ιbt} and e^{−ιbt} will appear in Eq. (14.179). Under such circumstances, one may resort to Eqs. (2.562) and (2.563) to get rid of explicit ι, while both terms will appear multiplied by e^{at}, i.e.

$$\mathcal{L}^{-1}\left[\frac{A_{k,1}}{s-\left(a+\iota b\right)} + \frac{A_{k,2}}{s-\left(a-\iota b\right)}\right] = e^{at}\left(B_{k,1}\cos bt + B_{k,2}\sin bt\right), \quad (14.180)$$

with the B_k's denoting (modified) constants dependent on A_{k,1} and A_{k,2}. One may, in alternative, handle the corresponding two terms in lumped form, viz.

$$\frac{A_{k,1}}{s-\left(a+\iota b\right)} + \frac{A_{k,2}}{s-\left(a-\iota b\right)} = \frac{A_{k,1}\left(s-a+\iota b\right) + A_{k,2}\left(s-a-\iota b\right)}{\left(s-a-\iota b\right)\left(s-a+\iota b\right)} = \frac{\left(A_{k,1}+A_{k,2}\right)\left(s-a\right) + \iota\left(A_{k,1}-A_{k,2}\right)b}{\left(s-a\right)^{2}+b^{2}} \quad (14.181)$$

after pooling them together as a single fraction, followed by splitting in ι-containing and ι-independent terms (while recalling that ι² = −1); straightforward algebraic rearrangement then yields

$$\frac{A_{k,1}}{s-\left(a+\iota b\right)} + \frac{A_{k,2}}{s-\left(a-\iota b\right)} = \frac{\left(A_{k,1}+A_{k,2}\right)\left(s-a\right)}{\left(s-a\right)^{2}+b^{2}} + \frac{\iota\left(A_{k,1}-A_{k,2}\right)b}{\left(s-a\right)^{2}+b^{2}} = B_{k,1}\frac{s-a}{\left(s-a\right)^{2}+b^{2}} + B_{k,2}\frac{b}{\left(s-a\right)^{2}+b^{2}} \quad (14.182)$$

– as suggested by Eq. (2.211), provided that constants B_{k,1} and B_{k,2} are defined as

$$B_{k,1} \equiv A_{k,1} + A_{k,2} \quad (14.183)$$

and

$$B_{k,2} \equiv \iota\left(A_{k,1} - A_{k,2}\right), \quad (14.184)$$

respectively. Inversion of Laplace's transform in Eq. (14.182) may now proceed as

$$\mathcal{L}^{-1}\left[\frac{A_{k,1}}{s-\left(a+\iota b\right)} + \frac{A_{k,2}}{s-\left(a-\iota b\right)}\right] = \mathcal{L}^{-1}\left[B_{k,1}\frac{s-a}{\left(s-a\right)^{2}+b^{2}} + B_{k,2}\frac{b}{\left(s-a\right)^{2}+b^{2}}\right] \quad (14.185)$$

– where Eqs. (14.79) and (14.153) support transformation to

$$\mathcal{L}^{-1}\left[\frac{A_{k,1}}{s-\left(a+\iota b\right)} + \frac{A_{k,2}}{s-\left(a-\iota b\right)}\right] = B_{k,1}\,\mathcal{L}^{-1}\left[\frac{s-a}{\left(s-a\right)^{2}+b^{2}}\right] + B_{k,2}\,\mathcal{L}^{-1}\left[\frac{b}{\left(s-a\right)^{2}+b^{2}}\right] = B_{k,1}\,e^{at}\,\mathcal{L}^{-1}\left[\frac{s}{s^{2}+b^{2}}\right] + B_{k,2}\,e^{at}\,\mathcal{L}^{-1}\left[\frac{b}{s^{2}+b^{2}}\right]; \quad (14.186)$$

recalling Eqs. (14.45) and (14.51), one finally obtains

$$\mathcal{L}^{-1}\left[\frac{A_{k,1}}{s-\left(a+\iota b\right)} + \frac{A_{k,2}}{s-\left(a-\iota b\right)}\right] = B_{k,1}\,e^{at}\cos bt + B_{k,2}\,e^{at}\sin bt \quad (14.187)$$

that will recover Eq. (14.180) upon factoring out of e^{at}, as expected.
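Equation (14.182), and hence Eq. (14.187), may be probed numerically as well; a Python sketch with the arbitrary pole pair a ± ιb = −1 ± 2ι and B_{k,1} = 1, B_{k,2} = 0, i.e. f{t} = e^{−t} cos 2t (the quadrature helper is illustrative only):

```python
import math

def laplace_num(f, s, T=50.0, n=50000):
    """Trapezoidal estimate of the Laplace integral of f over [0, T]."""
    h = T / n
    acc = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        t = i * h
        acc += math.exp(-s * t) * f(t)
    return acc * h

a, b, s = -1.0, 2.0, 1.0
lhs = laplace_num(lambda t: math.exp(a * t) * math.cos(b * t), s)
rhs = (s - a) / ((s - a) ** 2 + b ** 2)   # the B_{k,1} term of Eq. (14.182)
```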

14.2 Legendre's Transform

Generally speaking, a function expresses a relation between two parameters – an independent variable or control parameter (say, x), and a dependent value or function (say, f); this information is encoded in the functional form f ≡ f{x}. However, it is useful under some circumstances to encode the information contained in f{x} in a different manner – as happens with Laplace's transform, which expresses f{x} as its integral weighed by an exponential; the information in f may thus be displayed in terms of the amount contained in the function, rather than the value of the function proper. Another possibility is Legendre's transform; it indeed provides a more convenient way of encoding the information of the function when the latter is strictly convex (i.e. d²f/dx² never changes sign, nor takes a zero value) and smooth (i.e. it possesses a sufficient number of continuous sequential derivatives) – besides it being easier to measure, control, or think about df/dx instead of f itself. The aforementioned transform – named after Adrien-Marie Legendre, a French mathematician of the eighteenth century – is an involutive transformation of a univariate, real-valued convex function; an example is depicted in Fig. 14.2a. According to a corollary of Lagrange's theorem, a monotonically increasing function is characterized by a positive derivative, and vice versa – so d²f/dx² > 0 (or likewise d²f/dx² < 0) implies a one-to-one mapping between x and df/dx; recall that f{x} is convex when d²f/dx² is always positive (or always negative) in the interval of interest. If a point A on the curve representing f{x}, characterized by abscissa x, is arbitrarily chosen, then the corresponding ordinate will be given by f{x} – which also denotes the length of straight segment [AB] in Fig. 14.2b. The tangent to said curve, drawn at point A, is characterized by slope, say, p, and is described by analytical equation

$$y = px + \bar{\bar f}, \quad (14.188)$$

where f̿{x} denotes its vertical intercept (obviously dependent on the point x chosen). Due to the injective nature of df/dx, as per d²f/dx² keeping its sign (with concavity facing upward in the case under scrutiny, see Fig. 14.2a), for every x there is only one slope p{x} – and for every slope p there is only one abscissa x{p}; hence, f̿{x} may actually be written as f̿{x{p}} or, equivalently, as f̿ ≡ f̿{p} – since x and p are actually interchangeable. Vertical straight segment [AB], defined by point A, may be extended downward in Fig. 14.2b, while a horizontal straight line may be drawn that crosses the vertical axis at point D with ordinate f̿ – such that they intersect at point C; consequently, [ABCD] is a right triangle, and one may obtain p as the ratio of AC to CD, based on the definition of slope and trigonometric tangent, see Eq. (2.297). Upon inspection of Fig. 14.2b, one indeed concludes that CD is equal to x, whereas AC equals f + (−f̿) = f − f̿ or, equivalently, px.


Figure 14.2 Graphical representation (a, b, c) of a generic convex function f, with independent variable x; (b) of the tangent, with slope p, to the curve representing f at point A, with coordinates (x, f{x}), extended until the vertical axis is crossed at point D, with coordinates (0, f̿), whereas point B is defined by coordinates (x, 0) and point C is defined by coordinates (x, f̿); (c) of the tangent to the curve at point A, after vertical translation by −f̿; and (d) of Legendre's transform of f, denoted as f̿, as a function of p. (Plot panels omitted.)

If the aforementioned tangent line is translated upward – as done in Fig. 14.2c – then one may take advantage of

$$f\{x\} - \bar{\bar f}\{p\} = px \quad (14.189)$$

as seen above, with a unique correspondence between x and p; Eq. (14.189) may be readily rearranged to

$$\bar{\bar f}\{p\} \equiv f\{x\} - px \quad (14.190)$$

that actually serves as definition of Legendre's transform of f{x}, i.e.

$$\bar{\bar f}\{p\} \equiv l\left[f\{x\}\right], \quad (14.191)$$

with regard to x – where a new parameter, p, is hereby defined as

$$p \equiv \frac{df}{dx}. \quad (14.192)$$
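Under the sign convention of Eqs. (14.190)–(14.192), the strictly convex example f{x} = x² has p = 2x, hence x = p/2 and f̿{p} = x² − px = −p²/4; a Python sketch recovers this by locating the extremum of f{x} − px on a grid (all numerical choices arbitrary):

```python
def legendre(f, p, xs):
    """Eq. (14.190) by grid search: f{x} - p*x evaluated at the x where df/dx = p,
    i.e. the minimum of f{x} - p*x for an upward-convex f."""
    return min(f(x) - p * x for x in xs)

xs = [-5.0 + 0.001 * i for i in range(10001)]    # grid on [-5, 5]
p = 1.2
f_dd = legendre(lambda x: x * x, p, xs)          # analytically -p**2/4 for f{x} = x**2
```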


In a sense, p plays the role in Legendre's transform that s plays in Laplace's transform; the behavior of f̿ is graphically depicted in Fig. 14.2d as a function of p – where it becomes apparent that positive and negative values are possible, for both the independent and dependent variables at stake. If one looks the other way round, a straight line of equation y = ax should be used as departing stage – with a denoting a given constant; this straight line intersects curve f{x} at (in the best case) two points, of abscissae b and c – to be obtained as solutions of

$$\left.\left(f\{x\} - ax\right)\right|_{x=b} = \left.\left(f\{x\} - ax\right)\right|_{x=c} = 0. \quad (14.193)$$

The maximum vertical distance between said straight line and curve f{x} should accordingly conform to

$$\frac{d}{dx}\left(ax - f\{x\}\right) = 0;\; b \le x \le c \quad (14.194)$$

as necessary condition, which is equivalent to

$$\frac{df\{x\}}{dx} = a; \quad (14.195)$$

hence, the maximum distance between f{x} and ax, within interval [b, c], occurs when the slope of the tangent to f{x} coincides with a – consistent with Eq. (14.192). Owing to Eqs. (14.190), (14.192), and (14.195), one concludes that f̿{a} represents the maximum value of said distance. In view of the above development, Legendre's transform proves to be an application of the duality relationship between points and lines – and so it holds a number of useful and unique properties. For instance, if scaling is applied to a function f via constant a, then Eqs. (14.190) and (14.191) support

$$l\left[af\{x\}\right] \equiv af\{x\} - px; \quad (14.196)$$

upon factoring out of a, Eq. (14.196) becomes

$$l\left[af\{x\}\right] = a\left(f\{x\} - \frac{p}{a}\,x\right) \quad (14.197)$$

or, in shorthand notation,

$$l\left[af\{x\}\right] = a\,\bar{\bar f}\left\{\frac{p}{a}\right\} \quad (14.198)$$

again at the expense of Eq. (14.190). If scaling is instead applied to the argument of f{x}, then one finds

$$l\left[f\{ax\}\right] \equiv f\{ax\} - px \quad (14.199)$$

as per Eqs. (14.190) and (14.191), so multiplication and division of the second term of the right-hand side by a leads to

$$l\left[f\{ax\}\right] = f\{ax\} - \frac{p}{a}\left(ax\right); \quad (14.200)$$

one may retrieve the original notation as

$$l\left[f\{ax\}\right] = \bar{\bar f}\left\{\frac{p}{a}\right\}, \quad (14.201)$$

after recalling Eq. (14.190) again.


With regard to translation of a function f via addition of a constant a, according to

$$g\{x\} \equiv f\{x\} + a, \quad (14.202)$$

one may subtract px from both sides to get

$$g\{x\} - px = f\{x\} - px + a; \quad (14.203)$$

in condensed notation, Eq. (14.203) will appear as

$$\bar{\bar g}\{p\} = \bar{\bar f}\{p\} + a \quad (14.204)$$

with the aid of Eq. (14.190) – or, due to Eqs. (14.191) and (14.202),

$$l\left[f\{x\} + a\right] = \bar{\bar f}\{p\} + a. \quad (14.205)$$

If, on the other hand, translation occurs in the argument and involves (in general) another variable y, viz.

$$g\{x\} \equiv f\{x + y\}, \quad (14.206)$$

then subtraction of p(x + y) from both sides leads to

$$g\{x\} - px - py = f\{x + y\} - p\left(x + y\right); \quad (14.207)$$

this is equivalent to stating that

$$\bar{\bar g}\{p\} - py = \bar{\bar f}\{p\}, \quad (14.208)$$

in view of Eq. (14.190), where Eqs. (14.191) and (14.206) may be invoked to get

$$l\left[f\{x + y\}\right] = \bar{\bar f}\{p\} + py, \quad (14.209)$$

∂f ∂f dx + dy, ∂x ∂y

14 210

in agreement with Eq. (10.6). In shorthand notation, Eq. (14.210) may be coined as df = udx + vdy

14 211

– provided that one sets u

∂f ∂x

14 212

v

∂f ∂y

14 213

and


Mathematics for Enzyme Reaction Kinetics and Reactor Performance

In attempts to swap the independent variable x to u – while keeping y as independent variable – one should further define an auxiliary function g as

$$g\{u,y\} \equiv f - ux; \tag{14.214}$$

in view of Eq. (14.212), one may redo Eq. (14.214) to

$$g\{u,y\} = f - \frac{\partial f}{\partial x}x. \tag{14.215}$$

Comparative inspection with Eqs. (14.190) and (14.192) permits one to reformulate Eq. (14.215) to

$$g\{u,y\} = \bar f\{u\}; \tag{14.216}$$

the differential of g may be written as

$$dg = df - x\,du - u\,dx \tag{14.217}$$

based on Eqs. (10.1), (10.6), (10.119), and (14.214) – where combination with Eq. (14.211) unfolds

$$dg = u\,dx + v\,dy - x\,du - u\,dx, \tag{14.218}$$

or else

$$dg = v\,dy - x\,du \tag{14.219}$$

upon cancellation of symmetrical terms. Inspection of Eq. (14.219) confirms that g is indeed a function of u and y as independent variables, as hypothesized in Eq. (14.214) – while the definition of total differential implies

$$\frac{\partial g}{\partial y} = v \tag{14.220}$$

and

$$\frac{\partial g}{\partial u} = -x; \tag{14.221}$$

consequently, Legendre's transform of f with regard to the desired new independent variable u, as per Eq. (14.216), leads to another function g where the original independent variable x is no longer present. This result finds wide application in thermodynamics when removal of a given (state) variable is sought – because (state) variables pertaining to a single component, in the presence of a single phase, are bivariate functions.
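The variable swap of Eqs. (14.214)–(14.221) can be checked numerically. The sketch below uses an illustrative bivariate function of my own choosing (it does not come from the text): f{x,y} = x² + xy + y², which is convex in x, with u = ∂f/∂x = 2x + y invertible for x; central finite differences then confirm ∂g/∂u = −x and ∂g/∂y = v.

```python
# Numerical check of Eqs. (14.214)-(14.221) on an illustrative bivariate
# function (f is chosen here for the sketch, NOT taken from the text):
#   f{x,y} = x**2 + x*y + y**2, convex in x, with u = df/dx = 2*x + y.

def f(x, y):
    return x * x + x * y + y * y

def x_of(u, y):
    # invert u = 2*x + y for x
    return (u - y) / 2.0

def g(u, y):
    # auxiliary function of Eq. (14.214): g{u,y} = f - u*x
    x = x_of(u, y)
    return f(x, y) - u * x

u, y, h = 3.0, 1.0, 1e-6
x = x_of(u, y)                               # x = 1 here

# Eq. (14.221): dg/du = -x (central finite difference at constant y)
dg_du = (g(u + h, y) - g(u - h, y)) / (2 * h)
print(abs(dg_du + x) < 1e-6)                 # True

# Eq. (14.220): dg/dy = v = df/dy = x + 2*y, evaluated at constant u
dg_dy = (g(u, y + h) - g(u, y - h)) / (2 * h)
print(abs(dg_dy - (x + 2 * y)) < 1e-6)       # True
```

Note how x has indeed been eliminated: g is evaluated from (u, y) alone, with x recovered internally – the same mechanism that turns internal energy into enthalpy or Helmholtz energy in thermodynamics.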

Finally, the issue of inverse Legendre's transform, $l^{-1}\{\bar f\}$, should be called upon; starting with differentiation of both sides of Eq. (14.192) with regard to x, one obtains

$$\frac{dp}{dx} = \frac{d}{dx}\frac{df}{dx} = \frac{d^2 f}{dx^2}, \tag{14.222}$$

whereas differentiation of Eq. (14.190) with regard to p unfolds

$$\frac{d\bar f}{dp} = \frac{d}{dp}\left(f\{x\} - px\right). \tag{14.223}$$


After resorting to the classical theorems on derivative of a product of functions and derivative of a composite function, one obtains

$$\frac{d\bar f}{dp} = \frac{df}{dx}\frac{dx}{dp} - x - p\frac{dx}{dp} \tag{14.224}$$

from Eq. (14.223) – where factoring out of dx/dp, coupled with combination with Eq. (14.192), leads to

$$\frac{d\bar f}{dp} = \frac{dx}{dp}\left(\frac{df}{dx} - p\right) - x = \frac{dx}{dp}\left(\frac{df}{dx} - \frac{df}{dx}\right) - x; \tag{14.225}$$

Eq. (14.225) degenerates to

$$\frac{d\bar f}{dp} = -x \tag{14.226}$$

– where a second differentiation of both sides with regard to p yields

$$\frac{dx}{dp} = -\frac{d}{dp}\frac{d\bar f}{dp} = -\frac{d^2\bar f}{dp^2}, \tag{14.227}$$

after having taken their negatives. Ordered multiplication of Eqs. (14.222) and (14.227) leads to

$$\frac{dp}{dx}\frac{dx}{dp} = -\frac{d^2 f}{dx^2}\frac{d^2\bar f}{dp^2}, \tag{14.228}$$

where the derivative of the inverse of a function as given by Eq. (10.247), i.e.

$$\frac{dx}{dp} = \frac{1}{\dfrac{dp}{dx}}, \tag{14.229}$$

supports simplification of Eq. (14.228) to

$$-\frac{d^2 f}{dx^2}\frac{d^2\bar f}{dp^2} = 1; \tag{14.230}$$

this is equivalent to writing

$$\frac{d^2\bar f}{dp^2} = -\frac{1}{\dfrac{d^2 f}{dx^2}}. \tag{14.231}$$

Remember that f has, by hypothesis, a second derivative of constant sign, i.e. $d^2 f/dx^2 > 0$ or $d^2 f/dx^2 < 0$; Eq. (14.231) accordingly guarantees that $d^2\bar f/dp^2 < 0$ or $d^2\bar f/dp^2 > 0$, respectively – i.e. $\bar f$ is itself a strictly concave or strictly convex function, so its own Legendre transform exists; one may therefore proceed to calculation of Legendre's transform of $\bar f$, according to

$$l\{\bar f\{p\}\} \equiv \bar f\{p\} - \frac{d\bar f\{p\}}{dp}p, \tag{14.232}$$


by analogy to Eqs. (14.190)–(14.192). Upon combination with Eq. (14.226), one obtains

$$l\{\bar f\{p\}\} = \bar f\{p\} + xp \tag{14.233}$$

from Eq. (14.232), which may be rearranged to read

$$\bar f\{p\} = l\{\bar f\{p\}\} - px. \tag{14.234}$$

Consistency between Eqs. (14.190) and (14.234) enforces

$$l\{\bar f\{p\}\} = f\{x\}, \tag{14.235}$$

which may be rewritten as

$$l\{l\{f\{x\}\}\} = f\{x\} \tag{14.236}$$

as per Eq. (14.191); hence, a second application of Legendre's transform undoes the first application thereof – unlike what normally happens with transforms. One may finally resort to the definition of an inverse operator to write

$$l^{-1}\{l\{f\{x\}\}\} = f\{x\}, \tag{14.237}$$

where insertion of Eq. (14.191) leads to

$$l^{-1}\{\bar f\{p\}\} = f\{x\} \tag{14.238}$$

– which resembles Eq. (14.3) in form; on the other hand, elimination of f{x} between Eqs. (14.235) and (14.238) yields

$$l^{-1}\{\bar f\{p\}\} = l\{\bar f\{p\}\}, \tag{14.239}$$

so Legendre's transform of $\bar f\{p\}$ coincides with its inverse – a finding fully compatible with the unique result labeled as Eq. (14.236).
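The involution property of Eq. (14.236) lends itself to a numerical check. The sketch below – with test function f{x} = eˣ and a bisection-based root solver both chosen by me for illustration – computes $\bar f\{p\} = f\{x\} - px$ in the book's convention, estimates $d\bar f/dp$ by central differences to confirm Eq. (14.226), and then applies Eq. (14.232) to recover f{x}.

```python
import math

def legendre(f, df, slope, lo=-50.0, hi=50.0, tol=1e-12):
    """Legendre transform in the book's convention, Eq. (14.190):
    fbar{p} = f{x} - p*x, with x solving df/dx = p (Eq. (14.192));
    df must be monotone, i.e. f strictly convex or concave."""
    a, b = lo, hi
    while b - a > tol:           # plain bisection on df(x) - slope
        m = 0.5 * (a + b)
        if (df(m) - slope) * (df(a) - slope) <= 0:
            b = m
        else:
            a = m
    x = 0.5 * (a + b)
    return f(x) - slope * x, x

f  = math.exp                    # strictly convex test function
df = math.exp
x0 = 0.7
p  = df(x0)                      # slope paired with x0, Eq. (14.192)
fbar, _ = legendre(f, df, p)     # fbar{p} = f{x0} - p*x0

# slope of fbar at p is -x0, Eq. (14.226): central-difference estimate
h = 1e-6
q = (legendre(f, df, p + h)[0] - legendre(f, df, p - h)[0]) / (2 * h)
print(abs(q + x0) < 1e-4)                    # True

# second transform, Eq. (14.232): fbar{p} - (dfbar/dp)*p recovers f{x0},
# in agreement with Eq. (14.236)
print(abs((fbar - q * p) - f(x0)) < 1e-4)    # True
```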


15 Solution of Differential Equations

Any equation containing differential coefficients is termed a differential equation. Such equations can be divided into two major types – ordinary and partial; said classification depends on whether they encompass algebraic operations, lumped in f, on a single independent variable – so that only ordinary derivatives appear as coefficients, i.e. f{x, y, dy/dx, d²y/dx², …} = 0 – or on more than one independent variable – so that partial derivatives play the role of independent coefficients, e.g. f{x, y, z, ∂z/∂x, ∂z/∂y, ∂²z/∂x², ∂²z/∂x∂y, ∂²z/∂y², …} = 0 in the bivariate case.

When a differential equation contains no terms on its independent variable(s) only, it is said to be homogeneous – otherwise it is labeled as nonhomogeneous; solutions for the latter may often be obtained using the corresponding homogeneous solution as template. The order of a differential equation is the order of the highest differential coefficient contained therein; the degree of a differential equation is the power to which the highest order differential coefficient is raised when the equation is rationalized (i.e. following removal of fractional powers).

The general solution of a typical partial differential equation is a combination of arbitrary functions of specific arguments; their exact form is to be determined by application of boundary (and/or initial) conditions – dependent on the nature of the system/process under scrutiny. Conversely, the general solution of an ordinary differential equation contains as many specific independent functions (each one multiplied by an arbitrary constant) as its order. The most common methods of analytical solution, via integration, will be reviewed below.

15.1 Ordinary Differential Equations

A differential equation is said to be linear when it is linear in the dependent variable and its derivatives – otherwise it is said to be nonlinear. The general solution of a typical nth order ordinary differential equation possesses a specific functional form that holds n arbitrary constants – since a constant term vanishes each time differentiation is carried out; such constants are to be found using appropriate boundary (including initial) conditions. For pedagogical reasons, first-order equations will be tackled first, followed by second-order equations – which constitute most ordinary differential equations of practical interest in process engineering. Higher order equations may appear, but only those holding a linear form will be specifically discussed.

Mathematics for Enzyme Reaction Kinetics and Reactor Performance, First Edition. F. Xavier Malcata. © 2019 John Wiley & Sons Ltd. Published 2019 by John Wiley & Sons Ltd.


15.1.1 First Order

Equations of this type may, in general, be written as

$$\frac{dy}{dx} = f\{x,y\}, \tag{15.1}$$

where f{x,y} denotes a given function of x and y; strategies to deal with selected nonlinear forms, as well as with the general linear form, are addressed next.

15.1.1.1 Nonlinear

If the bivariate function in the right-hand side of Eq. (15.1) can be written as the product of two univariate functions, i.e.

$$f\{x,y\} \equiv X\{x\}\,Y\{y\}, \tag{15.2}$$

then Eq. (15.1) takes the form

$$\frac{dy}{dx} = X\{x\}\,Y\{y\}; \tag{15.3}$$

separation of variables followed by integration now ensues, according to

$$\int\frac{dy}{Y\{y\}} = \int X\{x\}\,dx \tag{15.4}$$

– which normally leads to y expressed as an implicit function of x.

If f{x,y} looks like

$$f\{x,y\} \equiv \frac{\varphi\{x,y\}}{\psi\{x,y\}}, \tag{15.5}$$

with φ{x,y} and ψ{x,y} denoting homogeneous functions of the same degree n, then Eq. (10.84) allows one to write

$$f\{x,y\} = \frac{x^n\,\Phi\left\{\dfrac{y}{x}\right\}}{x^n\,\Psi\left\{\dfrac{y}{x}\right\}} = \frac{\Phi\left\{\dfrac{y}{x}\right\}}{\Psi\left\{\dfrac{y}{x}\right\}} \equiv \Omega\left\{\frac{y}{x}\right\} \tag{15.6}$$

– where xⁿ was meanwhile dropped from both numerator and denominator; here Φ, Ψ, and Ω denote (univariate) functions of y/x. Under such circumstances, Eq. (15.1) will take the form

$$\frac{dy}{dx} = \Omega\left\{\frac{y}{x}\right\}. \tag{15.7}$$
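A concrete separable instance of Eqs. (15.2)–(15.4) can be verified directly. The particular choice X{x} = x and Y{y} = y below is mine, made for illustration only; Eq. (15.4) then integrates in closed form, and the sketch checks that the closed form indeed satisfies the original differential equation.

```python
import math

# Separable instance of Eq. (15.3), chosen here for illustration:
#   dy/dx = X{x}*Y{y}  with  X{x} = x and Y{y} = y.
# Eq. (15.4) gives  int dy/y = int x dx  =>  ln y = x**2/2 + const,
# i.e. y{x} = y0*exp(x**2/2) for y{0} = y0.

def y(x, y0=2.0):
    return y0 * math.exp(x * x / 2.0)

h = 1e-6
ok = True
for x in [-1.5, -0.3, 0.0, 0.7, 2.0]:
    dydx = (y(x + h) - y(x - h)) / (2 * h)   # numerical derivative of y{x}
    ok = ok and abs(dydx - x * y(x)) < 1e-4  # must match X{x}*Y{y}
print(ok)   # True
```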

Equation (15.7) may be transformed via definition of a new dependent variable z as

$$y \equiv x\,z\{x\}, \tag{15.8}$$

which leads to

$$\frac{dy}{dx} = z + x\frac{dz}{dx} \tag{15.9}$$

upon differentiation of a product and of a composite function; elimination of dy/dx between Eqs. (15.7) and (15.9), with the aid of Eq. (15.8), unfolds

$$z + x\frac{dz}{dx} = \Omega\{z\} \tag{15.10}$$

or, equivalently,

$$x\frac{dz}{dx} = \Omega\{z\} - z. \tag{15.11}$$

Integration of Eq. (15.11) may now proceed using separation of variables as tool, viz.

$$\int\frac{dz}{\Omega\{z\} - z} = \int\frac{dx}{x} \tag{15.12}$$

– thus permitting z be (implicitly) obtained as a function of x; all that is then left to do is recovering the initial notation of y ≡ y{x} via Eq. (15.8).

Another equation of interest is Bernoulli's equation, which reads

$$\frac{dy}{dx} + P\{x\}y = Q\{x\}y^n \tag{15.13}$$

– being nonlinear when n ≠ 0 and n ≠ 1, with P{x} and Q{x} denoting given functions of x. After setting

$$z \equiv y^{1-n}, \tag{15.14}$$

one obtains

$$y = z^{\frac{1}{1-n}} \tag{15.15}$$

upon isolation of y – and, consequently,

$$y^n = z^{\frac{n}{1-n}} \tag{15.16}$$

once both sides are raised to n; Eq. (15.16) may also appear as

$$y^n = z^{\frac{1-(1-n)}{1-n}} = z^{\frac{1}{1-n}-1}, \tag{15.17}$$

following addition and subtraction of 1 to the numerator of the exponent in the power of z. The chain differentiation rule and the rule of differentiation of a power allow transformation of Eq. (15.15) to

$$\frac{dy}{dx} = \frac{1}{1-n}\,z^{\frac{1}{1-n}-1}\frac{dz}{dx}; \tag{15.18}$$

insertion of Eqs. (15.15), (15.17), and (15.18) supports conversion of Eq. (15.13) to

$$\frac{1}{1-n}\,z^{\frac{1}{1-n}-1}\frac{dz}{dx} + P\{x\}\,z^{\frac{1}{1-n}} = Q\{x\}\,z^{\frac{1}{1-n}-1}, \tag{15.19}$$

where division of both sides by $z^{\frac{1}{1-n}}$ unfolds

$$\frac{1}{1-n}\,z^{-1}\frac{dz}{dx} + P\{x\} = Q\{x\}\,z^{-1} \tag{15.20}$$

– or, after multiplication of both sides by (1−n)z,

$$\frac{dz}{dx} + (1-n)P\{x\}\,z = (1-n)Q\{x\}. \tag{15.21}$$

Equation (15.21) is a linear first-order ordinary differential equation; Eq. (15.13) is so named on account of brothers Jacob and Johann Bernoulli – who occupied themselves with it between 1695 and 1697, in competition with Leibniz. Note that Eq. (15.13) promptly becomes linear and nonhomogeneous for n = 0, and thus directly amenable to the methods of the next subsection – while n = 1 permits its integration via separation of variables, since dy/dx would equate with just (Q{x} − P{x})y, a homogeneous differential equation.
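The reduction of Eqs. (15.13)–(15.21) can be exercised on a concrete case. The instance below – P{x} = Q{x} = 1 and n = 2 – is my own illustrative choice, not taken from the text; the substitution z = y^{1−n} = 1/y turns it into the linear form dz/dx − z = −1, whose solution z = 1 + κeˣ yields y = 1/(1 + κeˣ), verified here against a Runge–Kutta march of the original nonlinear equation.

```python
import math

# Illustrative Bernoulli equation (15.13) with P{x} = Q{x} = 1 and n = 2
# (this particular instance is chosen here for the sketch):
#   dy/dx + y = y**2
# Via z = 1/y, Eq. (15.21) reads dz/dx - z = -1, so z = 1 + kappa*exp(x)
# and y = 1/(1 + kappa*exp(x)).

def rhs(x, y):
    return y * y - y            # dy/dx from the original nonlinear form

def rk4(y, x0, x1, n=10000):
    # classical 4th-order Runge-Kutta march from x0 to x1
    h = (x1 - x0) / n
    x = x0
    for _ in range(n):
        k1 = rhs(x, y)
        k2 = rhs(x + h / 2, y + h * k1 / 2)
        k3 = rhs(x + h / 2, y + h * k2 / 2)
        k4 = rhs(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

y0 = 0.5                               # z(0) = 2 fixes kappa = 1
exact = 1.0 / (1.0 + math.exp(1.0))    # closed form at x = 1
print(abs(rk4(y0, 0.0, 1.0) - exact) < 1e-8)   # True
```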

15.1.1.2 Linear

A first-order ordinary differential equation is said to be linear when it can be written in the form

$$\frac{dy}{dx} + P\{x\}y = Q\{x\} \tag{15.22}$$

– where P{x} and Q{x} are univariate functions of x (including the possibility of holding constant values); this type of equation may be solved with an integrating factor, R{x}, through

$$R\{x\}\frac{dy}{dx} + R\{x\}P\{x\}y = R\{x\}Q\{x\}, \tag{15.23}$$

obtained from Eq. (15.22) after multiplying both sides by R{x}. The bottom line is to choose R{x} that turns the left-hand side of Eq. (15.23) to the derivative of R{x}y, i.e.

$$R\{x\}\frac{dy}{dx} + R\{x\}P\{x\}y \equiv \frac{d}{dx}\left(R\{x\}y\right) = R\{x\}\frac{dy}{dx} + y\frac{dR}{dx}; \tag{15.24}$$

inspection of the left- and right-hand sides of Eq. (15.24) enforces

$$R\{x\}P\{x\}y = y\frac{dR}{dx}, \tag{15.25}$$

where division of both sides by y ≠ 0 produces

$$\frac{dR}{dx} = R\{x\}P\{x\} \tag{15.26}$$

– as y = 0 would yield a trivial solution, devoid of interest anyway. Integration of Eq. (15.26) is possible via separation of variables, according to

$$\int\frac{dR}{R} = \int P\{x\}\,dx, \tag{15.27}$$

which is equivalent to

$$\ln R = \int P\{x\}\,dx; \tag{15.28}$$

after taking exponentials of both sides, Eq. (15.28) becomes

$$R\{x\} = \exp\left\{\int P\{x\}\,dx\right\} \tag{15.29}$$


– so knowledge in advance of P{x}, as conveyed by Eq. (15.22), implies that R{x} is uniquely determined by Eq. (15.29). Combination of Eqs. (15.23) and (15.24) generates

$$\frac{d}{dx}\left(R\{x\}y\right) = R\{x\}Q\{x\}, \tag{15.30}$$

susceptible to integration via separation of variables as

$$\int d\left(R\{x\}y\right) = \int R\{x\}Q\{x\}\,dx \tag{15.31}$$

– or else

$$R\{x\}y = \kappa + \int R\{x\}Q\{x\}\,dx \tag{15.32}$$

as per Eqs. (11.16) and (11.21), with κ denoting an arbitrary constant; division of both sides by R{x} leads to

$$y = \frac{\kappa + \displaystyle\int R\{x\}Q\{x\}\,dx}{R\{x\}}, \tag{15.33}$$

where insertion of Eq. (15.29) finally yields

$$y = \frac{\kappa + \displaystyle\int \exp\left\{\int P\{x\}\,dx\right\}Q\{x\}\,dx}{\exp\left\{\displaystyle\int P\{x\}\,dx\right\}} \tag{15.34}$$

as general solution of Eq. (15.22).

An alternative proof of the above solution considers first the homogeneous counterpart of Eq. (15.22), i.e.

$$\frac{dy}{dx} + P\{x\}y = 0, \tag{15.35}$$

which may be solved via plain separation of variables as

$$\int\frac{dy}{y} = -\int P\{x\}\,dx \tag{15.36}$$

in parallel to Eqs. (15.3) and (15.4); after recalling the second entry of Table 11.1, one gets

$$\ln y = \kappa^* - \int P\{x\}\,dx, \tag{15.37}$$

where κ* denotes an arbitrary constant – or, after taking exponentials of both sides,

$$y = e^{\kappa^*}\exp\left\{-\int P\{x\}\,dx\right\} \equiv \kappa\exp\left\{-\int P\{x\}\,dx\right\}, \tag{15.38}$$

with κ denoting a (transformed) constant, i.e. coincident with $e^{\kappa^*}$. Assume now that κ in Eq. (15.38) were, in turn, a function of x – so that

$$y = \kappa\{x\}\exp\left\{-\int P\{x\}\,dx\right\}, \tag{15.39}$$

obtained using Eq. (15.38) as template, would streamline the true (general) solution of Eq. (15.22); its derivative with regard to x should accordingly look like

$$\frac{dy}{dx} = \frac{d\kappa}{dx}\exp\left\{-\int P\{x\}\,dx\right\} - \kappa\{x\}\exp\left\{-\int P\{x\}\,dx\right\}P\{x\}, \tag{15.40}$$

with the aid of Eqs. (10.205) and (11.6) – where combination with Eq. (15.39) permits simplification to

$$\frac{dy}{dx} = \frac{d\kappa}{dx}\exp\left\{-\int P\{x\}\,dx\right\} - P\{x\}y. \tag{15.41}$$

Elimination of dy/dx between Eqs. (15.22) and (15.41) unfolds

$$\exp\left\{-\int P\{x\}\,dx\right\}\frac{d\kappa}{dx} - P\{x\}y = Q\{x\} - P\{x\}y, \tag{15.42}$$

where cancellation of terms alike between sides leads to just

$$\exp\left\{-\int P\{x\}\,dx\right\}\frac{d\kappa}{dx} = Q\{x\}; \tag{15.43}$$

integration by separation of variables is now in order, viz.

$$\int d\kappa = \int Q\{x\}\exp\left\{\int P\{x\}\,dx\right\}dx, \tag{15.44}$$

that breaks down to

$$\kappa\{x\} = K + \int \exp\left\{\int P\{x\}\,dx\right\}Q\{x\}\,dx \tag{15.45}$$

– with K denoting now a true (integration) constant. Insertion of Eq. (15.45) converts Eq. (15.39) finally to

$$y = \left(K + \int \exp\left\{\int P\{x\}\,dx\right\}Q\{x\}\,dx\right)\exp\left\{-\int P\{x\}\,dx\right\}, \tag{15.46}$$

which coincides in functional form with Eq. (15.34) – as anticipated.
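A worked instance of Eqs. (15.22), (15.29), and (15.34) helps fix ideas. The choice P{x} = 1, Q{x} = x below is mine, for illustration only: R{x} = eˣ, and Eq. (15.34) gives y = κe⁻ˣ + x − 1; the sketch checks this closed form against a direct Runge–Kutta march of the equation.

```python
import math

# Worked instance of Eq. (15.22), chosen here for the sketch:
#   dy/dx + y = x      (P{x} = 1, Q{x} = x)
# Eq. (15.29) gives R{x} = exp(x), and Eq. (15.34):
#   y = (kappa + int x*exp(x) dx)/exp(x) = kappa*exp(-x) + x - 1,
# with kappa = 1 fixed by y{0} = 0.

def y_exact(x):
    return math.exp(-x) + x - 1.0

def rhs(x, y):
    return x - y                # the ODE rewritten as dy/dx = Q - P*y

x, y, h = 0.0, 0.0, 1e-4        # march the ODE from y{0} = 0 to x = 2
while x < 2.0 - h / 2:
    k1 = rhs(x, y)
    k2 = rhs(x + h / 2, y + h * k1 / 2)
    k3 = rhs(x + h / 2, y + h * k2 / 2)
    k4 = rhs(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
print(abs(y - y_exact(2.0)) < 1e-8)   # True
```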

15.1.2 Second Order

A second-order ordinary differential equation reads, in general,

$$\frac{d^2 y}{dx^2} + P\{x,y\}\frac{dy}{dx} = Q\{x,y\} \tag{15.47}$$

– where P{x,y} and Q{x,y} are bivariate functions of x and y; although this family of equations is often hard to solve analytically, successful routes have been developed toward solution of specific types bearing practical significance – which will be reviewed next.


15.1.2.1 Nonlinear

15.1.2.1.1 Dependent Variable-free

If Q{x,y} in Eq. (15.47) is nil and P{x,y} ≡ P{x}, then one obtains merely

$$\frac{d^2 y}{dx^2} + P\{x\}\frac{dy}{dx} = 0 \tag{15.48}$$

– where dependent variable y is missing as such, thus actually supporting a linear equation; after defining auxiliary variable p as

$$p \equiv \frac{dy}{dx}, \tag{15.49}$$

and consequently

$$\frac{dp}{dx} = \frac{d}{dx}\frac{dy}{dx} = \frac{d^2 y}{dx^2}, \tag{15.50}$$

one may reformulate Eq. (15.48) to

$$\frac{dp}{dx} + P\{x\}p = 0. \tag{15.51}$$

Since Eq. (15.51) holds the functional form of Eq. (15.22) with Q{x} = 0 – after replacing y by p – one may follow the same strategy for its integration, viz.

$$p = \frac{K_1 + \displaystyle\int \exp\left\{\int P\{x\}\,dx\right\}\cdot 0\,dx}{\exp\left\{\displaystyle\int P\{x\}\,dx\right\}} \tag{15.52}$$

in parallel to Eq. (15.34); note that Eq. (15.52) breaks down to

$$p = \frac{K_1 + \displaystyle\int 0\,dx}{\exp\left\{\displaystyle\int P\{x\}\,dx\right\}} = \frac{K_1 + K_2}{\exp\left\{\displaystyle\int P\{x\}\,dx\right\}} = \kappa_1\exp\left\{-\int P\{x\}\,dx\right\}, \tag{15.53}$$

consistent with Eq. (15.38) – where κ₁ stands for the sum of constants K₁ and K₂. Once in possession of Eq. (15.53), insertion of Eq. (15.49) produces

$$\frac{dy}{dx} = \kappa_1\exp\left\{-\int P\{x\}\,dx\right\}, \tag{15.54}$$

which may be integrated via separation of variables as done between Eqs. (15.3) and (15.4), i.e.

$$\int dy = \int \kappa_1\exp\left\{-\int P\{x\}\,dx\right\}dx; \tag{15.55}$$

Eq. (15.55) breaks down to

$$y = \kappa_2 + \kappa_1\int \exp\left\{-\int P\{x\}\,dx\right\}dx \tag{15.56}$$

– where two integration constants appear, i.e. κ₂ besides κ₁, as expected for a second-order differential equation.
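Equation (15.56) may be tried on a simple case. The constant coefficient P{x} = 2 below is my own illustrative choice: the quadrature gives y = κ₂ − (κ₁/2)e⁻²ˣ, and the sketch confirms by finite differences that this y satisfies Eq. (15.48).

```python
import math

# Illustrative case of Eq. (15.48) with P{x} = 2 (constant, chosen here):
#   d2y/dx2 + 2*dy/dx = 0
# Eq. (15.56) then gives y = kappa2 + kappa1*int exp(-2x) dx
#                          = kappa2 - (kappa1/2)*exp(-2x).

k1, k2 = 3.0, 1.5               # arbitrary integration constants

def y(x):
    return k2 - (k1 / 2.0) * math.exp(-2.0 * x)

h, ok = 1e-4, True
for x in [-1.0, 0.0, 0.5, 2.0]:
    d1 = (y(x + h) - y(x - h)) / (2 * h)             # dy/dx
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)  # d2y/dx2
    ok = ok and abs(d2 + 2 * d1) < 1e-4              # residual of Eq. (15.48)
print(ok)   # True
```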


15.1.2.1.2 Independent Variable-free

When Eq. (15.47) is not explicit in x, i.e. P and Q are both univariate functions of y, one ends up with

$$\frac{d^2 y}{dx^2} + P\{y\}\frac{dy}{dx} = Q\{y\} \tag{15.57}$$

– which is, in principle, also eligible for a closed-form solution. One may accordingly revisit the change of variable conveyed by Eq. (15.49) as

$$p = \frac{1}{\dfrac{dx}{dy}}, \tag{15.58}$$

while realizing that d²y/dx² may be expressed as

$$\frac{d^2 y}{dx^2} \equiv \frac{d}{dx}\frac{dy}{dx} = \frac{d}{dy}\left(\frac{dy}{dx}\right)\frac{1}{\dfrac{dx}{dy}}; \tag{15.59}$$

insertion of Eqs. (15.49) and (15.58) generates

$$\frac{d^2 y}{dx^2} = \frac{dp}{dy}\,p = p\frac{dp}{dy}, \tag{15.60}$$

thus supporting transformation of the original Eq. (15.57) to

$$p\frac{dp}{dy} + P\{y\}p = Q\{y\}, \tag{15.61}$$

together with Eq. (15.49) – i.e. a nonlinear first-order ordinary differential equation of p in y. Although a numerical method may be required to solve Eq. (15.61) due to its nonlinearity, such a constraint will be relieved when P{y} = 0 – in which case Eq. (15.61) simplifies to

$$p\frac{dp}{dy} = Q\{y\}; \tag{15.62}$$

integration is indeed possible via separation of variables, according to

$$\int p\,dp = \int Q\{y\}\,dy \tag{15.63}$$

that breaks down to

$$\frac{1}{2}p^2 = K + \int Q\{y\}\,dy \tag{15.64}$$

with K denoting an arbitrary constant – or, equivalently,

$$p^2 = 2K + 2\int Q\{y\}\,dy \tag{15.65}$$

after multiplying both sides by 2, which transforms to

$$p = \sqrt{\kappa_1 + 2\int Q\{y\}\,dy} \tag{15.66}$$

with square roots taken of both sides, and constant κ₁ replacing 2K. Once in possession of Eq. (15.66), one should go back to Eq. (15.49) and write

$$\frac{dy}{dx} = \sqrt{\kappa_1 + 2\int Q\{y\}\,dy}, \tag{15.67}$$

where a second integration by separation of variables unfolds

$$\frac{dy}{\sqrt{\kappa_1 + 2\displaystyle\int Q\{y\}\,dy}} = dx; \tag{15.68}$$

y{x} will finally be found, in implicit form, as

$$x = \kappa_2 + \int\frac{dy}{\sqrt{\kappa_1 + 2\displaystyle\int Q\{y\}\,dy}}, \tag{15.69}$$

with a second integration constant, κ₂, eventually emerging (as expected).

Another situation where Eq. (15.61) is susceptible of analytical solution corresponds to Q{y} = 0, thus allowing simplification thereof to

$$p\frac{dp}{dy} + P\{y\}p = 0; \tag{15.70}$$

division of both sides by p gives rise to

$$\frac{dp}{dy} + P\{y\} = 0, \tag{15.71}$$

which yields

$$\int dp = -\int P\{y\}\,dy \tag{15.72}$$

as solution – or else

$$p = \kappa_1 - \int P\{y\}\,dy, \tag{15.73}$$

with κ₁ denoting a constant. One is now ready to retrieve Eq. (15.49) to write

$$\frac{dy}{dx} = \kappa_1 - \int P\{y\}\,dy \tag{15.74}$$

based on Eq. (15.73), which readily yields

$$dx = \frac{dy}{\kappa_1 - \displaystyle\int P\{y\}\,dy} \tag{15.75}$$

following separation of variables; Eq. (15.75) finally gives

$$x = \kappa_2 + \int\frac{dy}{\kappa_1 - \displaystyle\int P\{y\}\,dy} \tag{15.76}$$

as solution – once again bearing two arbitrary constants, while conveying y as implicit function of x.
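The first integral of Eqs. (15.62)–(15.65) has a familiar mechanical reading. For the illustrative choice Q{y} = −y (mine, not from the text), Eq. (15.65) gives p² = κ₁ − y², i.e. p² + y² is conserved along any trajectory of d²y/dx² = −y; the sketch below marches the equivalent first-order system and checks that this quantity indeed stays constant.

```python
# First integral of Eq. (15.62) for the illustrative choice Q{y} = -y
# (with P{y} = 0), i.e. d2y/dx2 = -y: Eq. (15.65) gives
#   p**2 = 2K + 2*int(-y)dy = kappa1 - y**2,
# so p**2 + y**2 stays constant along any trajectory.

def step(y, p, h=1e-3):
    # one RK4 step of the equivalent system y' = p, p' = -y
    def f(s):
        return s[1], -s[0]
    k1 = f((y, p))
    k2 = f((y + h * k1[0] / 2, p + h * k1[1] / 2))
    k3 = f((y + h * k2[0] / 2, p + h * k2[1] / 2))
    k4 = f((y + h * k3[0], p + h * k3[1]))
    return (y + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            p + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

y, p = 0.0, 1.0          # kappa1 = p**2 + y**2 = 1 at the start
ok = True
for _ in range(5000):    # march to x = 5
    y, p = step(y, p)
    ok = ok and abs(p * p + y * y - 1.0) < 1e-8
print(ok)   # True
```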


15.1.2.1.3 Hartman and Grobman's Theorem

Consider a set of two first-order nonlinear ordinary differential equations of the form

$$\frac{dy_1}{dx} = f_1\{y_1, y_2\} \tag{15.77}$$

and

$$\frac{dy_2}{dx} = f_2\{y_1, y_2\}, \tag{15.78}$$

as often found in process engineering – where f₁ and f₂ denote functions of both dependent variables, y₁ and y₂, but are not explicit on the independent variable, x. Suppose y₁ can (hypothetically) be expressed from Eq. (15.77) as

$$y_1 = \varphi_1\left\{y_2, \frac{dy_1}{dx}\right\} = \varphi_1\{y_2, f_1\{y_1, y_2\}\}, \tag{15.79}$$

with the aid of Eq. (15.77) itself; a further step of explicitation then supports

$$y_1 = \varphi_2\{y_2\}, \tag{15.80}$$

with φ₁ and φ₂ representing bi- and univariate functions, respectively. One may now proceed to Eq. (15.78), and differentiate it once more with regard to x to get

$$\frac{d^2 y_2}{dx^2} = \frac{\partial f_2}{\partial y_1}\frac{dy_1}{dx} + \frac{\partial f_2}{\partial y_2}\frac{dy_2}{dx}, \tag{15.81}$$

via the chain partial differentiation rule. Combination of Eq. (15.81) with Eq. (15.77) unfolds

$$\frac{d^2 y_2}{dx^2} = \frac{\partial f_2}{\partial y_1}f_1 + \frac{\partial f_2}{\partial y_2}\frac{dy_2}{dx}, \tag{15.82}$$

where any remaining dependencies on y₁ – directly via f₁{y₁,y₂}, or indirectly via ∂f₂/∂y₁ or ∂f₂/∂y₂ – may be transformed into dependencies on y₂ via insertion of Eq. (15.80). One will therefore obtain

$$\frac{d^2 y_2}{dx^2} - \psi_1\{y_2\}\frac{dy_2}{dx} - \psi_2\{y_2\} = 0 \tag{15.83}$$

from Eq. (15.82), which exhibits the form of a second-order nonlinear differential equation of y₂ on x – with ψ₁ and ψ₂ representing univariate functionalities of y₂; therefore, the set of Eqs. (15.77) and (15.78) may be viewed as a second-order differential equation in parametric form – knowing that y₁ is accessible via Eq. (15.80).

An equilibrium solution (also known as critical, or stationary point) of the set of Eqs. (15.77) and (15.78) – defined by coordinate x_st, or equivalently coordinates (y₁,st, y₂,st) – should accordingly satisfy

$$\left.\frac{dy_1}{dx}\right|_{x_{st}} \equiv \left.\frac{dy_1}{dx}\right|_{y_{1,st},\,y_{2,st}} = 0 \tag{15.84}$$

coupled with

$$\left.\frac{dy_2}{dx}\right|_{x_{st}} \equiv \left.\frac{dy_2}{dx}\right|_{y_{1,st},\,y_{2,st}} = 0; \tag{15.85}$$

in view of Eqs. (15.77) and (15.78), one may rewrite Eqs. (15.84) and (15.85) as

$$f_1\{y_1, y_2\}\big|_{y_{1,st},\,y_{2,st}} = 0 \tag{15.86}$$

and

$$f_2\{y_1, y_2\}\big|_{y_{1,st},\,y_{2,st}} = 0, \tag{15.87}$$

respectively. Existence of such a critical point is not guaranteed a priori, nor is the actual number of critical points (for that matter). On the hypothesis that at least one critical point exists, one may (for convenience) redefine deviation (dependent) variables as

$$\hat y_i \equiv y_i - y_{i,st};\quad i = 1, 2, \tag{15.88}$$

thus implying

$$\frac{d\hat y_1}{dx} = \frac{dy_1}{dx} = f_1\{y_1, y_2\} \tag{15.89}$$

and

$$\frac{d\hat y_2}{dx} = \frac{dy_2}{dx} = f_2\{y_1, y_2\}, \tag{15.90}$$

stemming from Eqs. (15.77) and (15.78), respectively – since y₁,st and y₂,st are constants; Eqs. (15.89) and (15.90) may then be reformulated to

$$\frac{d\hat y_1}{dx} = \hat f_1\{\hat y_1, \hat y_2\} \tag{15.91}$$

and

$$\frac{d\hat y_2}{dx} = \hat f_2\{\hat y_1, \hat y_2\} \tag{15.92}$$

– provided that

$$\hat f_1\{\hat y_1, \hat y_2\} \equiv \hat f_1\{y_1 - y_{1,st}, y_2 - y_{2,st}\} = f_1\{y_1, y_2\} \tag{15.93}$$

and

$$\hat f_2\{\hat y_1, \hat y_2\} \equiv \hat f_2\{y_1 - y_{1,st}, y_2 - y_{2,st}\} = f_2\{y_1, y_2\}, \tag{15.94}$$

with the aid of Eqs. (15.88)–(15.90). Selected examples are provided in Fig. 15.1, in the form of a so-called phase portrait – i.e. variation of ŷ₂ as a function of ŷ₁ (where x plays merely the role of parameter); critical points of distinct nature are considered therein. The curves in this figure are all described by a set of equations of the type of Eqs. (15.91) and (15.92), yet each one may abide to a distinct set of boundary conditions. Except in the cases labeled as Fig. 15.1b and e, the curves converge to, or diverge from the critical point; for such singularities, the trajectories approach said critical point down to a finite (non-nil) distance and then move away – either periodically as in Fig. 15.1e, or just once as in Fig. 15.1b.
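The deviation-variable setup of Eqs. (15.86)–(15.94) can be illustrated concretely. The Lotka–Volterra-like pair below is my own example (not from the text); it possesses the critical point (1, 1), and the sketch verifies that both right-hand sides vanish there and that the deviation form f̂₁ obeys Eq. (15.93).

```python
# Deviation-variable setup of Eqs. (15.86)-(15.94) on an illustrative
# Lotka-Volterra-like pair (chosen here, not taken from the text):
#   f1{y1,y2} = y1*(1 - y2),   f2{y1,y2} = y2*(y1 - 1),
# which has a critical point at (y1_st, y2_st) = (1, 1).

def f1(y1, y2):
    return y1 * (1.0 - y2)

def f2(y1, y2):
    return y2 * (y1 - 1.0)

y1_st, y2_st = 1.0, 1.0

# Eqs. (15.86)-(15.87): both right-hand sides vanish at the critical point
print(f1(y1_st, y2_st) == 0.0 and f2(y1_st, y2_st) == 0.0)   # True

# Eqs. (15.88)-(15.93): deviation form fhat1{yh1,yh2} = f1{y1,y2}
def fhat1(yh1, yh2):
    return f1(yh1 + y1_st, yh2 + y2_st)

print(fhat1(0.0, 0.0) == 0.0)               # True: fhat1 vanishes at (0,0)
print(fhat1(0.2, -0.1) == f1(1.2, 0.9))     # True: Eq. (15.93) holds
```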

Figure 15.1 Phase portraits, as ŷ₂-vs.-ŷ₁, of selected sets of two nonlinear first-order ordinary differential equations exhibiting critical points, namely, (a) proper node, (b) saddle point, (c) star point, (d) improper node, (e) center point, and (f) spiral point.

Numerical integration of Eqs. (15.91) and (15.92) is often cumbersome, yet the most relevant regions are normally those in the vicinity of the critical point(s); furthermore, Philip Hartman in 1960, and David M. Grobman independently in 1962, proved that the patterns followed by ŷ₁ and ŷ₂ in the vicinity of stationary point (y₁,st, y₂,st) are driven by their linearized counterparts – in what constitutes the essence of Hartman and Grobman's theorem. In mathematical terms, this can be stated as

$$\left.\frac{d\hat{\mathbf{y}}}{dx}\right|_{x \to x_{st}} = \mathbf{A}\big|_{st}\,\hat{\mathbf{y}} \tag{15.95}$$

– where ŷ denotes the column vector of dependent variables (ŷ₁ and ŷ₂ in this case) and A denotes the matrix of linearized differential coefficients (or Jacobian), see Eq. (10.481); this theorem holds when the eigenvalues of A have significant real parts with the same sign. The result conveyed by Eq. (15.95) is somehow expected if one expands f̂₁ and f̂₂ via Taylor's series around the stationary point (0,0); in the present case, such an expansion will look like

$$\hat f_1\{\hat y_1,\hat y_2\} = \hat f_1\big|_{(0,0)} + \frac{\partial\hat f_1}{\partial\hat y_1}\bigg|_{(0,0)}\hat y_1 + \frac{\partial\hat f_1}{\partial\hat y_2}\bigg|_{(0,0)}\hat y_2 + \frac{1}{2}\frac{\partial^2\hat f_1}{\partial\hat y_1^2}\bigg|_{(0,0)}\hat y_1^2 + \frac{\partial^2\hat f_1}{\partial\hat y_1\partial\hat y_2}\bigg|_{(0,0)}\hat y_1\hat y_2 + \frac{1}{2}\frac{\partial^2\hat f_1}{\partial\hat y_2^2}\bigg|_{(0,0)}\hat y_2^2 + \cdots = \frac{\partial\hat f_1}{\partial\hat y_1}\bigg|_{(0,0)}\hat y_1 + \frac{\partial\hat f_1}{\partial\hat y_2}\bigg|_{(0,0)}\hat y_2 + \frac{1}{2}\frac{\partial^2\hat f_1}{\partial\hat y_1^2}\bigg|_{(0,0)}\hat y_1^2 + \frac{\partial^2\hat f_1}{\partial\hat y_1\partial\hat y_2}\bigg|_{(0,0)}\hat y_1\hat y_2 + \frac{1}{2}\frac{\partial^2\hat f_1}{\partial\hat y_2^2}\bigg|_{(0,0)}\hat y_2^2 + \cdots \tag{15.96}$$

for f̂₁, after taking Eqs. (10.65), (15.86), and (15.93) on board – and likewise

$$\hat f_2\{\hat y_1,\hat y_2\} = \hat f_2\big|_{(0,0)} + \frac{\partial\hat f_2}{\partial\hat y_1}\bigg|_{(0,0)}\hat y_1 + \frac{\partial\hat f_2}{\partial\hat y_2}\bigg|_{(0,0)}\hat y_2 + \frac{1}{2}\frac{\partial^2\hat f_2}{\partial\hat y_1^2}\bigg|_{(0,0)}\hat y_1^2 + \frac{\partial^2\hat f_2}{\partial\hat y_1\partial\hat y_2}\bigg|_{(0,0)}\hat y_1\hat y_2 + \frac{1}{2}\frac{\partial^2\hat f_2}{\partial\hat y_2^2}\bigg|_{(0,0)}\hat y_2^2 + \cdots = \frac{\partial\hat f_2}{\partial\hat y_1}\bigg|_{(0,0)}\hat y_1 + \frac{\partial\hat f_2}{\partial\hat y_2}\bigg|_{(0,0)}\hat y_2 + \frac{1}{2}\frac{\partial^2\hat f_2}{\partial\hat y_1^2}\bigg|_{(0,0)}\hat y_1^2 + \frac{\partial^2\hat f_2}{\partial\hat y_1\partial\hat y_2}\bigg|_{(0,0)}\hat y_1\hat y_2 + \frac{1}{2}\frac{\partial^2\hat f_2}{\partial\hat y_2^2}\bigg|_{(0,0)}\hat y_2^2 + \cdots \tag{15.97}$$

in the case of f̂₂ – again taking advantage of f̂₁|₍₀,₀₎ and f̂₂|₍₀,₀₎ being nil as per Eqs. (15.86), (15.87), (15.93), and (15.94) when y₁ = y₁,st and y₂ = y₂,st (or ŷ₁ = ŷ₂ = 0) – and invoking Schwarz's theorem. Close to the stationary point, the distance between (y₁,y₂) and (y₁,st, y₂,st) is obviously small, so ŷ₁² ≪ ŷ₁, ŷ₁ŷ₂ ≪ ŷ₁, ŷ₂, and ŷ₂² ≪ ŷ₂ – and a similar rationale may be applied to higher powers of ŷ₁ and ŷ₂ (including cross products); therefore, Eq. (15.96) reduces to

$$\hat f_1\{\hat y_1,\hat y_2\} \approx \frac{\partial\hat f_1}{\partial\hat y_1}\bigg|_{(0,0)}\hat y_1 + \frac{\partial\hat f_1}{\partial\hat y_2}\bigg|_{(0,0)}\hat y_2, \tag{15.98}$$

and Eq. (15.97) similarly degenerates to

$$\hat f_2\{\hat y_1,\hat y_2\} \approx \frac{\partial\hat f_2}{\partial\hat y_1}\bigg|_{(0,0)}\hat y_1 + \frac{\partial\hat f_2}{\partial\hat y_2}\bigg|_{(0,0)}\hat y_2 \tag{15.99}$$


in the said vicinity. The associated Jacobian matrix evaluated at ŷ₁ = ŷ₂ = 0, i.e. A|₍₀,₀₎, accordingly reads

$$\mathbf{A}\big|_{(0,0)} \equiv \begin{bmatrix} \dfrac{\partial\hat f_1}{\partial\hat y_1}\bigg|_{(0,0)} & \dfrac{\partial\hat f_1}{\partial\hat y_2}\bigg|_{(0,0)} \\[2mm] \dfrac{\partial\hat f_2}{\partial\hat y_1}\bigg|_{(0,0)} & \dfrac{\partial\hat f_2}{\partial\hat y_2}\bigg|_{(0,0)} \end{bmatrix} \equiv \begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{bmatrix}, \tag{15.100}$$

where constants a_{i,j} (i = 1,2; j = 1,2) have been introduced to facilitate algebraic manipulation hereafter; integration of Eq. (15.95), with the aid of Eq. (1.100), leads to the curves plotted in Fig. 15.2 – which are but the linearized counterparts of the curves shown in Fig. 15.1. The designation of the type of critical point is better understood by inspection of Fig. 15.2 than Fig. 15.1, owing to the simpler graphical patterns at stake; the latter are, in a sense, distorted versions of the former. It is apparent that the curves of Fig. 15.1 in the vicinity of the critical point essentially coincide with the curves described by the solution of Eq. (15.95) and plotted in Fig. 15.2 – except those that cannot approach said critical point sufficiently (see Fig. 15.2b and e); this observation is in line with Hartman and Grobman's theorem.

The nature of the trajectories depicted in Fig. 15.2 reflects the type of eigenvalues, λ, of matrix A – abiding to

$$\left|\mathbf{A} - \lambda\mathbf{I}\right| = 0 \tag{15.101}$$

in parallel to Eq. (6.184), and equivalent to

$$\begin{vmatrix} a_{1,1} - \lambda & a_{1,2} \\ a_{2,1} & a_{2,2} - \lambda \end{vmatrix} = 0 \tag{15.102}$$

in agreement with Eqs. (6.190)–(6.192) and (15.100); expansion of the determinant as per Eq. (1.10) unfolds

$$\left(a_{1,1} - \lambda\right)\left(a_{2,2} - \lambda\right) - a_{1,2}a_{2,1} = 0, \tag{15.103}$$

where algebraic rearrangement gives rise to

$$a_{1,1}a_{2,2} - a_{1,1}\lambda - a_{2,2}\lambda + \lambda^2 - a_{1,2}a_{2,1} = \lambda^2 - \left(a_{1,1} + a_{2,2}\right)\lambda + \left(a_{1,1}a_{2,2} - a_{1,2}a_{2,1}\right) = 0. \tag{15.104}$$

Equation (15.104) may be reformulated as

$$\lambda^2 - \lambda\,\mathrm{tr}\,\mathbf{A} + |\mathbf{A}| = 0, \tag{15.105}$$

as a particular case of Eq. (6.201) for n = 2 – in view of the definition of trace and determinant of matrix A, as given by Eq. (15.100); being a quadratic equation in λ, Eq. (15.105) holds

$$\lambda_1, \lambda_2 = \frac{\mathrm{tr}\,\mathbf{A} \pm \sqrt{\mathrm{tr}^2\mathbf{A} - 4|\mathbf{A}|}}{2} \tag{15.106}$$

as solutions, consistent with Eqs. (7.83) and (7.93). The sign of the binomial under the root sign determines whether λ₁ and λ₂ are real or complex numbers, and the sign of their real part determines whether the associated critical point is stable or unstable – in agreement with Table 15.1.
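The classification implied by Eq. (15.106) and Table 15.1 reduces to sign tests on tr A, |A|, and the discriminant binomial; the sketch below is my own minimal implementation (borderline cases are handled only coarsely, and the star/improper-node distinction is tested via the multiple-of-identity criterion mentioned for star points).

```python
# Classification of the critical point of the linearized system, Eq. (15.95),
# from tr A and |A| as per Eq. (15.106) and Table 15.1 (a sketch; degenerate
# borderline cases are treated only coarsely here).

def classify(a11, a12, a21, a22):
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = tr * tr - 4.0 * det       # discriminant binomial tr^2 A - 4|A|
    if disc > 0:                     # real, distinct eigenvalues
        if det < 0:                  # opposite signs
            return "saddle point (unstable)"
        return "proper node (%s)" % ("stable" if tr < 0 else "unstable")
    if disc == 0:                    # coincident eigenvalues
        kind = "star point" if (a12 == 0 and a21 == 0 and a11 == a22) \
               else "improper node"
        return "%s (%s)" % (kind, "stable" if tr < 0 else "unstable")
    if tr == 0:                      # pure imaginary eigenvalues
        return "center point (stable)"
    return "spiral point (%s)" % ("stable" if tr < 0 else "unstable")

print(classify(-2.0, 0.0, 0.0, -1.0))   # proper node (stable)
print(classify(1.0, 0.0, 0.0, -1.0))    # saddle point (unstable)
print(classify(0.0, 1.0, -1.0, 0.0))    # center point (stable)
print(classify(-1.0, 2.0, -2.0, -1.0))  # spiral point (stable)
```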

Figure 15.2 Phase portraits, as ŷ₂-vs.-ŷ₁, of selected sets of two linear first-order ordinary differential equations exhibiting critical points, namely, (a) proper node, (b) saddle point, (c) star point, (d) improper node, (e) center point, and (f) spiral point.

The signs of $\mathrm{tr}^2\mathbf{A} - 4|\mathbf{A}|$ and of $\left(\mathrm{tr}\,\mathbf{A} \pm \sqrt{\mathrm{tr}^2\mathbf{A} - 4|\mathbf{A}|}\right)/2$ constrain the behavior of the ŷ₂-vs.-ŷ₁ curves; the pattern in the vicinity of the critical point is of particular interest for a number of practical applications in process engineering – irrespective of whether the lines converge to, or diverge from said critical point.

Table 15.1 Nature of critical points, and qualitative features of eigenvalues and eigenvectors of the associated Jacobian matrix, A, evaluated at the said critical point, of a set of two first-order ordinary differential equations.

tr²A − 4|A|   eigenvalues per Eq. (15.106)    eigenvectors   type of critical point   stability
+             both real and negative          independent    proper node              stable
+             both real and positive          independent    proper node              unstable
+             real, of opposite signs         independent    saddle point             unstable
0             double, tr A/2 < 0              independent    star point               stable
0             double, tr A/2 < 0              dependent      improper node            stable
0             double, tr A/2 > 0              independent    star point               unstable
0             double, tr A/2 > 0              dependent      improper node            unstable
−             complex, real part nil          independent    center point             stable
−             complex, real part negative     independent    spiral point             stable
−             complex, real part positive     independent    spiral point             unstable

When λ₁ and λ₂ as per Eq. (15.106) are both real, distinct from each other, and sharing the same sign, then the critical point is a proper node; the trajectories either move toward it – as sketched in Fig. 15.2a, when λ₁, λ₂ < 0 – or away therefrom, when λ₁, λ₂ > 0. The trajectories associated with the eigenvectors are straight lines; the other trajectories are driven roughly by the direction of the eigenvector corresponding to the eigenvalue bearing the smaller absolute value when in the vicinity of the critical point – but bend toward the direction of the eigenvector corresponding to the eigenvalue with the larger absolute value farther away from said critical point. A proper node is (asymptotically) stable when λ₁ and λ₂ are both negative, and unstable when λ₁ and λ₂ are both positive.

When λ₁ and λ₂ are both real but of opposite signs, the trajectories associated with the eigenvectors of the negative eigenvalue initially start at infinite distance away, move toward, and eventually converge to the critical point (see downward vertical trajectory in Fig. 15.2b); conversely, the trajectories associated with the eigenvectors of the positive eigenvalue start at the critical point and then diverge to infinite distance out (see rightward horizontal trajectory in Fig. 15.2b). Hence, every other trajectory starts at infinite distance away, moves toward, but never converges to the critical point – before changing direction and moving back to infinite distance away, as illustrated in Fig. 15.2b; the critical point at stake is a saddle point, and is always unstable – so Hartman and Grobman's theorem is not applicable.

When λ₁ and λ₂ coincide (i.e. tr²A − 4|A| is nil), along with two independent eigenvectors, then every solution traces a straight-line trajectory in the direction of the combination of eigenvectors set by the specific boundary conditions at stake; a distinct starburst shape thus results. The trajectories either move directly away from the critical point to infinite distance when tr A > 0, or move directly toward, and eventually converge to the critical point when tr A < 0, as depicted in Fig. 15.2c. This type of critical point is termed a star point – being a particular case of a proper node, and unstable or (asymptotically) stable when tr A is positive or negative, respectively; it occurs only when A can be expressed as a multiple of the identity matrix of order 2.

When λ₁ and λ₂ coincide but there is only one independent eigenvector, the resulting phase portrait appears as a degenerate-looking node – i.e. a hybrid between a node and a spiral point; all trajectories either diverge away from the critical point to infinite distance when tr A is positive, or converge to the critical point when tr A is negative – as outlined in Fig. 15.2d. An improper node is therefore said to exist – unstable for tr A > 0, or (asymptotically) stable otherwise, see Table 15.1.

In the case of pure imaginary eigenvalues, the trajectories neither converge to the critical point nor move away to infinite distance; they instead stay in constant elliptical orbits (see Fig. 15.2e) – so said critical point is called a center point, and is (neutrally, but not asymptotically) stable, with Hartman and Grobman's theorem failing again to apply.

The final situation pertains to complex conjugate eigenvalues with non-nil real part – with trajectories retaining the aforementioned elliptical traces; however, their distances from the critical point grow or decay exponentially (see Fig. 15.2f), depending on whether tr A is positive or negative, respectively. This implies a spiral point playing the role of critical point – which is unstable when trajectories spiral to infinite distance away, or (asymptotically) stable when trajectories spiral toward the critical point, and eventually converge thereto.

15.1.2.2 Linear

The general form of a linear second-order ordinary differential equation is often written as

$$\frac{d^{2}y}{dx^{2}} + P\{x\}\frac{dy}{dx} + Q\{x\}\,y = S\{x\},\tag{15.107}$$

so as to keep a logical decreasing order of derivatives on the left-hand side; here P{x}, Q{x}, and S{x} denote univariate functions of x, with Eq. (15.48) materializing a particular case, i.e. Q{x} = S{x} = 0. Its most general solution takes the form

$$y\{x\} = A_{1}y_{1}\{x\} + A_{2}y_{2}\{x\} + Y\{x\},\tag{15.108}$$

where A1 and A2 denote arbitrary constants, while y1{x} and y2{x} denote independent solutions of the associated homogeneous equation, i.e.

$$\frac{d^{2}y}{dx^{2}} + P\{x\}\frac{dy}{dx} + Q\{x\}\,y = 0;\tag{15.109}$$

Y{x} denotes, in turn, the particular integral of Eq. (15.107) itself, which takes S{x} also into account. To calculate Y{x}, it is useful to recall the method of variation of constant κ


as κ ≡ κ{x}, leading from Eq. (15.38) to Eq. (15.39) in the case of a linear first-order differential equation – and extend said concept by hypothesizing that

$$Y\{x\} = f_{1}\{x\}\,y_{1}\{x\} + f_{2}\{x\}\,y_{2}\{x\}\tag{15.110}$$

– where constants A1 and A2 in Eq. (15.108) have been replaced by functions f1{x} and f2{x}, respectively, so as to abide by the existence of S{x} in Eq. (15.107) vis-à-vis the simpler version labeled as Eq. (15.109). The first-order derivative of Y{x} with regard to x reads

$$\frac{dY}{dx} = \frac{df_{1}}{dx}y_{1} + f_{1}\frac{dy_{1}}{dx} + \frac{df_{2}}{dx}y_{2} + f_{2}\frac{dy_{2}}{dx},\tag{15.111}$$

stemming directly from Eq. (15.110); the second-order derivative yields

$$\frac{d^{2}Y}{dx^{2}} = \frac{d^{2}f_{1}}{dx^{2}}y_{1} + \frac{df_{1}}{dx}\frac{dy_{1}}{dx} + \frac{df_{1}}{dx}\frac{dy_{1}}{dx} + f_{1}\frac{d^{2}y_{1}}{dx^{2}} + \frac{d^{2}f_{2}}{dx^{2}}y_{2} + \frac{df_{2}}{dx}\frac{dy_{2}}{dx} + \frac{df_{2}}{dx}\frac{dy_{2}}{dx} + f_{2}\frac{d^{2}y_{2}}{dx^{2}},\tag{15.112}$$

based now on differentiation of Eq. (15.111) with regard to x – or, equivalently,

$$\frac{d^{2}Y}{dx^{2}} = \frac{d^{2}f_{1}}{dx^{2}}y_{1} + 2\frac{df_{1}}{dx}\frac{dy_{1}}{dx} + f_{1}\frac{d^{2}y_{1}}{dx^{2}} + \frac{d^{2}f_{2}}{dx^{2}}y_{2} + 2\frac{df_{2}}{dx}\frac{dy_{2}}{dx} + f_{2}\frac{d^{2}y_{2}}{dx^{2}}\tag{15.113}$$

after pooling together terms alike. Insertion of Eqs. (15.110), (15.111), and (15.113) transforms Eq. (15.107) to

$$\frac{d^{2}f_{1}}{dx^{2}}y_{1} + 2\frac{df_{1}}{dx}\frac{dy_{1}}{dx} + f_{1}\frac{d^{2}y_{1}}{dx^{2}} + \frac{d^{2}f_{2}}{dx^{2}}y_{2} + 2\frac{df_{2}}{dx}\frac{dy_{2}}{dx} + f_{2}\frac{d^{2}y_{2}}{dx^{2}} + P\{x\}\left(\frac{df_{1}}{dx}y_{1} + f_{1}\frac{dy_{1}}{dx} + \frac{df_{2}}{dx}y_{2} + f_{2}\frac{dy_{2}}{dx}\right) + Q\{x\}\left(f_{1}y_{1} + f_{2}y_{2}\right) = S\{x\},\tag{15.114}$$

where similar terms may be combined to get

$$y_{1}\frac{d^{2}f_{1}}{dx^{2}} + \left(2\frac{dy_{1}}{dx} + P\{x\}\,y_{1}\right)\frac{df_{1}}{dx} + \left(\frac{d^{2}y_{1}}{dx^{2}} + P\{x\}\frac{dy_{1}}{dx} + Q\{x\}\,y_{1}\right)f_{1} + y_{2}\frac{d^{2}f_{2}}{dx^{2}} + \left(2\frac{dy_{2}}{dx} + P\{x\}\,y_{2}\right)\frac{df_{2}}{dx} + \left(\frac{d^{2}y_{2}}{dx^{2}} + P\{x\}\frac{dy_{2}}{dx} + Q\{x\}\,y_{2}\right)f_{2} = S\{x\};\tag{15.115}$$

y1 and y2 are, by hypothesis, solutions to Eq. (15.109), meaning that Eq. (15.115) simplifies to

$$y_{1}\frac{d^{2}f_{1}}{dx^{2}} + \left(2\frac{dy_{1}}{dx} + P\{x\}\,y_{1}\right)\frac{df_{1}}{dx} + y_{2}\frac{d^{2}f_{2}}{dx^{2}} + \left(2\frac{dy_{2}}{dx} + P\{x\}\,y_{2}\right)\frac{df_{2}}{dx} = S\{x\}\tag{15.116}$$

in view of the nil contents of its second and fourth parentheses. Equation (15.116) conveys one condition to be satisfied by f1{x} and f2{x} in Eq. (15.110) – which appear only in the form df1/dx and df2/dx, as well as d²f1/dx² ≡ d(df1/dx)/dx and d²f2/dx² ≡ d(df2/dx)/dx. A second condition to be satisfied by f1{x} and f2{x} is still required for full specification of said functions, and may be conveniently coined as

$$y_{1}\frac{df_{1}}{dx} + y_{2}\frac{df_{2}}{dx} = 0\tag{15.117}$$

Solution of Differential Equations

– applying to every x, and containing only df1/dx and df2/dx; Eq. (15.117) implies

$$\frac{dy_{1}}{dx}\frac{df_{1}}{dx} + y_{1}\frac{d^{2}f_{1}}{dx^{2}} + \frac{dy_{2}}{dx}\frac{df_{2}}{dx} + y_{2}\frac{d^{2}f_{2}}{dx^{2}} = 0\tag{15.118}$$

upon differentiation with regard to x, or else

$$y_{1}\frac{d^{2}f_{1}}{dx^{2}} + y_{2}\frac{d^{2}f_{2}}{dx^{2}} = -\left(\frac{dy_{1}}{dx}\frac{df_{1}}{dx} + \frac{dy_{2}}{dx}\frac{df_{2}}{dx}\right)\tag{15.119}$$

following isolation of the second-order derivatives. After elimination of parentheses, Eq. (15.116) becomes, in turn,

$$y_{1}\frac{d^{2}f_{1}}{dx^{2}} + 2\frac{dy_{1}}{dx}\frac{df_{1}}{dx} + P\{x\}\,y_{1}\frac{df_{1}}{dx} + y_{2}\frac{d^{2}f_{2}}{dx^{2}} + 2\frac{dy_{2}}{dx}\frac{df_{2}}{dx} + P\{x\}\,y_{2}\frac{df_{2}}{dx} = S\{x\},\tag{15.120}$$

which may be algebraically reorganized to

$$P\{x\}\left(y_{1}\frac{df_{1}}{dx} + y_{2}\frac{df_{2}}{dx}\right) + 2\left(\frac{dy_{1}}{dx}\frac{df_{1}}{dx} + \frac{dy_{2}}{dx}\frac{df_{2}}{dx}\right) + \left(y_{1}\frac{d^{2}f_{1}}{dx^{2}} + y_{2}\frac{d^{2}f_{2}}{dx^{2}}\right) = S\{x\}\tag{15.121}$$

– upon factoring out 2 or P{x} (as appropriate), and bringing the second-order derivatives together; in view of Eqs. (15.117) and (15.119), one obtains

$$2\left(\frac{dy_{1}}{dx}\frac{df_{1}}{dx} + \frac{dy_{2}}{dx}\frac{df_{2}}{dx}\right) - \left(\frac{dy_{1}}{dx}\frac{df_{1}}{dx} + \frac{dy_{2}}{dx}\frac{df_{2}}{dx}\right) = S\{x\}\tag{15.122}$$

from Eq. (15.121), or merely

$$\frac{dy_{1}}{dx}\frac{df_{1}}{dx} + \frac{dy_{2}}{dx}\frac{df_{2}}{dx} = S\{x\}\tag{15.123}$$

It is noteworthy that Eqs. (15.117) and (15.123) mimic Eqs. (10.462) and (10.464), should κ1 and κ2 be replaced by df1/dx and df2/dx, respectively, and f and g by y1 and y2, respectively; the rationale underlying linear dependence between functions based on the Wronskian matrix as per Eq. (10.468) has implicitly been used as a basis – since f1{x} dependent on f2{x} would prevent Eq. (15.110) from conveying a solution independent of an algebraic combination of y1{x} and y2{x}. Application of Cramer's rule to Eqs. (15.117) and (15.123) permits calculation of df1/dx and df2/dx as

$$\frac{df_{1}}{dx} = \frac{\begin{vmatrix} 0 & y_{2} \\ S\{x\} & \dfrac{dy_{2}}{dx} \end{vmatrix}}{\begin{vmatrix} y_{1} & y_{2} \\ \dfrac{dy_{1}}{dx} & \dfrac{dy_{2}}{dx} \end{vmatrix}}\tag{15.124}$$

and

$$\frac{df_{2}}{dx} = \frac{\begin{vmatrix} y_{1} & 0 \\ \dfrac{dy_{1}}{dx} & S\{x\} \end{vmatrix}}{\begin{vmatrix} y_{1} & y_{2} \\ \dfrac{dy_{1}}{dx} & \dfrac{dy_{2}}{dx} \end{vmatrix}},\tag{15.125}$$


respectively; in view of the rule of calculation of a second-order determinant, Eq. (15.124) becomes

$$\frac{df_{1}}{dx} = -\frac{y_{2}S\{x\}}{y_{1}\dfrac{dy_{2}}{dx} - y_{2}\dfrac{dy_{1}}{dx}},\tag{15.126}$$

and Eq. (15.125) similarly transforms to

$$\frac{df_{2}}{dx} = \frac{y_{1}S\{x\}}{y_{1}\dfrac{dy_{2}}{dx} - y_{2}\dfrac{dy_{1}}{dx}}\tag{15.127}$$

Integration via separation of variables is now in order, viz.

$$df_{1} = -\frac{y_{2}S\{x\}}{y_{1}\dfrac{dy_{2}}{dx} - y_{2}\dfrac{dy_{1}}{dx}}\,dx\tag{15.128}$$

stemming from Eq. (15.126) or, equivalently,

$$f_{1}\{x\} = B_{1} - \int \frac{y_{2}S\{x\}}{y_{1}\dfrac{dy_{2}}{dx} - y_{2}\dfrac{dy_{1}}{dx}}\,dx\tag{15.129}$$

– with B1 denoting an arbitrary constant; and likewise

$$df_{2} = \frac{y_{1}S\{x\}}{y_{1}\dfrac{dy_{2}}{dx} - y_{2}\dfrac{dy_{1}}{dx}}\,dx\tag{15.130}$$

departing from Eq. (15.127), or else

$$f_{2}\{x\} = B_{2} + \int \frac{y_{1}S\{x\}}{y_{1}\dfrac{dy_{2}}{dx} - y_{2}\dfrac{dy_{1}}{dx}}\,dx\tag{15.131}$$

– with B2 denoting another arbitrary constant. The foregoing integrations are possible because y1 and y2 – and thus dy1/dx and dy2/dx – as well as S are all functions of x that, by hypothesis, are known in advance. In addition, y1{x} and y2{x} being independent solutions of Eq. (15.109) guarantees that the Wronskian determinant, $\begin{vmatrix} y_{1} & y_{2} \\ \dfrac{dy_{1}}{dx} & \dfrac{dy_{2}}{dx} \end{vmatrix} = y_{1}\dfrac{dy_{2}}{dx} - y_{2}\dfrac{dy_{1}}{dx}$, appearing in the denominator of both Eqs. (15.129) and (15.131), is not nil. Following combination of Eqs. (15.108) and (15.110), one obtains

$$y = A_{1}y_{1} + A_{2}y_{2} + f_{1}y_{1} + f_{2}y_{2}\tag{15.132}$$

as general solution to Eq. (15.107); y1 and y2 may then be factored out to produce

$$y = \left(A_{1} + f_{1}\right)y_{1} + \left(A_{2} + f_{2}\right)y_{2}\tag{15.133}$$

After retrieving Eqs. (15.129) and (15.131), one can transform Eq. (15.133) to

$$y = \left(A_{1} + B_{1} - \int \frac{y_{2}S\,dx}{y_{1}\dfrac{dy_{2}}{dx} - y_{2}\dfrac{dy_{1}}{dx}}\right)y_{1} + \left(A_{2} + B_{2} + \int \frac{y_{1}S\,dx}{y_{1}\dfrac{dy_{2}}{dx} - y_{2}\dfrac{dy_{1}}{dx}}\right)y_{2},\tag{15.134}$$

which becomes

$$y = \left(\kappa_{1} - \int \frac{y_{2}S\,dx}{y_{1}\dfrac{dy_{2}}{dx} - y_{2}\dfrac{dy_{1}}{dx}}\right)y_{1} + \left(\kappa_{2} + \int \frac{y_{1}S\,dx}{y_{1}\dfrac{dy_{2}}{dx} - y_{2}\dfrac{dy_{1}}{dx}}\right)y_{2} = \kappa_{1}y_{1} + \kappa_{2}y_{2} + y_{2}\int \frac{y_{1}S\,dx}{y_{1}\dfrac{dy_{2}}{dx} - y_{2}\dfrac{dy_{1}}{dx}} - y_{1}\int \frac{y_{2}S\,dx}{y_{1}\dfrac{dy_{2}}{dx} - y_{2}\dfrac{dy_{1}}{dx}}\tag{15.135}$$

to serve as the (complete) solution under scrutiny – provided that constants κ1 and κ2 are defined by

$$\kappa_{1} \equiv A_{1} + B_{1}\tag{15.136}$$

and

$$\kappa_{2} \equiv A_{2} + B_{2},\tag{15.137}$$

respectively; therefore, the most general solution to Eq. (15.107), containing two independent functions (and thus two arbitrary constants), was obtained at the expense of the solution of the (simpler) Eq. (15.109) – containing only functions y1{x} and y2{x} – with Y{x} coinciding with the sum of the last two terms in Eq. (15.135). In the particular case of both P{x} and Q{x} being constant, say a1 and a0, respectively, and S{x} = 0, a much simpler version of Eq. (15.107) results, viz.

$$\frac{d^{2}y}{dx^{2}} + a_{1}\frac{dy}{dx} + a_{0}y = 0\tag{15.138}$$
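As a concrete check on the variation-of-constants route that produced Eq. (15.135), consider y″ + y = x, for which y1 = cos x, y2 = sin x, and the Wronskian equals unity; the example, the crude quadrature routine, and all names below are ours, not the book's.

```python
# Numerical check of the variation-of-parameters result, Eq. (15.135), on the
# illustrative problem y'' + y = x: here y1 = cos x, y2 = sin x, S{x} = x, and
# the Wronskian y1*(dy2/dx) - y2*(dy1/dx) = cos^2 x + sin^2 x = 1.
from math import cos, sin

def integral(f, a, b, n=4000):
    """Crude trapezoid quadrature of f over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def Y(x):
    """Particular integral Y = -y1*int(y2*S) + y2*int(y1*S), integrals from 0."""
    return (-cos(x) * integral(lambda u: sin(u) * u, 0.0, x)
            + sin(x) * integral(lambda u: cos(u) * u, 0.0, x))

# Taking the lower limits at 0 yields Y{x} = x - sin x, which differs from the
# tidiest particular integral (Y = x) only by the homogeneous term -sin x.
print(Y(1.3), 1.3 - sin(1.3))
```

Any other choice of lower limit merely shifts Y by a solution of the homogeneous equation – exactly the kind of contribution absorbed by the arbitrary constants B1 and B2 of Eqs. (15.129) and (15.131).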

In attempts to solve Eq. (15.138), one may to advantage recall its first-order counterpart, i.e.

$$\frac{dy}{dx} + a_{0}y = 0\tag{15.139}$$

stemming from Eq. (15.22) after setting P{x} ≡ a0 and Q{x} ≡ 0; Eq. (15.139) holds

$$y = \kappa\,\exp\left\{-\int a_{0}\,dx\right\}\tag{15.140}$$

for solution, using Eq. (15.38) as template upon replacement of P{x} by a0 – where (trivial) integration as indicated unfolds

$$y = \kappa\,\exp\{-a_{0}x\}\tag{15.141}$$

Since Eq. (15.138) contains a second-order differential term in addition to Eq. (15.139) – and since the derivative of the exponential function coincides with itself – it is reasonable to postulate a solution for Eq. (15.138) of the form

$$y = \kappa\,\exp\{\lambda x\};\tag{15.142}$$

here κ denotes an integration constant, while λ ≡ λ{a0, a1} is a parameter still to be determined. Differentiation with regard to x turns Eq. (15.142) to

$$\frac{dy}{dx} = \kappa\lambda\,\exp\{\lambda x\},\tag{15.143}$$

and another step of differentiation generates, in turn,


$$\frac{d^{2}y}{dx^{2}} = \kappa\lambda^{2}\exp\{\lambda x\};\tag{15.144}$$

insertion of Eqs. (15.142)–(15.144) transforms Eq. (15.138) to

$$\kappa\lambda^{2}\exp\{\lambda x\} + a_{1}\kappa\lambda\,\exp\{\lambda x\} + a_{0}\kappa\,\exp\{\lambda x\} = 0\tag{15.145}$$

After division of both sides by κ exp{λx}, Eq. (15.145) breaks down to

$$\lambda^{2} + a_{1}\lambda + a_{0} = 0,\tag{15.146}$$

known as the characteristic equation – which provides an algebraic tool to calculate λ. Upon application of the solving formula of a quadratic equation, one gets

$$\lambda_{1} = \frac{-a_{1} - \sqrt{a_{1}^{2} - 4a_{0}}}{2} \;\wedge\; \lambda_{2} = \frac{-a_{1} + \sqrt{a_{1}^{2} - 4a_{0}}}{2}\tag{15.147}$$

as the two solutions of Eq. (15.146), thus allowing transformation of Eq. (15.142) to

$$y = \kappa_{1}\exp\{\lambda_{1}x\} + \kappa_{2}\exp\{\lambda_{2}x\}\tag{15.148}$$

as linear combination of the two possibilities for λ – already bearing two arbitrary constants, κ1 and κ2; Eq. (15.148) may instead appear as

$$y = \kappa_{1}y_{1}\{x\} + \kappa_{2}y_{2}\{x\},\tag{15.149}$$

provided that functions y1{x} and y2{x} abide by

$$y_{1}\{x\} \equiv \exp\{\lambda_{1}x\}\tag{15.150}$$

and

$$y_{2}\{x\} \equiv \exp\{\lambda_{2}x\}\tag{15.151}$$
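The recipe of Eqs. (15.146)–(15.151) reduces to root-finding on the characteristic equation; a minimal sketch (the sample coefficients and function name are ours) returning λ1 and λ2 as in Eq. (15.147):

```python
# Roots of the characteristic equation lambda^2 + a1*lambda + a0 = 0,
# Eq. (15.146), via the quadratic formula of Eq. (15.147); complex arithmetic
# covers a negative discriminant (complex-conjugate eigenvalues).
import cmath

def characteristic_roots(a1, a0):
    disc = cmath.sqrt(a1 * a1 - 4 * a0)
    return (-a1 - disc) / 2, (-a1 + disc) / 2

l1, l2 = characteristic_roots(3.0, 2.0)   # y'' + 3y' + 2y = 0 (illustrative)
print(l1.real, l2.real)                   # roots -2 and -1, so the general
                                          # solution is y = k1*e^(-2x) + k2*e^(-x)
```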

The functions y1{x} and y2{x} are supposed to be linearly independent – otherwise Eq. (15.149) would not convey the most general solution; to confirm said hypothesis, one should set the right-hand side of Eq. (15.148) equal to zero, viz.

$$\kappa_{1}\exp\{\lambda_{1}x\} + \kappa_{2}\exp\{\lambda_{2}x\} = 0,\tag{15.152}$$

and assure that the only solution to Eq. (15.152) is κ1 = κ2 = 0. Toward this goal, one may differentiate both sides of Eq. (15.152) as

$$\kappa_{1}\lambda_{1}\exp\{\lambda_{1}x\} + \kappa_{2}\lambda_{2}\exp\{\lambda_{2}x\} = 0,\tag{15.153}$$

thus producing an independent equation; note the functional similarity between Eqs. (15.152) and (15.153) and Eqs. (10.462) and (10.464), respectively. Consequently, one can retrieve Eq. (10.468) for the associated Wronskian, i.e.

$$W \equiv \begin{bmatrix} \exp\{\lambda_{1}x\} & \exp\{\lambda_{2}x\} \\ \lambda_{1}\exp\{\lambda_{1}x\} & \lambda_{2}\exp\{\lambda_{2}x\} \end{bmatrix},\tag{15.154}$$

and make sure that the corresponding determinant is significant, i.e.

$$\begin{vmatrix} \exp\{\lambda_{1}x\} & \exp\{\lambda_{2}x\} \\ \lambda_{1}\exp\{\lambda_{1}x\} & \lambda_{2}\exp\{\lambda_{2}x\} \end{vmatrix} \neq 0\tag{15.155}$$


– in parallel to Eq. (10.469), with k1 and k2 replaced by κ1 and κ2, respectively. With the aid of Eq. (1.10), one may rewrite Eq. (15.155) as

$$\exp\{\lambda_{1}x\}\,\lambda_{2}\exp\{\lambda_{2}x\} - \exp\{\lambda_{2}x\}\,\lambda_{1}\exp\{\lambda_{1}x\} \neq 0,\tag{15.156}$$

or else

$$\lambda_{2}\exp\{\left(\lambda_{1}+\lambda_{2}\right)x\} - \lambda_{1}\exp\{\left(\lambda_{1}+\lambda_{2}\right)x\} \neq 0\tag{15.157}$$

after lumping the exponential functions; division of both sides by exp{(λ1 + λ2)x} permits final simplification to

$$\lambda_{2} \neq \lambda_{1}\tag{15.158}$$

– which conveys the condition of validity of Eq. (15.148) as general solution of Eq. (15.138). When λ2 = λ1 = λ, Eq. (15.158) is obviously not satisfied – so one should look for another function independent of exp{λx}; one may tentatively postulate

$$y_{2}\{x\} \equiv f\{x\}\,y_{1}\{x\} = f\{x\}\exp\{\lambda x\},\tag{15.159}$$

where f{x} denotes a function of x that remains to be found. Differentiation of both sides with regard to x transforms Eq. (15.159) to

$$\frac{dy_{2}}{dx} = \frac{df}{dx}\exp\{\lambda x\} + f\lambda\,\exp\{\lambda x\},\tag{15.160}$$

while the second derivative unfolds

$$\frac{d^{2}y_{2}}{dx^{2}} = \frac{d^{2}f}{dx^{2}}\exp\{\lambda x\} + \frac{df}{dx}\lambda\,\exp\{\lambda x\} + \frac{df}{dx}\lambda\,\exp\{\lambda x\} + f\lambda^{2}\exp\{\lambda x\}\tag{15.161}$$

– or, upon condensation of terms alike,

$$\frac{d^{2}y_{2}}{dx^{2}} = \frac{d^{2}f}{dx^{2}}\exp\{\lambda x\} + 2\lambda\frac{df}{dx}\exp\{\lambda x\} + \lambda^{2}f\,\exp\{\lambda x\}\tag{15.162}$$

Insertion of Eqs. (15.159), (15.160), and (15.162) converts Eq. (15.138) to

$$\frac{d^{2}f}{dx^{2}}\exp\{\lambda x\} + 2\lambda\frac{df}{dx}\exp\{\lambda x\} + \lambda^{2}f\,\exp\{\lambda x\} + a_{1}\left(\frac{df}{dx}\exp\{\lambda x\} + \lambda f\,\exp\{\lambda x\}\right) + a_{0}f\,\exp\{\lambda x\} = 0,\tag{15.163}$$

which breaks down to

$$\frac{d^{2}f}{dx^{2}} + 2\lambda\frac{df}{dx} + \lambda^{2}f + a_{1}\frac{df}{dx} + a_{1}\lambda f + a_{0}f = 0\tag{15.164}$$

following division of both sides by exp{λx}; after factoring f and df/dx out, Eq. (15.164) becomes

$$\frac{d^{2}f}{dx^{2}} + \left(2\lambda + a_{1}\right)\frac{df}{dx} + \left(\lambda^{2} + a_{1}\lambda + a_{0}\right)f = 0\tag{15.165}$$


According to Eq. (15.146), the coefficient of the last term in Eq. (15.165) is nil – so it suffices to consider

$$\frac{d^{2}f}{dx^{2}} + \left(2\lambda + a_{1}\right)\frac{df}{dx} = 0;\tag{15.166}$$

on the other hand, the sum of the roots of a quadratic equation, i.e. Eq. (15.146), equals the negative of the ratio of the coefficient of its linear term, i.e. a1, to the coefficient of its highest-order term, i.e. 1 – in agreement with Eqs. (2.183) and (2.186). This means that 2λ = λ + λ = −a1/1 = −a1, since λ1 = λ2 = λ by hypothesis – thus supporting further simplification of Eq. (15.166) to

$$\frac{d^{2}f}{dx^{2}} + \left(-a_{1} + a_{1}\right)\frac{df}{dx} = \frac{d^{2}f}{dx^{2}} = 0;\tag{15.167}$$

Eq. (15.167) may to advantage appear as

$$\frac{d}{dx}\left(\frac{df}{dx}\right) = 0\tag{15.168}$$

Integration of Eq. (15.168) may now proceed via separation of variables, according to

$$\int d\left(\frac{df}{dx}\right) = \int 0\,dx,\tag{15.169}$$

or else

$$\frac{df}{dx} = A_{1}\tag{15.170}$$

with A1 denoting an arbitrary constant; by the same token, Eq. (15.170) integrates to

$$\int df = \int A_{1}\,dx\tag{15.171}$$

that is equivalent to

$$f = A_{1}x + A_{2}\tag{15.172}$$

– where A2 denotes another arbitrary constant. In view of Eq. (15.172), one may redo Eq. (15.159) as

$$y_{2}\{x\} = \left(A_{1}x + A_{2}\right)\exp\{\lambda x\},\tag{15.173}$$

and its derivative will consequently look like

$$\frac{dy_{2}}{dx} = A_{1}\exp\{\lambda x\} + \left(A_{1}x + A_{2}\right)\lambda\,\exp\{\lambda x\}\tag{15.174}$$

– or else

$$\frac{dy_{2}}{dx} = \left(A_{1} + \lambda\left(A_{1}x + A_{2}\right)\right)\exp\{\lambda x\},\tag{15.175}$$

after having factored exp{λx} out; hence, the Wronskian determinant associated to Eq. (15.149) becomes

$$W = \begin{vmatrix} \exp\{\lambda x\} & \left(A_{2} + A_{1}x\right)\exp\{\lambda x\} \\ \lambda\,\exp\{\lambda x\} & \left(A_{1} + \lambda A_{2} + \lambda A_{1}x\right)\exp\{\lambda x\} \end{vmatrix},\tag{15.176}$$


consistent with Eq. (10.468) – built at the expense of Eq. (15.150) with λ1 = λ, coupled to its derivative

$$\frac{dy_{1}}{dx} = \lambda\,\exp\{\lambda x\},\tag{15.177}$$

as well as Eqs. (15.173) and (15.175) pertaining to y2. After factoring exp{λx} out of both columns in the above determinant, Eq. (15.176) becomes

$$W = \exp\{\lambda x\}\exp\{\lambda x\}\begin{vmatrix} 1 & A_{2} + A_{1}x \\ \lambda & A_{1} + \lambda A_{2} + \lambda A_{1}x \end{vmatrix}\tag{15.178}$$

due to Eq. (6.72), which simplifies to

$$W = \exp\{2\lambda x\}\left(A_{1} + \lambda A_{2} + \lambda A_{1}x - \lambda\left(A_{2} + A_{1}x\right)\right)\tag{15.179}$$

after lumping the exponential functions and recalling the definition of a second-order determinant; elimination of the inner parenthesis in Eq. (15.179) gives rise to

$$W = \exp\{2\lambda x\}\left(A_{1} + \lambda A_{2} + \lambda A_{1}x - \lambda A_{2} - \lambda A_{1}x\right),\tag{15.180}$$

where cancellation of symmetrical terms permits dramatic simplification to

$$W = A_{1}\exp\{2\lambda x\} \neq 0\tag{15.181}$$

as long as A1 ≠ 0 (knowing that no constraint has so far been imposed upon A1). This confirms that y2{x} as per Eq. (15.159) is indeed a function independent of y1{x}, as given by Eq. (15.150) – on the basis of the condition labeled as Eq. (10.469). Once in possession of Eqs. (15.150) and (15.173), one may redo Eq. (15.149) to

$$y = \kappa_{1}\exp\{\lambda x\} + \kappa_{2}\left(A_{2} + A_{1}x\right)\exp\{\lambda x\}\tag{15.182}$$

with κ1 and κ2 denoting arbitrary constants as before – which thus conveys the general solution when λ1 = λ2 = λ; upon factoring exp{λx} out and lumping constants alike, one eventually finds

$$y = \left(B_{1} + B_{2}x\right)\exp\{\lambda x\},\tag{15.183}$$

with

$$B_{1} \equiv \kappa_{1} + A_{2}\kappa_{2}\tag{15.184}$$

and

$$B_{2} \equiv A_{1}\kappa_{2}\tag{15.185}$$

holding as definition of the two expected (arbitrary) constants. Another situation of practical interest pertains to

$$\frac{d^{2}y}{dx^{2}} + P\{x\}\frac{dy}{dx} = S\{x\},\tag{15.186}$$

i.e. a special case of Eq. (15.107) arising when Q{x} is nil; after recalling Eq. (15.49) – which implies

$$\frac{dp}{dx} = \frac{d^{2}y}{dx^{2}}\tag{15.187}$$


in parallel to Eq. (15.50), one can redo Eq. (15.186) as

$$\frac{dp}{dx} + P\{x\}\,p = S\{x\}\tag{15.188}$$

– i.e. a linear first-order differential equation of the type of Eq. (15.22). Hence, one may retrieve Eq. (15.34) as

$$p = \frac{\kappa_{1} + \displaystyle\int \exp\left\{\int P\{x\}\,dx\right\}S\{x\}\,dx}{\exp\left\{\displaystyle\int P\{x\}\,dx\right\}}\tag{15.189}$$

after replacing y by p, as well as Q{x} by S{x}; insertion of Eq. (15.49) transforms Eq. (15.189) to

$$\frac{dy}{dx} = \frac{\kappa_{1} + \displaystyle\int \exp\left\{\int P\{x\}\,dx\right\}S\{x\}\,dx}{\exp\left\{\displaystyle\int P\{x\}\,dx\right\}},\tag{15.190}$$

which integrates to

$$\int dy = \int \frac{\kappa_{1} + \displaystyle\int \exp\left\{\int P\{x\}\,dx\right\}S\{x\}\,dx}{\exp\left\{\displaystyle\int P\{x\}\,dx\right\}}\,dx\tag{15.191}$$

via separation of variables – or else

$$y = \kappa_{2} + \int \frac{\kappa_{1} + \displaystyle\int \exp\left\{\int P\{x\}\,dx\right\}S\{x\}\,dx}{\exp\left\{\displaystyle\int P\{x\}\,dx\right\}}\,dx,\tag{15.192}$$

where κ2 denotes a second arbitrary constant that complements κ1 produced above.

15.1.2.2.1 Frobenius' Method of Solution

where κ2 denotes a second arbitrary constant that complements κ 1 produced above. 15.1.2.2.1 Frobenius’ Method of Solution

A second-order linear ordinary differential equation often takes the forms T x

d2 y dy +Q x y=S x , +P x dx dx2

15 193

where P{x}, Q{x}, S{x}, and T{x} are polynomials of (finite) orders N, M, U, and V, respectively (or are functions well described thereby within the range of interest); Eq. (15.193) is more general than Eq. (15.107) owing to the x-dependent coefficient of d2y/dx2, but also less general because all said function coefficients take necessarily a polynomial form. The resulting equation may be coined in a more informational version as M d2 y V dy N i i t x + p x + y qi x i = i i dx i = 0 dx2 i = 0 i=0

U

si x i ,

15 194

i=0

where the polynomials have been made explicit via the ti s, pi s, qi s, and si s as coefficients.
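Equation (15.194) suggests storing each polynomial as its ascending coefficient list; the helper below (its name and the sample problem are entirely ours) evaluates the left-hand-side residual of Eq. (15.193) at a point, given y and its derivatives there – zero whenever y solves the equation.

```python
# Residual of T{x}y'' + P{x}y' + Q{x}y - S{x}, with the polynomials of
# Eq. (15.194) stored as ascending coefficient lists t, p, q, s.
import numpy as np

def ode_residual(x, y, dy, d2y, t, p, q, s):
    val = np.polynomial.polynomial.polyval   # evaluates ascending-order coeffs
    return val(x, t) * d2y + val(x, p) * dy + val(x, q) * y - val(x, s)

# y = sin x solves y'' + y = 0, i.e. t = [1], p = [0], q = [1], s = [0]:
x0 = 0.7
r = ode_residual(x0, np.sin(x0), np.cos(x0), -np.sin(x0),
                 [1.0], [0.0], [1.0], [0.0])
print(r)   # ~0.0
```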


The regular strategy here is to find two independent solutions of the homogeneous equation associated with Eq. (15.193), i.e.

$$T\{x\}\frac{d^{2}y}{dx^{2}} + P\{x\}\frac{dy}{dx} + Q\{x\}\,y = 0,\tag{15.195}$$

say y1{x} and y2{x}, and then find a particular integral, say Y{x}, as per Eq. (15.110) – so that a general solution of Eq. (15.193) will be formulated similarly to Eq. (15.108), or Eq. (15.135), for that matter. A tentative solution may accordingly be postulated as an infinite series, viz.

$$y = \sum_{i=0}^{\infty} a_{i}x^{m+i},\tag{15.196}$$

with a0, a1, a2, …, and m denoting (real) constants still to be determined. This method, due to Frobenius, is applicable if solutions are sought in the neighborhood of x = 0 – and both P{x} and Q{x} are finite at x = 0, or at least xP{x} and x²Q{x} remain finite around x = 0; if the reference point is a, then the independent variable should be redone to x − a. The first-order derivative of y with regard to x as per Eq. (15.196) looks like

$$\frac{dy}{dx} = \sum_{i=0}^{\infty}\left(m+i\right)a_{i}x^{m+i-1}\tag{15.197}$$

d2 y = dx2

m + i m + i −1 ai x m + i−2

15 198

i=0

from Eq. (15.197); insertion of Eqs. (15.196)–(15.198) transforms Eq. (15.195) to V

tj x j j=0



N

m + i m + i−1 ai x m + i−2 +

i=0 M

+

qj x

j

j=0

pj x j j=0



ai x

m+i



m + i ai x m + i −1

i=0

,

15 199

=0

i=0

where the algorithm of multiplication of polynomials conveyed by Eq. (2.139) supports ∞

V

ai tj m + i m + i −1 x m + i + j −2 +

j=0 i=0 M



N

ai pj m + i x m + i + j− 1

j=0 i=0 ∞

+

ai qj x

m+i+j

15 200

=0

j=0 i=0

After making the first terms explicit in the first and second outer summations of Eq. (15.200), one gets

$$\sum_{i=0}^{\infty} a_{i}t_{0}\left(m+i\right)\left(m+i-1\right)x^{m+i-2} + \sum_{i=0}^{\infty} a_{i}t_{1}\left(m+i\right)\left(m+i-1\right)x^{m+i-1} + \sum_{j=2}^{V}\sum_{i=0}^{\infty} a_{i}t_{j}\left(m+i\right)\left(m+i-1\right)x^{m+i+j-2} + \sum_{i=0}^{\infty} a_{i}p_{0}\left(m+i\right)x^{m+i-1} + \sum_{j=1}^{N}\sum_{i=0}^{\infty} a_{i}p_{j}\left(m+i\right)x^{m+i+j-1} + \sum_{j=0}^{M}\sum_{i=0}^{\infty} a_{i}q_{j}x^{m+i+j} = 0\tag{15.201}$$


– where the counting variable j in the first double summation may be reduced by two units, and the counting variable j in the second double summation may likewise be reduced by one unit, to yield

$$\sum_{i=0}^{\infty} a_{i}t_{0}\left(m+i\right)\left(m+i-1\right)x^{m+i-2} + \sum_{i=0}^{\infty} a_{i}t_{1}\left(m+i\right)\left(m+i-1\right)x^{m+i-1} + \sum_{i=0}^{\infty} a_{i}p_{0}\left(m+i\right)x^{m+i-1} + \sum_{j=0}^{V-2}\sum_{i=0}^{\infty} a_{i}t_{j+2}\left(m+i\right)\left(m+i-1\right)x^{m+i+j} + \sum_{j=0}^{N-1}\sum_{i=0}^{\infty} a_{i}p_{j+1}\left(m+i\right)x^{m+i+j} + \sum_{j=0}^{M}\sum_{i=0}^{\infty} a_{i}q_{j}x^{m+i+j} = 0\tag{15.202}$$

The second and third summations may then be collapsed, with concomitant factoring out of a_i(m + i)x^{m+i−1}, while the order in the double summations can be reversed, as

$$\sum_{i=0}^{\infty} a_{i}t_{0}\left(m+i\right)\left(m+i-1\right)x^{m+i-2} + \sum_{i=0}^{\infty} a_{i}\left(m+i\right)\left(t_{1}\left(m+i-1\right) + p_{0}\right)x^{m+i-1} + \sum_{i=0}^{\infty}\sum_{j=0}^{V-2} a_{i}t_{j+2}\left(m+i\right)\left(m+i-1\right)x^{m+i+j} + \sum_{i=0}^{\infty}\sum_{j=0}^{N-1} a_{i}p_{j+1}\left(m+i\right)x^{m+i+j} + \sum_{i=0}^{\infty}\sum_{j=0}^{M} a_{i}q_{j}x^{m+i+j} = 0;\tag{15.203}$$

+ i=0 j=0

the outstanding composite summations may, in turn, be lumped and a_i factored out, to get

$$\sum_{i=0}^{\infty} a_{i}t_{0}\left(m+i\right)\left(m+i-1\right)x^{m+i-2} + \sum_{i=0}^{\infty} a_{i}\left(m+i\right)\left(t_{1}\left(m+i-1\right) + p_{0}\right)x^{m+i-1} + \sum_{i=0}^{\infty} a_{i}\left(\sum_{j=0}^{V-2} t_{j+2}\left(m+i\right)\left(m+i-1\right)x^{m+i+j} + \sum_{j=0}^{N-1} p_{j+1}\left(m+i\right)x^{m+i+j} + \sum_{j=0}^{M} q_{j}x^{m+i+j}\right) = 0\tag{15.204}$$

qj x m + i + j j=0

One may now single out the first and second terms in the first summation, as well as the first term in the second summation of Eq. (15.204), to obtain

$$a_{0}t_{0}m\left(m-1\right)x^{m-2} + a_{1}t_{0}m\left(m+1\right)x^{m-1} + \sum_{i=2}^{\infty} a_{i}t_{0}\left(m+i\right)\left(m+i-1\right)x^{m+i-2} + a_{0}m\left(t_{1}\left(m-1\right) + p_{0}\right)x^{m-1} + \sum_{i=1}^{\infty} a_{i}\left(m+i\right)\left(t_{1}\left(m+i-1\right) + p_{0}\right)x^{m+i-1} + \sum_{i=0}^{\infty} a_{i}\left(\sum_{j=0}^{V-2} t_{j+2}\left(m+i\right)\left(m+i-1\right)x^{m+i+j} + \sum_{j=0}^{N-1} p_{j+1}\left(m+i\right)x^{m+i+j} + \sum_{j=0}^{M} q_{j}x^{m+i+j}\right) = 0;\tag{15.205}$$

15 205


the counting variable of the first summation may, for convenience, be decreased by two units, and that of the second summation likewise decreased by one unit, to get

$$a_{0}t_{0}m\left(m-1\right)x^{m-2} + m\left(a_{1}t_{0}\left(m+1\right) + a_{0}\left(t_{1}\left(m-1\right) + p_{0}\right)\right)x^{m-1} + \sum_{i=0}^{\infty} a_{i+2}t_{0}\left(m+i+1\right)\left(m+i+2\right)x^{m+i} + \sum_{i=0}^{\infty} a_{i+1}\left(m+i+1\right)\left(t_{1}\left(m+i\right) + p_{0}\right)x^{m+i} + \sum_{i=0}^{\infty} a_{i}\left(\sum_{j=0}^{V-2} t_{j+2}\left(m+i\right)\left(m+i-1\right)x^{m+i+j} + \sum_{j=0}^{N-1} p_{j+1}\left(m+i\right)x^{m+i+j} + \sum_{j=0}^{M} q_{j}x^{m+i+j}\right) = 0\tag{15.206}$$

– where terms in x^{m−1} were meanwhile collapsed, and m subsequently factored out. To facilitate algebraic handling hereafter, one may label the maximum of N, M, and V as W – thus implying that p_{N+1} = p_{N+2} = ⋯ = p_{W} = 0, q_{M+1} = q_{M+2} = ⋯ = q_{W} = 0, and t_{V+1} = t_{V+2} = ⋯ = t_{W} = 0 in Eq. (15.206) – and eliminate the inner parentheses in the second term. Under such circumstances, Eq. (15.206) becomes

$$a_{0}t_{0}m\left(m-1\right)x^{m-2} + m\left(ma_{1}t_{0} + a_{1}t_{0} + ma_{0}t_{1} - a_{0}t_{1} + a_{0}p_{0}\right)x^{m-1} + \sum_{i=0}^{\infty} a_{i+2}t_{0}\left(m+i+1\right)\left(m+i+2\right)x^{m+i} + \sum_{i=0}^{\infty} a_{i+1}\left(m+i+1\right)\left(t_{1}\left(m+i\right) + p_{0}\right)x^{m+i} + \sum_{i=0}^{\infty} a_{i}\sum_{j=0}^{W}\left(q_{j} + p_{j+1}\left(m+i\right) + t_{j+2}\left(m+i\right)\left(m+i-1\right)\right)x^{m+i+j} = 0;\tag{15.207}$$

similar terms may then be condensed, and summations regrouped, to obtain

$$a_{0}t_{0}m\left(m-1\right)x^{m-2} + m\left(\left(a_{0}t_{1} + a_{1}t_{0}\right)m + a_{1}t_{0} + a_{0}\left(p_{0} - t_{1}\right)\right)x^{m-1} + \sum_{i=0}^{\infty}\left(a_{i+2}t_{0}\left(m+i+1\right)\left(m+i+2\right)x^{m+i} + a_{i+1}\left(m+i+1\right)\left(p_{0} + t_{1}\left(m+i\right)\right)x^{m+i} + a_{i}\sum_{j=0}^{W}\left(q_{j} + p_{j+1}\left(m+i\right) + t_{j+2}\left(m+i\right)\left(m+i-1\right)\right)x^{m+i+j}\right) = 0\tag{15.208}$$


– where m, a0, and xm+i+j were meanwhile factored out (as appropriate). A final algebraic step, namely, spelling out of the terms corresponding to j = 0 in the inner summation, is useful for it produces a0 t0 m m − 1 x m −2 + m a0 t1 + a1 t0 m + a1 t0 + a0 p0 − t1 x m −1 ai + 2 t 0 m + i + 1 m + i + 2 x m + i ∞

+ ai + 1 m + i + 1 p0 + t1 m + i x m + i + ai q0 + p1 m + i + t2 m + i m + i −1 x m + i

+ i=0

=0

,

W

qj + pj + 1 m + i + tj + 2 m + i m + i− 1 x m + i + j

+ ai j=1

15 209 where terms in xm+i may be collapsed to finally attain a0 t0 m m − 1 x m −2 + m a0 t1 + a1 t0 m + a1 t0 + a0 p0 − t1 x m −1 a i + 2 t0 m + i + 1 m + i + 2 + a i + 1 m + i + 1 p 0 + t1 m + i



+

xm + i

+ ai q0 + p1 m + i + t2 m + i m + i−1 i=0

=0

W

qj + pj + 1 m + i + tj + 2 m + i m + i− 1 x m + i + j

+ ai j=1

If Eq. (15.196) is to be a solution of Eq. (15.195) applicable to every x, then the coefficients of all powers of x in Eq. (15.210) must separately vanish – and, in particular,

$$a_{0}t_{0}m\left(m-1\right) = 0\tag{15.211}$$

corresponding to x^{m−2}, complemented by

$$m\left(\left(a_{0}t_{1} + a_{1}t_{0}\right)m + a_{1}t_{0} + a_{0}\left(p_{0} - t_{1}\right)\right) = 0\tag{15.212}$$

associated with x^{m−1}. Two distinct values for m are typically sought, since they would give rise (in principle) to as many independent solutions, y1{x} and y2{x}, using Eq. (15.196) as template. If t0 ≠ 0, then Eq. (15.211) enforces

$$m = 0 \;\vee\; m = 1 \qquad \left(t_{0} \neq 0\right),\tag{15.213}$$

assuming that a0 – unconstrained so far, but to be eventually set by boundary condition(s) – is not nil; while Eq. (15.212) supports

$$m = 0 \qquad \left(t_{0} \neq 0\right)\tag{15.214}$$

since any form m ≡ m{a0, a1} is to be excluded for its dependence on still unknown constants – so two possible values (in total) are possible for m, i.e. m = 0 and m = 1, as originally sought.
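The indicial roots just derived depend only on t0, t1, and p0; a small helper (its name and error handling are ours) encoding Eqs. (15.213) and (15.216):

```python
# Indicial roots of Frobenius' method: m = 0 and m = 1 when t0 != 0, per
# Eq. (15.213); m = 0 and m = 1 - p0/t1 when t0 = 0 with p0, t1 != 0, per
# Eq. (15.216).  Degenerate cases (e.g. Bessel's equation, where p0 = t1 = 0)
# require moving on to higher-order coefficients, as the text does later.
def indicial_roots(t0, t1, p0):
    if t0 != 0:
        return 0.0, 1.0
    if t1 != 0 and p0 != 0:
        return 0.0, 1.0 - p0 / t1
    raise ValueError("indicial equation degenerate; use higher-order terms")

print(indicial_roots(2.0, 5.0, 3.0))   # (0.0, 1.0)
print(indicial_roots(0.0, 2.0, 1.0))   # (0.0, 0.5)
```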


If t0 = 0, and both p0 and t1 are non-nil, then Eq. (15.211) proves not informative, yet Eq. (15.212) conveys

$$m = 0 \;\vee\; m = \frac{a_{0}t_{1} - a_{0}p_{0}}{a_{0}t_{1}} \qquad \left(t_{0} = 0,\; p_{0} \neq 0,\; t_{1} \neq 0\right)\tag{15.215}$$

– or else

$$m = 0 \;\vee\; m = 1 - \frac{p_{0}}{t_{1}} \qquad \left(t_{0} = 0,\; p_{0} \neq 0,\; t_{1} \neq 0\right)\tag{15.216}$$

after dropping a0 from both numerator and denominator, and then splitting the outcome; once again, two values in total are at stake if p0 ≠ t1, but a single one (i.e. m = 0) if p0 = t1 – while no conclusion can be drawn on a second value for m (further to m = 0) when p0 = t1 = 0. Equations (15.213) and (15.216) are termed indicial equations – a standard feature of Frobenius' method, in that their solution conveys the two possible values of index m associated with the two (putatively) independent solutions of Eq. (15.195). If t0 ≠ 0 and m = 1, in agreement with Eq. (15.211), then Eq. (15.212) implies

$$a_{0}t_{1} + a_{1}t_{0} + a_{1}t_{0} + a_{0}p_{0} - a_{0}t_{1} = 0\tag{15.217}$$

that readily simplifies to

$$a_{1}\{a_{0}\} = -\frac{p_{0}}{2t_{0}}a_{0} \qquad \left(t_{0} \neq 0,\; m = 1\right)\tag{15.218}$$

upon isolation of a1; Eq. (15.212) becomes useless if m = 0, though – so a1 would already appear as free. Furthermore, the constraint of a nil coefficient for the term in x^m (corresponding to i = 0, since i + j = 0 is not possible as 1 ≤ j ≤ W) yields

$$a_{2}t_{0}\left(m+1\right)\left(m+2\right) + a_{1}\left(m+1\right)\left(p_{0} + t_{1}m\right) + a_{0}\left(q_{0} + p_{1}m + t_{2}m\left(m-1\right)\right) = 0,\tag{15.219}$$

stemming from the first term in parenthesis of Eq. (15.210) – which, for t0 ≠ 0, eventually leads to

$$a_{2} = -\frac{p_{0} + t_{1}m}{t_{0}\left(m+2\right)}a_{1} - \frac{q_{0} + p_{1}m + t_{2}m\left(m-1\right)}{t_{0}\left(m+1\right)\left(m+2\right)}a_{0}\tag{15.220}$$

upon isolation of a2. If m = 0, then Eq. (15.220) simplifies to

$$a_{2}\{a_{0},a_{1}\} = -\frac{p_{0}}{2t_{0}}a_{1} - \frac{q_{0}}{2t_{0}}a_{0} \qquad \left(t_{0} \neq 0,\; m = 0\right),\tag{15.221}$$

or instead to

$$a_{2}\{a_{0},a_{1}\} = -\frac{p_{0} + t_{1}}{3t_{0}}a_{1} - \frac{p_{1} + q_{0}}{6t_{0}}a_{0} \qquad \left(t_{0} \neq 0,\; m = 1\right)\tag{15.222}$$

after setting m = 1. Therefore, two recursive relations, of step 2, arise when t0 0 – i.e. a2 ≡ a2{a0, a1} as per either Eq. (15.221) or Eq. (15.222), depending on whether m = 0 or m = 1, respectively; however, Eq. (15.218), also applicable for m = 1, conveys an extra relationship between a1 and a0, i.e. a1 ≡ a1{a0} – which, combined with Eq. (15.222), indicates


that a2 ≡ a2{a0}. In other words, the solution generated with m = 1 possesses one independent constant (i.e. a0), but is contained in the solution generated with m = 0, which possesses two independent constants (i.e. a0 and a1); this normally arises when the two values for m differ by an integer (as is the current case). The remaining coefficients (a3, a4, …) will become available after solving, for x^{m+1}, x^{m+2}, …, x^{m+k}, the equality based on a nil value for the summation in Eq. (15.210) – which requires combination of values of i and j such that i = k or i + j = k; due to the functional form at stake, a_l will eventually appear as

$$a_{l} = \alpha_{l}a_{0} + \beta_{l}a_{1}\tag{15.223}$$

with αl and βl denoting constants, so the general solution to Eq. (15.195) will actually read

$$y\{x\} = A_{1}y_{1}\{x\} + A_{2}y_{2}\{x\} = a_{0}\sum_{l=0}^{\infty}\alpha_{l}\{p,q,t\}x^{1+l} + a_{1}\sum_{l=0}^{\infty}\beta_{l}\{p,q,t\}x^{1+l} = \sum_{l=0}^{\infty}\left(a_{0}\alpha_{l} + a_{1}\beta_{l}\right)x^{1+l}\tag{15.224}$$

l=0

Therefore, the independence sought for y1{x} and y2{x} fails to apply in this situation, due to pooling of powers of x – meaning that some alternate strategy will be necessary to find another independent solution (as will be explored in due course). Conversely, Eq. (15.219) degenerates to a 1 m + 1 p0 + t1 m + a 0 q 0 + p 1 m + t2 m m − 1 = 0

15 225

when t0 = 0, where a1 may be isolated as a1 = −

q0 + p1 m + t2 m m − 1 a0 ; m + 1 p0 + t1 m

15 226

if m = 0, then Eq. (15.226) becomes a1 a0 = −

q0 a0 p0

,

15 227

t0 = 0 m=0

while m = 1 − p0/t1 as per Eq. (15.216) gives rise to q0 + p1 1 − a1 = −

p0 p0 + t2 1 − t1 t1

p0 1− + 1 t1

p 0 + t1

1−

p0 −1 t1

p0 1− t1

a0

15 228

that simplifies to q0 + 1 − a1 a0 = −

p0 p0 p 1 − t2 t1 t1 a0 2t1 −p0

15 229 t0 = 0 m = 1−

p0 t1


In this case, the two possible values for m do not differ by an integer – as long as p0/t1 is not an integer itself; the general solution of Eq. (15.195) then reads

$$y\{x\} = A_{1}y_{1}\{x\} + A_{2}y_{2}\{x\} = a_{0,1}\sum_{l=0}^{\infty}\alpha_{l}\{p,q,t\}x^{l} + a_{0,2}\sum_{l=0}^{\infty}\beta_{l}\{p,q,t\}x^{1-\frac{p_{0}}{t_{1}}+l}\tag{15.230}$$

en lieu of Eq. (15.224) – where a0,1 and a0,2 denote the values of arbitrary constant a0 when m = 0 and m = 1 − p0/t1, respectively. A noninteger value for p0/t1 precludes the possibility of lumping terms of the first summation with those of the second summation, because they never share the same power of x; hence, two independent solutions, y1{x} and y2{x}, will indeed be generated. To illustrate the calculation of higher-order coefficients, one will retrieve the foregoing conditions to ascertain a3; toward this goal, one should write

$$a_{3}t_{0}\left(m+2\right)\left(m+3\right) + a_{2}\left(m+2\right)\left(p_{0} + t_{1}\left(m+1\right)\right) + a_{1}\left(q_{0} + p_{1}\left(m+1\right) + t_{2}m\left(m+1\right)\right) + a_{0}\left(q_{1} + p_{2}m + t_{3}m\left(m-1\right)\right) = 0\tag{15.231}$$

stemming from Eq. (15.210) – corresponding to the coefficient of x^{m+1}, obtained when i = 1 or when i = 0 and j = 1. Should t0 ≠ 0, then a3 may be obtained from Eq. (15.231) as

$$a_{3} = -\frac{p_{0} + t_{1}\left(m+1\right)}{t_{0}\left(m+3\right)}a_{2} - \frac{q_{0} + p_{1}\left(m+1\right) + t_{2}m\left(m+1\right)}{t_{0}\left(m+2\right)\left(m+3\right)}a_{1} - \frac{q_{1} + p_{2}m + t_{3}m\left(m-1\right)}{t_{0}\left(m+2\right)\left(m+3\right)}a_{0}\tag{15.232}$$

15 232 Two possibilities now exist for m, in view of Eq. (15.213); if m = 0, then Eq. (15.232) degenerates to a3 = −

p 0 + t1 q0 + p1 q1 a2 − a1 − a0 3t0 6t0 6t0

,

15 233

t0 0 m=0

where insertion of Eq. (15.221) yields a3 =

p 0 + t1 p 0 q0 q 0 + p1 q1 a1 + a0 − a1 − a0 ; 3t0 2t0 2t0 6t0 6t0

15 234

upon elimination of parentheses and condensation of terms alike, Eq. (15.234) becomes a3 a0 , a1 =

1 p 0 p 0 + t1 1 q0 p0 + t1 −p1 −q0 a1 + − q1 a0 6t0 6t0 t0 t0

, t0 0 m=0

15 235 thus confirming dependence of a3 on both a0 and a1 as found previously for a2, with α3 ≡ α3{p0, q0, q1, t0, t1} and β3 ≡ β3{p0, p1, q0, t0, t1} as per Eq. (15.223). By the same token, the solution associated with m = 1 becomes a3 = −

p0 + 2t1 q0 + 2 p1 + t2 q 1 + p2 a2 − a1 − a0 4t0 12t0 12t0

15 236


based on Eq. (15.232), where insertion of Eq. (15.222) unfolds a3 =

p0 + 2t1 p0 + t1 p1 + q0 q 0 + 2 p1 + t2 q1 + p2 a1 + a0 − a1 − a0 ; 4t0 3t0 6t0 12t0 12t0

15 237

after combining terms in a0 and separately combining terms in a1, one gets a3 a0 ,a1 = 1 + 12t0

1 12t0

p0 + t1 p0 + 2t1 − q0 − 2 p1 + t2 t0

p1 + q0 p0 + 2t1 −p2 − q1 a0 2t0

a1 ,

15 238

t0 0 m=1

so a3 ≡ a3{a0, a1}, besides α3 ≡ α3{p0, p1, p2, q0, q1, t0, t1} and β3 ≡ β3{p0, p1, q0, t0, t1, t2}, and following Eq. (15.223) as template. In the case of t0 = 0, Eq. (15.231) becomes a2 = −

q0 + p1 m + 1 + t2 m m + 1 q1 + p2 m + t3 m m − 1 a1 − a0 m + 2 p 0 + t1 m + 1 m + 2 p0 + t1 m + 1

15 239

following isolation of a2; if, in addition, m = 0, then Eq. (15.239) simplifies to q 0 + p1 q1 a1 − a0 15 240 a2 = − 2 p 0 + t1 2 p 0 + t1 – where combination with Eq. (15.227) unfolds q0 + p1 q0 q1 a0 − a0 , a2 = 2 p 0 + t1 p0 2 p 0 + t1

15 241

or else a2 a0 =

1 2 p0 + t1

q0 p1 + q0 − q1 a0 p0

15 242 t0 = 0 m=0

after factoring 1/2(p0 + t1) out. Note again the univariate dependence of a2 on a0, with α2 ≡ α2{p0, p1, q0, q1, t1} following the functionality of Eq. (15.223). The other possibility for m has been conveyed also by Eq. (15.216), i.e. m = 1 − p0/t1, and produces instead q0 + p1 1 − a2 = −

p0 1− + 2 t1 q1 + p2



p0 p0 + 1 + t2 1 − t1 t1 p 0 + t1

p0 p0 + t3 1 − 1− t1 t1

p0 1− + 2 t1

p0 + t1

1−

p0 +1 t1

p0 1− + 1 t1 p0 1 − −1 t1

p0 1− + 1 t1

a1

15 243 a0


from Eq. (15.239); after condensing terms alike, one gets q0 + p1 + t2 1 − a2 = −



p0 3− t1

p0 t1

p0 + t1

2− p0 2− t1

p0 q1 + p2 − t3 t1

p0 1− t1

p0 3− t1

p0 2− t1

p0 + t1

p0 t1

a1

15 244 a0

In view of Eq. (15.229), one should rewrite Eq. (15.244) as q0 + p1 + t2 1 − a2 = 3−



p0 t1

p0 t1

2−

p 0 + t1 2 −

q0 + 1 −

p0 t1

p0 q1 + p2 − t3 t1

p0 1− t1

p0 3− t1

p0 2− t1

p 0 + t1

p0 t1

p0 p0 p 1 − t2 t1 t1 a0 2t1 − p0 ,

15 245

a0

where factoring out as appropriate eventually leads to

q0 + p1 + t2 1−

a2 a0 =

p0 + t1

2−

p0 p0 p1 −t2 t1 t1 2t1 −p0

p0 t1

q0 + 1−

1 p0 3− t1

p0 t1

× p0 2− t1

− q1 + p2 −t3

p0 t1

1−

p0 t1

;

a0

t0 = 0 m = 1−

p0 t1

15 246 once again, a2 ≡ a2{a0}, whereas β2 ≡ β2{p0, p1, p2, q0, q1, t1, t2, t3} in agreement with Eq. (15.223). The above reasoning may then be repeated at will, thus generating enough a’s to assure convergence of the function under scrutiny – as per some preestablished criterion.


15.1.2.2.2 Bessel’s Equation

One excellent illustration of Frobenius' method developed previously encompasses the (series) solution of the linear second-order ordinary differential equation reading

$$x^{2}\frac{d^{2}y}{dx^{2}} + x\frac{dy}{dx} + \left(x^{2} - \nu^{2}\right)y = 0\tag{15.247}$$

– i.e. a particular case of Eq. (15.194), usually known as Bessel’s equation; in this case, t0 = t1 = t3 = t4 = = 0 and t2 = 1, p0 = p2 = p3 = = 0 and p1 = 1, q0 = − ν2, q1 = q3 = = 0 and q2 = 1, and s0 = s1 = s2 = = 0. Since t0 = 0, one cannot resort to q4 = Eq. (15.213), while the use of Eq. (15.216) is precluded due to p0 = t1 = 0; one should therefore move one step forward to Eq. (15.219) in attempts to calculate m – revisited as a0 q0 + p1 m + t2 m m −1

=0

15 248

that reflects p0 = t0 = t1 = 0. Further advantage may be gained from p1 = t2 = 1, as well as q0 = −ν², to rewrite Eq. (15.248) as

$$a_0\left[m(m-1) + m - \nu^2\right] = 0; \tag{15.249}$$

Eq. (15.249) degenerates to

$$a_0\left[m^2 - m + m - \nu^2\right] = 0 \tag{15.250}$$

upon removal of the inner parenthesis, or else

$$a_0\left(m^2 - \nu^2\right) = 0 \tag{15.251}$$

following cancellation of symmetrical terms. Consequently, Eq. (15.251) may (and will have to) serve as indicial equation, thus eventually supporting

$$m^2 - \nu^2 = 0, \tag{15.252}$$

which is in turn equivalent to

$$m = \pm\nu \tag{15.253}$$

– since a0 ≠ 0, by hypothesis; Eq. (15.231) – arising from the constraint of a nil coefficient for terms in x^{m+1}, will accordingly consubstantiate the first usable recurrence relationship between ai’s, i.e.

$$a_1\left[q_0 + p_1(m+1) + t_2\, m(m+1)\right] = 0, \tag{15.254}$$

due to p0 = p2 = q1 = t0 = t1 = t3 = 0, while p1 = t2 = 1 and q0 = −ν² enforce, in turn,

$$a_1\left[-\nu^2 + (m+1) + m(m+1)\right] = 0 \tag{15.255}$$

Upon factoring m + 1 out, Eq. (15.255) becomes

$$a_1\left[(m+1)^2 - \nu^2\right] = 0; \tag{15.256}$$

insertion of Eq. (15.253) then supports transformation to

$$a_1\left[(1\pm\nu)^2 - \nu^2\right] = 0 \tag{15.257}$$

Application of Newton’s binomial formula as per Eq. (2.237) converts Eq. (15.257) to

$$a_1\left[1 \pm 2\nu + \nu^2 - \nu^2\right] = 0, \tag{15.258}$$


which simplifies to

$$a_1\left(1 \pm 2\nu\right) = 0 \tag{15.259}$$

after ν² has cancelled out with its negative; except for the (unusual) case of ν = ±½, Eq. (15.259) implies

$$a_1 = 0 \tag{15.260}$$
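Before proceeding with the higher-order coefficients, Eq. (15.247) itself can be checked numerically; the sketch below (not from the book) assumes SciPy is available, whose `jv` and `jvp` implement the Bessel function of the first kind and its derivatives — the residual of Bessel’s equation evaluated on them should vanish to machine precision.

```python
# Sketch: numerically confirm that SciPy's J_nu satisfies Eq. (15.247),
#   x^2 y'' + x y' + (x^2 - nu^2) y = 0.
import numpy as np
from scipy.special import jv, jvp  # jvp(v, x, n) = n-th derivative of J_v

nu = 1.5
x = np.linspace(0.5, 20.0, 200)
residual = x**2 * jvp(nu, x, 2) + x * jvp(nu, x, 1) + (x**2 - nu**2) * jv(nu, x)
max_residual = float(np.max(np.abs(residual)))  # ~ machine precision
```

The same check works for any real order ν, including the ν = ±½ case excluded above.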

One should now set equal to zero the coefficient of the term of Eq. (15.210) in x^{m+2}, according to

$$a_4 t_0 (m+3)(m+4) + a_3 (m+3)\left[p_0 + t_1(m+2)\right] + a_2\left[q_0 + p_1(m+2) + t_2(m+2)(m+1)\right] + a_1\left[q_1 + p_2(m+1) + t_3(m+1)m\right] + a_0\left[q_2 + p_3 m + t_4\, m(m-1)\right] = 0 \tag{15.261}$$

– at the expense of the coefficient of x^{m+i} after setting i = 2 – as well as the coefficients of x^{m+i+j} when i = 0 and j = 2, or i = j = 1; since p0 = p2 = p3 = q1 = t0 = t1 = t3 = t4 = 0, Eq. (15.261) simplifies to

$$a_2\left[q_0 + p_1(m+2) + t_2(m+1)(m+2)\right] + a_0 q_2 = 0 \tag{15.262}$$

– whereas realization that q0 = −ν² and p1 = q2 = t2 = 1 allows further simplification to

$$a_2\left[-\nu^2 + (m+2) + (m+1)(m+2)\right] + a_0 = 0 \tag{15.263}$$

After factoring m + 2 out, Eq. (15.263) becomes

$$a_2\left[(m+2)(1+m+1) - \nu^2\right] + a_0 = 0 \tag{15.264}$$

or, equivalently,

$$a_2\left[(m+2)^2 - \nu^2\right] + a_0 = 0; \tag{15.265}$$

a2 may finally be isolated as

$$a_2 = -\frac{a_0}{(m+2)^2 - \nu^2} \tag{15.266}$$

One may perform further steps along these lines, namely, regarding the coefficient of x^{m+i} by setting i = 3, and the coefficients of x^{m+i+j} by setting (i,j) equal to (0,3), (1,2), or (2,1), according to

$$a_5 t_0 (m+4)(m+5) + a_4 (m+4)\left[p_0 + t_1(m+3)\right] + a_3\left[q_0 + p_1(m+3) + t_2(m+3)(m+2)\right] + a_2\left[q_1 + p_2(m+2) + t_3(m+2)(m+1)\right] + a_1\left[q_2 + p_3(m+1) + t_4(m+1)m\right] + a_0\left[q_3 + p_4 m + t_5\, m(m-1)\right] = 0 \tag{15.267}$$


– thus accounting for all terms in x^{m+3} included in Eq. (15.210); based on p0 = p2 = p3 = p4 = q1 = q3 = t0 = t1 = t3 = t4 = t5 = 0, Eq. (15.267) breaks down to

$$a_3\left[q_0 + p_1(m+3) + t_2(m+2)(m+3)\right] + a_1 q_2 = 0, \tag{15.268}$$

while further consideration of q0 = −ν² and p1 = q2 = t2 = 1 allows simplification to

$$a_3\left[-\nu^2 + (m+3) + (m+2)(m+3)\right] + a_1 = 0 \tag{15.269}$$

– or, upon factoring m + 3 out,

$$a_3\left[(m+3)(1+m+2) - \nu^2\right] + a_1 = 0 \tag{15.270}$$

Equation (15.270) is equivalent to

$$a_3\left[(m+3)^2 - \nu^2\right] + a_1 = 0, \tag{15.271}$$

where isolation of a3 yields

$$a_3 = -\frac{a_1}{(m+3)^2 - \nu^2}; \tag{15.272}$$

combination with Eq. (15.260) permits simplification of Eq. (15.272) to

$$a_3 = 0 \tag{15.273}$$

One may as well retrieve Eq. (15.210) reduced to the terms in x^{m+4}, i.e.

$$a_6 t_0 (m+5)(m+6) + a_5 (m+5)\left[p_0 + t_1(m+4)\right] + a_4\left[q_0 + p_1(m+4) + t_2(m+4)(m+3)\right] + a_3\left[q_1 + p_2(m+3) + t_3(m+3)(m+2)\right] + a_2\left[q_2 + p_3(m+2) + t_4(m+2)(m+1)\right] + a_1\left[q_3 + p_4(m+1) + t_5(m+1)m\right] + a_0\left[q_4 + p_5 m + t_6\, m(m-1)\right] = 0 \tag{15.274}$$

– where the first three terms are associated with x^{m+4} as directly obtained from x^{m+i}, and the remaining four terms are accounted for by x^{m+0+4}, x^{m+1+3}, x^{m+2+2}, and x^{m+3+1} as generated by x^{m+i+j}; recalling again that p0 = p2 = p3 = p4 = p5 = q1 = q3 = q4 = t0 = t1 = t3 = t4 = t5 = t6 = 0, one obtains

$$a_4\left[q_0 + p_1(m+4) + t_2(m+3)(m+4)\right] + a_2 q_2 = 0, \tag{15.275}$$

whereas q0 = −ν² and p1 = q2 = t2 = 1 allow further simplification to

$$a_4\left[-\nu^2 + (m+4) + (m+3)(m+4)\right] + a_2 = 0 \tag{15.276}$$

One may now factor m + 4 out in Eq. (15.276) as

$$a_4\left[(m+4)(1+m+3) - \nu^2\right] + a_2 = 0 \tag{15.277}$$

that coincides with

$$a_4\left[(m+4)^2 - \nu^2\right] + a_2 = 0; \tag{15.278}$$

solution for a4 unfolds

$$a_4 = -\frac{a_2}{(m+4)^2 - \nu^2}, \tag{15.279}$$

where combination with Eq. (15.266) leads to

$$a_4 = \frac{a_0}{\left[(m+2)^2 - \nu^2\right]\left[(m+4)^2 - \nu^2\right]} \tag{15.280}$$

Comparative inspection of Eqs. (15.260), (15.266), (15.273), and (15.280), as well as all expressions for the a’s thereafter, indicates that every ai is nil when i is odd, i.e.

$$a_{2i-1} = 0; \quad i = 1, 2, \ldots, \tag{15.281}$$

whereas even i produces

$$a_{2i} = \frac{(-1)^i a_0}{\displaystyle\prod_{j=1}^{i}\left[(2j+m)^2 - \nu^2\right]}; \quad i = 1, 2, \ldots \tag{15.282}$$

Remember that two possible values exist for m as per Eq. (15.253), so Eq. (15.282) may be transformed to

$$a_{2i} = \frac{(-1)^i a_0}{\displaystyle\prod_{j=1}^{i}\left[(2j\pm\nu)^2 - \nu^2\right]}; \tag{15.283}$$

Newton’s binomial formula may then be invoked to write

$$a_{2i} = \frac{(-1)^i a_0}{\displaystyle\prod_{j=1}^{i}\left[4j^2 \pm 4j\nu + \nu^2 - \nu^2\right]}, \tag{15.284}$$

where cancellation of symmetrical terms, coupled with factoring out of 4j, unfold

$$a_{2i} = \frac{(-1)^i a_0}{\displaystyle\prod_{j=1}^{i} 4j\,(j\pm\nu)} = \frac{(-1)^i a_0}{\displaystyle\prod_{j=1}^{i} 4\,\prod_{j=1}^{i} j\,(j\pm\nu)} \tag{15.285}$$

– or else

$$a_{2i} = \frac{(-1)^i a_0}{4^i \displaystyle\prod_{j=1}^{i} j\,(j\pm\nu)}; \quad i = 1, 2, \ldots, \tag{15.286}$$

once the definition of power is recalled. After retrieving Eq. (15.196), one concludes that the solution of Eq. (15.247) looks like

$$y = \sum_{i=0}^{\infty} a_{2i}\, x^{2i+m} = \sum_{i=0}^{\infty} \frac{(-1)^i a_0}{4^i \displaystyle\prod_{j=1}^{i} j\,(j\pm\nu)}\, x^{2i\pm\nu} = a_0 x^{\pm\nu}\left[1 + \sum_{i=1}^{\infty} \frac{(-1)^i x^{2i}}{4^i \displaystyle\prod_{j=1}^{i} j\,(j\pm\nu)}\right], \tag{15.287}$$

with the aid of Eqs. (15.253) and (15.286) – once a0 x^{±ν} has been factored out, and the first term of the summation made explicit; 4^i = (2²)^i = 2^{2i} in the denominator may now be lumped with x^{2i} in the numerator, and the extended product of a product split, in turn, to generate

$$y = a_0 x^{\pm\nu}\left[1 + \sum_{i=1}^{\infty} \frac{(-1)^i}{\displaystyle\prod_{j=1}^{i} j\,\prod_{j=1}^{i} (j\pm\nu)}\left(\frac{x}{2}\right)^{2i}\right] \tag{15.288}$$
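The series just obtained converges quickly and is easy to exercise numerically; the pure-Python sketch below (not from the book) sums Eq. (15.288) for ν = 0 with a0 = 1, which should reproduce J0{x} — the reference value J0{1} ≈ 0.7651976866 is standard.

```python
# Sketch: partial sum of Eq. (15.288) with nu = 0 and a0 = 1, i.e. J0{x}.
def series_y(x, nu=0.0, a0=1.0, terms=30):
    total = a0
    term = a0
    for i in range(1, terms):
        # each summand follows from the previous one via the factor
        # -(x/2)^2 / (i * (i + nu)), per the products in Eq. (15.288)
        term *= -(x / 2.0) ** 2 / (i * (i + nu))
        total += term
    return total * x ** nu

approx = series_y(1.0)   # should approximate J0{1} ~ 0.7651976866
```

The same routine with ν ≠ 0 yields J±ν{x} up to the normalization constant A0 of Eq. (15.292).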

The first extended product in the denominator of Eq. (15.288) represents the factorial of i; if ν were an integer n and only the positive sign preceding it were considered, then one would get

$$y = a_0 x^n\left[1 + \sum_{i=1}^{\infty} \frac{(-1)^i}{i!\,(n+1)(n+2)\cdots(n+i)}\left(\frac{x}{2}\right)^{2i}\right] = a_0 x^n\left[1 + \sum_{i=1}^{\infty} \frac{(-1)^i\; 1\cdot 2\cdots n}{i!\; 1\cdot 2\cdots n\,(n+1)(n+2)\cdots(n+i)}\left(\frac{x}{2}\right)^{2i}\right] = a_0 x^n\left[1 + \sum_{i=1}^{\infty} \frac{(-1)^i\, n!}{i!\,(n+i)!}\left(\frac{x}{2}\right)^{2i}\right] \tag{15.289}$$

– along with convenient multiplication of both numerator and denominator by 1, 2, …, n. In the general case of Eq. (15.288), one should resort to the continuous (and real) equivalent of n! and (i + n)!, as conveyed by Eq. (12.400), viz.

$$y = a_0 x^{\pm\nu}\left[1 + \sum_{i=1}^{\infty} \frac{(-1)^i\,\Gamma\{\nu+1\}}{i!\,\Gamma\{i\pm\nu+1\}}\left(\frac{x}{2}\right)^{2i}\right], \tag{15.290}$$

using Eq. (15.289) as template; upon algebraic rearrangement, complemented with Eq. (12.398), one obtains

$$y = a_0\, 2^{\pm\nu}\,\Gamma\{\nu+1\}\sum_{i=0}^{\infty} \frac{(-1)^i}{i!\,\Gamma\{i\pm\nu+1\}}\left(\frac{x}{2}\right)^{2i\pm\nu} = A_0\, J_{\pm\nu}\{x\} \tag{15.291}$$

– as long as (lumped) constant A0 is defined as

$$A_0 \equiv a_0\, 2^{\pm\nu}\,\Gamma\{\nu+1\}, \tag{15.292}$$

and at the expense of Bessel’s functions of the first kind, of order ν or −ν, given by

$$J_{\pm\nu}\{x\} \equiv \sum_{i=0}^{\infty} \frac{(-1)^i}{i!\,\Gamma\{i\pm\nu+1\}}\left(\frac{x}{2}\right)^{2i\pm\nu} \tag{15.293}$$

Therefore, the general solution of Eq. (15.247) may be coined as

$$y\{x\} = A_1 y_1\{x\} + A_2 y_2\{x\} = A_1 J_{-\nu}\{x\} + A_2 J_{\nu}\{x\} \tag{15.294}$$

– where A1 and A2 denote arbitrary constants that materialize A0 as per Eq. (15.292) for –ν and ν, respectively, serving as exponent of 2. The definition conveyed by Eq. (15.293) was originally proposed by Swiss mathematician Daniel Bernoulli (1700–1782), and later generalized by German astronomer, mathematician, and physicist Friedrich Bessel (1784–1846). For ν = 0, Bessel’s function laid out by Eq. (15.293) takes the form

$$J_0 \equiv J_{\pm\nu}\big|_{\nu=0} = \sum_{i=0}^{\infty} \frac{(-1)^i}{i!\,\Gamma\{i+1\}}\left(\frac{x}{2}\right)^{2i} = \sum_{i=0}^{\infty} \frac{(-1)^i}{i!\, i!}\left(\frac{x}{2}\right)^{2i} = \sum_{i=0}^{\infty} \frac{(-1)^i}{(i!)^2}\left(\frac{x}{2}\right)^{2i} \tag{15.295}$$

with the aid again of Eq. (12.400) – as illustrated in Fig. 15.3. Note the somewhat oscillatory behavior of J0, along with a gradually decreasing amplitude with larger x. Differentiation of J0 with regard to x reads

$$\frac{dJ_0}{dx} = \sum_{i=0}^{\infty} \frac{(-1)^i}{(i!)^2}\, 2i \left(\frac{x}{2}\right)^{2i-1}\frac{1}{2} = \sum_{i=1}^{\infty} \frac{(-1)^i}{i!\,(i-1)!}\left(\frac{x}{2}\right)^{2i-1} \equiv -J_1\{x\} \tag{15.296}$$

based on Eq. (15.295); J1 is also graphically sketched in Fig. 15.3 – a similar damped oscillatory behavior can again be observed, although coupled with a horizontal translation rightward, with the curve taking off at the origin. If ν is an integer n at large, then Eq. (15.293) will read

$$J_{\pm n}\{x\} = \sum_{i=0}^{\infty} \frac{(-1)^i}{i!\,(i\pm n)!}\left(\frac{x}{2}\right)^{2i\pm n}, \tag{15.297}$$

with the aid of Eq. (12.400); the first (negative) n terms of J−n are zero by virtue of the factorial function of negative integers (accessible only via the gamma function, as per its original definition) in the denominator being infinite, so Eq. (15.297) reduces to

$$J_{-n} = \sum_{i=n}^{\infty} \frac{(-1)^i}{i!\,(i-n)!}\left(\frac{x}{2}\right)^{2i-n} \tag{15.298}$$

in that case. On the other hand, setting j ≡ i − n turns Eq. (15.298) to

$$J_{-n} = \sum_{j=0}^{\infty} \frac{(-1)^{j+n}}{(j+n)!\, j!}\left(\frac{x}{2}\right)^{2(j+n)-n}, \tag{15.299}$$

Figure 15.3 Variation of Bessel’s functions of the first kind, of order zero (J0) or one (J1), and of the second kind, of order zero (Y0), with their argument, x.


which is equivalent to writing

$$J_{-n} = \sum_{j=0}^{\infty} \frac{(-1)^{j+n}}{j!\,(j+n)!}\left(\frac{x}{2}\right)^{2j+n} = (-1)^n J_n, \tag{15.300}$$

after (−1)^n is taken off the summation – and recalling Eq. (15.297); therefore, Jn and J−n are linearly dependent on each other, so either one might play the role of y1{x} in Eq. (15.294) – and a second (independent) solution, y2{x}, must somehow be found. One way to fulfill this goal resorts to

$$y_2\{x\} \equiv Y_0\{x\} \equiv \lim_{\nu\to 0} Y_\nu\{x\} \equiv \lim_{\nu\to 0} \frac{J_\nu\{x\}\cos\nu\pi - J_{-\nu}\{x\}}{\sin\nu\pi}; \tag{15.301}$$
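Both the linear dependence in Eq. (15.300) and the ν → 0 limit in Eq. (15.301) lend themselves to a quick numerical check; the sketch below (not from the book) assumes SciPy’s `jv` and `y0`, and approximates the limit by evaluating the quotient at a small, non-integer ν.

```python
# Sketch: check J_{-n} = (-1)^n J_n, Eq. (15.300), and the limit defining Y0,
# Eq. (15.301), against SciPy's implementations.
import numpy as np
from scipy.special import jv, y0

x = np.linspace(0.5, 10.0, 50)

# linear dependence of J_{-3} and J_3, per Eq. (15.300)
dep = float(np.max(np.abs(jv(-3, x) - (-1) ** 3 * jv(3, x))))

# small non-integer nu approximates the nu -> 0 limit of Eq. (15.301)
nu = 1e-6
y0_limit = (jv(nu, x) * np.cos(nu * np.pi) - jv(-nu, x)) / np.sin(nu * np.pi)
lim_err = float(np.max(np.abs(y0_limit - y0(x))))
```

For non-integer ν the quotient in Eq. (15.301) is exactly Yν, so the error above measures how close Yν already is to Y0 at ν = 10⁻⁶.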

Yν is known as Weber’s function (on behalf of H. M. Weber, who proposed it in 1873) or Neumann’s function (after Carl Neumann), or even Bessel’s function of the second kind – and is plotted in Fig. 15.3 as well. The near coincidence between Y0{x} and J1{x} is notable – except at small values of x, where a singularity of the former is apparent. Note that Yν{x} as per Eq. (15.301) is a linear combination of Jν{x} and J−ν{x} – with weights equal to cos νπ/sin νπ and −1/sin νπ, respectively; and both of them are, in turn, solutions of Bessel’s equation – so Yν{x} is expected to be a solution of Eq. (15.247) on its own. This claim may be corroborated by calculating its first derivative, viz.

$$\frac{dY_\nu}{dx} = \frac{d}{dx}\,\frac{J_\nu\cos\nu\pi - J_{-\nu}}{\sin\nu\pi} = \frac{\dfrac{dJ_\nu}{dx}\cos\nu\pi - \dfrac{dJ_{-\nu}}{dx}}{\sin\nu\pi}, \tag{15.302}$$

based on Eq. (15.301), as well as its second derivative as

$$\frac{d^2Y_\nu}{dx^2} = \frac{d}{dx}\,\frac{\dfrac{dJ_\nu}{dx}\cos\nu\pi - \dfrac{dJ_{-\nu}}{dx}}{\sin\nu\pi} = \frac{\dfrac{d^2J_\nu}{dx^2}\cos\nu\pi - \dfrac{d^2J_{-\nu}}{dx^2}}{\sin\nu\pi} \tag{15.303}$$

stemming from Eq. (15.302); insertion of Eqs. (15.301)–(15.303) transforms the left-hand side of Eq. (15.247) to

$$\left. x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (x^2-\nu^2)\,y \right|_{y=Y_\nu} = x^2\,\frac{\dfrac{d^2J_\nu}{dx^2}\cos\nu\pi - \dfrac{d^2J_{-\nu}}{dx^2}}{\sin\nu\pi} + x\,\frac{\dfrac{dJ_\nu}{dx}\cos\nu\pi - \dfrac{dJ_{-\nu}}{dx}}{\sin\nu\pi} + (x^2-\nu^2)\,\frac{J_\nu\cos\nu\pi - J_{-\nu}}{\sin\nu\pi} \tag{15.304}$$

or, after factoring cos νπ/sin νπ and 1/sin νπ out,

$$\left. x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (x^2-\nu^2)\,y \right|_{y=Y_\nu} = \frac{\cos\nu\pi}{\sin\nu\pi}\left[x^2\frac{d^2J_\nu}{dx^2} + x\frac{dJ_\nu}{dx} + (x^2-\nu^2)\,J_\nu\right] - \frac{1}{\sin\nu\pi}\left[x^2\frac{d^2J_{-\nu}}{dx^2} + x\frac{dJ_{-\nu}}{dx} + (x^2-\nu^2)\,J_{-\nu}\right] \tag{15.305}$$


Since Jν and J−ν are solutions of Eq. (15.247) as per Eq. (15.294), one obtains

$$\left. x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (x^2-\nu^2)\,y \right|_{y=Y_\nu} = \frac{\cos\nu\pi}{\sin\nu\pi}\,0 - \frac{1}{\sin\nu\pi}\,0 \tag{15.306}$$

from Eq. (15.305), or equivalently

$$\left. x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (x^2-\nu^2)\,y \right|_{y=Y_\nu} = 0; \tag{15.307}$$

Eq. (15.307) confirms that Yν is indeed a solution to Eq. (15.247). To finally check the (putative) independence of Y0 and J0, one should compute the determinant of the associated Wronskian matrix, viz.

$$|W| \equiv \begin{vmatrix} J_0 & Y_0 \\ \dfrac{dJ_0}{dx} & \dfrac{dY_0}{dx} \end{vmatrix}, \tag{15.308}$$

as per Eq. (10.468), and investigate whether it is different from zero; before proceeding any further, it should be emphasized that for any differential equation of the type

$$\frac{d^2y}{dx^2} = \Omega\{x\}\frac{dy}{dx} + \Theta\{x\}\,y \tag{15.309}$$

en lieu of Eq. (15.247) – with (independent) solutions y1{x} and y2{x}, the corresponding Wronskian determinant reads

$$|W| \equiv \begin{vmatrix} y_1 & y_2 \\ \dfrac{dy_1}{dx} & \dfrac{dy_2}{dx} \end{vmatrix} = y_1\frac{dy_2}{dx} - y_2\frac{dy_1}{dx}, \tag{15.310}$$

as per Eqs. (1.10) and (10.468) – where Ω{x} and Θ{x} denote generic functions of x. Differentiation, with regard to x, of both sides of Eq. (15.310) unfolds

$$\frac{d|W|}{dx} = \frac{d}{dx}\left(y_1\frac{dy_2}{dx} - y_2\frac{dy_1}{dx}\right) = \frac{dy_1}{dx}\frac{dy_2}{dx} + y_1\frac{d^2y_2}{dx^2} - \frac{dy_2}{dx}\frac{dy_1}{dx} - y_2\frac{d^2y_1}{dx^2}, \tag{15.311}$$

resorting to the rules of differentiation of a sum and a product – where symmetrical terms drop out to leave

$$\frac{d|W|}{dx} = y_1\frac{d^2y_2}{dx^2} - y_2\frac{d^2y_1}{dx^2}; \tag{15.312}$$

y1{x} and y2{x} being, by hypothesis, solutions of Eq. (15.309) implies that

$$\frac{d^2y_1}{dx^2} = \Omega\,\frac{dy_1}{dx} + \Theta\, y_1 \tag{15.313}$$

and

$$\frac{d^2y_2}{dx^2} = \Omega\,\frac{dy_2}{dx} + \Theta\, y_2, \tag{15.314}$$

respectively – so Eq. (15.312) becomes

$$\frac{d|W|}{dx} = y_1\left(\Omega\,\frac{dy_2}{dx} + \Theta\, y_2\right) - y_2\left(\Omega\,\frac{dy_1}{dx} + \Theta\, y_1\right) \tag{15.315}$$


upon insertion of Eqs. (15.313) and (15.314). Once parentheses are eliminated, and Ω and Θ factored out afterward, Eq. (15.315) turns to

$$\frac{d|W|}{dx} = \Omega\left(y_1\frac{dy_2}{dx} - y_2\frac{dy_1}{dx}\right) + \Theta\left(y_1 y_2 - y_2 y_1\right) = \Omega\left(y_1\frac{dy_2}{dx} - y_2\frac{dy_1}{dx}\right) \tag{15.316}$$

– where y1y2 meanwhile cancelled off with its negative; Eq. (15.310) allows reformulation of Eq. (15.316) to

$$\frac{d|W|}{dx} = \Omega\,|W| \tag{15.317}$$

The first-order ordinary differential equation labeled as Eq. (15.317) may be solved via separation of variables as

$$\frac{d|W|}{|W|} = \Omega\{x\}\,dx \tag{15.318}$$

that degenerates to

$$\ln|W| = c + \int \Omega\{x\}\,dx \tag{15.319}$$

– where c denotes an arbitrary constant; after taking exponentials of both sides, Eq. (15.319) yields

$$|W| = e^c \exp\left\{\int \Omega\{x\}\,dx\right\} \equiv C \exp\left\{\int \Omega\{x\}\,dx\right\}, \tag{15.320}$$

with C denoting an alternative form for said constant (i.e. e^c, obviously independent of x). After rewriting Eq. (15.247) as

$$\frac{d^2y}{dx^2} = -\frac{x}{x^2}\frac{dy}{dx} - \frac{x^2-\nu^2}{x^2}\,y = -\frac{1}{x}\frac{dy}{dx} - \left(1 - \frac{\nu^2}{x^2}\right) y \tag{15.321}$$

upon isolation of the second-order derivative, one finds that

$$\Omega\{x\} \equiv -\frac{1}{x} \tag{15.322}$$

and

$$\Theta\{x\} \equiv \frac{\nu^2}{x^2} - 1, \tag{15.323}$$

following comparative inspection of Eqs. (15.309) and (15.321); insertion of Eq. (15.322) transforms Eq. (15.320) to

$$|W| = C \exp\left\{\int -\frac{1}{x}\,dx\right\} = C \exp\left\{-\int \frac{dx}{x}\right\} \tag{15.324}$$

or else

$$|W| = \frac{C}{\exp\{\ln x\}} = \frac{C}{x} \tag{15.325}$$


– at the expense of the second entry in Table 11.1, coupled with the fact that composition of an exponential with the corresponding logarithm leaves the argument unchanged (for being the reverse of each other). Since C does not depend on x (although it might depend on ν), one may seek a convenient value of x to calculate it, namely, x → 0; recalling Eq. (15.308), rewritten for a generic ν for convenience, and the definition of second-order determinant, one has it that

$$|W| = J_\nu\,\frac{dY_\nu}{dx} - Y_\nu\,\frac{dJ_\nu}{dx}, \tag{15.326}$$

where Jν should abide to

$$\lim_{x\to 0} J_\nu\{x\} = \frac{(-1)^0}{0!\,\Gamma\{0+\nu+1\}}\left(\frac{x}{2}\right)^{0+\nu} = \frac{1}{\Gamma\{1+\nu\}}\left(\frac{x}{2}\right)^{\nu} \tag{15.327}$$

based on Eqs. (9.108) and (15.293) – after realizing that only very small values of x are of interest, since higher order terms can be neglected, with x² ≫ x³ as x → 0. When ν is replaced by −ν, Eq. (15.327) gives instead

$$\lim_{x\to 0} J_{-\nu}\{x\} = \frac{1}{\Gamma\{1-\nu\}}\left(\frac{x}{2}\right)^{-\nu}; \tag{15.328}$$

recalling Eq. (15.301), one may further write

$$\lim_{x\to 0} Y_\nu = \frac{\cos\nu\pi}{\Gamma\{1+\nu\}\sin\nu\pi}\left(\frac{x}{2}\right)^{\nu} - \frac{1}{\Gamma\{1-\nu\}\sin\nu\pi}\left(\frac{x}{2}\right)^{-\nu} \tag{15.329}$$

upon insertion of Eqs. (15.327) and (15.328). The derivative of Jν, in the vicinity of x = 0, reads

$$\lim_{x\to 0} \frac{dJ_\nu\{x\}}{dx} = \frac{1}{\Gamma\{1+\nu\}}\,\frac{d}{dx}\left(\frac{x}{2}\right)^{\nu} = \frac{\nu}{2\,\Gamma\{1+\nu\}}\left(\frac{x}{2}\right)^{\nu-1} \tag{15.330}$$

after retrieving Eq. (15.327); by the same token, Eq. (15.329) supports

$$\lim_{x\to 0} \frac{dY_\nu}{dx} = \frac{d}{dx}\left[\frac{\cos\nu\pi}{\Gamma\{1+\nu\}\sin\nu\pi}\left(\frac{x}{2}\right)^{\nu} - \frac{1}{\Gamma\{1-\nu\}\sin\nu\pi}\left(\frac{x}{2}\right)^{-\nu}\right] = \frac{\nu\cos\nu\pi}{2\,\Gamma\{1+\nu\}\sin\nu\pi}\left(\frac{x}{2}\right)^{\nu-1} + \frac{\nu}{2\,\Gamma\{1-\nu\}\sin\nu\pi}\left(\frac{x}{2}\right)^{-\nu-1} \tag{15.331}$$

Insertion of Eqs. (15.327) and (15.329)–(15.331) transforms Eq. (15.326) to

$$\lim_{x\to 0} |W| = \frac{1}{\Gamma\{1+\nu\}}\left(\frac{x}{2}\right)^{\nu}\left[\frac{\nu\cos\nu\pi}{2\,\Gamma\{1+\nu\}\sin\nu\pi}\left(\frac{x}{2}\right)^{\nu-1} + \frac{\nu}{2\,\Gamma\{1-\nu\}\sin\nu\pi}\left(\frac{x}{2}\right)^{-\nu-1}\right] - \left[\frac{\cos\nu\pi}{\Gamma\{1+\nu\}\sin\nu\pi}\left(\frac{x}{2}\right)^{\nu} - \frac{1}{\Gamma\{1-\nu\}\sin\nu\pi}\left(\frac{x}{2}\right)^{-\nu}\right]\frac{\nu}{2\,\Gamma\{1+\nu\}}\left(\frac{x}{2}\right)^{\nu-1}, \tag{15.332}$$

which may be algebraically rearranged to read

$$\lim_{x\to 0} |W| = \frac{1}{2\,\Gamma\{1+\nu\}\sin\nu\pi}\left[\frac{\nu\cos\nu\pi}{\Gamma\{1+\nu\}}\left(\frac{x}{2}\right)^{2\nu-1} + \frac{\nu}{\Gamma\{1-\nu\}}\left(\frac{x}{2}\right)^{-1} - \frac{\nu\cos\nu\pi}{\Gamma\{1+\nu\}}\left(\frac{x}{2}\right)^{2\nu-1} + \frac{\nu}{\Gamma\{1-\nu\}}\left(\frac{x}{2}\right)^{-1}\right] \tag{15.333}$$

via factoring 1/2Γ{1 + ν} and 1/sin νπ out, besides collapsing powers of x/2; upon condensation of terms alike, and cancellation of 2 between numerator and denominator afterward, Eq. (15.333) degenerates to

$$\lim_{x\to 0} |W| = \frac{\dfrac{2\nu}{\Gamma\{1-\nu\}}\left(\dfrac{x}{2}\right)^{-1}}{2\,\Gamma\{1+\nu\}\sin\nu\pi} = \frac{\nu}{\Gamma\{1+\nu\}\,\Gamma\{1-\nu\}\sin\nu\pi}\,\frac{2}{x} = \frac{2\nu}{\Gamma\{1+\nu\}\,\Gamma\{1-\nu\}\sin\nu\pi}\,\frac{1}{x} \tag{15.334}$$

One may now retrieve Eq. (12.473), following replacement of Γ{x} with the aid of Eq. (12.412), viz.

$$\frac{\Gamma\{x+1\}}{x}\,\Gamma\{1-x\} = \frac{\pi}{\sin\pi x}; \tag{15.335}$$

Eq. (15.335) promptly originates

$$\Gamma\{1+x\}\,\Gamma\{1-x\} = \frac{\pi x}{\sin\pi x} \tag{15.336}$$

The result conveyed by Eq. (15.336) may be taken advantage of once x has been replaced by ν, as it allows transformation of Eq. (15.334) to

$$\lim_{x\to 0} |W| = \frac{2\nu}{\dfrac{\pi\nu}{\sin\pi\nu}\,\sin\nu\pi}\,\frac{1}{x} = \frac{2\nu}{\pi\nu}\,\frac{1}{x} = \frac{2}{\pi x}, \tag{15.337}$$

where common factors meanwhile dropped off between numerator and denominator; coincidentally, |W||x→0 is the same irrespective of ν. Inspection of Eq. (15.337) vis-à-vis with Eq. (15.325) implies

$$C = \frac{2}{\pi} \tag{15.338}$$

– so one concludes that

$$|W| = \frac{2}{\pi x} \neq 0 \tag{15.339}$$

and, in particular,

$$\begin{vmatrix} J_0 & Y_0 \\ \dfrac{dJ_0}{dx} & \dfrac{dY_0}{dx} \end{vmatrix} \neq 0 \tag{15.340}$$

as per Eq. (15.308); therefore, Y0{x} is in fact independent of J0{x}, besides satisfying Eq. (15.307) – so it is suitable for use as y2{x} in Eq. (15.294), as originally claimed.
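The Wronskian result |W| = 2/(πx) can be verified directly; the sketch below (not from the book) assumes SciPy, together with the standard identities dJ0/dx = −J1 and dY0/dx = −Y1.

```python
# Sketch: verify Eq. (15.339) -- the Wronskian of J0 and Y0 equals 2/(pi x).
import numpy as np
from scipy.special import j0, j1, y0, y1

x = np.linspace(0.5, 20.0, 200)
# |W| = J0 * dY0/dx - Y0 * dJ0/dx, with dJ0/dx = -J1 and dY0/dx = -Y1
W = j0(x) * (-y1(x)) - y0(x) * (-j1(x))
wronskian_err = float(np.max(np.abs(W - 2.0 / (np.pi * x))))
```

Note that |W| never vanishes for x > 0, which is exactly the independence claim supporting Eq. (15.340).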


15.1.2.2.3 MacLaurin’s Method of Solution

The general concept of series solution of a second-order linear differential equation, of the type labeled as Eq. (15.194), simplifies greatly if m is set equal to zero in Eq. (15.196), thus giving rise to

$$y = \sum_{i=0}^{\infty} a_i x^i; \tag{15.341}$$

Eq. (15.341) mimics Taylor’s expansion of y around x = 0, also known as MacLaurin’s series, provided that ai is conceptualized as $\frac{1}{i!}\left.\frac{d^i y}{dx^i}\right|_{x=0}$ (i = 1, 2, …) and a0 as y|x=0, all consistent with Eq. (12.861) for a = 0. This approach follows essentially the same steps as Frobenius’ method discussed above – but the indicial equation(s), meant for calculation of m, is obviously no longer required; despite the apparent simplicity, this method conveys a single solution, say, y1{x}, so an effort will be required a posteriori to produce an independent solution, say, y2{x}, often based thereon. After redefining the first-order derivative of Eq. (15.341) as

$$\frac{dy}{dx} = \sum_{i=1}^{\infty} i\, a_i x^{i-1} \tag{15.342}$$

instead of Eq. (15.197), as well as the second-order derivative, viz.

$$\frac{d^2y}{dx^2} = \sum_{i=2}^{\infty} i(i-1)\, a_i x^{i-2} \tag{15.343}$$

stemming from Eq. (15.342) and to be used en lieu of Eq. (15.198), one may utilize Eqs. (15.341)–(15.343) to transform the homogeneous equivalent of Eq. (15.194) to

$$\sum_{j=0}^{V} t_j x^j \sum_{i=2}^{\infty} i(i-1)\, a_i x^{i-2} + \sum_{j=0}^{N} p_j x^j \sum_{i=1}^{\infty} i\, a_i x^{i-1} + \sum_{j=0}^{M} q_j x^j \sum_{i=0}^{\infty} a_i x^i = 0 \tag{15.344}$$

– a relationship simpler than Eq. (15.199); the algorithm of multiplication of polynomials may again be invoked to produce

$$\sum_{j=0}^{V}\sum_{i=2}^{\infty} a_i t_j\, i(i-1)\, x^{i+j-2} + \sum_{j=0}^{N}\sum_{i=1}^{\infty} a_i p_j\, i\, x^{i+j-1} + \sum_{j=0}^{M}\sum_{i=0}^{\infty} a_i q_j\, x^{i+j} = 0 \tag{15.345}$$

that likewise resembles Eq. (15.200). For convenience, one should swap summations in each term, viz.

$$\sum_{i=2}^{\infty}\sum_{j=0}^{V} a_i t_j\, i(i-1)\, x^{i+j-2} + \sum_{i=1}^{\infty}\sum_{j=0}^{N} a_i p_j\, i\, x^{i+j-1} + \sum_{i=0}^{\infty}\sum_{j=0}^{M} a_i q_j\, x^{i+j} = 0; \tag{15.346}$$

after changing the counting variable i in the first inner summation to i + 2, and likewise to i + 1 in the second inner summation, Eq. (15.346) becomes

$$\sum_{i=0}^{\infty}\sum_{j=0}^{V} a_{i+2}\, t_j\,(i+2)(i+1)\, x^{i+j} + \sum_{i=0}^{\infty}\sum_{j=0}^{N} a_{i+1}\, p_j\,(i+1)\, x^{i+j} + \sum_{i=0}^{\infty}\sum_{j=0}^{M} a_i q_j\, x^{i+j} = 0 \tag{15.347}$$

As before, one may to advantage label the maximum of N, M, and V as W – thus implying again that pN+1 = pN+2 = ⋯ = pW = 0, qM+1 = qM+2 = ⋯ = qW = 0, and tV+1 = tV+2 = ⋯ = tW = 0; hence, Eq. (15.347) will break down to

$$\sum_{i=0}^{\infty}\sum_{j=0}^{W} a_{i+2}\, t_j\,(i+1)(i+2)\, x^{i+j} + \sum_{i=0}^{\infty}\sum_{j=0}^{W} a_{i+1}\, p_j\,(i+1)\, x^{i+j} + \sum_{i=0}^{\infty}\sum_{j=0}^{W} a_i q_j\, x^{i+j} = 0, \tag{15.348}$$

thus allowing condensation of all three double summations as

$$\sum_{i=0}^{\infty}\sum_{j=0}^{W} \left[a_{i+2}\, t_j\,(i+1)(i+2) + a_{i+1}\, p_j\,(i+1) + a_i q_j\right] x^{i+j} = 0 \tag{15.349}$$

– where x^{i+j} was meanwhile factored out. Comparative inspection of Eq. (15.349) with Eq. (15.210) unfolds its much simpler form – which is, anyway, obtainable from Eq. (15.210) after setting m = 0. Since the coefficient of every power of x must be nil so as to guarantee that the left-hand side of Eq. (15.349) is identically equal to its right-hand side, one should equate to zero the coefficients of x^{i+j} for the various combinations of i and j leading to each value for the sum i + j – as illustrated in Table 15.2. The coefficients i and j of all terms in the germane ascending diagonal of this table are then to be taken into account, when searching for the coefficient of the corresponding x^{i+j} term; for instance, the coefficient of x³ should be obtained as the sum of the coefficients of x^{0+3}, x^{1+2}, x^{2+1}, and x^{3+0} – as listed along the diagonal connecting j = 3 to i = 3. Independent terms should accordingly abide to

$$\left[a_{0+2}\, t_0\,(0+1)(0+2) + a_{0+1}\, p_0\,(0+1) + a_0 q_0\right] x^{0+0} = 0, \tag{15.350}$$

obtained from Eq. (15.349) with i = j = 0; Eq. (15.350) readily becomes

$$2 a_2 t_0 + a_1 p_0 + a_0 q_0 = 0 \tag{15.351}$$

Table 15.2 Combination of values of i = 0, 1, 2, … and j = 0, 1, …, W to produce specific exponents for x^{i+j}.

  j \ i |  0     1     2     3     4     5    …
  ------+--------------------------------------
    0   | x^0   x^1   x^2   x^3   x^4   x^5   …
    1   | x^1   x^2   x^3   x^4   x^5   x^6   …
    2   | x^2   x^3   x^4   x^5   x^6   x^7   …
    3   | x^3   x^4   x^5   x^6   x^7   x^8   …
    4   | x^4   x^5   x^6   x^7   x^8   x^9   …
    5   | x^5   x^6   x^7   x^8   x^9   x^10  …
    …   |  …     …     …     …     …     …    …
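The bookkeeping behind Table 15.2 — collecting every (i, j) pair on an ascending diagonal — is easily mechanized; the helper below is a hypothetical sketch (not from the book) enumerating the pairs that contribute to a given power of x.

```python
# Sketch: the ascending diagonals of Table 15.2 collect all pairs (i, j)
# with i + j = n, i.e. all contributions to the coefficient of x**n.
def diagonal_pairs(n):
    """All (i, j) combinations producing the exponent n in x**(i + j)."""
    return [(i, n - i) for i in range(n + 1)]

pairs_x3 = diagonal_pairs(3)   # contributions x^{0+3}, x^{1+2}, x^{2+1}, x^{3+0}
```

In practice j would additionally be capped at W, since tj, pj, and qj vanish beyond that index.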


following division of both sides by x⁰, or, after isolating a2,

$$a_2\{a_0, a_1\} = -\frac{p_0}{2 t_0}\, a_1 - \frac{q_0}{2 t_0}\, a_0 \qquad (t_0 \neq 0) \tag{15.352}$$

under the hypothesis of t0 ≠ 0 – similar to Eq. (15.221), since m = 0. Otherwise, Eq. (15.351) should lead to

$$a_1\{a_0\} = -\frac{q_0}{p_0}\, a_0 \qquad (t_0 = 0,\; p_0 \neq 0), \tag{15.353}$$

consistent with Eq. (15.227), and again due to m = 0; or even

$$a_0 = 0 \qquad (t_0 = 0,\; p_0 = 0,\; q_0 \neq 0), \tag{15.354}$$

while t0 = p0 = q0 = 0 would leave a0 unconstrained – see Eq. (15.351). With regard to linear terms, one finds that

$$\left[a_{0+2}\, t_1\,(0+1)(0+2) + a_{0+1}\, p_1\,(0+1) + a_0 q_1\right] x^{0+1} + \left[a_{1+2}\, t_0\,(1+1)(1+2) + a_{1+1}\, p_0\,(1+1) + a_1 q_0\right] x^{1+0} = 0 \tag{15.355}$$

based on Eq. (15.349) after setting (i,j) equal to (0,1) or (1,0) at a time – where straightforward simplification yields

$$2 a_2 t_1 + a_1 p_1 + a_0 q_1 + 6 a_3 t_0 + 2 a_2 p_0 + a_1 q_0 = 0, \tag{15.356}$$

again upon division of both sides by x; a3 may then be isolated as

$$a_3 = -\frac{p_0 + t_1}{3 t_0}\, a_2 - \frac{p_1 + q_0}{6 t_0}\, a_1 - \frac{q_1}{6 t_0}\, a_0 \tag{15.357}$$

when t0 ≠ 0. Insertion of Eq. (15.352) leads, in turn, to

$$a_3 = \frac{p_0 + t_1}{3 t_0}\left(\frac{p_0}{2 t_0}\, a_1 + \frac{q_0}{2 t_0}\, a_0\right) - \frac{p_1 + q_0}{6 t_0}\, a_1 - \frac{q_1}{6 t_0}\, a_0, \tag{15.358}$$

where elimination of parentheses and condensation of terms alike give rise to

$$a_3\{a_0, a_1\} = \frac{1}{6 t_0}\left[\frac{p_0 (p_0 + t_1)}{t_0} - p_1 - q_0\right] a_1 + \frac{1}{6 t_0}\left[\frac{q_0 (p_0 + t_1)}{t_0} - q_1\right] a_0 \qquad (t_0 \neq 0) \tag{15.359}$$

– in agreement with Eq. (15.235), on account of m = 0. In the case of t0 = 0, Eq. (15.356) should be replaced by

$$2 a_2 t_1 + a_1 p_1 + a_0 q_1 + 2 a_2 p_0 + a_1 q_0 = 0, \tag{15.360}$$

so a2 would in that case have to satisfy

$$a_2 = -\frac{p_1 + q_0}{2 (p_0 + t_1)}\, a_1 - \frac{q_1}{2 (p_0 + t_1)}\, a_0 \qquad (t_0 = 0,\; p_0 + t_1 \neq 0); \tag{15.361}$$

combination with Eq. (15.353) unfolds

$$a_2 = \frac{p_1 + q_0}{2 (p_0 + t_1)}\,\frac{q_0}{p_0}\, a_0 - \frac{q_1}{2 (p_0 + t_1)}\, a_0 \qquad (t_0 = 0,\; p_0 + t_1 \neq 0) \tag{15.362}$$

that simplifies to

$$a_2\{a_0\} = \frac{1}{2 (p_0 + t_1)}\left[\frac{q_0 (p_1 + q_0)}{p_0} - q_1\right] a_0 \qquad (t_0 = 0,\; p_0 + t_1 \neq 0) \tag{15.363}$$

– as expected, in view of Eq. (15.242) for m = 0. A third possibility based on Eq. (15.360) is materialized as

$$a_1 p_1 + a_0 q_1 + a_1 q_0 = 0, \tag{15.364}$$

when t0 = 0 and p0 + t1 = 0 – in which case a1 would satisfy

$$a_1\{a_0\} = -\frac{q_1}{p_1 + q_0}\, a_0 \qquad (t_0 = 0,\; p_0 + t_1 = 0,\; p_1 + q_0 \neq 0), \tag{15.365}$$

whereas the extra condition p1 + q0 = 0 would simplify Eq. (15.364) to merely

$$a_0 = 0 \qquad (t_0 = 0,\; p_0 + t_1 = 0,\; p_1 + q_0 = 0,\; q_1 \neq 0) \tag{15.366}$$

– with a0 remaining undefined if q1 = 0. Following a similar rationale stemming from Eq. (15.349), quadratic terms should abide to

$$\left[a_{0+2}\, t_2\,(0+1)(0+2) + a_{0+1}\, p_2\,(0+1) + a_0 q_2\right] x^{0+2} + \left[a_{1+2}\, t_1\,(1+1)(1+2) + a_{1+1}\, p_1\,(1+1) + a_1 q_1\right] x^{1+1} + \left[a_{2+2}\, t_0\,(2+1)(2+2) + a_{2+1}\, p_0\,(2+1) + a_2 q_0\right] x^{2+0} = 0, \tag{15.367}$$

as per combination of values of (i,j) as depicted in Table 15.2 – i.e. (0,2), (1,1), and (2,0); after dividing out x² in both sides, Eq. (15.367) degenerates to

$$2 a_2 t_2 + a_1 p_2 + a_0 q_2 + 6 a_3 t_1 + 2 a_2 p_1 + a_1 q_1 + 12 a_4 t_0 + 3 a_3 p_0 + a_2 q_0 = 0 \tag{15.368}$$

Further algebraic manipulation now permits isolation of a4 as

$$a_4 = -\frac{p_0 + 2 t_1}{4 t_0}\, a_3 - \frac{2 p_1 + q_0 + 2 t_2}{12 t_0}\, a_2 - \frac{p_2 + q_1}{12 t_0}\, a_1 - \frac{q_2}{12 t_0}\, a_0, \tag{15.369}$$

provided that t0 ≠ 0, whereas combination with Eqs. (15.352) and (15.359) produces

$$a_4 = -\frac{p_0 + 2 t_1}{4 t_0}\left\{\frac{1}{6 t_0}\left[\frac{p_0 (p_0 + t_1)}{t_0} - p_1 - q_0\right] a_1 + \frac{1}{6 t_0}\left[\frac{q_0 (p_0 + t_1)}{t_0} - q_1\right] a_0\right\} - \frac{2 p_1 + q_0 + 2 t_2}{12 t_0}\left(-\frac{p_0}{2 t_0}\, a_1 - \frac{q_0}{2 t_0}\, a_0\right) - \frac{p_2 + q_1}{12 t_0}\, a_1 - \frac{q_2}{12 t_0}\, a_0; \tag{15.370}$$

upon elimination of parentheses and lumping of similar terms, Eq. (15.370) yields

$$a_4\{a_0, a_1\} = -\left\{\frac{p_2 + q_1}{12 t_0} - \frac{p_0}{2 t_0}\,\frac{2 p_1 + q_0 + 2 t_2}{12 t_0} + \frac{p_0 + 2 t_1}{4 t_0}\,\frac{1}{6 t_0}\left[\frac{p_0 (p_0 + t_1)}{t_0} - p_1 - q_0\right]\right\} a_1 - \left\{\frac{q_2}{12 t_0} - \frac{q_0}{2 t_0}\,\frac{2 p_1 + q_0 + 2 t_2}{12 t_0} + \frac{p_0 + 2 t_1}{4 t_0}\,\frac{1}{6 t_0}\left[\frac{q_0 (p_0 + t_1)}{t_0} - q_1\right]\right\} a_0 \qquad (t_0 \neq 0) \tag{15.371}$$
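The recurrences above can be exercised on a concrete case: for y'' + y = 0 one has t0 = 1 and q0 = 1 with every other coefficient nil, so Eq. (15.349) collapses to a_{i+2} = −a_i/[(i+1)(i+2)]; the minimal sketch below (not from the book), taking a0 = 1 and a1 = 0, reproduces the MacLaurin series of cos x.

```python
import math

# y'' + y = 0: t0 = 1, q0 = 1, all other tj, pj, qj nil, so Eq. (15.349) gives
#   a_{i+2} (i+1)(i+2) + a_i = 0   =>   a_{i+2} = -a_i / ((i+1)(i+2)).
a = [1.0, 0.0]                        # a0 = y(0) = 1, a1 = y'(0) = 0
for i in range(30):
    a.append(-a[i] / ((i + 1) * (i + 2)))

x = 0.7
y_series = sum(ai * x**i for i, ai in enumerate(a))
err_cos = abs(y_series - math.cos(x))   # should be ~machine precision
```

Note that a2 = −1/2 here agrees with Eq. (15.352), and a3 = 0 with Eq. (15.357), as particular cases.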

The remaining possibilities of t0 = 0 and 3p0 + 6t1 ≠ 0 yielding a3 ≡ a3{a0, a1, a2}, t0 = 3p0 + 6t1 = 0 and 2p1 + q0 + 2t2 ≠ 0 yielding a2 ≡ a2{a0, a1}, t0 = 3p0 + 6t1 = 2p1 + q0 + 2t2 = 0 and p2 + q1 ≠ 0 yielding a1 ≡ a1{a0}, and finally t0 = 3p0 + 6t1 = 2p1 + q0 + 2t2 = p2 + q1 = 0 and q2 ≠ 0 yielding a0 = 0 may then be worked out – with t0 = 3p0 + 6t1 = 2p1 + q0 + 2t2 = p2 + q1 = q2 = 0 leaving a0 undetermined. It should be emphasized once again that a single solution y1{x} is produced via MacLaurin’s approach – despite its frequent bearing of two unknown constants, a0 and a1. Therefore, one should somehow generate another independent solution y2{x}, so as to end up with a general solution of the form conveyed by Eq. (15.135) – from which the general solution of Eq. (15.193) will become accessible after calculating Y{x} via Eq. (15.110), with f1{x} and f2{x} obtained from Eqs. (15.129) and (15.131), respectively.

15.1.2.2.4 Independent Solutions

Despite its versatility, Frobenius’ method does not always convey two independent solutions; this occurs, namely, when the roots of the indicial equation differ by an integer – including the possibility of being identical (in which case, said integer is nil). Furthermore, MacLaurin’s method of solution conveys, by default, a single solution – so the need often arises to find a second independent solution y2{x} when a first solution, y1{x}, has already been made available. In the particular case of Bessel’s equation as per Eq. (15.247), one may revisit the result conveyed by Eq. (15.310) as

$$\frac{dy_2}{dx} - \frac{1}{y_1}\frac{dy_1}{dx}\, y_2 = \frac{|W|}{y_1}, \tag{15.372}$$

upon division of both sides by y1; insertion of Eq. (15.325) then supports transformation to

$$\frac{dy_2}{dx} - \frac{1}{y_1}\frac{dy_1}{dx}\, y_2 = \frac{C}{x\, y_1}, \tag{15.373}$$

where Eq. (15.338) may, in turn, be invoked to write

$$\frac{dy_2\{x\}}{dx} - \frac{1}{y_1\{x\}}\frac{dy_1\{x\}}{dx}\, y_2\{x\} = \frac{2}{\pi x\, y_1\{x\}} \tag{15.374}$$


– since C has been found to be independent of both x and ν. Once y1{x} is known, integration of Eq. (15.374) should in principle provide y2{x} – to be utilized as independent solution to Eq. (15.247); as expected, y2{x} ≡ Y0{x} can be obtained via Eq. (15.374) after setting y1{x} ≡ J0{x}. However, this approach is not restricted to Bessel’s equation; it indeed extends to the homogeneous counterpart of any second-order ordinary differential equation of the form labeled as Eq. (15.193), after reformulation following Eq. (15.309); |W| would accordingly be given by Eqs. (15.320) and (15.338). One may in alternative postulate

$$y_2\{x\} \equiv f\{x\}\, y_1\{x\} \tag{15.375}$$

at large – where f{x} remains to be found; chain differentiation of Eq. (15.375), with regard to x, entails

$$\frac{dy_2}{dx} = \frac{df}{dx}\, y_1 + f\,\frac{dy_1}{dx}, \tag{15.376}$$

and a second differentiation step is in order, viz.

$$\frac{d^2y_2}{dx^2} = \frac{d^2f}{dx^2}\, y_1 + \frac{df}{dx}\frac{dy_1}{dx} + \frac{df}{dx}\frac{dy_1}{dx} + f\,\frac{d^2y_1}{dx^2} \tag{15.377}$$

that may be condensed to

$$\frac{d^2y_2}{dx^2} = \frac{d^2f}{dx^2}\, y_1 + 2\,\frac{df}{dx}\frac{dy_1}{dx} + f\,\frac{d^2y_1}{dx^2} \tag{15.378}$$

After inserting Eqs. (15.375), (15.376), and (15.378) in Eq. (15.195) – without the restriction of a polynomial form for the differential coefficients, one obtains

$$T\left(\frac{d^2f}{dx^2}\, y_1 + 2\,\frac{df}{dx}\frac{dy_1}{dx} + f\,\frac{d^2y_1}{dx^2}\right) + P\left(\frac{df}{dx}\, y_1 + f\,\frac{dy_1}{dx}\right) + Q\, f\, y_1 = 0 \tag{15.379}$$

as condition to be enforced to assure that y2{x} is a solution of the original second-order differential equation. Elimination of parentheses in Eq. (15.379), followed by factoring out of df/dx and f, gives rise to

$$T\, y_1\,\frac{d^2f}{dx^2} + \left(2T\,\frac{dy_1}{dx} + P\, y_1\right)\frac{df}{dx} + \left(T\,\frac{d^2y_1}{dx^2} + P\,\frac{dy_1}{dx} + Q\, y_1\right) f = 0; \tag{15.380}$$

since y1 is, by hypothesis, a solution of Eq. (15.195), the content of the parenthesis preceding f must be nil, so one may proceed to

$$T\, y_1\,\frac{d^2f}{dx^2} + \left(2T\,\frac{dy_1}{dx} + P\, y_1\right)\frac{df}{dx} = 0 \tag{15.381}$$

The change of variable suggested by Eq. (15.49) may be recovered here, i.e.

$$w \equiv \frac{df}{dx}, \tag{15.382}$$

which readily implies

$$\frac{dw}{dx} \equiv \frac{d^2f}{dx^2} \tag{15.383}$$


following differentiation of both sides; one may accordingly transform Eq. (15.381) to

$$T\, y_1\,\frac{dw}{dx} + \left(2T\,\frac{dy_1}{dx} + P\, y_1\right) w = 0, \tag{15.384}$$

with the aid of Eqs. (15.382) and (15.383). Integration of Eq. (15.384) is now possible via separation of variables, viz.

$$\frac{dw}{w} = -\frac{2T\,\dfrac{dy_1}{dx} + P\, y_1}{T\, y_1}\, dx, \tag{15.385}$$

in general agreement with Eq. (15.35) pertaining to a linear first-order ordinary differential equation; integration of the left-hand side as indicated, complemented with division of both numerator and denominator (and subsequent splitting) of the kernel in the right-hand side by T, unfold

$$\ln w = B - \int\left(2\,\frac{\dfrac{dy_1}{dx}}{y_1} + \frac{P}{T}\right) dx \tag{15.386}$$

– with B denoting a constant. Equation (15.386) degenerates to

$$w = C_1 \exp\left\{-\int\left(2\,\frac{d\ln y_1}{dx} + \frac{P}{T}\right) dx\right\} \tag{15.387}$$

after taking exponentials of both sides, setting C1 ≡ e^B, and recalling Eqs. (10.40) and (10.205); insertion of Eq. (15.382) transforms Eq. (15.387) to

$$\frac{df}{dx} = C_1 \exp\left\{-\int\left(2\,\frac{d\ln y_1}{dx} + \frac{P}{T}\right) dx\right\}, \tag{15.388}$$

which integrates to

$$\int df = C_1 \int \exp\left\{-\int\left(2\,\frac{d\ln y_1}{dx} + \frac{P}{T}\right) dx\right\} dx \tag{15.389}$$

via separation of variables, or, equivalently,

$$f = C_2 + C_1 \int \exp\left\{-\int\left(2\,\frac{d\ln y_1}{dx} + \frac{P}{T}\right) dx\right\} dx, \tag{15.390}$$

with C2 denoting a second arbitrary constant. Insertion of Eq. (15.390) transforms Eq. (15.375) to

$$y_2\{x\} = y_1\{x\}\left[C_2 + C_1 \int \exp\left\{-\int\left(2\,\frac{d\ln y_1\{x\}}{dx} + \frac{P\{x\}}{T\{x\}}\right) dx\right\} dx\right] \tag{15.391}$$

– thus indicating that y2{x} depends on P{x}/T{x} but not on Q{x}; the general solution of Eq. (15.195) may accordingly be coined as

$$y = A_1 y_1 + A_2 y_1\left[C_2 + C_1 \int \exp\left\{-\int\left(2\,\frac{d\ln y_1}{dx} + \frac{P}{T}\right) dx\right\} dx\right], \tag{15.392}$$


where y1 is to be factored out as

$$y = y_1\left[D_1 + D_2 \int \exp\left\{-\int\left(2\,\frac{d\ln y_1}{dx} + \frac{P}{T}\right) dx\right\} dx\right] \tag{15.393}$$

– where A1 and A2C2 were lumped into (constant) D1, while (constant) D2 represents A2C1. Note that the homogeneous counterparts of Eqs. (15.107) and (15.193) are functionally equivalent to each other, provided that the coefficients of dy/dx and y of the former are replaced by P{x}/T{x} and Q{x}/T{x}, respectively; which form to use is just a matter of convenience – knowing that the result will be the same, in view of P{x}/T{x} being explicit in Eq. (15.393).
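This reduction-of-order recipe can be tried numerically on a simple case; the sketch below (not from the book, and assuming SciPy for the cumulative quadrature) takes y'' − y = 0 with known solution y1 = eˣ, so T = 1 and P = 0 and the bracketed integral of Eq. (15.393) becomes eˣ ∫ e^{−2x} dx — which, integrated from x = 0, yields sinh x as the independent companion solution.

```python
# Sketch: reduction of order per Eq. (15.393) for y'' - y = 0, with y1 = e^x.
import numpy as np
from scipy.integrate import cumulative_trapezoid

x = np.linspace(0.0, 2.0, 4001)
y1 = np.exp(x)
inner = np.exp(-2.0 * np.log(y1))                 # exp{-int 2 dln(y1)/dx dx} = e^{-2x}
f = cumulative_trapezoid(inner, x, initial=0.0)   # outer integral, taken from x = 0
y2 = y1 * f                                       # = e^x (1 - e^{-2x})/2 = sinh x
red_err = float(np.max(np.abs(y2 - np.sinh(x))))
```

Any other choice of integration constants merely adds a multiple of y1 to y2, which does not affect independence.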

15.1.3 Linear Higher Order

The linear first-order ordinary differential equation labeled as Eq. (15.22) and the linear second-order ordinary differential equation labeled as Eq. (15.107) or Eq. (15.193) are but special cases of the general linear ordinary differential equation of order n, viz.

$$a_n\{x\}\frac{d^ny}{dx^n} + a_{n-1}\{x\}\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_1\{x\}\frac{dy}{dx} + a_0\{x\}\, y = f\{x\}; \tag{15.394}$$

here a0{x}, a1{x}, a2{x}, …, an{x}, and f{x} denote given functions of x – knowing that some of them, or even all, may be constant. In the particular case where all coefficients of Eq. (15.394) are indeed constant and f{x} is nil, i.e.

$$a_n\frac{d^ny}{dx^n} + a_{n-1}\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_1\frac{dy}{dx} + a_0\, y = 0, \tag{15.395}$$

one may postulate a solution of the type

$$y = \exp\{\lambda x\} \tag{15.396}$$

– as done previously via Eq. (15.142), when handling Eq. (15.138); sequential differentiation of Eq. (15.396) yields d iy = λ i exp λx ; i = 1,2, …, n 15 397 dx i in analogy to Eqs. (15.143) and (15.144), so combination with Eq. (15.395) gives rise to an λ n exp λx + an −1 λ n− 1 exp λx +

+ a1 λ exp λx + a0 exp λx = 0

15 398

Once exp{λx} has been factored out, Eq. (15.398) simplifies to an λ n + an− 1 λ n− 1 +

+ a1 λ + a0 exp λx = 0

15 399

– thus implying that Eq. (15.396) will be a root of Eq. (15.395) as long as an λ n + an− 1 λ n− 1 +

+ a1 λ + a0 = 0,

15 400

since exp{λx} 0; Eq. (15.400) is usually termed characteristic equation, see also Eq. (15.146) – and will in general hold n solutions λ1, λ2, …, λn for being an nth degree polynomial. The most general solution of Eq. (15.395) will accordingly look like an algebraic combination of the type n

Ai exp λi x

y= i=1

15 401

Solution of Differential Equations

– following Eq. (15.396) as template, and assuming that all λi’s are distinct; in agreement with Eq. (15.158), this guarantees independence between every two individual solutions due to the non-nil Wronskian conveyed by Eq. (15.154). The (constant) Ai’s are still unknown at this stage, but play the role of the n arbitrary constants required by the solution of an nth order ordinary differential equation. If a root exhibits multiplicity n > 1, i.e. 15 402 λ1 = λ2 = = λn = λ, then the corresponding solutions A1exp{λx}, A2exp{λx}, …, Anexp{λx} are dependent of each other – because any Aiexp{λx} may be obtained from another Aj iexp{λx} (i,j = 1, 2, …, n) by mere multiplication of the latter by Ai/Aj; consequently, Eq. (15.401) will fail to provide the general solution to Eq. (15.395). Under such circumstances, one should consider lumping all n expected (independent) solutions into y = ξ x exp λx

15 403

– where ξ{x} denotes a putative function of x awaiting to be determined; sequential differentiation of Eq. (15.403), followed by factoring out of exp{λx} produce dy dξ dξ = exp λx + ξλ exp λx = + λξ exp λx dx dx dx d2 y d2 ξ dξ dξ = 2 2 + λ dx exp λx + dx + λξ λ exp λx dx dx d2 ξ dξ 2 2 + 2λ dx + λ ξ exp λx dx

=

,

(15.404)

d3 y d3 ξ d 2 ξ 2 dξ d2 ξ dξ 2 3 = 3 + 2λ 2 + λ dx exp λx + 2 + 2λ dx + λ ξ λ exp λx dx dx dx dx d3 ξ d2 ξ dξ + 3λ2 + λ3 ξ exp λx 2 + 3λ dx dx dx2

=

which may be condensed to i

d iy = exp λx dx i

j=0

j i λ i− j d ξ ; i = 1,2, …, n j dx j

15 405

Insertion of Eq. (15.405) supports transformation of Eq. (15.395) to n

j n λ n−j d ξ + a exp λx n− 1 j dx j

an exp λx j=0 1

+ a1 exp λx j=0

1 λ1−j d ξ + a exp λx 0 j dx j j

n −1 j=0 0 j=0

j n −1 λ n−1 −j d ξ + j dx j j 0 λ0− j d ξ = 0 j dx j

,

15 406

where division of both sides by exp{λx} permits simplification to n− 1 j n λ n− j d ξ + a n −1 j dx j

n

an j=0

j=0

1

+ a1 j=0

j n − 1 λ n− 1−j d ξ + j dx j

j 1 λ1−j d ξ + a 0 λ0 ξ = 0 0 0 j dx j

15 407

651

652

Mathematics for Enzyme Reaction Kinetics and Reactor Performance
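Eq. (15.405) is just Leibniz's rule for the ith derivative of the product ξ{x}exp{λx}, and may be checked numerically before proceeding. The sketch below (plain Python; the choices ξ{x} = sin x, λ = 0.8, x = 0.4, and i = 3 are hypothetical, made only for the check) compares a central-difference third derivative of ξ{x}exp{λx} with the right-hand side of Eq. (15.405):

```python
from math import comb, exp, sin, cos

lam, x0, h = 0.8, 0.4, 1e-3
f = lambda x: sin(x) * exp(lam * x)   # xi{x} exp{lambda x}, with xi{x} = sin x

def dxi(x, j):
    # j-th derivative of xi{x} = sin x cycles through sin, cos, -sin, -cos
    return [sin, cos, lambda t: -sin(t), lambda t: -cos(t)][j % 4](x)

i = 3
# third derivative of f by central finite differences (O(h^2) accurate)
lhs = (f(x0 + 2*h) - 2*f(x0 + h) + 2*f(x0 - h) - f(x0 - 2*h)) / (2 * h**3)

# right-hand side of Eq. (15.405): exp{lambda x} sum_j C(i,j) lambda^(i-j) d^j xi/dx^j
rhs = exp(lam * x0) * sum(comb(i, j) * lam**(i - j) * dxi(x0, j) for j in range(i + 1))

assert abs(lhs - rhs) < 1e-4
```

The identity holds for every order i and every sufficiently smooth ξ{x}; only one instance is exercised here.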

The various derivatives in Eq. (15.407) may be regrouped as

$$a_n\binom{n}{n}\frac{d^n\xi}{dx^n} + \left(a_{n-1}\binom{n-1}{n-1} + a_n\binom{n}{n-1}\lambda\right)\frac{d^{n-1}\xi}{dx^{n-1}} + \left(a_{n-2}\binom{n-2}{n-2} + a_{n-1}\binom{n-1}{n-2}\lambda + a_n\binom{n}{n-2}\lambda^2\right)\frac{d^{n-2}\xi}{dx^{n-2}} + \cdots + \left(a_k\binom{k}{k} + a_{k+1}\binom{k+1}{k}\lambda + a_{k+2}\binom{k+2}{k}\lambda^2 + \cdots + a_n\binom{n}{k}\lambda^{n-k}\right)\frac{d^k\xi}{dx^k} + \cdots + \left(a_1\binom{1}{1} + a_2\binom{2}{1}\lambda + a_3\binom{3}{1}\lambda^2 + \cdots + a_n\binom{n}{1}\lambda^{n-1}\right)\frac{d\xi}{dx} + \left(a_0\binom{0}{0} + a_1\binom{1}{0}\lambda + a_2\binom{2}{0}\lambda^2 + \cdots + a_n\binom{n}{0}\lambda^n\right)\xi = 0, \tag{15.408}$$

or else

$$a_n\frac{d^n\xi}{dx^n} + \left(a_{n-1} + a_n n\lambda\right)\frac{d^{n-1}\xi}{dx^{n-1}} + \left(a_{n-2} + a_{n-1}(n-1)\lambda + a_n\binom{n}{n-2}\lambda^2\right)\frac{d^{n-2}\xi}{dx^{n-2}} + \cdots + \left(a_k + a_{k+1}(k+1)\lambda + a_{k+2}\binom{k+2}{k}\lambda^2 + \cdots + a_n\binom{n}{k}\lambda^{n-k}\right)\frac{d^k\xi}{dx^k} + \cdots + \left(a_1 + 2a_2\lambda + 3a_3\lambda^2 + \cdots + na_n\lambda^{n-1}\right)\frac{d\xi}{dx} + \left(a_0 + a_1\lambda + a_2\lambda^2 + \cdots + a_n\lambda^n\right)\xi = 0 \tag{15.409}$$

in view of Pascal’s triangle – see Table 2.1. The coefficient of the last term is nil by virtue of Eq. (15.400), so Eq. (15.409) degenerates to

$$a_n\frac{d^n\xi}{dx^n} + \left(a_{n-1} + a_n n\lambda\right)\frac{d^{n-1}\xi}{dx^{n-1}} + \left(a_{n-2} + a_{n-1}(n-1)\lambda + a_n\binom{n}{n-2}\lambda^2\right)\frac{d^{n-2}\xi}{dx^{n-2}} + \left(a_{n-3} + a_{n-2}(n-2)\lambda + a_{n-1}\binom{n-1}{n-3}\lambda^2 + a_n\binom{n}{n-3}\lambda^3\right)\frac{d^{n-3}\xi}{dx^{n-3}} + \cdots + \left(a_k + a_{k+1}(k+1)\lambda + a_{k+2}\binom{k+2}{k}\lambda^2 + \cdots + a_n\binom{n}{k}\lambda^{n-k}\right)\frac{d^k\xi}{dx^k} + \cdots + \left(a_1 + 2a_2\lambda + 3a_3\lambda^2 + \cdots + na_n\lambda^{n-1}\right)\frac{d\xi}{dx} = 0. \tag{15.410}$$
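Before proving that every coefficient within parentheses in Eq. (15.410) vanishes, the claim may be checked numerically. The sketch below (plain Python; the values n = 5 and λ = 1.7 are chosen arbitrarily) builds the aᵢ's as the coefficients of (x − λ)ⁿ, so that λ is a root of multiplicity n of Eq. (15.400), and then evaluates the coefficient multiplying each dᵏξ/dxᵏ in Eq. (15.409):

```python
from math import comb

n, lam = 5, 1.7   # arbitrary order and (repeated) root for the check
# coefficients a_0, ..., a_n of (x - lam)^n, so that lam is a root of multiplicity n
a = [comb(n, m) * (-lam)**(n - m) for m in range(n + 1)]

# coefficient of d^k xi/dx^k in Eq. (15.409): sum_{m >= k} a_m C(m,k) lam^(m-k)
for k in range(n):
    coef = sum(a[m] * comb(m, k) * lam**(m - k) for m in range(k, n + 1))
    assert abs(coef) < 1e-9   # vanishes for every k < n, as claimed

assert a[n] == 1.0            # only the leading term, a_n d^n xi/dx^n, survives
```

Each such coefficient equals the kth derivative of the characteristic polynomial at λ, divided by k!, which is why all of them vanish when λ has multiplicity n; the formal proof follows.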

At this stage, it is convenient to revisit the expressions for the coefficients, αi, of the various powers in Eq. (2.183) – pertaining to an nth degree polynomial factorized as per its roots r1, r2, …, rn, viz.

$$\alpha_n = a_n$$
$$\alpha_{n-1} = -a_n\sum_{i=1}^{n} r_i$$
$$\alpha_{n-2} = a_n\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} r_i r_j \tag{15.411}$$
$$\alpha_{n-3} = -a_n\sum_{i=1}^{n-2}\sum_{j=i+1}^{n-1}\sum_{k=j+1}^{n} r_i r_j r_k$$
$$\vdots$$

if the roots are all identical (i.e. r1 = r2 = ⋯ = rn = r), then Eq. (15.411) becomes

$$\alpha_n = a_n$$
$$\alpha_{n-1} = -a_n\sum_{i=1}^{n} r$$
$$\alpha_{n-2} = a_n\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} r^2 \tag{15.412}$$
$$\alpha_{n-3} = -a_n\sum_{i=1}^{n-2}\sum_{j=i+1}^{n-1}\sum_{k=j+1}^{n} r^3$$
$$\vdots$$

in view of the definition of power. Further algebraic rearrangement of Eq. (15.412), including splitting of products and condensation of factors alike, has it that

$$\alpha_n = a_n = (-1)^0\binom{n}{0}a_n r^0$$
$$\alpha_{n-1} = -a_n\left((n-1)+1\right)r = -a_n n r = (-1)^1\binom{n}{1}a_n r^1$$
$$\alpha_{n-2} = a_n\sum_{i=1}^{n-1}\left((n-i-1)+1\right)r^2 = a_n\sum_{i=1}^{n-1}(n-i)\,r^2 = a_n\left(n(n-1)-\frac{(n-1)n}{2}\right)r^2 = a_n\frac{n(n-1)}{2}r^2 = (-1)^2\binom{n}{2}a_n r^2 \tag{15.413}$$
$$\alpha_{n-3} = -a_n\sum_{i=1}^{n-2}\sum_{j=i+1}^{n-1}(n-j)\,r^3 = -a_n\sum_{i=1}^{n-2}\frac{(n-i)(n-i-1)}{2}\,r^3 = -a_n\frac{n(n-1)(n-2)}{6}\,r^3 = (-1)^3\binom{n}{3}a_n r^3$$
$$\vdots$$

– at the expense of the formula for the sum of unity with itself (i.e. $\sum_{i=1}^{j} 1 = j$), the sum of the first integers (i.e. $\sum_{i=1}^{j} i = \frac{j(j+1)}{2}$), and the sum of the squares of the first integers (i.e. $\sum_{i=1}^{j} i^2 = \frac{j(j+1)(2j+1)}{6}$), …, coupled with splittings of the form $\frac{1-2n}{2} = \frac{1}{2}-n$ and $\frac{2n-3}{2} = n-\frac{3}{2}$. Inspection of the various α’s in Eq. (15.413) points at

$$\alpha_{n-i} = (-1)^i\binom{n}{i}a_n\lambda^i;\quad i = 0, 1, \ldots, n-1 \tag{15.414}$$

as general formula – after r has been replaced by λ (as dummy variable), and with the aid of Eq. (2.240). One obvious consequence of Eq. (15.414) is that the second coefficient in Eq. (15.410) becomes

$$a_{n-1} + a_n n\lambda = (-1)^1\binom{n}{1}a_n\lambda + a_n n\lambda = -na_n\lambda + na_n\lambda = 0, \tag{15.415}$$

for i = 1; the third coefficient in Eq. (15.410) turns to

$$a_{n-2} + a_{n-1}(n-1)\lambda + a_n\binom{n}{n-2}\lambda^2 = (-1)^2\binom{n}{2}a_n\lambda^2 + (-1)^1\binom{n}{1}a_n\lambda\,(n-1)\lambda + a_n\binom{n}{n-2}\lambda^2 = \left(\frac{n(n-1)}{2} - n(n-1) + \frac{n(n-1)}{2}\right)a_n\lambda^2 = 0 \tag{15.416}$$

after invoking Eqs. (2.240) and (15.414); the fourth coefficient in Eq. (15.410) similarly gives rise to

$$a_{n-3} + a_{n-2}(n-2)\lambda + a_{n-1}\binom{n-1}{n-3}\lambda^2 + a_n\binom{n}{n-3}\lambda^3 = (-1)^3\binom{n}{3}a_n\lambda^3 + (-1)^2\binom{n}{2}a_n\lambda^2(n-2)\lambda + (-1)^1\binom{n}{1}a_n\lambda\binom{n-1}{n-3}\lambda^2 + a_n\binom{n}{n-3}\lambda^3 = \left(-\frac{n(n-1)(n-2)}{6} + \frac{n(n-1)(n-2)}{2} - \frac{n(n-1)(n-2)}{2} + \frac{n(n-1)(n-2)}{6}\right)a_n\lambda^3 = 0 \tag{15.417}$$

at the expense again of Eqs. (2.240) and (15.414). In other words, all polynomials in parentheses in Eq. (15.410) turn nil – see e.g. Eqs. (15.415)–(15.417), and extrapolation for i ≥ 4 – so Eq. (15.410) reduces to just

$$a_n\frac{d^n\xi}{dx^n} = 0, \tag{15.418}$$

or else

$$\frac{d^n\xi}{dx^n} \equiv \frac{d}{dx}\left(\frac{d^{n-1}\xi}{dx^{n-1}}\right) = 0 \tag{15.419}$$

for an ≠ 0; Eq. (15.419) is suitable for integration via separation of variables, viz.

$$\int d\left(\frac{d^{n-1}\xi}{dx^{n-1}}\right) = \int 0\,dx, \tag{15.420}$$

that breaks down to

$$\frac{d^{n-1}\xi}{dx^{n-1}} \equiv \frac{d}{dx}\left(\frac{d^{n-2}\xi}{dx^{n-2}}\right) = B_1 \tag{15.421}$$

– where B1 denotes a constant. Further integration of Eq. (15.421) is possible again via separation of variables, viz.

$$\int d\left(\frac{d^{n-2}\xi}{dx^{n-2}}\right) = \int B_1\,dx, \tag{15.422}$$

which degenerates to

$$\frac{d^{n-2}\xi}{dx^{n-2}} = B_1 x + B_2 \tag{15.423}$$

– where B2 denotes another constant; this process may be iterated another n − 2 times, until attaining

$$\xi = B_1\frac{x^{n-1}}{(n-1)!} + B_2\frac{x^{n-2}}{(n-2)!} + \cdots + B_{n-1}x + B_n, \tag{15.424}$$

or else

$$\xi\{x\} = \sum_{i=1}^{n} B_i\frac{x^{n-i}}{(n-i)!}. \tag{15.425}$$

Since Bi/(n − i)! is but an (arbitrary) constant itself, say, Ci, Eq. (15.425) may be simplified to

$$\xi\{x\} = \sum_{i=1}^{n} C_i x^{n-i} \tag{15.426}$$

– so the corrective function in Eq. (15.403) reduces to a polynomial; the final solution will thus look like

$$y = \exp\{\lambda x\}\sum_{i=1}^{n} C_i x^{n-i}, \tag{15.427}$$

in the case λ denotes a root of Eq. (15.400) bearing multiplicity n – where n arbitrary constants are again at stake, accompanying as many independent functions. Note that Eq. (15.183) represents a particular case of Eq. (15.427), characterized specifically by n = 2. If Eq. (15.394) takes instead the form

$$a_n x^n\frac{d^n y}{dx^n} + a_{n-1}x^{n-1}\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_1 x\frac{dy}{dx} + a_0 y = 0 \tag{15.428}$$

with a0, a1, …, an denoting constants, then Euler’s differential equation results; an auxiliary independent variable may to advantage be defined as

$$x \equiv e^z. \tag{15.429}$$

Every derivative of y with regard to x may now be calculated via the chain differentiation rule – see Eq. (10.205), with z as intermediate variable – coupled with the rule of differentiation of the inverse function, as conveyed by Eq. (10.247). For the first-order derivative, one accordingly finds

$$\frac{dy}{dx} = \frac{dy}{dz}\frac{dz}{dx} = \frac{dy/dz}{dx/dz}, \tag{15.430}$$

where combination with Eq. (15.429) unfolds

$$\frac{dy}{dx} = \frac{1}{e^z}\frac{dy}{dz} = \frac{1}{x}\frac{dy}{dz} \tag{15.431}$$

– so one readily concludes on the equivalence

$$\frac{d}{dx}y = \frac{1}{x}\frac{d}{dz}y, \tag{15.432}$$

in differential operator form. The second-order derivative will then ensue as

$$\frac{d^2 y}{dx^2} \equiv \frac{d}{dx}\left(\frac{dy}{dx}\right) = \frac{1}{x}\frac{d}{dz}\left(\frac{1}{x}\frac{dy}{dz}\right), \tag{15.433}$$

after taking Eqs. (15.431) and (15.432) into account; the rule of differentiation of a product then supports

$$\frac{d^2 y}{dx^2} = \frac{1}{x}\left(\frac{d}{dz}\left\{\frac{1}{x}\right\}\frac{dy}{dz} + \frac{1}{x}\frac{d}{dz}\left(\frac{dy}{dz}\right)\right), \tag{15.434}$$

which transforms to

$$\frac{d^2 y}{dx^2} = \frac{1}{x}\left(-\frac{1}{x^2}\frac{dx}{dz}\frac{dy}{dz} + \frac{1}{x}\frac{d^2 y}{dz^2}\right) \tag{15.435}$$

– again via the chain differentiation rule, coupled with the definition of higher-order derivatives. In view of Eq. (15.429), one obtains

$$\frac{d^2 y}{dx^2} = \frac{1}{x}\left(-\frac{1}{x^2}e^z\frac{dy}{dz} + \frac{1}{x}\frac{d^2 y}{dz^2}\right) \tag{15.436}$$

from Eq. (15.435), or else

$$\frac{d^2 y}{dx^2} = \frac{1}{x}\left(-\frac{1}{x}\frac{dy}{dz} + \frac{1}{x}\frac{d^2 y}{dz^2}\right) = \frac{1}{x^2}\left(\frac{d^2 y}{dz^2} - \frac{dy}{dz}\right) \tag{15.437}$$

at the expense of Eq. (15.429), where 1/x was meanwhile factored out. Multiplication of both sides of Eq. (15.437) by x² finally yields

$$x^2\frac{d^2 y}{dx^2} = \frac{d^2 y}{dz^2} - \frac{dy}{dz}, \tag{15.438}$$

which may be rephrased as

$$x^2\frac{d^2}{dx^2}y = \left(\frac{d^2}{dz^2} - \frac{d}{dz}\right)y = \frac{d}{dz}\left(\frac{d}{dz} - 1\right)y \tag{15.439}$$

in operator notation. The third-order derivative can similarly be ascertained as

$$\frac{d^3 y}{dx^3} \equiv \frac{d}{dx}\left(\frac{d^2 y}{dx^2}\right) = \frac{1}{x}\frac{d}{dz}\left(\frac{d^2 y}{dx^2}\right) = \frac{1}{x}\frac{d}{dz}\left(\frac{1}{x^2}\left(\frac{d^2 y}{dz^2} - \frac{dy}{dz}\right)\right), \tag{15.440}$$

after taking Eqs. (15.432) and (15.437) into account; the classical theorems of differentiation then permit transformation to

$$\frac{d^3 y}{dx^3} = \frac{1}{x}\left(\frac{d}{dz}\left\{\frac{1}{x^2}\right\}\left(\frac{d^2 y}{dz^2} - \frac{dy}{dz}\right) + \frac{1}{x^2}\frac{d}{dz}\left(\frac{d^2 y}{dz^2} - \frac{dy}{dz}\right)\right) \tag{15.441}$$

– where the chain differentiation rule and the definition of higher-order derivatives justify

$$\frac{d^3 y}{dx^3} = \frac{1}{x}\left(-\frac{2}{x^3}\frac{dx}{dz}\left(\frac{d^2 y}{dz^2} - \frac{dy}{dz}\right) + \frac{1}{x^2}\left(\frac{d^3 y}{dz^3} - \frac{d^2 y}{dz^2}\right)\right). \tag{15.442}$$

After recalling Eq. (15.429) and the distributive property, one gets

$$\frac{d^3 y}{dx^3} = \frac{1}{x}\left(-\frac{2}{x^3}e^z\left(\frac{d^2 y}{dz^2} - \frac{dy}{dz}\right) + \frac{1}{x^2}\left(\frac{d^3 y}{dz^3} - \frac{d^2 y}{dz^2}\right)\right) \tag{15.443}$$

from Eq. (15.442), where insertion again of Eq. (15.429) permits simplification to

$$\frac{d^3 y}{dx^3} = \frac{1}{x}\left(-\frac{2}{x^2}\frac{d^2 y}{dz^2} + \frac{2}{x^2}\frac{dy}{dz} + \frac{1}{x^2}\frac{d^3 y}{dz^3} - \frac{1}{x^2}\frac{d^2 y}{dz^2}\right) = \frac{1}{x^3}\left(\frac{d^3 y}{dz^3} - 2\frac{d^2 y}{dz^2} + 2\frac{dy}{dz} - \frac{d^2 y}{dz^2}\right), \tag{15.444}$$

together with elimination of inner parentheses and factoring out of 1/x² afterward; condensation of terms alike, coupled with multiplication of both sides by x³, yields

$$x^3\frac{d^3 y}{dx^3} = \frac{d^3 y}{dz^3} - 3\frac{d^2 y}{dz^2} + 2\frac{dy}{dz} \tag{15.445}$$

– which may be rewritten in composite operator form as

$$x^3\frac{d^3}{dx^3}y = \left(\frac{d^3}{dz^3} - 3\frac{d^2}{dz^2} + 2\frac{d}{dz}\right)y = \frac{d}{dz}\left(\frac{d}{dz} - 1\right)\left(\frac{d}{dz} - 2\right)y. \tag{15.446}$$

One concludes, in general, that

$$x^i\frac{d^i y}{dx^i} = \left[\prod_{j=0}^{i-1}\left(\frac{d}{dz} - j\right)\right]y \tag{15.447}$$

based on Eqs. (15.432), (15.439), (15.446), and beyond – where the right-hand side is a differential expression characterized by constant coefficients; therefore, Eq. (15.428) simplifies to

$$\cdots + a_4\left(\frac{d^4 y}{dz^4} - 6\frac{d^3 y}{dz^3} + 11\frac{d^2 y}{dz^2} - 6\frac{dy}{dz}\right) + a_3\left(\frac{d^3 y}{dz^3} - 3\frac{d^2 y}{dz^2} + 2\frac{dy}{dz}\right) + a_2\left(\frac{d^2 y}{dz^2} - \frac{dy}{dz}\right) + a_1\frac{dy}{dz} + a_0 y = 0. \tag{15.448}$$

Algebraic combination of similar differential terms transforms Eq. (15.448) to

$$\cdots + a_4\frac{d^4 y}{dz^4} + \left(\cdots - 6a_4 + a_3\right)\frac{d^3 y}{dz^3} + \left(\cdots + 11a_4 - 3a_3 + a_2\right)\frac{d^2 y}{dz^2} + \left(\cdots - 6a_4 + 2a_3 - a_2 + a_1\right)\frac{dy}{dz} + a_0 y = 0; \tag{15.449}$$

hence, an equation of the type of Eq. (15.395) results, with solution given by Eq. (15.401) or Eq. (15.427). The characteristic equation associated with Eq. (15.449) reads indeed

$$\cdots + a_4\lambda^4 + \left(\cdots - 6a_4 + a_3\right)\lambda^3 + \left(\cdots + 11a_4 - 3a_3 + a_2\right)\lambda^2 + \left(\cdots - 6a_4 + 2a_3 - a_2 + a_1\right)\lambda + a_0 = 0; \tag{15.450}$$

once its solutions λi (i = 1, 2, …, n) are found – and should they all be distinct from each other – one may resort to Eq. (15.401) to write

$$y = \sum_{i=1}^{n} A_i\exp\{\lambda_i z\} \tag{15.451}$$


– bearing constants Ai (i = 1, 2, …, n) and parameters λi ≡ λi{a0, a1, …, an}. If Eq. (15.429) is revisited as

$$z = \ln x, \tag{15.452}$$

one will obtain

$$y = \sum_{i=1}^{n} A_i\exp\{\lambda_i\ln x\} \tag{15.453}$$

from Eq. (15.451) – or else

$$y = \sum_{i=1}^{n} A_i\exp\left\{\ln x^{\lambda_i}\right\} \tag{15.454}$$

as permitted by Eq. (2.25); one may reformulate Eq. (15.454) to

$$y = \sum_{i=1}^{n} A_i x^{\lambda_i} \tag{15.455}$$

because an exponential has been composed with its inverse function, i.e. a logarithm. Instead of changing the independent variable x to z, as suggested per Eq. (15.429), one may proceed otherwise and postulate a specific form for the solution of Eq. (15.428), inspired by Eq. (15.455), namely

$$y \equiv x^\lambda; \tag{15.456}$$

here λ denotes a parameter to be determined. Sequential differentiation of Eq. (15.456) with regard to x gives rise to

$$\frac{dy}{dx} = \lambda x^{\lambda-1}$$
$$\frac{d^2 y}{dx^2} \equiv \frac{d}{dx}\left(\frac{dy}{dx}\right) = \lambda(\lambda-1)x^{\lambda-2}$$
$$\frac{d^3 y}{dx^3} \equiv \frac{d}{dx}\left(\frac{d^2 y}{dx^2}\right) = \lambda(\lambda-1)(\lambda-2)x^{\lambda-3}$$
$$\vdots \tag{15.457}$$
$$\frac{d^i y}{dx^i} \equiv \frac{d}{dx}\left(\frac{d^{i-1} y}{dx^{i-1}}\right) = \left[\prod_{j=0}^{i-1}(\lambda-j)\right]x^{\lambda-i}$$
$$\vdots$$
$$\frac{d^n y}{dx^n} \equiv \frac{d}{dx}\left(\frac{d^{n-1} y}{dx^{n-1}}\right) = \left[\prod_{j=0}^{n-1}(\lambda-j)\right]x^{\lambda-n}$$

– based on Eq. (10.29). Insertion of Eqs. (15.456) and (15.457) supports transformation of Eq. (15.428) to

$$a_n x^n\left[\prod_{j=0}^{n-1}(\lambda-j)\right]x^{\lambda-n} + a_{n-1}x^{n-1}\left[\prod_{j=0}^{n-2}(\lambda-j)\right]x^{\lambda-n+1} + \cdots + a_1 x\,\lambda x^{\lambda-1} + a_0 x^\lambda = 0, \tag{15.458}$$

where lumping of powers of x yields

$$a_n x^\lambda\prod_{j=0}^{n-1}(\lambda-j) + a_{n-1}x^\lambda\prod_{j=0}^{n-2}(\lambda-j) + \cdots + a_1 x^\lambda\lambda + a_0 x^\lambda = 0; \tag{15.459}$$

division of both sides by x^λ ≠ 0 is now in order, viz.

$$a_n\prod_{j=0}^{n-1}(\lambda-j) + a_{n-1}\prod_{j=0}^{n-2}(\lambda-j) + \cdots + a_1\lambda + a_0 = 0, \tag{15.460}$$

where the λi’s in Eq. (15.455) are but the solutions of Eq. (15.460). Due to the nature of this very equation – i.e. an nth-degree polynomial in λ made equal to zero – a total of n solutions are expected, say, λ1, λ2, …, λn; hence, the general solution of Eq. (15.428) will look like Eq. (15.455), i.e. a linear combination of all solutions of the form entailed by Eq. (15.456). To further illustrate the equivalence between the two modes of solution, one may turn the last terms of Eq. (15.460) explicit, viz.

$$\cdots + a_4\lambda(\lambda-1)(\lambda-2)(\lambda-3) + a_3\lambda(\lambda-1)(\lambda-2) + a_2\lambda(\lambda-1) + a_1\lambda + a_0 = 0; \tag{15.461}$$

performance of the products as indicated unfolds

$$\cdots + a_4\left(\lambda^4 - 6\lambda^3 + 11\lambda^2 - 6\lambda\right) + a_3\left(\lambda^3 - 3\lambda^2 + 2\lambda\right) + a_2\left(\lambda^2 - \lambda\right) + a_1\lambda + a_0 = 0 \tag{15.462}$$

– which, after collapsing terms alike, retrieves Eq. (15.450). Since the very same characteristic equation is found, one concludes that the solutions obtained via either approach do coincide with each other.

15.2 Partial Differential Equations

Partial differential equations share the general form

$$F\left\{x_1, x_2, \ldots, x_N,\, u,\, \frac{\partial u}{\partial x_1}, \frac{\partial u}{\partial x_2}, \ldots, \frac{\partial u}{\partial x_N},\, \frac{\partial^2 u}{\partial x_1^2}, \frac{\partial^2 u}{\partial x_1\partial x_2}, \ldots, \frac{\partial^2 u}{\partial x_1\partial x_N},\, \frac{\partial^2 u}{\partial x_2^2}, \ldots, \frac{\partial^2 u}{\partial x_2\partial x_N}, \ldots, \frac{\partial^2 u}{\partial x_N\partial x_1}, \ldots, \frac{\partial^2 u}{\partial x_N^2}, \ldots\right\} = 0, \tag{15.463}$$

where x1, x2, …, xN denote independent variables, u denotes the dependent variable, and F denotes a set of algebraic operations. Classical approaches to enzyme reaction engineering have chiefly dealt with first-order partial differential equations, i.e.

$$F\{x, u, \nabla u\} = 0, \tag{15.464}$$

with x denoting from 2 up to 4 variables (3 of space plus 1 of time), and ∇ entailing (at least) one space variable – but possibly including time as well.


In practice, however, most problems encompass just two variables – one space coordinate (say, x denoting length or, equivalently, V denoting volume spanned) and the time coordinate (say, t) – thus implying presence of just ∂u/∂x and ∂u/∂t as partial derivatives; their differential coefficients are normally of the form a{x,t} and b{x,t}, respectively, so the result turns into a linear partial differential equation – while the dependent variable itself may appear in nonlinear form, say, f{x,t,u}. The aforementioned realization justifies an approach focused on (nonhomogeneous) quasilinear first-order partial differential equations – in which case Eq. (15.464) degenerates to

$$a\{x,t\}\frac{\partial u\{x,t\}}{\partial x} + b\{x,t\}\frac{\partial u\{x,t\}}{\partial t} = f\{x,t,u\}; \tag{15.465}$$

one will further assume that

$$u \equiv u\{x,t\} \tag{15.466}$$

denotes a smooth, or continuous solution of Eq. (15.465). The set of points (x,t,u) satisfying Eq. (15.465) define indeed a solution surface, S – designated so because Eq. (15.466) describes a three-dimensional surface of u on x and t; Eq. (15.466) often appears as

$$u\{x,t\} - u = 0. \tag{15.467}$$

The (postulated) smoothness of said solution means that S holds a tangent plane at each point (x,t,u) on said surface, abiding to

$$\frac{\partial u}{\partial x}dx + \frac{\partial u}{\partial t}dt - du = 0 \tag{15.468}$$

– obtained from Eq. (15.467) after applying the concept of differential as per Eq. (10.6) – as well as a normal vector n at each such point, possessing coordinates

$$\boldsymbol{n} \equiv \boldsymbol{n}\left\{\frac{\partial u}{\partial x}, \frac{\partial u}{\partial t}, -1\right\}. \tag{15.469}$$

Consider now a three-dimensional curve C, parametrically described by

$$C \equiv C\{x\{s\}, t\{s\}, u\{s\}\} \tag{15.470}$$

with s denoting a parameter, which plays the role of solution curve for the set of equations

$$\frac{dx}{ds} = a\{x,t\}, \tag{15.471}$$

$$\frac{dt}{ds} = b\{x,t\}, \tag{15.472}$$

and

$$\frac{du}{ds} = f\{x,t,u\}; \tag{15.473}$$

if T denotes a vector tangent to C at (x,t,u), then such a vector should be characterized by

$$\boldsymbol{T} \equiv \boldsymbol{T}\{a, b, f\}. \tag{15.474}$$

The scalar product of T by n reads

$$\boldsymbol{T}\cdot\boldsymbol{n} = a\frac{\partial u}{\partial x} + b\frac{\partial u}{\partial t} + f(-1) = a\frac{\partial u}{\partial x} + b\frac{\partial u}{\partial t} - f \tag{15.475}$$

based on Eqs. (15.469) and (15.474), after retrieving the property of the scalar product conveyed by Eq. (3.95); in view of Eq. (15.465), one may simplify Eq. (15.475) to

$$\boldsymbol{T}\cdot\boldsymbol{n} = 0. \tag{15.476}$$

Recalling the definition of scalar product as per Eq. (3.53), one may rewrite Eq. (15.476) as

$$\|\boldsymbol{T}\|\,\|\boldsymbol{n}\|\cos\angle(\boldsymbol{T},\boldsymbol{n}) = 0, \tag{15.477}$$

which readily implies

$$\cos\angle(\boldsymbol{T},\boldsymbol{n}) = 0 \tag{15.478}$$

because both T and n possess some finite length, by hypothesis; hence, ∠(T,n) = π/2, i.e. T ⊥ n. In other words, T lies on the plane tangent to the surface S – which, in turn, guarantees that C lies on S; solution curves C of Eq. (15.465) lie, in general, on the solution surface S associated therewith, and are called characteristic curves. Furthermore, if C is a solution curve for Eqs. (15.471)–(15.473), then Eq. (15.465) may be redone as

$$\frac{dx}{ds}\frac{\partial u\{x,t\}}{\partial x} + \frac{dt}{ds}\frac{\partial u\{x,t\}}{\partial t} = a\{x,t\}\frac{\partial u\{x,t\}}{\partial x} + b\{x,t\}\frac{\partial u\{x,t\}}{\partial t} = f\{x,t,u\} = \frac{du}{ds} \tag{15.479}$$

– so the partial differential equation reduces to an ordinary differential equation along C; in general, the equations for C must be solved as a system. On the other hand, a vector-valued function V, defined as

$$\boldsymbol{V} \equiv \boldsymbol{V}\{P\{x,t,u\}, Q\{x,t,u\}, R\{x,t,u\}\}, \tag{15.480}$$

is called a vector field if P, Q, and R are all smooth functions – and, in addition, P² + Q² + R² ≠ 0; a space curve C as per Eq. (15.470) is termed integral curve, or trajectory, for V, provided that V is tangent to C at every point, i.e. if

$$\frac{dx}{ds} = P\{x,t,u\}, \tag{15.481}$$

$$\frac{dt}{ds} = Q\{x,t,u\}, \tag{15.482}$$

and

$$\frac{du}{ds} = R\{x,t,u\} \tag{15.483}$$

– where elimination of ds among Eqs. (15.481)–(15.483) unfolds

$$\frac{dx}{P} = \frac{dt}{Q} = \frac{du}{R}. \tag{15.484}$$
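To make Eqs. (15.481)–(15.484) concrete, the sketch below (plain Python) integrates the trajectory equations for a sample field with components P = x, Q = t, and R = 1/u, by the classical fourth-order Runge–Kutta method, and checks that the combinations x/t and ln x − u²/2 remain constant along the computed trajectory; such conserved combinations are exactly the first integrals formalized next. The starting point and step size are arbitrary choices for the check.

```python
from math import log

def rk4_step(state, h, rhs):
    # one classical 4th-order Runge-Kutta step for d(state)/ds = rhs(state)
    k1 = rhs(state)
    k2 = rhs([s + 0.5*h*k for s, k in zip(state, k1)])
    k3 = rhs([s + 0.5*h*k for s, k in zip(state, k2)])
    k4 = rhs([s + h*k for s, k in zip(state, k3)])
    return [s + h*(p + 2*q + 2*r + w)/6 for s, p, q, r, w in zip(state, k1, k2, k3, k4)]

# sample field with components P = x, Q = t, R = 1/u, as per Eqs. (15.481)-(15.483)
rhs = lambda st: [st[0], st[1], 1.0/st[2]]

state, h = [2.0, 1.0, 1.5], 0.01            # starting point (x, t, u), step size
phi1_0 = state[0] / state[1]                 # x/t at the starting point
phi2_0 = log(state[0]) - state[2]**2 / 2     # ln x - u^2/2 at the starting point

for _ in range(200):                         # march the trajectory up to s = 2
    state = rk4_step(state, h, rhs)

x, t, u = state
assert abs(x/t - phi1_0) < 1e-8              # x/t stays constant along the trajectory
assert abs(log(x) - u**2/2 - phi2_0) < 1e-6  # ln x - u^2/2 stays constant as well
```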


Furthermore, a function φ, defined by

$$\varphi \equiv \varphi\{x,t,u\}, \tag{15.485}$$

is said to be a first integral for V as per Eq. (15.480) when

$$P\{x,t,u\}\frac{\partial\varphi}{\partial x} + Q\{x,t,u\}\frac{\partial\varphi}{\partial t} + R\{x,t,u\}\frac{\partial\varphi}{\partial u} = 0; \tag{15.486}$$

the trajectories C for V will accordingly be found by representing C as the intersection of level surfaces of first integrals. For instance, a first integral of V{x,t,u⁻¹} should abide to

$$x\frac{\partial\varphi}{\partial x} + t\frac{\partial\varphi}{\partial t} + \frac{1}{u}\frac{\partial\varphi}{\partial u} = 0 \tag{15.487}$$

in view of Eq. (15.486), so a solution φ may be obtained via

$$\frac{dx}{x} = \frac{dt}{t} = u\,du, \tag{15.488}$$

consistent with Eq. (15.484); remember the equivalence between Eq. (15.488) and

$$\frac{dx}{x} = \frac{dt}{t}, \tag{15.489}$$

coupled with

$$\frac{dx}{x} = u\,du \tag{15.490}$$

as system of ordinary differential equations. Equation (15.489) integrates to

$$\int\frac{dx}{x} = \int\frac{dt}{t}, \tag{15.491}$$

or else

$$\ln x = k_1 + \ln t \tag{15.492}$$

with k1 denoting an arbitrary constant – which simplifies to

$$\frac{x}{t} = k_2, \tag{15.493}$$

after taking exponentials of both sides (with k2 ≡ e^{k1}) and isolating arbitrary constant k2; by the same token, Eq. (15.490) gives rise to

$$\int\frac{dx}{x} = \int u\,du \tag{15.494}$$

upon integration, which readily yields

$$\ln x = k_3 + \frac{u^2}{2} \tag{15.495}$$

that can be recoined as

$$k_3 = \ln x - \frac{u^2}{2}, \tag{15.496}$$

with k3 denoting another arbitrary constant. Therefore,

$$\varphi_1\{x,t,u\} = \frac{x}{t} \tag{15.497}$$

based on Eq. (15.493), together with

$$\varphi_2\{x,t,u\} = \ln x - \frac{u^2}{2} \tag{15.498}$$

stemming from Eq. (15.496), play the role of first integrals for V{x,t,u⁻¹}; for any smooth function F of two variables,

$$\varphi_3\{x,t,u\} \equiv F\{\varphi_1\{x,t,u\}, \varphi_2\{x,t,u\}\} \tag{15.499}$$

will also represent a first integral of V – where φ3 may be viewed as an implicit representation of the most general solution to the departing partial differential equation. Nevertheless, one is seldom interested in finding the said most general solution – but rather in obtaining a solution to Eq. (15.465) that satisfies an additional algebraic condition of the form

$$u\{x,t\}\Big|_{C_i:\;x\,\equiv\,x\{\xi\},\;t\,\equiv\,t\{\xi\}} = g\{x,t\}, \tag{15.500}$$

where the initial curve Ci and function g are given in advance; Eq. (15.500) has classically been labeled as Cauchy’s initial condition, so the problem of finding a solution to Eqs. (15.465) and (15.500) is termed Cauchy’s initial value problem. An example of practical interest encompasses

$$a\frac{\partial u\{x,t\}}{\partial x} + \frac{\partial u\{x,t\}}{\partial t} = -cuH\{at - x\} \tag{15.501}$$

with a and c denoting appropriate parameters – describing e.g. a plug flow reactor operated with bulk fluid at constant velocity a, bringing about a pseudo-first order chemical reaction described by kinetic constant c and catalyzed by a (stable) enzyme immobilized therein; an initial condition may read

$$u\{x,0\} = u_0, \tag{15.502}$$

corresponding to a reactor initially filled with substrate at concentration u0, and fed thereafter with substrate also at concentration u0. The vector field reads, in this case,

$$\boldsymbol{V} \equiv \boldsymbol{V}\{a, 1, -cuH\{at - x\}\}, \tag{15.503}$$

so a first integral φ should satisfy

$$a\frac{\partial\varphi}{\partial x} + \frac{\partial\varphi}{\partial t} = cuH\{at - x\}\frac{\partial\varphi}{\partial u}; \tag{15.504}$$

the characteristic curves will then be given by the solutions of

$$\frac{dx}{ds} = a, \tag{15.505}$$

$$\frac{dt}{ds} = 1, \tag{15.506}$$

and

$$\frac{du}{ds} = -cuH\{at - x\}. \tag{15.507}$$

Upon elimination of ds between Eqs. (15.505) and (15.506), one gets

$$\frac{dx}{a} = dt \tag{15.508}$$

that integrates to

$$\int dx = a\int dt; \tag{15.509}$$

this is equivalent to writing

$$x = k_1 + at, \tag{15.510}$$

or else

$$\varphi_1\{x,t\} = x - at = k_1 \tag{15.511}$$

after isolation of constant k1 – so a first integral arises. If ds is instead eliminated between Eqs. (15.505) and (15.507), then one gets

$$\frac{dx}{a} = -\frac{du}{cuH\{at - x\}} \tag{15.512}$$

that promptly leads to

$$-\frac{c}{a}\int H\{at - x\}\,dx = \int\frac{du}{u} \tag{15.513}$$

after integrating via separation of variables; Eq. (15.513) readily becomes

$$-\frac{c}{a}xH\{at - x\} = k_2 + \ln u \tag{15.514}$$

with constant k2, which may be rewritten as

$$\varphi_2\{x,t,u\} = -c\frac{x}{a}H\left\{t - \frac{x}{a}\right\} - \ln u = k_2 \tag{15.515}$$

– thus giving rise to another first integral. The most general solution, φ3, for Eq. (15.501) will thus look like

$$\varphi_3\{x,t,u\} = F\left\{x - at,\; -c\frac{x}{a}H\left\{t - \frac{x}{a}\right\} - \ln u\right\} \tag{15.516}$$

using Eq. (15.499) as template, and following combination with Eqs. (15.511) and (15.515). Reformulation of Eq. (15.516) unfolds

$$\ln u = -c\frac{x}{a}H\left\{t - \frac{x}{a}\right\} - g\{x - at\}, \tag{15.517}$$

where g denotes an arbitrary function (to be determined) of x − at, which is easily retrieved after equating the two constants k1 and k2 with the aid of Eqs. (15.511) and (15.515), i.e.

$$-c\frac{x}{a}H\left\{t - \frac{x}{a}\right\} - \ln u = k_2 \equiv g\{k_1\} = g\{x - at\}. \tag{15.518}$$

After taking exponentials of both sides, Eq. (15.517) produces

$$u = \exp\left\{-c\frac{x}{a}H\left\{t - \frac{x}{a}\right\} - g\{x - at\}\right\}; \tag{15.519}$$

utilization of the initial condition conveyed by Eq. (15.502) turns Eq. (15.519) to

$$u_0 = u\big|_{t=0} = \exp\left\{-c\frac{x}{a}H\left\{t - \frac{x}{a}\right\} - g\{x - at\}\right\}\bigg|_{t=0}; \tag{15.520}$$

elementary algebraic manipulation – with H{−x/a} being nil for every x > 0 – unfolds merely

$$u_0 = \exp\{-g\{x\}\}. \tag{15.521}$$

Upon taking logarithms of both sides, and isolating g{x} afterward, Eq. (15.521) becomes

$$g\{x\} = -\ln u_0; \tag{15.522}$$

since x can be taken as dummy variable in Eq. (15.522), one obtains

$$g\{x - at\} = -\ln u_0 \tag{15.523}$$

after having x replaced by x − at, due to constancy of the function at stake. The final solution may be attained via insertion of Eq. (15.523) in Eq. (15.519), according to

$$u = \exp\left\{-c\frac{x}{a}H\left\{t - \frac{x}{a}\right\} - (-\ln u_0)\right\} = \exp\left\{\ln u_0 - c\frac{x}{a}H\left\{t - \frac{x}{a}\right\}\right\}. \tag{15.524}$$

After simplifying Eq. (15.524) to

$$u = \exp\{\ln u_0\}\exp\left\{-c\frac{x}{a}H\left\{t - \frac{x}{a}\right\}\right\} \tag{15.525}$$

– which may also appear as

$$u\{x,t\} = u_0\exp\left\{-c\frac{x}{a}H\left\{t - \frac{x}{a}\right\}\right\}, \tag{15.526}$$

a bivariate functionality is obtained for u. Therefore, the original (uniform) distribution of u, i.e. u0, appears (downwardwise) distorted as fluid advances along the reactor – but becomes independent of time after an initial transient period, of amplitude x/a for each position x, has elapsed.
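The closed form in Eq. (15.526) can be exercised numerically; the sketch below (plain Python; the parameter values a = 2, c = 0.5, and u₀ = 1 are chosen arbitrarily) confirms both limiting behaviors just described, namely the undisturbed initial condition ahead of the moving front, and the time-independent exponential profile behind it:

```python
from math import exp

a, c, u0 = 2.0, 0.5, 1.0                    # velocity, kinetic constant, feed concentration

H = lambda s: 1.0 if s >= 0 else 0.0        # Heaviside's step function

def u(x, t):
    # Eq. (15.526): u{x,t} = u0 exp{-c (x/a) H{t - x/a}}
    return u0 * exp(-c * (x / a) * H(t - x / a))

x = 3.0                                     # position such that x/a = 1.5
assert u(x, 1.0) == u0                      # front not yet arrived (t < x/a): still u0
assert abs(u(x, 2.0) - u0 * exp(-c * x / a)) < 1e-12   # steady profile behind the front
assert u(x, 2.0) == u(x, 10.0)              # and independent of time thereafter
```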

16 Vector Calculus

Vector calculus is a branch of mathematics concerned with differentiation (and integration, as inverse thereof) of vector fields, especially in the three-dimensional Euclidean space; it was originally developed by American mathematician and physicist Josiah W. Gibbs and English (self-taught) electrical engineer and mathematician Oliver Heaviside, toward the end of the nineteenth century – although most notation and terminology had been established by J. Gibbs, together with Edwin B. Wilson, in the beginning of the following century. Its relevance arises from the fact that three independent directions exist in space (unlike what happens with time) – which may to advantage be handled together, via a more concise nomenclature that facilitates expression and derivation; coupled with the realization that gradients in space are experienced by many physicochemical quantities, and are thus of the utmost relevance for process engineers.

16.1 Rectangular Coordinates

16.1.1 Definition and Representation

One of the pillars of analytical geometry is modeling of space concepts via analytical equations – which resort to vectors, in view of their being characterized by three components representing behaviors along as many independent directions in space; to avoid three-dimensional representations – difficult to plot, and even more difficult to interpret – such vectors should to advantage be expressed by a set of three numerical coordinates. Space referentials are therefore needed – with one origin, bearing coordinates (0,0,0) by default, and three independent directions of space (i.e. not lying on the same plane); the simplest way to accomplish this is using a set of three mutually perpendicular axes, each one associated with a vector of unit length (and oriented along its direction) – as done in a Cartesian plot (see Fig. 16.1). The set of unit vectors ix, iy, and iz are fixed in space, and are accordingly defined by constant directions, besides their constant lengths (equal to unity). Despite the universal nature of the above form of analytical representation – via coordinates (x,y,z) – other forms of representation have met with higher success when some particular form of axis- or point-based symmetry is present; this is the case of cylindrical or spherical coordinates, respectively (which will be discussed later) – useful in that they take advantage of said symmetries to simplify mathematical description.

Mathematics for Enzyme Reaction Kinetics and Reactor Performance, First Edition. F. Xavier Malcata. © 2019 John Wiley & Sons Ltd. Published 2019 by John Wiley & Sons Ltd.

Figure 16.1 Graphical representation of a general point P, via rectangular coordinates (x,y,z) in the three-dimensional space, and its projection P∗ on the y0z plane – with indication of the canonical base of unit, mutually orthogonal vectors for said system of coordinates, i.e. (ix, iy, iz). [Figure shows the three mutually perpendicular axes x, y, and z, with origin 0, unit vectors ix, iy, and iz, point P(x,y,z), and its projection P∗.]

16.1.2 Definition of Nabla Operator, ∇

In attempts to mathematically describe situations possessing a physicochemical significance, it is often necessary to resort to a vector u of coordinates (ux,uy,uz); u is thus termed a vector function, and may be represented in a Cartesian plot as

$$\boldsymbol{u} \equiv \boldsymbol{i}_x u_x + \boldsymbol{i}_y u_y + \boldsymbol{i}_z u_z \tag{16.1}$$

in parallel to Eq. (3.1). As mentioned above, ix, iy, and iz denote unit vectors in the x-, y-, and z-direction, respectively, as highlighted in Fig. 16.1 – while ux, uy, and uz denote the components of u in those directions; dependence of u on x, y, and z comes from association of vector u to point P(x,y,z). This type of vector function, i.e. u ≡ u{x,y,z}, is a more complete characterization of the behavior of entities in the three-dimensional space; in fact, it goes well beyond use of a scalar function, associated also with every point P(x,y,z) and denoted hereafter as ϕ{x,y,z} – but which consubstantiates a single number that itself does not hold x-, y-, and z-projections as descriptors. If ux, uy, and uz depend, in addition to x, y, and z, also on time variable t (i.e. ux ≡ ux{x,y,z,t}, uy ≡ uy{x,y,z,t}, and uz ≡ uz{x,y,z,t}), then one may define the time derivative of u as

$$\frac{\partial\boldsymbol{u}}{\partial t} \equiv \frac{\partial}{\partial t}\left(\boldsymbol{i}_x u_x + \boldsymbol{i}_y u_y + \boldsymbol{i}_z u_z\right) \tag{16.2}$$

with the aid of Eq. (16.1); since ix, iy, and iz are, by hypothesis, fixed both in space and time, Eq. (16.2) reduces to

$$\frac{\partial\boldsymbol{u}}{\partial t} = \boldsymbol{i}_x\frac{\partial u_x}{\partial t} + \boldsymbol{i}_y\frac{\partial u_y}{\partial t} + \boldsymbol{i}_z\frac{\partial u_z}{\partial t} \tag{16.3}$$

as per the rule of differentiation of a product. A related result pertains to computation of the scalar product of u by ∂u/∂t – based on the definitions conveyed by Eqs. (16.1) and (16.3), viz.

$$\boldsymbol{u}\cdot\frac{\partial\boldsymbol{u}}{\partial t} = \left(\boldsymbol{i}_x u_x + \boldsymbol{i}_y u_y + \boldsymbol{i}_z u_z\right)\cdot\left(\boldsymbol{i}_x\frac{\partial u_x}{\partial t} + \boldsymbol{i}_y\frac{\partial u_y}{\partial t} + \boldsymbol{i}_z\frac{\partial u_z}{\partial t}\right); \tag{16.4}$$

recalling Eq. (3.95), one finds that

$$\boldsymbol{u}\cdot\frac{\partial\boldsymbol{u}}{\partial t} = u_x\frac{\partial u_x}{\partial t} + u_y\frac{\partial u_y}{\partial t} + u_z\frac{\partial u_z}{\partial t}. \tag{16.5}$$

The rules of differentiation of a power as per Eq. (10.29), and of a composite function as per Eq. (10.205), allow one to write

$$\boldsymbol{u}\cdot\frac{\partial\boldsymbol{u}}{\partial t} = \frac{1}{2}\frac{\partial u_x^2}{\partial t} + \frac{1}{2}\frac{\partial u_y^2}{\partial t} + \frac{1}{2}\frac{\partial u_z^2}{\partial t} \tag{16.6}$$

based on Eq. (16.5), so factoring out of (constant) ½ and (operator) ∂/∂t leads to

$$\boldsymbol{u}\cdot\frac{\partial\boldsymbol{u}}{\partial t} = \frac{1}{2}\frac{\partial}{\partial t}\left(u_x^2 + u_y^2 + u_z^2\right); \tag{16.7}$$

the content of the parenthesis may be written as the scalar product of u by itself, viz.

$$\boldsymbol{u}\cdot\frac{\partial\boldsymbol{u}}{\partial t} = \frac{1}{2}\frac{\partial}{\partial t}\left[\left(\boldsymbol{i}_x u_x + \boldsymbol{i}_y u_y + \boldsymbol{i}_z u_z\right)\cdot\left(\boldsymbol{i}_x u_x + \boldsymbol{i}_y u_y + \boldsymbol{i}_z u_z\right)\right] \tag{16.8}$$

with the aid of Eq. (3.95) applied backward – or, in condensed form,

$$\boldsymbol{u}\cdot\frac{\partial\boldsymbol{u}}{\partial t} = \frac{1}{2}\frac{\partial}{\partial t}\left(\boldsymbol{u}\cdot\boldsymbol{u}\right). \tag{16.9}$$
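Eq. (16.9) may be checked numerically for any concrete vector function; the sketch below (using numpy, with a hypothetical u{t} = (sin t, cos t, t²) chosen only for the check) compares u·∂u/∂t with half the time derivative of u·u, both approximated by central differences:

```python
import numpy as np

u = lambda t: np.array([np.sin(t), np.cos(t), t**2])   # hypothetical vector function u{t}

t0, h = 0.7, 1e-6
dudt = (u(t0 + h) - u(t0 - h)) / (2 * h)               # central-difference du/dt

lhs = u(t0) @ dudt                                     # u . du/dt
rhs = 0.5 * (u(t0 + h) @ u(t0 + h) - u(t0 - h) @ u(t0 - h)) / (2 * h)  # (1/2) d(u.u)/dt

assert abs(lhs - rhs) < 1e-6
```

For this particular u{t}, both sides reduce analytically to 2t³, since the sin² and cos² contributions cancel.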

During formulation of several engineering problems, it is frequently necessary to associate, to every point (x,y,z) of a region R of space, either a scalar function ϕ{x,y,z} or a vector function u{x,y,z}, as emphasized above; in the former case, a scalar field is said to exist in R, whereas a vector field in R arises in the latter situation. If a scalar function exists and is differentiable at all points of space, then its gradient, grad ϕ, can be defined as

$$\operatorname{grad}\phi\equiv\boldsymbol{i}_{x}\frac{\partial\phi}{\partial x}+\boldsymbol{i}_{y}\frac{\partial\phi}{\partial y}+\boldsymbol{i}_{z}\frac{\partial\phi}{\partial z};\tag{16.10}$$

hence, ∂ϕ/∂x denotes the x-component of grad ϕ, and likewise for the meaning of ∂ϕ/∂y and ∂ϕ/∂z regarding the y- and z-components, respectively. Note that Eq. (16.10) itself defines a vector function, even though the original ϕ{x,y,z} was a scalar function; Eq. (16.10) may be rewritten at the expense of the general concept of (mathematical) operator, viz.

$$\operatorname{grad}\phi=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\phi,\tag{16.11}$$

where the term in parenthesis represents a vector differential operator applied as a whole to ϕ. The seminal importance of this operator in vector calculus led to launching of a special symbol, ∇ – termed nabla or del, and accordingly defined as

$$\nabla\equiv\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z};\tag{16.12}$$

Eq. (16.10) may thus alternatively be coined as

$$\operatorname{grad}\phi\equiv\nabla\phi\tag{16.13}$$


– the x-, y-, and z-components of which are the first partial derivatives of ϕ{x,y,z} with regard to x, y, and z, respectively. Hence, a gradient measures the rate of change of a scalar function along each direction of space – and thus maps a scalar field to a vector field. An inverse problem is to associate a scalar function to a vector field – as happens with the divergence of u, or div u for short, defined as

$$\operatorname{div}\boldsymbol{u}\equiv\frac{\partial u_{x}}{\partial x}+\frac{\partial u_{y}}{\partial y}+\frac{\partial u_{z}}{\partial z};\tag{16.14}$$

after recalling the concept of differential operator and the properties of the scalar product, Eq. (16.14) can appear as

$$\operatorname{div}\boldsymbol{u}=\frac{\partial}{\partial x}u_{x}+\frac{\partial}{\partial y}u_{y}+\frac{\partial}{\partial z}u_{z}=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\cdot\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right),\tag{16.15}$$

so Eq. (16.14) proves equivalent to

$$\operatorname{div}\boldsymbol{u}\equiv\nabla\cdot\boldsymbol{u}\tag{16.16}$$

with the aid of Eq. (16.12). Remember that the components of u in the x-, y-, and z-directions (i.e. ux, uy, and uz, respectively) are, in general, functions of x, y, and z as location descriptors of the application point of u; divergence accordingly measures the magnitude of a source or sink, at a given point, in the associated vector field – and thus maps a vector field to a scalar field. This is why div u, as a scalar function, is calculated as the sum of the derivatives of ux, uy, and uz with regard to x, y, and z, respectively. A similar concept encompasses associating a vector to a tensor, through calculation of its vectorial divergence – according to

$$\operatorname{div}\boldsymbol{\tau}\equiv\boldsymbol{i}_{x}\left(\frac{\partial\tau_{xx}}{\partial x}+\frac{\partial\tau_{yx}}{\partial y}+\frac{\partial\tau_{zx}}{\partial z}\right)+\boldsymbol{i}_{y}\left(\frac{\partial\tau_{xy}}{\partial x}+\frac{\partial\tau_{yy}}{\partial y}+\frac{\partial\tau_{zy}}{\partial z}\right)+\boldsymbol{i}_{z}\left(\frac{\partial\tau_{xz}}{\partial x}+\frac{\partial\tau_{yz}}{\partial y}+\frac{\partial\tau_{zz}}{\partial z}\right),\tag{16.17}$$

at the expense of Eq. (5.1); this equation can be rewritten as

$$\begin{aligned}\operatorname{div}\boldsymbol{\tau}=\;&\boldsymbol{i}_{x}\left(\frac{\partial\tau_{xx}}{\partial x}+0\frac{\partial\tau_{yx}}{\partial x}+0\frac{\partial\tau_{zx}}{\partial x}+0\frac{\partial\tau_{xx}}{\partial y}+\frac{\partial\tau_{yx}}{\partial y}+0\frac{\partial\tau_{zx}}{\partial y}+0\frac{\partial\tau_{xx}}{\partial z}+0\frac{\partial\tau_{yx}}{\partial z}+\frac{\partial\tau_{zx}}{\partial z}\right)\\
+\;&\boldsymbol{i}_{y}\left(\frac{\partial\tau_{xy}}{\partial x}+0\frac{\partial\tau_{yy}}{\partial x}+0\frac{\partial\tau_{zy}}{\partial x}+0\frac{\partial\tau_{xy}}{\partial y}+\frac{\partial\tau_{yy}}{\partial y}+0\frac{\partial\tau_{zy}}{\partial y}+0\frac{\partial\tau_{xy}}{\partial z}+0\frac{\partial\tau_{yy}}{\partial z}+\frac{\partial\tau_{zy}}{\partial z}\right)\\
+\;&\boldsymbol{i}_{z}\left(\frac{\partial\tau_{xz}}{\partial x}+0\frac{\partial\tau_{yz}}{\partial x}+0\frac{\partial\tau_{zz}}{\partial x}+0\frac{\partial\tau_{xz}}{\partial y}+\frac{\partial\tau_{yz}}{\partial y}+0\frac{\partial\tau_{zz}}{\partial y}+0\frac{\partial\tau_{xz}}{\partial z}+0\frac{\partial\tau_{yz}}{\partial z}+\frac{\partial\tau_{zz}}{\partial z}\right)\end{aligned}\tag{16.18}$$

or, in a more condensed version,

$$\operatorname{div}\boldsymbol{\tau}\equiv\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\cdot\left(\boldsymbol{i}_{x}\boldsymbol{i}_{x}\tau_{xx}+\boldsymbol{i}_{x}\boldsymbol{i}_{y}\tau_{xy}+\boldsymbol{i}_{x}\boldsymbol{i}_{z}\tau_{xz}+\boldsymbol{i}_{y}\boldsymbol{i}_{x}\tau_{yx}+\boldsymbol{i}_{y}\boldsymbol{i}_{y}\tau_{yy}+\boldsymbol{i}_{y}\boldsymbol{i}_{z}\tau_{yz}+\boldsymbol{i}_{z}\boldsymbol{i}_{x}\tau_{zx}+\boldsymbol{i}_{z}\boldsymbol{i}_{y}\tau_{zy}+\boldsymbol{i}_{z}\boldsymbol{i}_{z}\tau_{zz}\right)\tag{16.19}$$

This is possible at the expense of Eqs. (3.70) and (3.95), complemented by realization that the scalar product of ix by iyix, iyiy, iyiz, izix, iziy, and iziz is nil – as well as the scalar product of iy by ixix, ixiy, ixiz, izix, iziy, and iziz; and likewise the scalar product of iz by ixix, ixiy, ixiz, iyix, iyiy, and iyiz. For instance, the matrix (or dyadic) product of ix by iy reads

$$\boldsymbol{i}_{x}\boldsymbol{i}_{y}=\begin{bmatrix}1\\0\\0\end{bmatrix}\begin{bmatrix}0&1&0\end{bmatrix}=\begin{bmatrix}0&1&0\\0&0&0\\0&0&0\end{bmatrix}$$

as per Eq. (4.47), so the scalar product of iy by ixiy should abide to

$$\boldsymbol{i}_{y}\cdot\left(\boldsymbol{i}_{x}\boldsymbol{i}_{y}\right)=\left(\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{x}\right)\boldsymbol{i}_{y},$$

that degenerates to

$$\boldsymbol{i}_{y}\cdot\left(\boldsymbol{i}_{x}\boldsymbol{i}_{y}\right)=\left(0\cdot1+1\cdot0+0\cdot0\right)\boldsymbol{i}_{y}=0\boldsymbol{i}_{y}=\boldsymbol{0},$$

in view of Eqs. (3.95) and (4.20); a similar reasoning may be applied to every other combination explicitly referred to above, and may also be used to write ix·(ixiξ) = (ix·ix)iξ = 1iξ = iξ, iy·(iyiξ) = (iy·iy)iξ = 1iξ = iξ, and iz·(iziξ) = (iz·iz)iξ = 1iξ = iξ, with ξ ≡ x, y, z for subscript. One finally finds that Eq. (16.19) is equivalent to

$$\operatorname{div}\boldsymbol{\tau}\equiv\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\cdot\left(\boldsymbol{\varphi}_{xx}\tau_{xx}+\boldsymbol{\varphi}_{xy}\tau_{xy}+\boldsymbol{\varphi}_{xz}\tau_{xz}+\boldsymbol{\varphi}_{yx}\tau_{yx}+\boldsymbol{\varphi}_{yy}\tau_{yy}+\boldsymbol{\varphi}_{yz}\tau_{yz}+\boldsymbol{\varphi}_{zx}\tau_{zx}+\boldsymbol{\varphi}_{zy}\tau_{zy}+\boldsymbol{\varphi}_{zz}\tau_{zz}\right)\tag{16.20}$$

with the aid of Eqs. (5.2) and (5.15)–(5.23), while being consistent with Eq. (5.32) – so one eventually obtains

$$\operatorname{div}\boldsymbol{\tau}\equiv\nabla\cdot\boldsymbol{\tau}\tag{16.21}$$
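The bookkeeping behind Eqs. (16.17)–(16.19) can be checked symbolically; a minimal sympy sketch, building the dyadic contraction from unit vectors and a generic (illustrative) tensor field:

```python
# Check of Eqs. (16.17)/(16.19): the l-th component of div τ is Σ_j ∂τ_jl/∂x_j,
# obtained here from the dyadic contraction i_j·(i_k i_l) = δ_jk i_l.
# The tensor components tau_ab are generic symbolic functions (illustrative).
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
tau = [[sp.Function(f'tau_{a}{b}')(x, y, z) for b in 'xyz'] for a in 'xyz']

# Component form, Eq. (16.17): (div tau)_l = sum_j d(tau_jl)/dx_j
div_component = [sum(sp.diff(tau[j][l], coords[j]) for j in range(3)) for l in range(3)]

# Dyadic contraction, Eq. (16.19): sum over j,k,l of (i_j . i_k) i_l d(tau_kl)/dx_j
e = [sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0]), sp.Matrix([0, 0, 1])]
div_dyadic = sp.zeros(3, 1)
for j in range(3):
    for k in range(3):
        for l in range(3):
            div_dyadic += (e[j].dot(e[k])) * e[l] * sp.diff(tau[k][l], coords[j])

assert all(sp.simplify(div_dyadic[l] - div_component[l]) == 0 for l in range(3))
```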

Finally, one can define the rotational of u – another vector function, denoted as curl u, and given by

$$\operatorname{curl}\boldsymbol{u}\equiv\boldsymbol{i}_{x}\left(\frac{\partial u_{z}}{\partial y}-\frac{\partial u_{y}}{\partial z}\right)+\boldsymbol{i}_{y}\left(\frac{\partial u_{x}}{\partial z}-\frac{\partial u_{z}}{\partial x}\right)+\boldsymbol{i}_{z}\left(\frac{\partial u_{y}}{\partial x}-\frac{\partial u_{x}}{\partial y}\right);\tag{16.22}$$

it accordingly encompasses the cross derivatives of every component of u, multiplied in pairs by the unit vector containing the missing direction of space. After recalling once again the definition of a differential operator, and that of a second-order determinant as per Eq. (1.10), one may redo Eq. (16.22) to

$$\operatorname{curl}\boldsymbol{u}\equiv\boldsymbol{i}_{x}\begin{vmatrix}\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\ u_{y}&u_{z}\end{vmatrix}-\boldsymbol{i}_{y}\begin{vmatrix}\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial z}\\ u_{x}&u_{z}\end{vmatrix}+\boldsymbol{i}_{z}\begin{vmatrix}\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}\\ u_{x}&u_{y}\end{vmatrix};\tag{16.23}$$

Eq. (16.23) mimics, in turn, expansion of a third-order determinant via Laplace's theorem applied along the first row as conveyed by Eq. (6.41), viz.

$$\operatorname{curl}\boldsymbol{u}\equiv\begin{vmatrix}\boldsymbol{i}_{x}&\boldsymbol{i}_{y}&\boldsymbol{i}_{z}\\ \dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\ u_{x}&u_{y}&u_{z}\end{vmatrix}.\tag{16.24}$$


In view of the definition of vector product as per Eq. (3.145), one may rewrite Eq. (16.24) as

$$\operatorname{curl}\boldsymbol{u}\equiv\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\times\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right),\tag{16.25}$$

where insertion of Eqs. (16.1) and (16.12) unfolds

$$\operatorname{curl}\boldsymbol{u}\equiv\nabla\times\boldsymbol{u}.\tag{16.26}$$

The rotational measures the tendency to rotate about a point in a vector field, so it maps a vector field to another vector field. To confirm the form of Eq. (16.26), one may retrieve Eq. (3.111) as definition of vector product of two vectors to write

$$\nabla\times\boldsymbol{u}=\left\Vert\nabla\right\Vert\left\Vert\boldsymbol{u}\right\Vert\sin\left\{\angle\nabla,\boldsymbol{u}\right\}\boldsymbol{n},\tag{16.27}$$

where n denotes the unit vector perpendicular to the plane defined by vectors ∇ and u – such that ∇, u, and n form a right-handed system; while sin{∠∇,u} denotes the sine of the angle formed by ∇ and u. Since iζ and iζ are collinear, the sine of their angle is nil, so Eq. (3.136) supports

$$\boldsymbol{i}_{\zeta}\nabla_{\zeta}\times\boldsymbol{i}_{\zeta}u_{\zeta}=\left(\boldsymbol{i}_{\zeta}\times\boldsymbol{i}_{\zeta}\right)\nabla_{\zeta}u_{\zeta}=\boldsymbol{0};\tag{16.28}$$

conversely, vectors iζ∇ζ and iξuξ being perpendicular to each other yields

$$\boldsymbol{i}_{\zeta}\nabla_{\zeta}\times\boldsymbol{i}_{\xi}u_{\xi}=\left(\boldsymbol{i}_{\zeta}\times\boldsymbol{i}_{\xi}\right)\nabla_{\zeta}u_{\xi}=\boldsymbol{i}_{\chi}\nabla_{\zeta}u_{\xi}\tag{16.29}$$

for combination of (ζ,ξ) yielding χ, and likewise for combination of (ξ,χ) yielding ζ and of (χ,ζ) yielding ξ, based on Eqs. (3.137)–(3.139) – as long as the right-hand order ζ ξ χ is followed. If such an order were not kept, then Eqs. (3.140)–(3.142) would justify

$$\boldsymbol{i}_{\xi}\nabla_{\xi}\times\boldsymbol{i}_{\zeta}u_{\zeta}=\left(\boldsymbol{i}_{\xi}\times\boldsymbol{i}_{\zeta}\right)\nabla_{\xi}u_{\zeta}=-\boldsymbol{i}_{\chi}\nabla_{\xi}u_{\zeta}\tag{16.30}$$

for combination of (ξ,ζ) yielding χ, and similarly for combination of (χ,ξ) yielding ζ, and of (ζ,χ) yielding ξ, as per Eqs. (3.140)–(3.142). After retrieval of Eqs. (16.25) and (16.26) as

$$\nabla\times\boldsymbol{u}=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\times\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right),\tag{16.31}$$

the distributive property as conveyed by Eq. (3.131) supports transformation to

$$\begin{aligned}\nabla\times\boldsymbol{u}&=\boldsymbol{i}_{x}\frac{\partial}{\partial x}\times\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{x}\frac{\partial}{\partial x}\times\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{x}\frac{\partial}{\partial x}\times\boldsymbol{i}_{z}u_{z}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}\times\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}\times\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}\times\boldsymbol{i}_{z}u_{z}\\&+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\times\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\times\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\times\boldsymbol{i}_{z}u_{z}\\&=\boldsymbol{i}_{x}\times\boldsymbol{i}_{x}\frac{\partial u_{x}}{\partial x}+\boldsymbol{i}_{x}\times\boldsymbol{i}_{y}\frac{\partial u_{y}}{\partial x}+\boldsymbol{i}_{x}\times\boldsymbol{i}_{z}\frac{\partial u_{z}}{\partial x}+\boldsymbol{i}_{y}\times\boldsymbol{i}_{x}\frac{\partial u_{x}}{\partial y}+\boldsymbol{i}_{y}\times\boldsymbol{i}_{y}\frac{\partial u_{y}}{\partial y}+\boldsymbol{i}_{y}\times\boldsymbol{i}_{z}\frac{\partial u_{z}}{\partial y}\\&+\boldsymbol{i}_{z}\times\boldsymbol{i}_{x}\frac{\partial u_{x}}{\partial z}+\boldsymbol{i}_{z}\times\boldsymbol{i}_{y}\frac{\partial u_{y}}{\partial z}+\boldsymbol{i}_{z}\times\boldsymbol{i}_{z}\frac{\partial u_{z}}{\partial z};\end{aligned}\tag{16.32}$$

since ix × ix, iy × iy, and iz × iz are nil as a consequence of Eq. (16.28), one may simplify Eq. (16.32) to

$$\nabla\times\boldsymbol{u}=\boldsymbol{i}_{x}\times\boldsymbol{i}_{y}\frac{\partial u_{y}}{\partial x}+\boldsymbol{i}_{x}\times\boldsymbol{i}_{z}\frac{\partial u_{z}}{\partial x}+\boldsymbol{i}_{y}\times\boldsymbol{i}_{x}\frac{\partial u_{x}}{\partial y}+\boldsymbol{i}_{y}\times\boldsymbol{i}_{z}\frac{\partial u_{z}}{\partial y}+\boldsymbol{i}_{z}\times\boldsymbol{i}_{x}\frac{\partial u_{x}}{\partial z}+\boldsymbol{i}_{z}\times\boldsymbol{i}_{y}\frac{\partial u_{y}}{\partial z},\tag{16.33}$$

whereas introduction of Eqs. (16.29) and (16.30) yields

$$\nabla\times\boldsymbol{u}=\boldsymbol{i}_{z}\frac{\partial u_{y}}{\partial x}-\boldsymbol{i}_{y}\frac{\partial u_{z}}{\partial x}-\boldsymbol{i}_{z}\frac{\partial u_{x}}{\partial y}+\boldsymbol{i}_{x}\frac{\partial u_{z}}{\partial y}+\boldsymbol{i}_{y}\frac{\partial u_{x}}{\partial z}-\boldsymbol{i}_{x}\frac{\partial u_{y}}{\partial z}.\tag{16.34}$$

Algebraic rearrangement of Eq. (16.34), via factoring out of ix, iy, and iz, finally leads to

$$\nabla\times\boldsymbol{u}=\boldsymbol{i}_{x}\left(\frac{\partial u_{z}}{\partial y}-\frac{\partial u_{y}}{\partial z}\right)+\boldsymbol{i}_{y}\left(\frac{\partial u_{x}}{\partial z}-\frac{\partial u_{z}}{\partial x}\right)+\boldsymbol{i}_{z}\left(\frac{\partial u_{y}}{\partial x}-\frac{\partial u_{x}}{\partial y}\right)\tag{16.35}$$

that fully retrieves Eq. (16.22), as expected in view of Eq. (16.26). One consequently realizes that curl u is a vector function, with components in the x-, y-, and z-directions equal to the differences between the two other partial derivatives of u not involving the given direction, viz. uz with regard to y and uy with regard to z, or ux with regard to z and uz with regard to x, or uy with regard to x and ux with regard to y, respectively.
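The three first-order operators introduced above can be exercised symbolically; a minimal sympy sketch, with an illustrative scalar field ϕ and an illustrative vector field u (choices arbitrary):

```python
# Illustration of grad (Eq. 16.13), div (Eq. 16.16) and curl (Eq. 16.26)
# on simple symbolic fields; the particular φ and u are arbitrary examples.
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in coords])       # scalar -> vector field

def div(v):
    return sum(sp.diff(v[k], coords[k]) for k in range(3))  # vector -> scalar field

def curl(v):                                                # vector -> vector field
    return sp.Matrix([
        sp.diff(v[2], y) - sp.diff(v[1], z),
        sp.diff(v[0], z) - sp.diff(v[2], x),
        sp.diff(v[1], x) - sp.diff(v[0], y),
    ])

phi = x * y * z
assert grad(phi) == sp.Matrix([y * z, x * z, x * y])

u = sp.Matrix([-y, x, 0])          # rigid rotation about the z-axis
assert div(u) == 0                 # no sources or sinks
assert curl(u) == sp.Matrix([0, 0, 2])
```

The rotation example makes the geometric reading concrete: a field that circulates about the z-axis has zero divergence but a uniform curl along that axis.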

16.1.3 Algebraic Properties of ∇

In what concerns a sum of scalar functions, ϕ and ψ, the expression for its gradient may be obtained from Eqs. (16.11) and (16.13) as

$$\nabla\left(\phi+\psi\right)=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\left(\phi+\psi\right),\tag{16.36}$$

which is equivalent to writing

$$\nabla\left(\phi+\psi\right)=\boldsymbol{i}_{x}\frac{\partial}{\partial x}\left(\phi+\psi\right)+\boldsymbol{i}_{y}\frac{\partial}{\partial y}\left(\phi+\psi\right)+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\left(\phi+\psi\right)\tag{16.37}$$

based on the linearity of the differential operator; the rule of differentiation of a sum as per Eq. (10.106) permits, in turn, transformation of Eq. (16.37) to

$$\nabla\left(\phi+\psi\right)=\boldsymbol{i}_{x}\left(\frac{\partial\phi}{\partial x}+\frac{\partial\psi}{\partial x}\right)+\boldsymbol{i}_{y}\left(\frac{\partial\phi}{\partial y}+\frac{\partial\psi}{\partial y}\right)+\boldsymbol{i}_{z}\left(\frac{\partial\phi}{\partial z}+\frac{\partial\psi}{\partial z}\right)\tag{16.38}$$

or, equivalently,

$$\nabla\left(\phi+\psi\right)=\boldsymbol{i}_{x}\frac{\partial\phi}{\partial x}+\boldsymbol{i}_{x}\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{y}\frac{\partial\phi}{\partial y}+\boldsymbol{i}_{y}\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{z}\frac{\partial\phi}{\partial z}+\boldsymbol{i}_{z}\frac{\partial\psi}{\partial z}\tag{16.39}$$


– due to the distributive property of multiplication of a scalar by a vector, see Eq. (3.51). One may now rearrange Eq. (16.39) as

$$\nabla\left(\phi+\psi\right)=\left(\boldsymbol{i}_{x}\frac{\partial\phi}{\partial x}+\boldsymbol{i}_{y}\frac{\partial\phi}{\partial y}+\boldsymbol{i}_{z}\frac{\partial\phi}{\partial z}\right)+\left(\boldsymbol{i}_{x}\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{y}\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{z}\frac{\partial\psi}{\partial z}\right)\tag{16.40}$$

based on the commutativity and associativity of addition of vectors as per Eqs. (3.22) and (3.29), where ϕ and ψ may be taken out to yield

$$\nabla\left(\phi+\psi\right)=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\phi+\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\psi,\tag{16.41}$$

thus allowing use of operator notation; Eq. (16.41) may finally appear as

$$\nabla\left(\phi+\psi\right)=\nabla\phi+\nabla\psi,\tag{16.42}$$

in view of Eq. (16.12) – so the basic rule of differentiation of a sum of scalars labeled as Eq. (10.106) essentially applies, with iteration to N functions ϕi leading to

$$\nabla\sum_{i=1}^{N}\phi_{i}=\sum_{i=1}^{N}\nabla\phi_{i},\tag{16.43}$$

in parallel to Eq. (10.107). In the case of the product of a scalar function by another scalar function, one obtains

$$\nabla\left(\phi\psi\right)=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\left(\phi\psi\right)\tag{16.44}$$

for the gradient thereof, using Eq. (16.11) as template and Eq. (16.12) as aid; this is equivalent to writing

$$\nabla\left(\phi\psi\right)=\boldsymbol{i}_{x}\frac{\partial}{\partial x}\left(\phi\psi\right)+\boldsymbol{i}_{y}\frac{\partial}{\partial y}\left(\phi\psi\right)+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\left(\phi\psi\right)\tag{16.45}$$

again due to linearity of the differential operator, whereas the rule of differentiation of a product as per Eq. (10.119) has it that

$$\nabla\left(\phi\psi\right)=\boldsymbol{i}_{x}\left(\psi\frac{\partial\phi}{\partial x}+\phi\frac{\partial\psi}{\partial x}\right)+\boldsymbol{i}_{y}\left(\psi\frac{\partial\phi}{\partial y}+\phi\frac{\partial\psi}{\partial y}\right)+\boldsymbol{i}_{z}\left(\psi\frac{\partial\phi}{\partial z}+\phi\frac{\partial\psi}{\partial z}\right).\tag{16.46}$$

Elimination of parentheses unfolds

$$\nabla\left(\phi\psi\right)=\boldsymbol{i}_{x}\psi\frac{\partial\phi}{\partial x}+\boldsymbol{i}_{x}\phi\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{y}\psi\frac{\partial\phi}{\partial y}+\boldsymbol{i}_{y}\phi\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{z}\psi\frac{\partial\phi}{\partial z}+\boldsymbol{i}_{z}\phi\frac{\partial\psi}{\partial z},\tag{16.47}$$

which can be algebraically rearranged to yield

$$\nabla\left(\phi\psi\right)=\left(\boldsymbol{i}_{x}\psi\frac{\partial\phi}{\partial x}+\boldsymbol{i}_{y}\psi\frac{\partial\phi}{\partial y}+\boldsymbol{i}_{z}\psi\frac{\partial\phi}{\partial z}\right)+\left(\boldsymbol{i}_{x}\phi\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{y}\phi\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{z}\phi\frac{\partial\psi}{\partial z}\right)\tag{16.48}$$

owing again to commutativity and associativity of the addition of vectors. Equation (16.48) can be rewritten as

$$\nabla\left(\phi\psi\right)=\psi\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\phi+\phi\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\psi\tag{16.49}$$

after factoring ϕ and ψ out – where a shorter notation is possible at the expense of Eq. (16.12), i.e.

$$\nabla\left(\phi\psi\right)=\psi\nabla\phi+\phi\nabla\psi,\tag{16.50}$$

that mimics Eq. (10.119); obviously, Eq. (16.50) can be extended to a product of N factor functions as

$$\nabla\prod_{i=1}^{N}\phi_{i}=\sum_{i=1}^{N}\left(\prod_{\substack{j=1\\ j\neq i}}^{N}\phi_{j}\right)\nabla\phi_{i},\tag{16.51}$$

in much the same way Eq. (10.124) was obtained. In the case of a composite scalar function, ϕ{ζ}, where ζ denotes the nested scalar function, one can calculate its gradient using the classical rule of sequential derivation conveyed by Eq. (10.205); in fact,

$$\nabla\phi\left\{\zeta\right\}=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\phi\left\{\zeta\right\}\tag{16.52}$$

as per Eq. (16.12) is equivalent to subdividing each scalar derivative operator as

$$\nabla\phi\left\{\zeta\right\}=\left(\boldsymbol{i}_{x}\frac{\partial\zeta}{\partial x}\frac{d}{d\zeta}+\boldsymbol{i}_{y}\frac{\partial\zeta}{\partial y}\frac{d}{d\zeta}+\boldsymbol{i}_{z}\frac{\partial\zeta}{\partial z}\frac{d}{d\zeta}\right)\phi\left\{\zeta\right\}.\tag{16.53}$$

The common differential operator in Eq. (16.53), as well as ζ, may now be factored out as

$$\nabla\phi\left\{\zeta\right\}=\left(\boldsymbol{i}_{x}\frac{\partial\zeta}{\partial x}+\boldsymbol{i}_{y}\frac{\partial\zeta}{\partial y}+\boldsymbol{i}_{z}\frac{\partial\zeta}{\partial z}\right)\frac{d\phi\left\{\zeta\right\}}{d\zeta}=\left[\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\zeta\right]\frac{d\phi\left\{\zeta\right\}}{d\zeta};\tag{16.54}$$

due to commutativity of the multiplication of a scalar by a vector as per Eq. (3.33), one finds that Eq. (16.54) can be converted to

$$\nabla\phi\left\{\zeta\right\}=\frac{d\phi\left\{\zeta\right\}}{d\zeta}\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\zeta,\tag{16.55}$$

which simplifies to

$$\nabla\phi\left\{\zeta\right\}=\frac{d\phi}{d\zeta}\nabla\zeta\tag{16.56}$$

due to Eq. (16.12) – thus unfolding a form of chain partial differentiation rule that extends Eq. (10.205) to multiple dimensions.
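The three algebraic rules of this section are easily verified symbolically; a minimal sympy sketch, with arbitrary illustrative choices of ϕ, ψ, and ζ:

```python
# Spot-check of the algebraic properties of ∇: Eq. (16.42) (sum),
# Eq. (16.50) (product) and Eq. (16.56) (composite function);
# the particular fields below are illustrative choices only.
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def grad(f):
    # Eq. (16.10): vector of first partial derivatives
    return sp.Matrix([sp.diff(f, v) for v in coords])

phi = x**2 * sp.sin(y)
psi = sp.exp(z) + x * y

# Sum rule, Eq. (16.42)
assert all(sp.simplify(e) == 0 for e in grad(phi + psi) - (grad(phi) + grad(psi)))
# Product rule, Eq. (16.50)
assert all(sp.simplify(e) == 0 for e in grad(phi * psi) - (psi * grad(phi) + phi * grad(psi)))
# Chain rule, Eq. (16.56), with φ = sin and ζ = xyz as nested function
zeta = x * y * z
assert all(sp.simplify(e) == 0 for e in grad(sp.sin(zeta)) - sp.cos(zeta) * grad(zeta))
```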


16.1.4 Multiple Products Involving ∇

16.1.4.1 Calculation of (∇·∇)ϕ

Another quantity of practical interest is the divergence – as put forward via Eq. (16.14) – of the gradient of a scalar function – as given by Eq. (16.10); upon combination of Eqs. (16.10), (16.12), (16.13), and (16.16), one gets

$$\operatorname{div}\operatorname{grad}\phi=\nabla\cdot\nabla\phi=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\cdot\left(\boldsymbol{i}_{x}\frac{\partial\phi}{\partial x}+\boldsymbol{i}_{y}\frac{\partial\phi}{\partial y}+\boldsymbol{i}_{z}\frac{\partial\phi}{\partial z}\right)\tag{16.57}$$

or, in view of Eq. (3.95),

$$\operatorname{div}\operatorname{grad}\phi=\frac{\partial}{\partial x}\frac{\partial\phi}{\partial x}+\frac{\partial}{\partial y}\frac{\partial\phi}{\partial y}+\frac{\partial}{\partial z}\frac{\partial\phi}{\partial z}\tag{16.58}$$

that breaks down to

$$\operatorname{div}\operatorname{grad}\phi=\frac{\partial^{2}\phi}{\partial x^{2}}+\frac{\partial^{2}\phi}{\partial y^{2}}+\frac{\partial^{2}\phi}{\partial z^{2}}.\tag{16.59}$$

The aforementioned quantity is usually known as the Laplacian of ϕ, lap ϕ; after rewriting Eqs. (16.58) and (16.59) in operator form, viz.

$$\operatorname{lap}\phi\equiv\left(\frac{\partial}{\partial x}\frac{\partial}{\partial x}+\frac{\partial}{\partial y}\frac{\partial}{\partial y}+\frac{\partial}{\partial z}\frac{\partial}{\partial z}\right)\phi=\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}+\frac{\partial^{2}}{\partial z^{2}}\right)\phi\tag{16.60}$$

and combining with Eq. (16.12), one obtains

$$\operatorname{lap}\phi\equiv\left(\nabla\cdot\nabla\right)\phi\tag{16.61}$$

with the aid of Eq. (3.95) again. Using shorthand notation, Eq. (16.61) may instead be coined as

$$\operatorname{lap}\phi\equiv\nabla^{2}\phi,\tag{16.62}$$

as long as

$$\nabla^{2}\equiv\nabla\cdot\nabla.\tag{16.63}$$

One may finally write

$$\nabla^{2}\equiv\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}+\frac{\partial^{2}}{\partial z^{2}},\tag{16.64}$$

based on Eqs. (16.60), (16.62), and (16.63); hence, the Laplacian results from composition of the divergence and gradient operations – and maps a scalar field to another scalar field, with the latter given by the sum of the second-order derivatives with regard to x, y, and z.

16.1.4.2 Calculation of (∇·∇)u

In view of the definition conveyed by Eq. (16.64) and the equivalence labeled as Eq. (16.63), one finds that

$$\begin{aligned}\left(\nabla\cdot\nabla\right)\boldsymbol{u}\equiv\nabla^{2}\boldsymbol{u}&=\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}+\frac{\partial^{2}}{\partial z^{2}}\right)\boldsymbol{u}=\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}+\frac{\partial^{2}}{\partial z^{2}}\right)\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)\\&=\boldsymbol{i}_{x}\left(\frac{\partial^{2}u_{x}}{\partial x^{2}}+\frac{\partial^{2}u_{x}}{\partial y^{2}}+\frac{\partial^{2}u_{x}}{\partial z^{2}}\right)+\boldsymbol{i}_{y}\left(\frac{\partial^{2}u_{y}}{\partial x^{2}}+\frac{\partial^{2}u_{y}}{\partial y^{2}}+\frac{\partial^{2}u_{y}}{\partial z^{2}}\right)+\boldsymbol{i}_{z}\left(\frac{\partial^{2}u_{z}}{\partial x^{2}}+\frac{\partial^{2}u_{z}}{\partial y^{2}}+\frac{\partial^{2}u_{z}}{\partial z^{2}}\right),\end{aligned}\tag{16.65}$$

together with the aid of Eq. (16.1) – where the constancy of the unit vectors was already taken into account, as well as Eq. (10.106); remember that ux ≡ ux{x,y,z}, uy ≡ uy{x,y,z}, and uz ≡ uz{x,y,z} in general (as stressed before). Equation (16.65) may be reformulated as

$$\left(\nabla\cdot\nabla\right)\boldsymbol{u}=\boldsymbol{i}_{x}\nabla^{2}u_{x}+\boldsymbol{i}_{y}\nabla^{2}u_{y}+\boldsymbol{i}_{z}\nabla^{2}u_{z},\tag{16.66}$$

thus emphasizing (in condensed form) the vector nature of ∇²u – with (scalar) ∇²ux, ∇²uy, and ∇²uz serving as its x-, y-, and z-projections.
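Both the scalar and the vector forms of the Laplacian can be confirmed with a short sympy sketch; the fields used below are illustrative:

```python
# Check of Eq. (16.59): ∇²φ = div(grad φ), and of Eq. (16.66): (∇·∇)u acts
# component by component; the particular fields are illustrative choices.
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in coords])

def div(v):
    return sum(sp.diff(v[k], coords[k]) for k in range(3))

def laplacian(f):
    # Eq. (16.64): sum of unrepeated second-order derivatives
    return sum(sp.diff(f, v, 2) for v in coords)

phi = sp.exp(x) * sp.sin(y) * z**3
assert sp.simplify(div(grad(phi)) - laplacian(phi)) == 0   # Eq. (16.59)

u = sp.Matrix([x**2 * y, sp.cos(y * z), x * z**2])
vec_lap = sp.Matrix([div(grad(c)) for c in u])             # div of grad, per component
assert all(sp.simplify(vec_lap[k] - laplacian(u[k])) == 0 for k in range(3))  # Eq. (16.66)
```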

16.1.4.3 Calculation of ∇·(ϕu)

The scalar product of nabla by the product of a scalar, ϕ, by a vector function, u, is given by

$$\nabla\cdot\left(\phi\boldsymbol{u}\right)=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\cdot\left[\phi\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)\right],\tag{16.67}$$

with the aid of Eqs. (16.1) and (16.12); according to the distributive property of the multiplication of a scalar by a vector as conveyed by Eq. (3.44), and the corresponding commutative and associative properties with regard to scalars as per Eqs. (3.33) and (3.38), respectively, one obtains

$$\nabla\cdot\left(\phi\boldsymbol{u}\right)=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\cdot\left(\boldsymbol{i}_{x}\phi u_{x}+\boldsymbol{i}_{y}\phi u_{y}+\boldsymbol{i}_{z}\phi u_{z}\right).\tag{16.68}$$

The property of the scalar product labeled as Eq. (3.95) supports transformation of Eq. (16.68) to

$$\nabla\cdot\left(\phi\boldsymbol{u}\right)=\frac{\partial}{\partial x}\left(\phi u_{x}\right)+\frac{\partial}{\partial y}\left(\phi u_{y}\right)+\frac{\partial}{\partial z}\left(\phi u_{z}\right);\tag{16.69}$$

Eq. (10.119) supports further transformation to

$$\nabla\cdot\left(\phi\boldsymbol{u}\right)=u_{x}\frac{\partial\phi}{\partial x}+\phi\frac{\partial u_{x}}{\partial x}+u_{y}\frac{\partial\phi}{\partial y}+\phi\frac{\partial u_{y}}{\partial y}+u_{z}\frac{\partial\phi}{\partial z}+\phi\frac{\partial u_{z}}{\partial z},\tag{16.70}$$

since ϕ ≡ ϕ{x,y,z} at large. Equation (16.70) can be rearranged to read

$$\nabla\cdot\left(\phi\boldsymbol{u}\right)=\phi\,\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{x}\frac{\partial u_{x}}{\partial x}+\phi\,\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{y}\frac{\partial u_{y}}{\partial y}+\phi\,\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{z}\frac{\partial u_{z}}{\partial z}+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{x}u_{x}\frac{\partial\phi}{\partial x}+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{y}u_{y}\frac{\partial\phi}{\partial y}+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{z}u_{z}\frac{\partial\phi}{\partial z}\tag{16.71}$$


based on Eq. (3.93), whereas Eq. (3.94) permits further transformation to

$$\begin{aligned}\nabla\cdot\left(\phi\boldsymbol{u}\right)&=\phi\,\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{x}\frac{\partial u_{x}}{\partial x}+\phi\,\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{y}\frac{\partial u_{y}}{\partial x}+\phi\,\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{z}\frac{\partial u_{z}}{\partial x}+\phi\,\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{x}\frac{\partial u_{x}}{\partial y}+\phi\,\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{y}\frac{\partial u_{y}}{\partial y}+\phi\,\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{z}\frac{\partial u_{z}}{\partial y}\\&+\phi\,\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{x}\frac{\partial u_{x}}{\partial z}+\phi\,\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{y}\frac{\partial u_{y}}{\partial z}+\phi\,\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{z}\frac{\partial u_{z}}{\partial z}+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{x}u_{x}\frac{\partial\phi}{\partial x}+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{y}u_{y}\frac{\partial\phi}{\partial x}+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{z}u_{z}\frac{\partial\phi}{\partial x}\\&+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{x}u_{x}\frac{\partial\phi}{\partial y}+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{y}u_{y}\frac{\partial\phi}{\partial y}+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{z}u_{z}\frac{\partial\phi}{\partial y}+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{x}u_{x}\frac{\partial\phi}{\partial z}+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{y}u_{y}\frac{\partial\phi}{\partial z}+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{z}u_{z}\frac{\partial\phi}{\partial z};\end{aligned}\tag{16.72}$$

after factoring out common ϕix, ϕiy, and ϕiz, as well as ix∂ϕ/∂x, iy∂ϕ/∂y, and iz∂ϕ/∂z, while taking advantage of the concept of differential operator, Eq. (16.72) becomes

$$\begin{aligned}\nabla\cdot\left(\phi\boldsymbol{u}\right)&=\phi\boldsymbol{i}_{x}\cdot\frac{\partial}{\partial x}\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)+\phi\boldsymbol{i}_{y}\cdot\frac{\partial}{\partial y}\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)+\phi\boldsymbol{i}_{z}\cdot\frac{\partial}{\partial z}\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)\\&+\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)\cdot\boldsymbol{i}_{x}\frac{\partial\phi}{\partial x}+\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)\cdot\boldsymbol{i}_{y}\frac{\partial\phi}{\partial y}+\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)\cdot\boldsymbol{i}_{z}\frac{\partial\phi}{\partial z}.\end{aligned}\tag{16.73}$$

The distributive properties conveyed by Eqs. (3.70) and (3.74) may now be invoked to condense Eq. (16.73) to

$$\nabla\cdot\left(\phi\boldsymbol{u}\right)=\phi\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\cdot\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)+\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)\cdot\left[\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\phi\right],\tag{16.74}$$

along with factoring out of ϕ and recalling the concept of operator – which is equivalent, in view of Eqs. (16.1) and (16.12), to

$$\nabla\cdot\left(\phi\boldsymbol{u}\right)=\phi\left(\nabla\cdot\boldsymbol{u}\right)+\boldsymbol{u}\cdot\left(\nabla\phi\right);\tag{16.75}$$

isolation of u·(∇ϕ) readily generates

$$\boldsymbol{u}\cdot\left(\nabla\phi\right)=\nabla\cdot\left(\phi\boldsymbol{u}\right)-\phi\left(\nabla\cdot\boldsymbol{u}\right),\tag{16.76}$$

another quite useful formula. If function ϕ turns out to be a constant scalar κ, then Eq. (16.75) simplifies to

$$\nabla\cdot\left(\kappa\boldsymbol{u}\right)=\kappa\left(\nabla\cdot\boldsymbol{u}\right)+\boldsymbol{u}\cdot\left(\nabla\kappa\right)\tag{16.77}$$

– where the last term in the right-hand side vanishes (as ∇κ = 0) to just leave

$$\nabla\cdot\left(\kappa\boldsymbol{u}\right)=\kappa\left(\nabla\cdot\boldsymbol{u}\right);\tag{16.78}$$

Eqs. (16.75) and (16.78) resemble, in functional form, Eqs. (10.119) and (10.120), respectively.
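A quick symbolic confirmation of the product rule just obtained; the scalar and vector fields below are arbitrary illustrative choices:

```python
# Symbolic check of Eq. (16.75): ∇·(φu) = φ(∇·u) + u·(∇φ);
# φ and u are illustrative fields only.
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in coords])

def div(v):
    return sum(sp.diff(v[k], coords[k]) for k in range(3))

phi = sp.sin(x * y) + z
u = sp.Matrix([y * z, x**2, sp.exp(x * z)])

lhs = div(phi * u)
rhs = phi * div(u) + u.dot(grad(phi))
assert sp.simplify(lhs - rhs) == 0
```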

16.1.4.4 Calculation of ∇·(∇×u)

After recalling Eq. (16.12) as definition of del, as well as Eq. (16.35) encompassing ∇×u in terms of vector components, one may write

$$\nabla\cdot\left(\nabla\times\boldsymbol{u}\right)=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\cdot\left[\boldsymbol{i}_{x}\left(\frac{\partial u_{z}}{\partial y}-\frac{\partial u_{y}}{\partial z}\right)+\boldsymbol{i}_{y}\left(\frac{\partial u_{x}}{\partial z}-\frac{\partial u_{z}}{\partial x}\right)+\boldsymbol{i}_{z}\left(\frac{\partial u_{y}}{\partial x}-\frac{\partial u_{x}}{\partial y}\right)\right].\tag{16.79}$$

The feature of a scalar product conveyed by Eq. (3.95) justifies transformation of Eq. (16.79) to

$$\nabla\cdot\left(\nabla\times\boldsymbol{u}\right)=\frac{\partial}{\partial x}\left(\frac{\partial u_{z}}{\partial y}-\frac{\partial u_{y}}{\partial z}\right)+\frac{\partial}{\partial y}\left(\frac{\partial u_{x}}{\partial z}-\frac{\partial u_{z}}{\partial x}\right)+\frac{\partial}{\partial z}\left(\frac{\partial u_{y}}{\partial x}-\frac{\partial u_{x}}{\partial y}\right),\tag{16.80}$$

where Eq. (10.106) and the definition of second-order derivative support extra transformation to

$$\nabla\cdot\left(\nabla\times\boldsymbol{u}\right)=\frac{\partial}{\partial x}\frac{\partial u_{z}}{\partial y}-\frac{\partial}{\partial x}\frac{\partial u_{y}}{\partial z}+\frac{\partial}{\partial y}\frac{\partial u_{x}}{\partial z}-\frac{\partial}{\partial y}\frac{\partial u_{z}}{\partial x}+\frac{\partial}{\partial z}\frac{\partial u_{y}}{\partial x}-\frac{\partial}{\partial z}\frac{\partial u_{x}}{\partial y}=\frac{\partial^{2}u_{z}}{\partial x\partial y}-\frac{\partial^{2}u_{y}}{\partial x\partial z}+\frac{\partial^{2}u_{x}}{\partial y\partial z}-\frac{\partial^{2}u_{z}}{\partial y\partial x}+\frac{\partial^{2}u_{y}}{\partial z\partial x}-\frac{\partial^{2}u_{x}}{\partial z\partial y};\tag{16.81}$$

assuming that ux, uy, and uz possess continuous second-order cross derivatives, one may apply Eq. (10.65) to get

$$\nabla\cdot\left(\nabla\times\boldsymbol{u}\right)=\frac{\partial^{2}u_{z}}{\partial x\partial y}-\frac{\partial^{2}u_{y}}{\partial x\partial z}+\frac{\partial^{2}u_{x}}{\partial y\partial z}-\frac{\partial^{2}u_{z}}{\partial x\partial y}+\frac{\partial^{2}u_{y}}{\partial x\partial z}-\frac{\partial^{2}u_{x}}{\partial y\partial z}=\left(\frac{\partial^{2}u_{z}}{\partial x\partial y}-\frac{\partial^{2}u_{z}}{\partial x\partial y}\right)-\left(\frac{\partial^{2}u_{y}}{\partial x\partial z}-\frac{\partial^{2}u_{y}}{\partial x\partial z}\right)+\left(\frac{\partial^{2}u_{x}}{\partial y\partial z}-\frac{\partial^{2}u_{x}}{\partial y\partial z}\right),\tag{16.82}$$

together with algebraic rearrangement based on commutativity of addition of scalars – where cancellation of each term with its negative leaves merely

$$\nabla\cdot\left(\nabla\times\boldsymbol{u}\right)=0.\tag{16.83}$$

Therefore, the divergence of the rotational of a vector function vanishes. Conversely, if v is a vector function such that div v is nil as per Eq. (16.16), then it may be inferred that v is the curl of some vector function u, in agreement with Eqs. (16.26) and (16.83); vector functions whose divergences are identically zero are said to be solenoidal.
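The vanishing of the divergence of a curl can be verified symbolically for any smooth field; a minimal sympy sketch with an illustrative choice of u:

```python
# Symbolic check of Eq. (16.83): ∇·(∇×u) = 0 for a smooth field
# (the field below is an illustrative choice).
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
u = sp.Matrix([x * y**2, sp.sin(y * z), sp.exp(x) * z])

curl_u = sp.Matrix([
    sp.diff(u[2], y) - sp.diff(u[1], z),   # Eq. (16.22), x-component
    sp.diff(u[0], z) - sp.diff(u[2], x),   # y-component
    sp.diff(u[1], x) - sp.diff(u[0], y),   # z-component
])
div_curl = sum(sp.diff(curl_u[k], coords[k]) for k in range(3))
assert sp.simplify(div_curl) == 0          # the curl is solenoidal
```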


16.1.4.5 Calculation of ∇·(ϕ∇ψ)

Another useful relationship pertains to computation of ∇·(ϕ∇ψ), with ϕ and ψ denoting scalar functions (as usual) – which may proceed directly via application of the definition of gradient of a scalar function, as per Eqs. (16.11) and (16.13), according to

$$\nabla\cdot\left(\phi\nabla\psi\right)=\nabla\cdot\left[\phi\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\psi\right];\tag{16.84}$$

application of the differential operators to ψ, coupled with commutativity of the multiplication of a scalar by a vector, allow transformation to

$$\nabla\cdot\left(\phi\nabla\psi\right)=\nabla\cdot\left(\boldsymbol{i}_{x}\phi\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{y}\phi\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{z}\phi\frac{\partial\psi}{\partial z}\right).\tag{16.85}$$

The divergence of the quantity in parenthesis may now be calculated via

$$\nabla\cdot\left(\phi\nabla\psi\right)=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\cdot\left(\boldsymbol{i}_{x}\phi\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{y}\phi\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{z}\phi\frac{\partial\psi}{\partial z}\right)\tag{16.86}$$

with the aid of Eq. (16.12), where Eq. (3.74) converts Eq. (16.86) to

$$\nabla\cdot\left(\phi\nabla\psi\right)=\boldsymbol{i}_{x}\cdot\frac{\partial}{\partial x}\left(\boldsymbol{i}_{x}\phi\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{y}\phi\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{z}\phi\frac{\partial\psi}{\partial z}\right)+\boldsymbol{i}_{y}\cdot\frac{\partial}{\partial y}\left(\boldsymbol{i}_{x}\phi\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{y}\phi\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{z}\phi\frac{\partial\psi}{\partial z}\right)+\boldsymbol{i}_{z}\cdot\frac{\partial}{\partial z}\left(\boldsymbol{i}_{x}\phi\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{y}\phi\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{z}\phi\frac{\partial\psi}{\partial z}\right);\tag{16.87}$$

each derivative should now apply to every term within the three parentheses, with the unit vectors taken out due to their constancy – according to

$$\begin{aligned}\nabla\cdot\left(\phi\nabla\psi\right)&=\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{x}\frac{\partial}{\partial x}\left(\phi\frac{\partial\psi}{\partial x}\right)+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{y}\frac{\partial}{\partial x}\left(\phi\frac{\partial\psi}{\partial y}\right)+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{z}\frac{\partial}{\partial x}\left(\phi\frac{\partial\psi}{\partial z}\right)\\&+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{x}\frac{\partial}{\partial y}\left(\phi\frac{\partial\psi}{\partial x}\right)+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{y}\frac{\partial}{\partial y}\left(\phi\frac{\partial\psi}{\partial y}\right)+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{z}\frac{\partial}{\partial y}\left(\phi\frac{\partial\psi}{\partial z}\right)\\&+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{x}\frac{\partial}{\partial z}\left(\phi\frac{\partial\psi}{\partial x}\right)+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{y}\frac{\partial}{\partial z}\left(\phi\frac{\partial\psi}{\partial y}\right)+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{z}\frac{\partial}{\partial z}\left(\phi\frac{\partial\psi}{\partial z}\right).\end{aligned}\tag{16.88}$$

The rule of differentiation of a product as per Eq. (10.119) may now be applied to the first, fifth, and ninth parentheses, according to

$$\begin{aligned}\nabla\cdot\left(\phi\nabla\psi\right)&=\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{x}\left(\frac{\partial\phi}{\partial x}\frac{\partial\psi}{\partial x}+\phi\frac{\partial^{2}\psi}{\partial x^{2}}\right)+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{y}\frac{\partial}{\partial x}\left(\phi\frac{\partial\psi}{\partial y}\right)+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{z}\frac{\partial}{\partial x}\left(\phi\frac{\partial\psi}{\partial z}\right)\\&+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{x}\frac{\partial}{\partial y}\left(\phi\frac{\partial\psi}{\partial x}\right)+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{y}\left(\frac{\partial\phi}{\partial y}\frac{\partial\psi}{\partial y}+\phi\frac{\partial^{2}\psi}{\partial y^{2}}\right)+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{z}\frac{\partial}{\partial y}\left(\phi\frac{\partial\psi}{\partial z}\right)\\&+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{x}\frac{\partial}{\partial z}\left(\phi\frac{\partial\psi}{\partial x}\right)+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{y}\frac{\partial}{\partial z}\left(\phi\frac{\partial\psi}{\partial y}\right)+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{z}\left(\frac{\partial\phi}{\partial z}\frac{\partial\psi}{\partial z}+\phi\frac{\partial^{2}\psi}{\partial z^{2}}\right),\end{aligned}\tag{16.89}$$

which may be algebraically rearranged to

$$\begin{aligned}\nabla\cdot\left(\phi\nabla\psi\right)&=\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{x}\frac{\partial\phi}{\partial x}\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{x}\phi\frac{\partial^{2}\psi}{\partial x^{2}}+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{y}\frac{\partial}{\partial x}\left(\phi\frac{\partial\psi}{\partial y}\right)+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{z}\frac{\partial}{\partial x}\left(\phi\frac{\partial\psi}{\partial z}\right)\\&+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{x}\frac{\partial}{\partial y}\left(\phi\frac{\partial\psi}{\partial x}\right)+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{y}\frac{\partial\phi}{\partial y}\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{y}\phi\frac{\partial^{2}\psi}{\partial y^{2}}+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{z}\frac{\partial}{\partial y}\left(\phi\frac{\partial\psi}{\partial z}\right)\\&+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{x}\frac{\partial}{\partial z}\left(\phi\frac{\partial\psi}{\partial x}\right)+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{y}\frac{\partial}{\partial z}\left(\phi\frac{\partial\psi}{\partial y}\right)+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{z}\frac{\partial\phi}{\partial z}\frac{\partial\psi}{\partial z}+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{z}\phi\frac{\partial^{2}\psi}{\partial z^{2}}\end{aligned}\tag{16.90}$$

in view of Eqs. (3.33) and (3.51). The collinearity of every direction vector with itself implies that their scalar product is unity, see Eq. (3.93) – which, complemented with factoring out of ϕ, yields

$$\begin{aligned}\nabla\cdot\left(\phi\nabla\psi\right)&=\phi\left(\frac{\partial^{2}\psi}{\partial x^{2}}+\frac{\partial^{2}\psi}{\partial y^{2}}+\frac{\partial^{2}\psi}{\partial z^{2}}\right)+\frac{\partial\phi}{\partial x}\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{y}\frac{\partial}{\partial x}\left(\phi\frac{\partial\psi}{\partial y}\right)+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{z}\frac{\partial}{\partial x}\left(\phi\frac{\partial\psi}{\partial z}\right)+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{x}\frac{\partial}{\partial y}\left(\phi\frac{\partial\psi}{\partial x}\right)\\&+\frac{\partial\phi}{\partial y}\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{z}\frac{\partial}{\partial y}\left(\phi\frac{\partial\psi}{\partial z}\right)+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{x}\frac{\partial}{\partial z}\left(\phi\frac{\partial\psi}{\partial x}\right)+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{y}\frac{\partial}{\partial z}\left(\phi\frac{\partial\psi}{\partial y}\right)+\frac{\partial\phi}{\partial z}\frac{\partial\psi}{\partial z}\end{aligned}\tag{16.91}$$

from Eq. (16.90); calculation of the derivatives of the products left – in much the same manner as followed previously with ix·ix ∂(ϕ ∂ψ/∂x)/∂x, iy·iy ∂(ϕ ∂ψ/∂y)/∂y, and iz·iz ∂(ϕ ∂ψ/∂z)/∂z – complemented with insertion of Eq. (16.64), supports transformation of Eq. (16.91) to

$$\begin{aligned}\nabla\cdot\left(\phi\nabla\psi\right)&=\phi\nabla^{2}\psi+\frac{\partial\phi}{\partial x}\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{y}\frac{\partial\phi}{\partial x}\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{y}\phi\frac{\partial^{2}\psi}{\partial x\partial y}+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{z}\frac{\partial\phi}{\partial x}\frac{\partial\psi}{\partial z}+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{z}\phi\frac{\partial^{2}\psi}{\partial x\partial z}\\&+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{x}\frac{\partial\phi}{\partial y}\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{x}\phi\frac{\partial^{2}\psi}{\partial y\partial x}+\frac{\partial\phi}{\partial y}\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{z}\frac{\partial\phi}{\partial y}\frac{\partial\psi}{\partial z}+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{z}\phi\frac{\partial^{2}\psi}{\partial y\partial z}\\&+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{x}\frac{\partial\phi}{\partial z}\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{x}\phi\frac{\partial^{2}\psi}{\partial z\partial x}+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{y}\frac{\partial\phi}{\partial z}\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{y}\phi\frac{\partial^{2}\psi}{\partial z\partial y}+\frac{\partial\phi}{\partial z}\frac{\partial\psi}{\partial z}.\end{aligned}\tag{16.92}$$

Upon factoring out ix ∂ϕ/∂x, iy ∂ϕ/∂y, and iz ∂ϕ/∂z based on the distributive property of the scalar product, Eq. (16.92) becomes

$$\begin{aligned}\nabla\cdot\left(\phi\nabla\psi\right)&=\phi\nabla^{2}\psi+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{y}\phi\frac{\partial^{2}\psi}{\partial x\partial y}+\boldsymbol{i}_{x}\cdot\boldsymbol{i}_{z}\phi\frac{\partial^{2}\psi}{\partial x\partial z}+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{x}\phi\frac{\partial^{2}\psi}{\partial y\partial x}+\boldsymbol{i}_{y}\cdot\boldsymbol{i}_{z}\phi\frac{\partial^{2}\psi}{\partial y\partial z}+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{x}\phi\frac{\partial^{2}\psi}{\partial z\partial x}+\boldsymbol{i}_{z}\cdot\boldsymbol{i}_{y}\phi\frac{\partial^{2}\psi}{\partial z\partial y}\\&+\boldsymbol{i}_{x}\frac{\partial\phi}{\partial x}\cdot\left(\boldsymbol{i}_{x}\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{y}\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{z}\frac{\partial\psi}{\partial z}\right)+\boldsymbol{i}_{y}\frac{\partial\phi}{\partial y}\cdot\left(\boldsymbol{i}_{x}\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{y}\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{z}\frac{\partial\psi}{\partial z}\right)+\boldsymbol{i}_{z}\frac{\partial\phi}{\partial z}\cdot\left(\boldsymbol{i}_{x}\frac{\partial\psi}{\partial x}+\boldsymbol{i}_{y}\frac{\partial\psi}{\partial y}+\boldsymbol{i}_{z}\frac{\partial\psi}{\partial z}\right);\end{aligned}\tag{16.93}$$

in view of Eq. (3.94), one may simplify Eq. (16.93) to

$$\nabla\cdot\left(\phi\nabla\psi\right)=\phi\nabla^{2}\psi+\boldsymbol{i}_{x}\frac{\partial\phi}{\partial x}\cdot\nabla\psi+\boldsymbol{i}_{y}\frac{\partial\phi}{\partial y}\cdot\nabla\psi+\boldsymbol{i}_{z}\frac{\partial\phi}{\partial z}\cdot\nabla\psi\tag{16.94}$$

– where insertion of Eq. (16.12) meanwhile took place. Once ∇ψ has been factored out, Eq. (16.94) turns to

$$\nabla\cdot\left(\phi\nabla\psi\right)=\phi\nabla^{2}\psi+\left(\boldsymbol{i}_{x}\frac{\partial\phi}{\partial x}+\boldsymbol{i}_{y}\frac{\partial\phi}{\partial y}+\boldsymbol{i}_{z}\frac{\partial\phi}{\partial z}\right)\cdot\nabla\psi\tag{16.95}$$

as allowed by Eq. (3.74) – where Eqs. (16.10) and (16.13), together with the commutative property of addition of scalars, support condensation to

$$\nabla\cdot\left(\phi\nabla\psi\right)=\nabla\phi\cdot\nabla\psi+\phi\nabla^{2}\psi\tag{16.96}$$

– once again exhibiting a form consistent with that of Eq. (10.119).

16.1.4.6 Calculation of ∇·(uu)

The divergence of the product of two vectors, expressed as

$$\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)=\nabla\cdot\left[\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)\right]\tag{16.97}$$

on the basis of Eq. (16.1), may be redone via the distributive property of multiplication of vectors as per Eq. (4.76) – to eventually give

$$\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)=\nabla\cdot\left[\boldsymbol{i}_{x}u_{x}\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)+\boldsymbol{i}_{y}u_{y}\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)+\boldsymbol{i}_{z}u_{z}\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)\right];\tag{16.98}$$

similar application of Eq. (4.82) gives rise to

$$\begin{aligned}\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)&=\nabla\cdot\left(\boldsymbol{i}_{x}\boldsymbol{i}_{x}u_{x}u_{x}+\boldsymbol{i}_{x}\boldsymbol{i}_{y}u_{x}u_{y}+\boldsymbol{i}_{x}\boldsymbol{i}_{z}u_{x}u_{z}+\boldsymbol{i}_{y}\boldsymbol{i}_{x}u_{y}u_{x}+\boldsymbol{i}_{y}\boldsymbol{i}_{y}u_{y}u_{y}+\boldsymbol{i}_{y}\boldsymbol{i}_{z}u_{y}u_{z}+\boldsymbol{i}_{z}\boldsymbol{i}_{x}u_{z}u_{x}+\boldsymbol{i}_{z}\boldsymbol{i}_{y}u_{z}u_{y}+\boldsymbol{i}_{z}\boldsymbol{i}_{z}u_{z}u_{z}\right)\\&=\nabla\cdot\left(\boldsymbol{\varphi}_{xx}u_{x}u_{x}+\boldsymbol{\varphi}_{xy}u_{x}u_{y}+\boldsymbol{\varphi}_{xz}u_{x}u_{z}+\boldsymbol{\varphi}_{yx}u_{y}u_{x}+\boldsymbol{\varphi}_{yy}u_{y}u_{y}+\boldsymbol{\varphi}_{yz}u_{y}u_{z}+\boldsymbol{\varphi}_{zx}u_{z}u_{x}+\boldsymbol{\varphi}_{zy}u_{z}u_{y}+\boldsymbol{\varphi}_{zz}u_{z}u_{z}\right)\end{aligned}\tag{16.99}$$

with the aid of Eqs. (5.15)–(5.23). Recalling the definition of del as per Eq. (16.12), one gets

$$\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\cdot\left(\boldsymbol{\varphi}_{xx}u_{x}u_{x}+\boldsymbol{\varphi}_{xy}u_{x}u_{y}+\boldsymbol{\varphi}_{xz}u_{x}u_{z}+\boldsymbol{\varphi}_{yx}u_{y}u_{x}+\boldsymbol{\varphi}_{yy}u_{y}u_{y}+\boldsymbol{\varphi}_{yz}u_{y}u_{z}+\boldsymbol{\varphi}_{zx}u_{z}u_{x}+\boldsymbol{\varphi}_{zy}u_{z}u_{y}+\boldsymbol{\varphi}_{zz}u_{z}u_{z}\right)\tag{16.100}$$

from Eq. (16.99); given the analogy of Eq. (16.100) to Eq. (16.20), one may resort to Eq. (16.17) as template to write

$$\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)=\boldsymbol{i}_{x}\left[\frac{\partial}{\partial x}\left(u_{x}u_{x}\right)+\frac{\partial}{\partial y}\left(u_{y}u_{x}\right)+\frac{\partial}{\partial z}\left(u_{z}u_{x}\right)\right]+\boldsymbol{i}_{y}\left[\frac{\partial}{\partial x}\left(u_{x}u_{y}\right)+\frac{\partial}{\partial y}\left(u_{y}u_{y}\right)+\frac{\partial}{\partial z}\left(u_{z}u_{y}\right)\right]+\boldsymbol{i}_{z}\left[\frac{\partial}{\partial x}\left(u_{x}u_{z}\right)+\frac{\partial}{\partial y}\left(u_{y}u_{z}\right)+\frac{\partial}{\partial z}\left(u_{z}u_{z}\right)\right].\tag{16.101}$$

Straight application of the rules of differentiation of a product and of a power – labeled as Eqs. (10.119) and (10.126), respectively – transforms Eq. (16.101) to

$$\begin{aligned}\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)&=\boldsymbol{i}_{x}\left(2u_{x}\frac{\partial u_{x}}{\partial x}+u_{x}\frac{\partial u_{y}}{\partial y}+u_{y}\frac{\partial u_{x}}{\partial y}+u_{x}\frac{\partial u_{z}}{\partial z}+u_{z}\frac{\partial u_{x}}{\partial z}\right)\\&+\boldsymbol{i}_{y}\left(u_{y}\frac{\partial u_{x}}{\partial x}+u_{x}\frac{\partial u_{y}}{\partial x}+2u_{y}\frac{\partial u_{y}}{\partial y}+u_{y}\frac{\partial u_{z}}{\partial z}+u_{z}\frac{\partial u_{y}}{\partial z}\right)\\&+\boldsymbol{i}_{z}\left(u_{z}\frac{\partial u_{x}}{\partial x}+u_{x}\frac{\partial u_{z}}{\partial x}+u_{z}\frac{\partial u_{y}}{\partial y}+u_{y}\frac{\partial u_{z}}{\partial y}+2u_{z}\frac{\partial u_{z}}{\partial z}\right),\end{aligned}\tag{16.102}$$

which may be redone to

$$\begin{aligned}\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)&=\boldsymbol{i}_{x}\left(u_{x}\frac{\partial u_{x}}{\partial x}+u_{x}\frac{\partial u_{y}}{\partial y}+u_{x}\frac{\partial u_{z}}{\partial z}+u_{x}\frac{\partial u_{x}}{\partial x}+u_{y}\frac{\partial u_{x}}{\partial y}+u_{z}\frac{\partial u_{x}}{\partial z}\right)\\&+\boldsymbol{i}_{y}\left(u_{y}\frac{\partial u_{x}}{\partial x}+u_{y}\frac{\partial u_{y}}{\partial y}+u_{y}\frac{\partial u_{z}}{\partial z}+u_{x}\frac{\partial u_{y}}{\partial x}+u_{y}\frac{\partial u_{y}}{\partial y}+u_{z}\frac{\partial u_{y}}{\partial z}\right)\\&+\boldsymbol{i}_{z}\left(u_{z}\frac{\partial u_{x}}{\partial x}+u_{z}\frac{\partial u_{y}}{\partial y}+u_{z}\frac{\partial u_{z}}{\partial z}+u_{x}\frac{\partial u_{z}}{\partial x}+u_{y}\frac{\partial u_{z}}{\partial y}+u_{z}\frac{\partial u_{z}}{\partial z}\right)\end{aligned}\tag{16.103}$$

upon splitting of the first, eighth, and fifteenth terms, and exchanging places afterward as deemed convenient; ux may then be factored out in the first parenthesis – and uy likewise factored out in the second, and uz in the third – to produce

$$\begin{aligned}\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)&=\boldsymbol{i}_{x}\left[u_{x}\left(\frac{\partial u_{x}}{\partial x}+\frac{\partial u_{y}}{\partial y}+\frac{\partial u_{z}}{\partial z}\right)+\left(u_{x}\frac{\partial u_{x}}{\partial x}+u_{y}\frac{\partial u_{x}}{\partial y}+u_{z}\frac{\partial u_{x}}{\partial z}\right)\right]\\&+\boldsymbol{i}_{y}\left[u_{y}\left(\frac{\partial u_{x}}{\partial x}+\frac{\partial u_{y}}{\partial y}+\frac{\partial u_{z}}{\partial z}\right)+\left(u_{x}\frac{\partial u_{y}}{\partial x}+u_{y}\frac{\partial u_{y}}{\partial y}+u_{z}\frac{\partial u_{y}}{\partial z}\right)\right]\\&+\boldsymbol{i}_{z}\left[u_{z}\left(\frac{\partial u_{x}}{\partial x}+\frac{\partial u_{y}}{\partial y}+\frac{\partial u_{z}}{\partial z}\right)+\left(u_{x}\frac{\partial u_{z}}{\partial x}+u_{y}\frac{\partial u_{z}}{\partial y}+u_{z}\frac{\partial u_{z}}{\partial z}\right)\right].\end{aligned}\tag{16.104}$$

In view of Eqs. (16.14) and (16.16), it is possible to transform Eq. (16.104) to

$$\begin{aligned}\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)&=\boldsymbol{i}_{x}\left[u_{x}\left(\nabla\cdot\boldsymbol{u}\right)+\left(u_{x}\frac{\partial u_{x}}{\partial x}+u_{y}\frac{\partial u_{x}}{\partial y}+u_{z}\frac{\partial u_{x}}{\partial z}\right)\right]+\boldsymbol{i}_{y}\left[u_{y}\left(\nabla\cdot\boldsymbol{u}\right)+\left(u_{x}\frac{\partial u_{y}}{\partial x}+u_{y}\frac{\partial u_{y}}{\partial y}+u_{z}\frac{\partial u_{y}}{\partial z}\right)\right]\\&+\boldsymbol{i}_{z}\left[u_{z}\left(\nabla\cdot\boldsymbol{u}\right)+\left(u_{x}\frac{\partial u_{z}}{\partial x}+u_{y}\frac{\partial u_{z}}{\partial y}+u_{z}\frac{\partial u_{z}}{\partial z}\right)\right],\end{aligned}\tag{16.105}$$

where ∇·u may be factored out, and ix, iy, and iz factored in as

$$\begin{aligned}\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)&=\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)\left(\nabla\cdot\boldsymbol{u}\right)+\boldsymbol{i}_{x}u_{x}\frac{\partial u_{x}}{\partial x}+\boldsymbol{i}_{x}u_{y}\frac{\partial u_{x}}{\partial y}+\boldsymbol{i}_{x}u_{z}\frac{\partial u_{x}}{\partial z}\\&+\boldsymbol{i}_{y}u_{x}\frac{\partial u_{y}}{\partial x}+\boldsymbol{i}_{y}u_{y}\frac{\partial u_{y}}{\partial y}+\boldsymbol{i}_{y}u_{z}\frac{\partial u_{y}}{\partial z}+\boldsymbol{i}_{z}u_{x}\frac{\partial u_{z}}{\partial x}+\boldsymbol{i}_{z}u_{y}\frac{\partial u_{z}}{\partial y}+\boldsymbol{i}_{z}u_{z}\frac{\partial u_{z}}{\partial z}\end{aligned}\tag{16.106}$$

owing to Eqs. (3.44) and (3.51), respectively; Eq. (16.1) then allows condensation to

$$\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)=\boldsymbol{u}\left(\nabla\cdot\boldsymbol{u}\right)+\left(\boldsymbol{i}_{x}u_{x}\frac{\partial u_{x}}{\partial x}+\boldsymbol{i}_{y}u_{x}\frac{\partial u_{y}}{\partial x}+\boldsymbol{i}_{z}u_{x}\frac{\partial u_{z}}{\partial x}\right)+\left(\boldsymbol{i}_{x}u_{y}\frac{\partial u_{x}}{\partial y}+\boldsymbol{i}_{y}u_{y}\frac{\partial u_{y}}{\partial y}+\boldsymbol{i}_{z}u_{y}\frac{\partial u_{z}}{\partial y}\right)+\left(\boldsymbol{i}_{x}u_{z}\frac{\partial u_{x}}{\partial z}+\boldsymbol{i}_{y}u_{z}\frac{\partial u_{y}}{\partial z}+\boldsymbol{i}_{z}u_{z}\frac{\partial u_{z}}{\partial z}\right),\tag{16.107}$$

where algebraic reorganization meanwhile took place. If ux ∂/∂x is factored out of the first parenthesis, as well as uy ∂/∂y of the second parenthesis and uz ∂/∂z of the third one, then Eq. (16.107) becomes

$$\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)=\boldsymbol{u}\left(\nabla\cdot\boldsymbol{u}\right)+u_{x}\frac{\partial}{\partial x}\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)+u_{y}\frac{\partial}{\partial y}\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)+u_{z}\frac{\partial}{\partial z}\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)\tag{16.108}$$

– as allowed by Eq. (10.106) – whereas a further factoring out of ixux + iyuy + izuz permits condensation to

$$\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)=\boldsymbol{u}\left(\nabla\cdot\boldsymbol{u}\right)+\left(u_{x}\frac{\partial}{\partial x}+u_{y}\frac{\partial}{\partial y}+u_{z}\frac{\partial}{\partial z}\right)\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right);\tag{16.109}$$

insertion of Eq. (16.1) finally yields

$$\nabla\cdot\left(\boldsymbol{u}\boldsymbol{u}\right)=\boldsymbol{u}\left(\nabla\cdot\boldsymbol{u}\right)+\left(\boldsymbol{u}\cdot\nabla\right)\boldsymbol{u},\tag{16.110}$$

at the expense also of Eqs. (3.95) and (16.12) – which somehow resembles the rule of differentiation of a plain product of functions, see Eq. (10.119).
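Equation (16.110) can be spot-checked symbolically, component by component; a minimal sympy sketch with an illustrative velocity-like field:

```python
# Symbolic check of Eq. (16.110): ∇·(uu) = u(∇·u) + (u·∇)u;
# the field u below is an arbitrary illustrative choice.
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
u = sp.Matrix([x * y, y * z**2, sp.sin(x) * z])

def div(v):
    return sum(sp.diff(v[k], coords[k]) for k in range(3))

# Left-hand side: l-th component of ∇·(uu) is Σ_j ∂(u_j u_l)/∂x_j
lhs = sp.Matrix([sum(sp.diff(u[j] * u[l], coords[j]) for j in range(3)) for l in range(3)])
# Right-hand side: u(∇·u) + (u·∇)u
conv = sp.Matrix([sum(u[j] * sp.diff(u[l], coords[j]) for j in range(3)) for l in range(3)])
rhs = u * div(u) + conv
assert all(sp.simplify(lhs[k] - rhs[k]) == 0 for k in range(3))
```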

16.1.4.7 Calculation of ∇×(∇ϕ)

Consider now the curl, ∇×, of the gradient of a scalar function, ∇ϕ; based on Eqs. (16.10), (16.13), (16.24), and (16.26), one will get

$$\nabla\times\left(\nabla\phi\right)=\begin{vmatrix}\boldsymbol{i}_{x}&\boldsymbol{i}_{y}&\boldsymbol{i}_{z}\\ \dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\ \dfrac{\partial\phi}{\partial x}&\dfrac{\partial\phi}{\partial y}&\dfrac{\partial\phi}{\partial z}\end{vmatrix}.\tag{16.111}$$

Laplace's expansion of the third-order determinant in Eq. (16.111), via its first row as per Eq. (6.41), unfolds

$$\nabla\times\left(\nabla\phi\right)=\boldsymbol{i}_{x}\begin{vmatrix}\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\ \dfrac{\partial\phi}{\partial y}&\dfrac{\partial\phi}{\partial z}\end{vmatrix}-\boldsymbol{i}_{y}\begin{vmatrix}\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial z}\\ \dfrac{\partial\phi}{\partial x}&\dfrac{\partial\phi}{\partial z}\end{vmatrix}+\boldsymbol{i}_{z}\begin{vmatrix}\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}\\ \dfrac{\partial\phi}{\partial x}&\dfrac{\partial\phi}{\partial y}\end{vmatrix},\tag{16.112}$$

whereas calculation of the germane second-order determinants after Eq. (1.10) gives rise to

$$\nabla\times\left(\nabla\phi\right)=\boldsymbol{i}_{x}\left(\frac{\partial^{2}\phi}{\partial y\partial z}-\frac{\partial^{2}\phi}{\partial z\partial y}\right)-\boldsymbol{i}_{y}\left(\frac{\partial^{2}\phi}{\partial x\partial z}-\frac{\partial^{2}\phi}{\partial z\partial x}\right)+\boldsymbol{i}_{z}\left(\frac{\partial^{2}\phi}{\partial x\partial y}-\frac{\partial^{2}\phi}{\partial y\partial x}\right);\tag{16.113}$$

owing to Young's (or Schwarz's) theorem as per Eq. (10.65), one may change the order of differentiation of the second term in each parenthesis as

$$\nabla\times\left(\nabla\phi\right)=\boldsymbol{i}_{x}\left(\frac{\partial^{2}\phi}{\partial y\partial z}-\frac{\partial^{2}\phi}{\partial y\partial z}\right)-\boldsymbol{i}_{y}\left(\frac{\partial^{2}\phi}{\partial x\partial z}-\frac{\partial^{2}\phi}{\partial x\partial z}\right)+\boldsymbol{i}_{z}\left(\frac{\partial^{2}\phi}{\partial x\partial y}-\frac{\partial^{2}\phi}{\partial x\partial y}\right)\tag{16.114}$$

– so one promptly obtains

$$\nabla\times\left(\nabla\phi\right)=\boldsymbol{0}\tag{16.115}$$

upon cancellation of symmetrical terms. Hence, one realizes that the gradient of a scalar function is irrotational; this result complements (but does not coincide with) that labeled as Eq. (16.83).

16.1.4.8 Calculation of ∇(∇·u)

Consider now the gradient of a scalar function that is, in turn, obtained as the divergence of an original vector function, viz.

$$\nabla\left(\nabla\cdot\boldsymbol{u}\right)=\nabla\left[\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\cdot\left(\boldsymbol{i}_{x}u_{x}+\boldsymbol{i}_{y}u_{y}+\boldsymbol{i}_{z}u_{z}\right)\right]\tag{16.116}$$

– where Eqs. (16.1) and (16.12) were taken advantage of; recalling Eq. (3.95), one may write

$$\nabla\left(\nabla\cdot\boldsymbol{u}\right)=\nabla\left(\frac{\partial}{\partial x}u_{x}+\frac{\partial}{\partial y}u_{y}+\frac{\partial}{\partial z}u_{z}\right)=\nabla\left(\frac{\partial u_{x}}{\partial x}+\frac{\partial u_{y}}{\partial y}+\frac{\partial u_{z}}{\partial z}\right).\tag{16.117}$$

Multiplication of a vector by a scalar in Eq. (16.117) may now be handled as

$$\nabla\left(\nabla\cdot\boldsymbol{u}\right)=\left(\boldsymbol{i}_{x}\frac{\partial}{\partial x}+\boldsymbol{i}_{y}\frac{\partial}{\partial y}+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\right)\left(\frac{\partial u_{x}}{\partial x}+\frac{\partial u_{y}}{\partial y}+\frac{\partial u_{z}}{\partial z}\right),\tag{16.118}$$

resorting again to Eq. (16.12) – where the distributive property conveyed by Eq. (3.44), followed by application of Eq. (10.107), unfold

$$\begin{aligned}\nabla\left(\nabla\cdot\boldsymbol{u}\right)&=\boldsymbol{i}_{x}\frac{\partial}{\partial x}\left(\frac{\partial u_{x}}{\partial x}+\frac{\partial u_{y}}{\partial y}+\frac{\partial u_{z}}{\partial z}\right)+\boldsymbol{i}_{y}\frac{\partial}{\partial y}\left(\frac{\partial u_{x}}{\partial x}+\frac{\partial u_{y}}{\partial y}+\frac{\partial u_{z}}{\partial z}\right)+\boldsymbol{i}_{z}\frac{\partial}{\partial z}\left(\frac{\partial u_{x}}{\partial x}+\frac{\partial u_{y}}{\partial y}+\frac{\partial u_{z}}{\partial z}\right)\\&=\boldsymbol{i}_{x}\left(\frac{\partial}{\partial x}\frac{\partial u_{x}}{\partial x}+\frac{\partial}{\partial x}\frac{\partial u_{y}}{\partial y}+\frac{\partial}{\partial x}\frac{\partial u_{z}}{\partial z}\right)+\boldsymbol{i}_{y}\left(\frac{\partial}{\partial y}\frac{\partial u_{x}}{\partial x}+\frac{\partial}{\partial y}\frac{\partial u_{y}}{\partial y}+\frac{\partial}{\partial y}\frac{\partial u_{z}}{\partial z}\right)+\boldsymbol{i}_{z}\left(\frac{\partial}{\partial z}\frac{\partial u_{x}}{\partial x}+\frac{\partial}{\partial z}\frac{\partial u_{y}}{\partial y}+\frac{\partial}{\partial z}\frac{\partial u_{z}}{\partial z}\right);\end{aligned}\tag{16.119}$$

recalling the definition of second-order derivative, as well as Eq. (10.65), one is able to redo Eq. (16.119) to

$$\begin{aligned}\nabla\left(\nabla\cdot\boldsymbol{u}\right)&=\boldsymbol{i}_{x}\left(\frac{\partial^{2}u_{x}}{\partial x^{2}}+\frac{\partial^{2}u_{y}}{\partial x\partial y}+\frac{\partial^{2}u_{z}}{\partial x\partial z}\right)+\boldsymbol{i}_{y}\left(\frac{\partial^{2}u_{x}}{\partial y\partial x}+\frac{\partial^{2}u_{y}}{\partial y^{2}}+\frac{\partial^{2}u_{z}}{\partial y\partial z}\right)+\boldsymbol{i}_{z}\left(\frac{\partial^{2}u_{x}}{\partial z\partial x}+\frac{\partial^{2}u_{y}}{\partial z\partial y}+\frac{\partial^{2}u_{z}}{\partial z^{2}}\right)\\&=\boldsymbol{i}_{x}\left(\frac{\partial^{2}u_{x}}{\partial x^{2}}+\frac{\partial^{2}u_{y}}{\partial x\partial y}+\frac{\partial^{2}u_{z}}{\partial x\partial z}\right)+\boldsymbol{i}_{y}\left(\frac{\partial^{2}u_{x}}{\partial x\partial y}+\frac{\partial^{2}u_{y}}{\partial y^{2}}+\frac{\partial^{2}u_{z}}{\partial y\partial z}\right)+\boldsymbol{i}_{z}\left(\frac{\partial^{2}u_{x}}{\partial x\partial z}+\frac{\partial^{2}u_{y}}{\partial y\partial z}+\frac{\partial^{2}u_{z}}{\partial z^{2}}\right).\end{aligned}\tag{16.120}$$

Note the difference between Eq. (16.120), pertaining to ∇(∇·u), and Eq. (16.66), pertaining to (∇·∇)u – thus proving that the dyadic and scalar products do not commute with each other. The aforementioned issue may be taken one step further by first isolating the univariate second-order derivatives in Eq. (16.120) as

$$\nabla\left(\nabla\cdot\boldsymbol{u}\right)=\left(\boldsymbol{i}_{x}\frac{\partial^{2}u_{x}}{\partial x^{2}}+\boldsymbol{i}_{y}\frac{\partial^{2}u_{y}}{\partial y^{2}}+\boldsymbol{i}_{z}\frac{\partial^{2}u_{z}}{\partial z^{2}}\right)+\boldsymbol{i}_{x}\left(\frac{\partial^{2}u_{y}}{\partial x\partial y}+\frac{\partial^{2}u_{z}}{\partial x\partial z}\right)+\boldsymbol{i}_{y}\left(\frac{\partial^{2}u_{x}}{\partial x\partial y}+\frac{\partial^{2}u_{z}}{\partial y\partial z}\right)+\boldsymbol{i}_{z}\left(\frac{\partial^{2}u_{x}}{\partial x\partial z}+\frac{\partial^{2}u_{y}}{\partial y\partial z}\right),\tag{16.121}$$

where addition and subtraction of ∂²ux/∂y² and ∂²ux/∂z² in the first parenthesis, of ∂²uy/∂x² and ∂²uy/∂z² in the second parenthesis, and of ∂²uz/∂x² and ∂²uz/∂y² in the third parenthesis give rise to

$$\begin{aligned}\nabla\left(\nabla\cdot\boldsymbol{u}\right)&=\left(\boldsymbol{i}_{x}\frac{\partial^{2}u_{x}}{\partial x^{2}}+\boldsymbol{i}_{y}\frac{\partial^{2}u_{y}}{\partial y^{2}}+\boldsymbol{i}_{z}\frac{\partial^{2}u_{z}}{\partial z^{2}}\right)+\boldsymbol{i}_{x}\left(\frac{\partial^{2}u_{y}}{\partial x\partial y}+\frac{\partial^{2}u_{z}}{\partial x\partial z}+\frac{\partial^{2}u_{x}}{\partial y^{2}}-\frac{\partial^{2}u_{x}}{\partial y^{2}}+\frac{\partial^{2}u_{x}}{\partial z^{2}}-\frac{\partial^{2}u_{x}}{\partial z^{2}}\right)\\&+\boldsymbol{i}_{y}\left(\frac{\partial^{2}u_{x}}{\partial x\partial y}+\frac{\partial^{2}u_{z}}{\partial y\partial z}+\frac{\partial^{2}u_{y}}{\partial x^{2}}-\frac{\partial^{2}u_{y}}{\partial x^{2}}+\frac{\partial^{2}u_{y}}{\partial z^{2}}-\frac{\partial^{2}u_{y}}{\partial z^{2}}\right)+\boldsymbol{i}_{z}\left(\frac{\partial^{2}u_{x}}{\partial x\partial z}+\frac{\partial^{2}u_{y}}{\partial y\partial z}+\frac{\partial^{2}u_{z}}{\partial x^{2}}-\frac{\partial^{2}u_{z}}{\partial x^{2}}+\frac{\partial^{2}u_{z}}{\partial y^{2}}-\frac{\partial^{2}u_{z}}{\partial y^{2}}\right);\end{aligned}\tag{16.122}$$

after regrouping terms within the various parentheses and recalling the concept of differential operator, Eq. (16.122) will appear as

$$\begin{aligned}\nabla\left(\nabla\cdot\boldsymbol{u}\right)&=\left(\boldsymbol{i}_{x}\frac{\partial^{2}u_{x}}{\partial x^{2}}+\boldsymbol{i}_{y}\frac{\partial^{2}u_{y}}{\partial y^{2}}+\boldsymbol{i}_{z}\frac{\partial^{2}u_{z}}{\partial z^{2}}\right)+\boldsymbol{i}_{x}\left(\frac{\partial}{\partial y}\frac{\partial u_{y}}{\partial x}-\frac{\partial}{\partial y}\frac{\partial u_{x}}{\partial y}-\frac{\partial}{\partial z}\frac{\partial u_{x}}{\partial z}+\frac{\partial}{\partial z}\frac{\partial u_{z}}{\partial x}+\frac{\partial^{2}u_{x}}{\partial y^{2}}+\frac{\partial^{2}u_{x}}{\partial z^{2}}\right)\\&+\boldsymbol{i}_{y}\left(\frac{\partial}{\partial x}\frac{\partial u_{x}}{\partial y}-\frac{\partial}{\partial x}\frac{\partial u_{y}}{\partial x}-\frac{\partial}{\partial z}\frac{\partial u_{y}}{\partial z}+\frac{\partial}{\partial z}\frac{\partial u_{z}}{\partial y}+\frac{\partial^{2}u_{y}}{\partial x^{2}}+\frac{\partial^{2}u_{y}}{\partial z^{2}}\right)\\&+\boldsymbol{i}_{z}\left(\frac{\partial}{\partial x}\frac{\partial u_{x}}{\partial z}-\frac{\partial}{\partial x}\frac{\partial u_{z}}{\partial x}-\frac{\partial}{\partial y}\frac{\partial u_{z}}{\partial y}+\frac{\partial}{\partial y}\frac{\partial u_{y}}{\partial z}+\frac{\partial^{2}u_{z}}{\partial x^{2}}+\frac{\partial^{2}u_{z}}{\partial y^{2}}\right),\end{aligned}\tag{16.123}$$

again with the aid of Eq. (10.65).

16 123 again with the aid of Eq. (10.65). Splitting of every parenthesis in Eq. (16.123) is now in order, according to ∇ ∇ u = ix

∂ 2 uy ∂ 2 ux ∂ 2 uz + i + i y z ∂x2 ∂y2 ∂z2

+ ix

∂ 2 ux ∂ 2 ux ∂ ∂uy ∂ ∂ux ∂ ∂ux ∂ ∂uz + 2 + ix − − + ∂y ∂x ∂y ∂y ∂z ∂z ∂z ∂x ∂y2 ∂z

+ iy

∂ 2 uy ∂ 2 uy + 2 ∂x2 ∂z

+ iz

∂ 2 uz ∂ 2 uz ∂ ∂ux ∂ ∂uz ∂ ∂uz ∂ ∂uy − − + + 2 + iz ∂x ∂z ∂x ∂x ∂y ∂y ∂y ∂z ∂x2 ∂y

+ iy

∂ ∂ux ∂ ∂uy ∂ ∂uy ∂ ∂uz − − + ∂x ∂y ∂x ∂x ∂z ∂z ∂z ∂y

16 124

,

687

688

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

where ix, iy, and iz may be factored out from the terms involving only ∂ 2ux/∂x2, ∂ 2uy/∂y2, and ∂ 2uz/∂z2 as ∂ 2 uy ∂ 2 uy ∂ 2 uy ∂ 2 ux ∂ 2 ux ∂ 2 ux + i + + + 2 + 2 y ∂x2 ∂y2 ∂z2 ∂x2 ∂y ∂z

∇ ∇ u = ix + iz

∂ 2 uz ∂ 2 uz ∂ 2 uz + 2 + 2 ∂x2 ∂y ∂z

+ ix

∂ ∂uy ∂ ∂ux ∂ ∂ux ∂ ∂uz − − + ∂y ∂x ∂y ∂y ∂z ∂z ∂z ∂x

+ iy

∂ ∂ux ∂ ∂uy ∂ ∂uy ∂ ∂uz − − + ∂x ∂y ∂x ∂x ∂z ∂z ∂z ∂y

+ iz

∂ ∂ux ∂ ∂uz ∂ ∂uz ∂ ∂uy − − + ∂x ∂z ∂x ∂x ∂y ∂y ∂y ∂z

;

16 125

in view of Eq. (16.64), one can reduce Eq. (16.125) to ∇ ∇ u = ix ∇ 2 u x + iy ∇ 2 u y + i z ∇ 2 u z + ix

∂ ∂uy ∂ ∂ux ∂ ∂ux ∂ ∂uz − − + ∂y ∂x ∂y ∂y ∂z ∂z ∂z ∂x

+ iy

∂ ∂ux ∂ ∂uy ∂ ∂uy ∂ ∂uz − − + ∂x ∂y ∂x ∂x ∂z ∂z ∂z ∂y

+ iz

∂ ∂ux ∂ ∂uz ∂ ∂uz ∂ ∂uy − − + ∂x ∂z ∂x ∂x ∂y ∂y ∂y ∂z

16 126

Insertion of Eq. (16.66) permits further simplification of Eq. (16.126) to

$$\nabla\left(\nabla\cdot\boldsymbol{u}\right)=\nabla^{2}\boldsymbol{u}+\boldsymbol{i}_{x}\left[\frac{\partial}{\partial y}\left(\frac{\partial u_{y}}{\partial x}-\frac{\partial u_{x}}{\partial y}\right)-\frac{\partial}{\partial z}\left(\frac{\partial u_{x}}{\partial z}-\frac{\partial u_{z}}{\partial x}\right)\right]-\boldsymbol{i}_{y}\left[\frac{\partial}{\partial x}\left(\frac{\partial u_{y}}{\partial x}-\frac{\partial u_{x}}{\partial y}\right)-\frac{\partial}{\partial z}\left(\frac{\partial u_{z}}{\partial y}-\frac{\partial u_{y}}{\partial z}\right)\right]+\boldsymbol{i}_{z}\left[\frac{\partial}{\partial x}\left(\frac{\partial u_{x}}{\partial z}-\frac{\partial u_{z}}{\partial x}\right)-\frac{\partial}{\partial y}\left(\frac{\partial u_{z}}{\partial y}-\frac{\partial u_{y}}{\partial z}\right)\right],\tag{16.127}$$

where Eqs. (10.106) and (16.63) were taken into account, together with commutativity; based on the definition of second-order determinant, Eq. (16.127) may alternatively be coined as

$$\nabla\left(\nabla\cdot\boldsymbol{u}\right)=\nabla^{2}\boldsymbol{u}+\boldsymbol{i}_{x}\begin{vmatrix}\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\ \dfrac{\partial u_{x}}{\partial z}-\dfrac{\partial u_{z}}{\partial x}&\dfrac{\partial u_{y}}{\partial x}-\dfrac{\partial u_{x}}{\partial y}\end{vmatrix}-\boldsymbol{i}_{y}\begin{vmatrix}\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial z}\\ \dfrac{\partial u_{z}}{\partial y}-\dfrac{\partial u_{y}}{\partial z}&\dfrac{\partial u_{y}}{\partial x}-\dfrac{\partial u_{x}}{\partial y}\end{vmatrix}+\boldsymbol{i}_{z}\begin{vmatrix}\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}\\ \dfrac{\partial u_{z}}{\partial y}-\dfrac{\partial u_{y}}{\partial z}&\dfrac{\partial u_{x}}{\partial z}-\dfrac{\partial u_{z}}{\partial x}\end{vmatrix},\tag{16.128}$$

which may in turn be formulated via a third-order determinant following Eq. (6.5), i.e.

$$\nabla\left(\nabla\cdot\boldsymbol{u}\right)=\nabla^{2}\boldsymbol{u}+\begin{vmatrix}\boldsymbol{i}_{x}&\boldsymbol{i}_{y}&\boldsymbol{i}_{z}\\ \dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\ \dfrac{\partial u_{z}}{\partial y}-\dfrac{\partial u_{y}}{\partial z}&\dfrac{\partial u_{x}}{\partial z}-\dfrac{\partial u_{z}}{\partial x}&\dfrac{\partial u_{y}}{\partial x}-\dfrac{\partial u_{x}}{\partial y}\end{vmatrix}.\tag{16.129}$$

In view of Eqs. (16.24) and (16.25), one may reformulate Eq. (16.129) to read

$$
\nabla(\nabla\cdot\mathbf{u})=\nabla^2\mathbf{u}+\left(\mathbf{i}_x\frac{\partial}{\partial x}+\mathbf{i}_y\frac{\partial}{\partial y}+\mathbf{i}_z\frac{\partial}{\partial z}\right)\times\left[\mathbf{i}_x\left(\frac{\partial u_z}{\partial y}-\frac{\partial u_y}{\partial z}\right)+\mathbf{i}_y\left(\frac{\partial u_x}{\partial z}-\frac{\partial u_z}{\partial x}\right)+\mathbf{i}_z\left(\frac{\partial u_y}{\partial x}-\frac{\partial u_x}{\partial y}\right)\right],
\tag{16.130}
$$

whereas Eq. (16.12) permits notation be simplified to

$$
\nabla(\nabla\cdot\mathbf{u})=\nabla^2\mathbf{u}+\nabla\times\left[\mathbf{i}_x\left(\frac{\partial u_z}{\partial y}-\frac{\partial u_y}{\partial z}\right)+\mathbf{i}_y\left(\frac{\partial u_x}{\partial z}-\frac{\partial u_z}{\partial x}\right)+\mathbf{i}_z\left(\frac{\partial u_y}{\partial x}-\frac{\partial u_x}{\partial y}\right)\right];
\tag{16.131}
$$

the content of the parenthesis in the right-hand side may, in turn, be rewritten as

$$
\nabla(\nabla\cdot\mathbf{u})=\nabla^2\mathbf{u}+\nabla\times\left(\mathbf{i}_x\begin{vmatrix}\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\ u_y&u_z\end{vmatrix}-\mathbf{i}_y\begin{vmatrix}\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial z}\\ u_x&u_z\end{vmatrix}+\mathbf{i}_z\begin{vmatrix}\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}\\ u_x&u_y\end{vmatrix}\right)
\tag{16.132}
$$

due to the definition of second-order determinant, or else

$$
\nabla(\nabla\cdot\mathbf{u})=\nabla^2\mathbf{u}+\nabla\times\begin{vmatrix}\mathbf{i}_x&\mathbf{i}_y&\mathbf{i}_z\\[4pt]\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\[4pt]u_x&u_y&u_z\end{vmatrix}
\tag{16.133}
$$


following application (in reverse) of Laplace's expansion of a third-order determinant along its first row. One may again retrieve Eqs. (16.24) and (16.25) to get

$$
\nabla(\nabla\cdot\mathbf{u})=\nabla^2\mathbf{u}+\nabla\times\left[\left(\mathbf{i}_x\frac{\partial}{\partial x}+\mathbf{i}_y\frac{\partial}{\partial y}+\mathbf{i}_z\frac{\partial}{\partial z}\right)\times\left(\mathbf{i}_x u_x+\mathbf{i}_y u_y+\mathbf{i}_z u_z\right)\right]
\tag{16.134}
$$

from Eq. (16.133) or, equivalently,

$$
\nabla(\nabla\cdot\mathbf{u})=\nabla^2\mathbf{u}+\nabla\times\left(\nabla\times\mathbf{u}\right)
\tag{16.135}
$$

at the expense of Eqs. (16.1) and (16.12); therefore, $\nabla\times(\nabla\times\mathbf{u})$ is to be added to $\nabla^2\mathbf{u}$ to account for $\nabla(\nabla\cdot\mathbf{u})$, originally under scrutiny.

16.1.4.9 Calculation of (u·∇)u

Consider now the dyadic product of $\mathbf{u}\cdot\nabla$ by $\mathbf{u}$, according to

$$
(\mathbf{u}\cdot\nabla)\mathbf{u}=\left[\left(\mathbf{i}_x u_x+\mathbf{i}_y u_y+\mathbf{i}_z u_z\right)\cdot\left(\mathbf{i}_x\frac{\partial}{\partial x}+\mathbf{i}_y\frac{\partial}{\partial y}+\mathbf{i}_z\frac{\partial}{\partial z}\right)\right]\mathbf{u}
\tag{16.136}
$$

with the aid of Eqs. (16.1) and (16.12); in view of Eq. (3.95), one may redo Eq. (16.136) to

$$
(\mathbf{u}\cdot\nabla)\mathbf{u}=\left(u_x\frac{\partial}{\partial x}+u_y\frac{\partial}{\partial y}+u_z\frac{\partial}{\partial z}\right)\mathbf{u}.
\tag{16.137}
$$

Explicitation of the components of $\mathbf{u}$ in Eq. (16.137) using Eq. (16.1) once more, i.e.

$$
(\mathbf{u}\cdot\nabla)\mathbf{u}=\left(u_x\frac{\partial}{\partial x}+u_y\frac{\partial}{\partial y}+u_z\frac{\partial}{\partial z}\right)\left(\mathbf{i}_x u_x+\mathbf{i}_y u_y+\mathbf{i}_z u_z\right),
\tag{16.138}
$$

leads, upon elimination of the first set of parentheses, to

$$
(\mathbf{u}\cdot\nabla)\mathbf{u}=u_x\frac{\partial}{\partial x}\left(\mathbf{i}_x u_x+\mathbf{i}_y u_y+\mathbf{i}_z u_z\right)+u_y\frac{\partial}{\partial y}\left(\mathbf{i}_x u_x+\mathbf{i}_y u_y+\mathbf{i}_z u_z\right)+u_z\frac{\partial}{\partial z}\left(\mathbf{i}_x u_x+\mathbf{i}_y u_y+\mathbf{i}_z u_z\right)
\tag{16.139}
$$

as per the distributive property of multiplication of scalar by vector conveyed by Eq. (3.51); elimination of parentheses then generates

$$
\begin{aligned}
(\mathbf{u}\cdot\nabla)\mathbf{u}&=\mathbf{i}_x u_x\frac{\partial u_x}{\partial x}+\mathbf{i}_y u_x\frac{\partial u_y}{\partial x}+\mathbf{i}_z u_x\frac{\partial u_z}{\partial x}+\mathbf{i}_x u_y\frac{\partial u_x}{\partial y}+\mathbf{i}_y u_y\frac{\partial u_y}{\partial y}+\mathbf{i}_z u_y\frac{\partial u_z}{\partial y}\\
&\quad+\mathbf{i}_x u_z\frac{\partial u_x}{\partial z}+\mathbf{i}_y u_z\frac{\partial u_y}{\partial z}+\mathbf{i}_z u_z\frac{\partial u_z}{\partial z},
\end{aligned}
\tag{16.140}
$$

based on Eq. (10.120) and owing to the constancy of $\mathbf{i}_x$, $\mathbf{i}_y$, and $\mathbf{i}_z$. Unit vectors may now be factored out as appropriate, complemented with further algebraic rearrangement, to get

$$
(\mathbf{u}\cdot\nabla)\mathbf{u}=\mathbf{i}_x\left(u_x\frac{\partial u_x}{\partial x}+u_y\frac{\partial u_x}{\partial y}+u_z\frac{\partial u_x}{\partial z}\right)+\mathbf{i}_y\left(u_x\frac{\partial u_y}{\partial x}+u_y\frac{\partial u_y}{\partial y}+u_z\frac{\partial u_y}{\partial z}\right)+\mathbf{i}_z\left(u_x\frac{\partial u_z}{\partial x}+u_y\frac{\partial u_z}{\partial y}+u_z\frac{\partial u_z}{\partial z}\right).
\tag{16.141}
$$


Addition and subtraction of $u_y\,\partial u_y/\partial x$ and $u_z\,\partial u_z/\partial x$ in the first parenthesis, of $u_x\,\partial u_x/\partial y$ and $u_z\,\partial u_z/\partial y$ in the second parenthesis, and of $u_x\,\partial u_x/\partial z$ and $u_y\,\partial u_y/\partial z$ in the third parenthesis leads to

$$
\begin{aligned}
(\mathbf{u}\cdot\nabla)\mathbf{u}&=\mathbf{i}_x\left(u_x\frac{\partial u_x}{\partial x}+u_y\frac{\partial u_y}{\partial x}+u_z\frac{\partial u_z}{\partial x}+u_y\frac{\partial u_x}{\partial y}-u_y\frac{\partial u_y}{\partial x}+u_z\frac{\partial u_x}{\partial z}-u_z\frac{\partial u_z}{\partial x}\right)\\
&\quad+\mathbf{i}_y\left(u_x\frac{\partial u_x}{\partial y}+u_y\frac{\partial u_y}{\partial y}+u_z\frac{\partial u_z}{\partial y}+u_x\frac{\partial u_y}{\partial x}-u_x\frac{\partial u_x}{\partial y}+u_z\frac{\partial u_y}{\partial z}-u_z\frac{\partial u_z}{\partial y}\right)\\
&\quad+\mathbf{i}_z\left(u_x\frac{\partial u_x}{\partial z}+u_y\frac{\partial u_y}{\partial z}+u_z\frac{\partial u_z}{\partial z}+u_x\frac{\partial u_z}{\partial x}-u_x\frac{\partial u_x}{\partial z}+u_y\frac{\partial u_z}{\partial y}-u_y\frac{\partial u_y}{\partial z}\right);
\end{aligned}
\tag{16.142}
$$

upon factoring out $u_y$ and $u_z$ in the first outer parenthesis, $u_x$ and $u_z$ in the second outer parenthesis, and $u_x$ and $u_y$ in the third one, Eq. (16.142) becomes

$$
\begin{aligned}
(\mathbf{u}\cdot\nabla)\mathbf{u}&=\mathbf{i}_x\left[u_x\frac{\partial u_x}{\partial x}+u_y\frac{\partial u_y}{\partial x}+u_z\frac{\partial u_z}{\partial x}-u_y\left(\frac{\partial u_y}{\partial x}-\frac{\partial u_x}{\partial y}\right)+u_z\left(\frac{\partial u_x}{\partial z}-\frac{\partial u_z}{\partial x}\right)\right]\\
&\quad+\mathbf{i}_y\left[u_x\frac{\partial u_x}{\partial y}+u_y\frac{\partial u_y}{\partial y}+u_z\frac{\partial u_z}{\partial y}+u_x\left(\frac{\partial u_y}{\partial x}-\frac{\partial u_x}{\partial y}\right)-u_z\left(\frac{\partial u_z}{\partial y}-\frac{\partial u_y}{\partial z}\right)\right]\\
&\quad+\mathbf{i}_z\left[u_x\frac{\partial u_x}{\partial z}+u_y\frac{\partial u_y}{\partial z}+u_z\frac{\partial u_z}{\partial z}-u_x\left(\frac{\partial u_x}{\partial z}-\frac{\partial u_z}{\partial x}\right)+u_y\left(\frac{\partial u_z}{\partial y}-\frac{\partial u_y}{\partial z}\right)\right].
\end{aligned}
\tag{16.143}
$$

Each outer parenthesis in Eq. (16.143) may now undergo splitting, followed by algebraic rearrangement as

$$
\begin{aligned}
(\mathbf{u}\cdot\nabla)\mathbf{u}&=\mathbf{i}_x\left(u_x\frac{\partial u_x}{\partial x}+u_y\frac{\partial u_y}{\partial x}+u_z\frac{\partial u_z}{\partial x}\right)+\mathbf{i}_y\left(u_x\frac{\partial u_x}{\partial y}+u_y\frac{\partial u_y}{\partial y}+u_z\frac{\partial u_z}{\partial y}\right)+\mathbf{i}_z\left(u_x\frac{\partial u_x}{\partial z}+u_y\frac{\partial u_y}{\partial z}+u_z\frac{\partial u_z}{\partial z}\right)\\
&\quad-\mathbf{i}_x\left[u_y\left(\frac{\partial u_y}{\partial x}-\frac{\partial u_x}{\partial y}\right)-u_z\left(\frac{\partial u_x}{\partial z}-\frac{\partial u_z}{\partial x}\right)\right]+\mathbf{i}_y\left[u_x\left(\frac{\partial u_y}{\partial x}-\frac{\partial u_x}{\partial y}\right)-u_z\left(\frac{\partial u_z}{\partial y}-\frac{\partial u_y}{\partial z}\right)\right]\\
&\quad-\mathbf{i}_z\left[u_x\left(\frac{\partial u_x}{\partial z}-\frac{\partial u_z}{\partial x}\right)-u_y\left(\frac{\partial u_z}{\partial y}-\frac{\partial u_y}{\partial z}\right)\right];
\end{aligned}
\tag{16.144}
$$

recalling Eqs. (1.10) and (10.126), one may redo Eq. (16.144) as

$$
\begin{aligned}
(\mathbf{u}\cdot\nabla)\mathbf{u}&=\mathbf{i}_x\left(\frac12\frac{\partial u_x^2}{\partial x}+\frac12\frac{\partial u_y^2}{\partial x}+\frac12\frac{\partial u_z^2}{\partial x}\right)+\mathbf{i}_y\left(\frac12\frac{\partial u_x^2}{\partial y}+\frac12\frac{\partial u_y^2}{\partial y}+\frac12\frac{\partial u_z^2}{\partial y}\right)+\mathbf{i}_z\left(\frac12\frac{\partial u_x^2}{\partial z}+\frac12\frac{\partial u_y^2}{\partial z}+\frac12\frac{\partial u_z^2}{\partial z}\right)\\
&\quad-\mathbf{i}_x\begin{vmatrix}u_y&u_z\\[2pt]\dfrac{\partial u_x}{\partial z}-\dfrac{\partial u_z}{\partial x}&\dfrac{\partial u_y}{\partial x}-\dfrac{\partial u_x}{\partial y}\end{vmatrix}+\mathbf{i}_y\begin{vmatrix}u_x&u_z\\[2pt]\dfrac{\partial u_z}{\partial y}-\dfrac{\partial u_y}{\partial z}&\dfrac{\partial u_y}{\partial x}-\dfrac{\partial u_x}{\partial y}\end{vmatrix}-\mathbf{i}_z\begin{vmatrix}u_x&u_y\\[2pt]\dfrac{\partial u_z}{\partial y}-\dfrac{\partial u_y}{\partial z}&\dfrac{\partial u_x}{\partial z}-\dfrac{\partial u_z}{\partial x}\end{vmatrix},
\end{aligned}
\tag{16.145}
$$


where ½ may now be factored out in the first three terms, and ∂/∂x, ∂/∂y, and ∂/∂z factored out in the first, second, and third term, respectively, as well as Laplace's expansion of a third-order determinant revisited, to write

$$
(\mathbf{u}\cdot\nabla)\mathbf{u}=\mathbf{i}_x\frac12\frac{\partial}{\partial x}\left(u_x^2+u_y^2+u_z^2\right)+\mathbf{i}_y\frac12\frac{\partial}{\partial y}\left(u_x^2+u_y^2+u_z^2\right)+\mathbf{i}_z\frac12\frac{\partial}{\partial z}\left(u_x^2+u_y^2+u_z^2\right)-\begin{vmatrix}\mathbf{i}_x&\mathbf{i}_y&\mathbf{i}_z\\ u_x&u_y&u_z\\[2pt]\dfrac{\partial u_z}{\partial y}-\dfrac{\partial u_y}{\partial z}&\dfrac{\partial u_x}{\partial z}-\dfrac{\partial u_z}{\partial x}&\dfrac{\partial u_y}{\partial x}-\dfrac{\partial u_x}{\partial y}\end{vmatrix}
\tag{16.146}
$$

– with linearity of differential operators meanwhile taken advantage of. Since $u_x^2+u_y^2+u_z^2$ appears under the three differential operators, and the vector product abides by Eqs. (3.132) and (3.145), one may convert Eq. (16.146) to

$$
(\mathbf{u}\cdot\nabla)\mathbf{u}=\frac12\left(\mathbf{i}_x\frac{\partial}{\partial x}+\mathbf{i}_y\frac{\partial}{\partial y}+\mathbf{i}_z\frac{\partial}{\partial z}\right)\left(u_x^2+u_y^2+u_z^2\right)-\left(\mathbf{i}_x u_x+\mathbf{i}_y u_y+\mathbf{i}_z u_z\right)\times\left[\mathbf{i}_x\left(\frac{\partial u_z}{\partial y}-\frac{\partial u_y}{\partial z}\right)+\mathbf{i}_y\left(\frac{\partial u_x}{\partial z}-\frac{\partial u_z}{\partial x}\right)+\mathbf{i}_z\left(\frac{\partial u_y}{\partial x}-\frac{\partial u_x}{\partial y}\right)\right]
\tag{16.147}
$$

with the aid of the distributive property of multiplication of a scalar by a vector as per Eq. (3.51); in view of Eqs. (3.95), (16.1), and (16.12), it is possible to reformulate Eq. (16.147) to

$$
\begin{aligned}
(\mathbf{u}\cdot\nabla)\mathbf{u}&=\frac12\nabla\left(u_x u_x+u_y u_y+u_z u_z\right)-\left(\mathbf{i}_x u_x+\mathbf{i}_y u_y+\mathbf{i}_z u_z\right)\times\left(\mathbf{i}_x\begin{vmatrix}\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\ u_y&u_z\end{vmatrix}-\mathbf{i}_y\begin{vmatrix}\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial z}\\ u_x&u_z\end{vmatrix}+\mathbf{i}_z\begin{vmatrix}\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}\\ u_x&u_y\end{vmatrix}\right)\\
&=\frac12\nabla\left[\left(\mathbf{i}_x u_x+\mathbf{i}_y u_y+\mathbf{i}_z u_z\right)\cdot\left(\mathbf{i}_x u_x+\mathbf{i}_y u_y+\mathbf{i}_z u_z\right)\right]-\left(\mathbf{i}_x u_x+\mathbf{i}_y u_y+\mathbf{i}_z u_z\right)\times\begin{vmatrix}\mathbf{i}_x&\mathbf{i}_y&\mathbf{i}_z\\[2pt]\dfrac{\partial}{\partial x}&\dfrac{\partial}{\partial y}&\dfrac{\partial}{\partial z}\\[2pt]u_x&u_y&u_z\end{vmatrix},
\end{aligned}
\tag{16.148}
$$

where the definitions of second- and third-order determinants were duly taken on board; insertion of Eqs. (16.1) and (16.24) finally generates

$$
(\mathbf{u}\cdot\nabla)\mathbf{u}=\frac12\nabla\left(\mathbf{u}\cdot\mathbf{u}\right)-\mathbf{u}\times\left(\nabla\times\mathbf{u}\right)
\tag{16.149}
$$

in condensed notation, together with the definition conveyed by Eq. (16.26).
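The two identities just derived, Eqs. (16.135) and (16.149), can be spot-checked symbolically for a generic smooth field. The sketch below is not part of the book's derivation; it assumes the sympy library, and the helper names grad, div, curl, and lap are ad hoc:

```python
# Symbolic spot-check of Eqs. (16.135) and (16.149) for a generic field;
# sympy is assumed available, helper names are ad hoc (not from the text).
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
# generic (arbitrary) smooth vector field u = (ux, uy, uz)
u = sp.Matrix([sp.Function(f'u{c}')(*X) for c in 'xyz'])

def grad(f):   # gradient of a scalar
    return sp.Matrix([sp.diff(f, v) for v in X])

def div(v):    # divergence of a vector
    return sum(sp.diff(v[i], X[i]) for i in range(3))

def curl(v):   # curl of a vector
    return sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                      sp.diff(v[0], z) - sp.diff(v[2], x),
                      sp.diff(v[1], x) - sp.diff(v[0], y)])

def lap(v):    # Laplacian applied componentwise
    return sp.Matrix([sum(sp.diff(c, w, 2) for w in X) for c in v])

# Eq. (16.135): grad(div u) = lap u + curl(curl u)
lhs1 = grad(div(u))
rhs1 = lap(u) + curl(curl(u))
assert (lhs1 - rhs1).expand() == sp.zeros(3, 1)

# Eq. (16.149): (u . grad) u = (1/2) grad(u . u) - u x (curl u)
lhs2 = sp.Matrix([sum(u[j] * sp.diff(u[i], X[j]) for j in range(3))
                  for i in range(3)])
rhs2 = grad(u.dot(u)) / 2 - u.cross(curl(u))
assert (lhs2 - rhs2).expand() == sp.zeros(3, 1)
print("Eqs. (16.135) and (16.149) hold componentwise")
```

Both assertions reduce to the zero vector after expansion, confirming the cancellations performed term by term in the text.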


16.1.4.10 Calculation of ∇·(τ·u)

One may finally calculate the divergent of the scalar product of a tensor τ by a vector u, viz.

$$
\nabla\cdot(\boldsymbol\tau\cdot\mathbf{u})=\nabla\cdot\left[\left(\mathbf{i}_x\mathbf{i}_x\tau_{xx}+\mathbf{i}_x\mathbf{i}_y\tau_{xy}+\mathbf{i}_x\mathbf{i}_z\tau_{xz}+\mathbf{i}_y\mathbf{i}_x\tau_{yx}+\mathbf{i}_y\mathbf{i}_y\tau_{yy}+\mathbf{i}_y\mathbf{i}_z\tau_{yz}+\mathbf{i}_z\mathbf{i}_x\tau_{zx}+\mathbf{i}_z\mathbf{i}_y\tau_{zy}+\mathbf{i}_z\mathbf{i}_z\tau_{zz}\right)\cdot\left(\mathbf{i}_x u_x+\mathbf{i}_y u_y+\mathbf{i}_z u_z\right)\right],
\tag{16.150}
$$

in view of Eqs. (5.2) and (16.1); recalling Eqs. (5.33) and (16.12), one may reconstruct Eq. (16.150) as

$$
\nabla\cdot(\boldsymbol\tau\cdot\mathbf{u})=\left(\mathbf{i}_x\frac{\partial}{\partial x}+\mathbf{i}_y\frac{\partial}{\partial y}+\mathbf{i}_z\frac{\partial}{\partial z}\right)\cdot\left[\mathbf{i}_x\left(\tau_{xx}u_x+\tau_{xy}u_y+\tau_{xz}u_z\right)+\mathbf{i}_y\left(\tau_{yx}u_x+\tau_{yy}u_y+\tau_{yz}u_z\right)+\mathbf{i}_z\left(\tau_{zx}u_x+\tau_{zy}u_y+\tau_{zz}u_z\right)\right].
\tag{16.151}
$$

Recalling the property of the scalar product labeled as Eq. (3.95), it turns possible to redo Eq. (16.151) as

$$
\nabla\cdot(\boldsymbol\tau\cdot\mathbf{u})=\frac{\partial}{\partial x}\left(\tau_{xx}u_x+\tau_{xy}u_y+\tau_{xz}u_z\right)+\frac{\partial}{\partial y}\left(\tau_{yx}u_x+\tau_{yy}u_y+\tau_{yz}u_z\right)+\frac{\partial}{\partial z}\left(\tau_{zx}u_x+\tau_{zy}u_y+\tau_{zz}u_z\right),
\tag{16.152}
$$

whereas Eq. (10.107) supports transformation to

$$
\begin{aligned}
\nabla\cdot(\boldsymbol\tau\cdot\mathbf{u})&=\frac{\partial}{\partial x}\left(\tau_{xx}u_x\right)+\frac{\partial}{\partial x}\left(\tau_{xy}u_y\right)+\frac{\partial}{\partial x}\left(\tau_{xz}u_z\right)+\frac{\partial}{\partial y}\left(\tau_{yx}u_x\right)+\frac{\partial}{\partial y}\left(\tau_{yy}u_y\right)+\frac{\partial}{\partial y}\left(\tau_{yz}u_z\right)\\
&\quad+\frac{\partial}{\partial z}\left(\tau_{zx}u_x\right)+\frac{\partial}{\partial z}\left(\tau_{zy}u_y\right)+\frac{\partial}{\partial z}\left(\tau_{zz}u_z\right).
\end{aligned}
\tag{16.153}
$$

The rule of differentiation of a product, see Eq. (10.119), justifies conversion of Eq. (16.153) to

$$
\begin{aligned}
\nabla\cdot(\boldsymbol\tau\cdot\mathbf{u})&=\tau_{xx}\frac{\partial u_x}{\partial x}+u_x\frac{\partial\tau_{xx}}{\partial x}+\tau_{xy}\frac{\partial u_y}{\partial x}+u_y\frac{\partial\tau_{xy}}{\partial x}+\tau_{xz}\frac{\partial u_z}{\partial x}+u_z\frac{\partial\tau_{xz}}{\partial x}\\
&\quad+\tau_{yx}\frac{\partial u_x}{\partial y}+u_x\frac{\partial\tau_{yx}}{\partial y}+\tau_{yy}\frac{\partial u_y}{\partial y}+u_y\frac{\partial\tau_{yy}}{\partial y}+\tau_{yz}\frac{\partial u_z}{\partial y}+u_z\frac{\partial\tau_{yz}}{\partial y}\\
&\quad+\tau_{zx}\frac{\partial u_x}{\partial z}+u_x\frac{\partial\tau_{zx}}{\partial z}+\tau_{zy}\frac{\partial u_y}{\partial z}+u_y\frac{\partial\tau_{zy}}{\partial z}+\tau_{zz}\frac{\partial u_z}{\partial z}+u_z\frac{\partial\tau_{zz}}{\partial z};
\end{aligned}
\tag{16.154}
$$


once $u_x$, $u_y$, and $u_z$ (as appropriate) have been factored out, Eq. (16.154) degenerates to

$$
\begin{aligned}
\nabla\cdot(\boldsymbol\tau\cdot\mathbf{u})&=u_x\left(\frac{\partial\tau_{xx}}{\partial x}+\frac{\partial\tau_{yx}}{\partial y}+\frac{\partial\tau_{zx}}{\partial z}\right)+u_y\left(\frac{\partial\tau_{xy}}{\partial x}+\frac{\partial\tau_{yy}}{\partial y}+\frac{\partial\tau_{zy}}{\partial z}\right)+u_z\left(\frac{\partial\tau_{xz}}{\partial x}+\frac{\partial\tau_{yz}}{\partial y}+\frac{\partial\tau_{zz}}{\partial z}\right)\\
&\quad+\tau_{xx}\frac{\partial u_x}{\partial x}+\tau_{xy}\frac{\partial u_y}{\partial x}+\tau_{xz}\frac{\partial u_z}{\partial x}+\tau_{yx}\frac{\partial u_x}{\partial y}+\tau_{yy}\frac{\partial u_y}{\partial y}+\tau_{yz}\frac{\partial u_z}{\partial y}+\tau_{zx}\frac{\partial u_x}{\partial z}+\tau_{zy}\frac{\partial u_y}{\partial z}+\tau_{zz}\frac{\partial u_z}{\partial z}.
\end{aligned}
\tag{16.155}
$$

Inspection of Eq. (16.155) vis-à-vis Eq. (5.36) – with the further aid of Eqs. (5.13), (16.1), and (16.12) – permits notation be simplified at the expense of the double dot product of two tensors, i.e.

$$
\begin{aligned}
\nabla\cdot(\boldsymbol\tau\cdot\mathbf{u})&=u_x\left(\frac{\partial\tau_{xx}}{\partial x}+\frac{\partial\tau_{yx}}{\partial y}+\frac{\partial\tau_{zx}}{\partial z}\right)+u_y\left(\frac{\partial\tau_{xy}}{\partial x}+\frac{\partial\tau_{yy}}{\partial y}+\frac{\partial\tau_{zy}}{\partial z}\right)+u_z\left(\frac{\partial\tau_{xz}}{\partial x}+\frac{\partial\tau_{yz}}{\partial y}+\frac{\partial\tau_{zz}}{\partial z}\right)\\
&\quad+\tau_{xx}\frac{\partial u_x}{\partial x}+\tau_{xy}\frac{\partial u_y}{\partial x}+\tau_{xz}\frac{\partial u_z}{\partial x}+\tau_{yx}\frac{\partial u_x}{\partial y}+\tau_{yy}\frac{\partial u_y}{\partial y}+\tau_{yz}\frac{\partial u_z}{\partial y}+\tau_{zx}\frac{\partial u_x}{\partial z}+\tau_{zy}\frac{\partial u_y}{\partial z}+\tau_{zz}\frac{\partial u_z}{\partial z}\\
&=u_x\left(\frac{\partial\tau_{xx}}{\partial x}+\frac{\partial\tau_{yx}}{\partial y}+\frac{\partial\tau_{zx}}{\partial z}\right)+u_y\left(\frac{\partial\tau_{xy}}{\partial x}+\frac{\partial\tau_{yy}}{\partial y}+\frac{\partial\tau_{zy}}{\partial z}\right)+u_z\left(\frac{\partial\tau_{xz}}{\partial x}+\frac{\partial\tau_{yz}}{\partial y}+\frac{\partial\tau_{zz}}{\partial z}\right)+\boldsymbol\tau:\nabla\mathbf{u},
\end{aligned}
\tag{16.156}
$$

where analogy between the components of $\mathbf{uu}$ in Eq. (16.100) – after $u_x$, $u_y$, and $u_z$ placed first have been replaced by ∂/∂x, ∂/∂y, and ∂/∂z, respectively – has meanwhile been used to advantage. Furthermore, the first terms in Eq. (16.156) may be recoined as

$$
\nabla\cdot(\boldsymbol\tau\cdot\mathbf{u})=\mathbf{i}_x u_x\cdot\mathbf{i}_x\left(\frac{\partial\tau_{xx}}{\partial x}+\frac{\partial\tau_{yx}}{\partial y}+\frac{\partial\tau_{zx}}{\partial z}\right)+\mathbf{i}_y u_y\cdot\mathbf{i}_y\left(\frac{\partial\tau_{xy}}{\partial x}+\frac{\partial\tau_{yy}}{\partial y}+\frac{\partial\tau_{zy}}{\partial z}\right)+\mathbf{i}_z u_z\cdot\mathbf{i}_z\left(\frac{\partial\tau_{xz}}{\partial x}+\frac{\partial\tau_{yz}}{\partial y}+\frac{\partial\tau_{zz}}{\partial z}\right)+\boldsymbol\tau:\nabla\mathbf{u},
\tag{16.157}
$$

in view of the property conveyed by Eqs. (3.93) and (3.95) – while extra use of Eq. (3.94) and the distributive property supports transformation to

$$
\nabla\cdot(\boldsymbol\tau\cdot\mathbf{u})=\left(\mathbf{i}_x u_x+\mathbf{i}_y u_y+\mathbf{i}_z u_z\right)\cdot\left[\mathbf{i}_x\left(\frac{\partial\tau_{xx}}{\partial x}+\frac{\partial\tau_{yx}}{\partial y}+\frac{\partial\tau_{zx}}{\partial z}\right)+\mathbf{i}_y\left(\frac{\partial\tau_{xy}}{\partial x}+\frac{\partial\tau_{yy}}{\partial y}+\frac{\partial\tau_{zy}}{\partial z}\right)+\mathbf{i}_z\left(\frac{\partial\tau_{xz}}{\partial x}+\frac{\partial\tau_{yz}}{\partial y}+\frac{\partial\tau_{zz}}{\partial z}\right)\right]+\boldsymbol\tau:\nabla\mathbf{u};
\tag{16.158}
$$


one finally obtains

$$
\nabla\cdot(\boldsymbol\tau\cdot\mathbf{u})=\mathbf{u}\cdot\left(\nabla\cdot\boldsymbol\tau\right)+\boldsymbol\tau:\nabla\mathbf{u},
\tag{16.159}
$$

at the expense of Eqs. (16.1), (16.16), and (16.17).
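Eq. (16.159) can likewise be checked symbolically for a generic (not necessarily symmetric) tensor field. The sketch below is illustrative only; it assumes sympy, and adopts the index conventions of Eq. (16.155), i.e. $(\nabla\cdot\boldsymbol\tau)_j=\sum_i\partial\tau_{ij}/\partial x_i$ and $(\nabla\mathbf{u})_{ij}=\partial u_j/\partial x_i$:

```python
# Symbolic check of Eq. (16.159): div(tau . u) = u . (div tau) + tau : grad u;
# sympy assumed available, index conventions follow Eq. (16.155).
import sympy as sp

X = sp.symbols('x y z')
u = sp.Matrix([sp.Function(f'u{c}')(*X) for c in 'xyz'])
tau = sp.Matrix(3, 3, lambda i, j: sp.Function(f't{i}{j}')(*X))

# left-hand side: divergence of the vector tau . u
tu = tau * u
lhs = sum(sp.diff(tu[i], X[i]) for i in range(3))

# (div tau)_j = sum_i d(tau_ij)/dx_i, contracted over the first index
div_tau = sp.Matrix([sum(sp.diff(tau[i, j], X[i]) for i in range(3))
                     for j in range(3)])
# tau : grad u (double dot product), with (grad u)_ij = d(u_j)/dx_i
ddot = sum(tau[i, j] * sp.diff(u[j], X[i]) for i in range(3) for j in range(3))

rhs = u.dot(div_tau) + ddot
assert sp.expand(lhs - rhs) == 0
print("Eq. (16.159) verified")
```

The product-rule terms of Eq. (16.154) cancel exactly against the two groupings of Eq. (16.155), so the difference expands to zero.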

16.2 Cylindrical Coordinates

16.2.1 Definition and Representation

It is often convenient – especially when setting partial differential equations on ϕ or u that apply to problems bearing cylindrical symmetry – to express grad or lap of ϕ, or else div or curl of u, in cylindrical polar coordinates rather than Cartesian rectangular coordinates. To do so, the position of a point P characterized by rectangular coordinates (x,y,z) should alternatively be determined by coordinates (x,r,θ) – where r denotes the distance of its projection P∗ onto the y0z plane to the origin of the axes, and θ denotes the angle formed, with the (positive) y-axis, by straight segment [0P∗], as illustrated in Fig. 16.2. Note that the unit vectors jx, jr, and jθ are applied on point P itself, instead of the (fixed) origin of the Cartesian axes – as happened with unit vectors ix, iy, and iz in Fig. 16.1. Furthermore, jx coincides in direction with ix, whereas jr has the direction of the straight line defined by points 0 and P∗, and jθ has the direction tangent to the circumference centered at 0, laid out on plane y0z and described by radius r. Recalling Eqs. (2.288) and (2.290), one may relate rectangular coordinates y and z to cylindrical coordinates r and θ as

$$ y\equiv r\cos\theta \tag{16.160} $$

and

$$ z\equiv r\sin\theta; \tag{16.161} $$

after squaring both sides, Eqs. (16.160) and (16.161) become

Figure 16.2 Graphical representation of a general point P, via cylindrical coordinates (x,r,θ) in the three-dimensional space, and its projection P∗ on the y0z plane – with indication of the canonical base of unit, mutually orthogonal vectors for said system of coordinates, i.e. (jx, jr, jθ).


$$ y^2=r^2\cos^2\theta \tag{16.162} $$

and

$$ z^2=r^2\sin^2\theta, \tag{16.163} $$

respectively. Ordered addition of Eqs. (16.162) and (16.163) gives rise to

$$ y^2+z^2=r^2\cos^2\theta+r^2\sin^2\theta=r^2\left(\cos^2\theta+\sin^2\theta\right), \tag{16.164} $$

where $r^2$ was meanwhile factored out; insertion of Eq. (2.442) then allows simplification to

$$ y^2+z^2=r^2, \tag{16.165} $$

or else

$$ r\equiv\sqrt{y^2+z^2} \tag{16.166} $$

after taking square roots of both sides. Upon ordered division of Eq. (16.161) by Eq. (16.160), one gets

$$ \frac{z}{y}=\frac{r\sin\theta}{r\cos\theta}=\frac{\sin\theta}{\cos\theta}; \tag{16.167} $$

Eq. (2.299) can be invoked to write

$$ \frac{z}{y}=\tan\theta \tag{16.168} $$

or, equivalently,

$$ \theta\equiv\tan^{-1}\frac{z}{y} \tag{16.169} $$

after taking the inverse tangent of both sides. The reference unit vectors jx, jr, and jθ are mutually orthogonal – as was already the case of ix, iy, and iz; however, the latter are fixed in space even when point P moves, unlike vectors jr and jθ that change direction relative to the Cartesian axes as P moves. One may resort to trigonometric arguments in an effort to express such unit vectors as functions of ix, iy, and iz, i.e.

$$ \mathbf{j}_x\equiv\mathbf{i}_x \tag{16.170} $$

as trivial realization, coupled with

$$ \mathbf{j}_r\equiv\mathbf{i}_y\cos\theta+\mathbf{i}_z\sin\theta \tag{16.171} $$

as direct result of Eqs. (16.160) and (16.161), complemented with Figs. 16.1 and 16.2; the remaining vector is given by

$$ \mathbf{j}_\theta\equiv-\mathbf{i}_y\sin\theta+\mathbf{i}_z\cos\theta, \tag{16.172} $$

which satisfies the condition of orthogonality as set forth by Eq. (3.56) – i.e.

$$ \mathbf{j}_r\cdot\mathbf{j}_\theta=\cos\theta\left(-\sin\theta\right)+\sin\theta\cos\theta=0, \tag{16.173} $$


in agreement with Eq. (3.95). Note that Eq. (16.170) implies

$$ \left|\mathbf{j}_x\right|=\left|\mathbf{i}_x\right|=1 \tag{16.174} $$

by hypothesis, whereas Eq. (2.442) allows one to write

$$ \left|\mathbf{j}_r\right|=\sqrt{\cos^2\theta+\sin^2\theta}=1 \tag{16.175} $$

– upon combination with Eqs. (3.8) and (16.171); by the same token,

$$ \left|\mathbf{j}_\theta\right|=\sqrt{\left(-\sin\theta\right)^2+\cos^2\theta}=\sqrt{\sin^2\theta+\cos^2\theta}=1 \tag{16.176} $$

again at the expense of Pythagoras' theorem applied to Eq. (16.172) – so the base vectors are indeed unit vectors, as originally claimed. Equation (16.170) may be (promptly) rewritten as

$$ \mathbf{i}_x=\mathbf{j}_x, \tag{16.177} $$

whereas Cramer's rule, as per Eq. (7.59), may be applied to calculate $\mathbf{i}_y$ and $\mathbf{i}_z$ from $\mathbf{j}_r$ and $\mathbf{j}_\theta$, according to

$$ \mathbf{i}_y=\frac{\begin{vmatrix}\mathbf{j}_r&\sin\theta\\ \mathbf{j}_\theta&\cos\theta\end{vmatrix}}{\begin{vmatrix}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{vmatrix}} \tag{16.178} $$

and

$$ \mathbf{i}_z=\frac{\begin{vmatrix}\cos\theta&\mathbf{j}_r\\ -\sin\theta&\mathbf{j}_\theta\end{vmatrix}}{\begin{vmatrix}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{vmatrix}}, \tag{16.179} $$

respectively – based on the set of linear equations on $\mathbf{i}_y$ and $\mathbf{i}_z$, with $\mathbf{j}_r$ and $\mathbf{j}_\theta$ serving as independent terms, labeled as Eqs. (16.171) and (16.172). Calculation of the second-order determinants in both numerator and denominator of Eqs. (16.178) and (16.179) unfolds

$$ \mathbf{i}_y=\frac{\mathbf{j}_r\cos\theta-\mathbf{j}_\theta\sin\theta}{\cos\theta\cos\theta-\left(-\sin\theta\right)\sin\theta} \tag{16.180} $$

and

$$ \mathbf{i}_z=\frac{\mathbf{j}_\theta\cos\theta-\mathbf{j}_r\left(-\sin\theta\right)}{\cos\theta\cos\theta-\left(-\sin\theta\right)\sin\theta}, \tag{16.181} $$

respectively; Eq. (16.180) is, in turn, equivalent to

$$ \mathbf{i}_y=\frac{\mathbf{j}_r\cos\theta-\mathbf{j}_\theta\sin\theta}{\cos^2\theta+\sin^2\theta}=\mathbf{j}_r\cos\theta-\mathbf{j}_\theta\sin\theta, \tag{16.182} $$


while Eq. (16.181) breaks down to

$$ \mathbf{i}_z=\frac{\mathbf{j}_\theta\cos\theta+\mathbf{j}_r\sin\theta}{\cos^2\theta+\sin^2\theta}=\mathbf{j}_r\sin\theta+\mathbf{j}_\theta\cos\theta \tag{16.183} $$

– both at the expense again of Eq. (2.442). Therefore, if a vector u is defined as in Eq. (16.1), then Eqs. (16.177), (16.182), and (16.183) allow one to write

$$ \mathbf{u}=\mathbf{j}_x u_x+\left(\mathbf{j}_r\cos\theta-\mathbf{j}_\theta\sin\theta\right)u_y+\left(\mathbf{j}_r\sin\theta+\mathbf{j}_\theta\cos\theta\right)u_z, \tag{16.184} $$

where algebraic rearrangement unfolds

$$ \mathbf{u}=\mathbf{j}_x u_x+\mathbf{j}_r\left(u_y\cos\theta+u_z\sin\theta\right)+\mathbf{j}_\theta\left(-u_y\sin\theta+u_z\cos\theta\right); \tag{16.185} $$

Eq. (16.185) may also appear as

$$ \mathbf{u}\equiv\mathbf{j}_x u_x+\mathbf{j}_r u_r+\mathbf{j}_\theta u_\theta, \tag{16.186} $$

provided that

$$ u_r=u_y\cos\theta+u_z\sin\theta \tag{16.187} $$

and

$$ u_\theta=-u_y\sin\theta+u_z\cos\theta \tag{16.188} $$

hold as definitions of radial and angular projections, respectively – which mimic Eqs. (16.171) and (16.172), after exchanging $\mathbf{i}_y$ for $u_y$ and $\mathbf{i}_z$ for $u_z$. A similar rationale – i.e. replacement of $\mathbf{j}_r$ and $\mathbf{j}_\theta$ by $u_r$ and $u_\theta$, respectively – obviously applies to components $u_y$ and $u_z$ in Eqs. (16.182) and (16.183), thus leading to

$$ u_y=u_r\cos\theta-u_\theta\sin\theta \tag{16.189} $$

and

$$ u_z=u_r\sin\theta+u_\theta\cos\theta; \tag{16.190} $$

the set of Eqs. (16.187) and (16.188), yielding $u_r\equiv u_r\{u_y,u_z\}$ and $u_\theta\equiv u_\theta\{u_y,u_z\}$, is obviously equivalent to the set of Eqs. (16.189) and (16.190), conveying $u_y\equiv u_y\{u_r,u_\theta\}$ and $u_z\equiv u_z\{u_r,u_\theta\}$. Based on Eqs. (16.166) and (16.169), one obtains

$$ \frac{\partial r}{\partial y}=\frac{2y}{2\sqrt{y^2+z^2}}, \tag{16.191} $$

$$ \frac{\partial\theta}{\partial y}=\frac{-\dfrac{z}{y^2}}{1+\left(\dfrac{z}{y}\right)^2}, \tag{16.192} $$

$$ \frac{\partial r}{\partial z}=\frac{2z}{2\sqrt{y^2+z^2}}, \tag{16.193} $$

and

$$ \frac{\partial\theta}{\partial z}=\frac{\dfrac{1}{y}}{1+\left(\dfrac{z}{y}\right)^2} \tag{16.194} $$


– at the expense of Eqs. (10.33), (10.192), and (10.205). Cancellation of 2 between numerator and denominator in Eqs. (16.191) and (16.193) leads to

$$ \frac{\partial r}{\partial y}=\frac{y}{\sqrt{y^2+z^2}} \tag{16.195} $$

and

$$ \frac{\partial r}{\partial z}=\frac{z}{\sqrt{y^2+z^2}}, \tag{16.196} $$

respectively – while multiplication of both numerator and denominator of Eqs. (16.192) and (16.194) by $y^2$ gives rise to

$$ \frac{\partial\theta}{\partial y}=-\frac{z}{y^2+z^2} \tag{16.197} $$

and

$$ \frac{\partial\theta}{\partial z}=\frac{y}{y^2+z^2}, \tag{16.198} $$

respectively. In view of Eqs. (16.160) and (16.166), one may redo Eq. (16.195) to

$$ \frac{\partial r}{\partial y}=\frac{r\cos\theta}{r}, \tag{16.199} $$

whereas Eqs. (16.161) and (16.166) convert Eq. (16.197) to

$$ \frac{\partial\theta}{\partial y}=-\frac{r\sin\theta}{r^2}; \tag{16.200} $$

Eqs. (16.161) and (16.166) allow, in turn, Eq. (16.196) to be transformed to

$$ \frac{\partial r}{\partial z}=\frac{r\sin\theta}{r}, \tag{16.201} $$

while Eq. (16.198) becomes

$$ \frac{\partial\theta}{\partial z}=\frac{r\cos\theta}{r^2} \tag{16.202} $$

at the expense of Eqs. (16.160) and (16.166). Cancellation of r between numerator and denominator of Eqs. (16.199)–(16.202) produces

$$ \frac{\partial r}{\partial y}=\cos\theta, \tag{16.203} $$

$$ \frac{\partial\theta}{\partial y}=-\frac{\sin\theta}{r}, \tag{16.204} $$

$$ \frac{\partial r}{\partial z}=\sin\theta, \tag{16.205} $$


and

$$ \frac{\partial\theta}{\partial z}=\frac{\cos\theta}{r}, \tag{16.206} $$

respectively – thus conveying the partial derivatives of the new cylindrical coordinates, r and θ, with regard to rectangular coordinates y and z, as (simple) functions of r and θ.
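Eqs. (16.203)–(16.206) can be confirmed by direct symbolic differentiation of Eqs. (16.166) and (16.169). The following sketch assumes sympy, restricts attention to the chart y > 0 where tan⁻¹(z/y) is single-valued, and uses cos θ = y/r and sin θ = z/r from Eqs. (16.160) and (16.161):

```python
# Spot-check of Eqs. (16.203)-(16.206); sympy assumed, chart y > 0, z > 0.
import sympy as sp

y, z = sp.symbols('y z', positive=True)
r = sp.sqrt(y**2 + z**2)          # Eq. (16.166)
theta = sp.atan(z / y)            # Eq. (16.169)

cos_t, sin_t = y / r, z / r       # cos(theta), sin(theta), per (16.160)/(16.161)

assert sp.simplify(sp.diff(r, y) - cos_t) == 0          # dr/dy = cos(theta)
assert sp.simplify(sp.diff(r, z) - sin_t) == 0          # dr/dz = sin(theta)
assert sp.simplify(sp.diff(theta, y) + sin_t / r) == 0  # dtheta/dy = -sin(theta)/r
assert sp.simplify(sp.diff(theta, z) - cos_t / r) == 0  # dtheta/dz = cos(theta)/r
print("Eqs. (16.203)-(16.206) verified")
```

Each assertion reproduces the corresponding cancellation of r performed in Eqs. (16.199)–(16.202).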

16.2.2 Redefinition of Nabla Operator, ∇

Once in possession of Eqs. (16.203)–(16.206), one may compute the divergent of a vector function u, as set by Eq. (16.14), by first expanding the derivatives with regard to x, y, and z via chain differentiation using r and θ as intermediate variables, viz.

$$ \nabla\cdot\mathbf{u}=\frac{\partial u_x}{\partial x}+\frac{\partial r}{\partial y}\frac{\partial u_y}{\partial r}+\frac{\partial\theta}{\partial y}\frac{\partial u_y}{\partial\theta}+\frac{\partial r}{\partial z}\frac{\partial u_z}{\partial r}+\frac{\partial\theta}{\partial z}\frac{\partial u_z}{\partial\theta}; \tag{16.207} $$

insertion now of Eqs. (16.203)–(16.206) unfolds

$$ \nabla\cdot\mathbf{u}=\frac{\partial u_x}{\partial x}+\cos\theta\frac{\partial u_y}{\partial r}-\frac{\sin\theta}{r}\frac{\partial u_y}{\partial\theta}+\sin\theta\frac{\partial u_z}{\partial r}+\frac{\cos\theta}{r}\frac{\partial u_z}{\partial\theta} \tag{16.208} $$

or, in operator form,

$$ \nabla\cdot\mathbf{u}=\frac{\partial u_x}{\partial x}+\left(\cos\theta\frac{\partial}{\partial r}-\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right)u_y+\left(\sin\theta\frac{\partial}{\partial r}+\frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\right)u_z. \tag{16.209} $$

On the other hand, Eqs. (16.189) and (16.190) allow transformation of Eq. (16.209) to

$$ \nabla\cdot\mathbf{u}=\frac{\partial u_x}{\partial x}+\left(\cos\theta\frac{\partial}{\partial r}-\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right)\left(u_r\cos\theta-u_\theta\sin\theta\right)+\left(\sin\theta\frac{\partial}{\partial r}+\frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\right)\left(u_r\sin\theta+u_\theta\cos\theta\right) \tag{16.210} $$

in (differential) operator form, where expansion of parentheses produces

$$
\begin{aligned}
\nabla\cdot\mathbf{u}&=\frac{\partial u_x}{\partial x}+\cos\theta\frac{\partial}{\partial r}\left(u_r\cos\theta\right)-\cos\theta\frac{\partial}{\partial r}\left(u_\theta\sin\theta\right)-\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(u_r\cos\theta\right)+\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(u_\theta\sin\theta\right)\\
&\quad+\sin\theta\frac{\partial}{\partial r}\left(u_r\sin\theta\right)+\sin\theta\frac{\partial}{\partial r}\left(u_\theta\cos\theta\right)+\frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\left(u_r\sin\theta\right)+\frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\left(u_\theta\cos\theta\right);
\end{aligned}
\tag{16.211}
$$

factors independent of the differentiation variables may then be taken off the corresponding differential operators to give

$$
\begin{aligned}
\nabla\cdot\mathbf{u}&=\frac{\partial u_x}{\partial x}+\cos^2\theta\frac{\partial u_r}{\partial r}-\sin\theta\cos\theta\frac{\partial u_\theta}{\partial r}-\frac{\sin\theta}{r}\left(\cos\theta\frac{\partial u_r}{\partial\theta}-u_r\sin\theta\right)+\frac{\sin\theta}{r}\left(\sin\theta\frac{\partial u_\theta}{\partial\theta}+u_\theta\cos\theta\right)\\
&\quad+\sin^2\theta\frac{\partial u_r}{\partial r}+\sin\theta\cos\theta\frac{\partial u_\theta}{\partial r}+\frac{\cos\theta}{r}\left(\sin\theta\frac{\partial u_r}{\partial\theta}+u_r\cos\theta\right)+\frac{\cos\theta}{r}\left(\cos\theta\frac{\partial u_\theta}{\partial\theta}-u_\theta\sin\theta\right),
\end{aligned}
\tag{16.212}
$$

where the rule of differentiation of a product was meanwhile applied to ∂(u_r cos θ)/∂θ, ∂(u_θ sin θ)/∂θ, ∂(u_r sin θ)/∂θ, and ∂(u_θ cos θ)/∂θ, and terms alike were lumped with each other. Upon factoring out each partial derivative, Eq. (16.212) becomes

$$
\begin{aligned}
\nabla\cdot\mathbf{u}&=\frac{\partial u_x}{\partial x}+\left(\frac{\sin^2\theta}{r}+\frac{\cos^2\theta}{r}\right)u_r+\left(\frac{\sin\theta\cos\theta}{r}-\frac{\sin\theta\cos\theta}{r}\right)u_\theta+\left(\cos^2\theta+\sin^2\theta\right)\frac{\partial u_r}{\partial r}\\
&\quad+\left(\sin\theta\cos\theta-\sin\theta\cos\theta\right)\frac{\partial u_\theta}{\partial r}+\left(\frac{\sin\theta\cos\theta}{r}-\frac{\sin\theta\cos\theta}{r}\right)\frac{\partial u_r}{\partial\theta}+\left(\frac{\sin^2\theta}{r}+\frac{\cos^2\theta}{r}\right)\frac{\partial u_\theta}{\partial\theta},
\end{aligned}
\tag{16.213}
$$

which simplifies to

$$ \nabla\cdot\mathbf{u}=\frac{\partial u_x}{\partial x}+\frac{u_r}{r}+\frac{\partial u_r}{\partial r}+\frac1r\frac{\partial u_\theta}{\partial\theta} \tag{16.214} $$

in view of Eq. (2.442), coupled with cancellation of symmetrical terms; Eq. (16.214) may be reshaped to

$$ \nabla\cdot\mathbf{u}=\frac{\partial u_x}{\partial x}+\frac1r\left(u_r+r\frac{\partial u_r}{\partial r}\right)+\frac1r\frac{\partial u_\theta}{\partial\theta} \tag{16.215} $$

upon factoring out the reciprocal of r, and will finally appear as

$$ \nabla\cdot\mathbf{u}=\frac{\partial u_x}{\partial x}+\frac1r\frac{\partial}{\partial r}\left(ru_r\right)+\frac1r\frac{\partial u_\theta}{\partial\theta} \tag{16.216} $$

at the expense of Eq. (10.119) again. In view of Eqs. (16.177), (16.182), and (16.183), one may rewrite Eq. (16.12) as

$$ \nabla=\mathbf{j}_x\frac{\partial}{\partial x}+\left(\mathbf{j}_r\cos\theta-\mathbf{j}_\theta\sin\theta\right)\frac{\partial}{\partial y}+\left(\mathbf{j}_r\sin\theta+\mathbf{j}_\theta\cos\theta\right)\frac{\partial}{\partial z}; \tag{16.217} $$

sequential differentiation of variables y and z, with regard to the new variables r and θ, generates, in turn,

$$ \nabla=\mathbf{j}_x\frac{\partial}{\partial x}+\left(\mathbf{j}_r\cos\theta-\mathbf{j}_\theta\sin\theta\right)\left(\frac{\partial r}{\partial y}\frac{\partial}{\partial r}+\frac{\partial\theta}{\partial y}\frac{\partial}{\partial\theta}\right)+\left(\mathbf{j}_r\sin\theta+\mathbf{j}_\theta\cos\theta\right)\left(\frac{\partial r}{\partial z}\frac{\partial}{\partial r}+\frac{\partial\theta}{\partial z}\frac{\partial}{\partial\theta}\right), \tag{16.218} $$


whereas combination with Eqs. (16.203)–(16.206) supports transformation to

$$ \nabla=\mathbf{j}_x\frac{\partial}{\partial x}+\left(\mathbf{j}_r\cos\theta-\mathbf{j}_\theta\sin\theta\right)\left(\cos\theta\frac{\partial}{\partial r}-\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right)+\left(\mathbf{j}_r\sin\theta+\mathbf{j}_\theta\cos\theta\right)\left(\sin\theta\frac{\partial}{\partial r}+\frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\right). \tag{16.219} $$

Sequential application of the distributive property of multiplication of scalar by vector, with regard to addition of either scalars or vectors, justifies transformation of Eq. (16.219) to

$$
\begin{aligned}
\nabla&=\mathbf{j}_x\frac{\partial}{\partial x}+\mathbf{j}_r\cos^2\theta\frac{\partial}{\partial r}-\mathbf{j}_r\frac{\sin\theta\cos\theta}{r}\frac{\partial}{\partial\theta}-\mathbf{j}_\theta\sin\theta\cos\theta\frac{\partial}{\partial r}+\mathbf{j}_\theta\frac{\sin^2\theta}{r}\frac{\partial}{\partial\theta}\\
&\quad+\mathbf{j}_r\sin^2\theta\frac{\partial}{\partial r}+\mathbf{j}_r\frac{\sin\theta\cos\theta}{r}\frac{\partial}{\partial\theta}+\mathbf{j}_\theta\sin\theta\cos\theta\frac{\partial}{\partial r}+\mathbf{j}_\theta\frac{\cos^2\theta}{r}\frac{\partial}{\partial\theta}
\end{aligned}
\tag{16.220}
$$

or, after having unit vectors factored out and similar terms lumped together,

$$
\nabla=\mathbf{j}_x\frac{\partial}{\partial x}+\mathbf{j}_r\left[\left(\cos^2\theta+\sin^2\theta\right)\frac{\partial}{\partial r}+\left(\frac{\sin\theta\cos\theta}{r}-\frac{\sin\theta\cos\theta}{r}\right)\frac{\partial}{\partial\theta}\right]+\mathbf{j}_\theta\left[\left(\sin\theta\cos\theta-\sin\theta\cos\theta\right)\frac{\partial}{\partial r}+\left(\frac{\sin^2\theta}{r}+\frac{\cos^2\theta}{r}\right)\frac{\partial}{\partial\theta}\right];
\tag{16.221}
$$

upon application of the fundamental theorem of trigonometry, Eq. (16.221) breaks down to

$$ \nabla=\mathbf{j}_x\frac{\partial}{\partial x}+\mathbf{j}_r\frac{\partial}{\partial r}+\mathbf{j}_\theta\frac1r\frac{\partial}{\partial\theta}, \tag{16.222} $$

where symmetrical terms were meanwhile dropped out. Based on the new definition of nabla using exclusively variables x, r, and θ – as conveyed by Eq. (16.222) – one can as well calculate the Laplacian operator as per Eq. (16.64), revisited as

$$ \nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial}{\partial y}\left(\frac{\partial}{\partial y}\right)+\frac{\partial}{\partial z}\left(\frac{\partial}{\partial z}\right). \tag{16.223} $$

The rule of chain partial differentiation, via intermediate variables r and θ, transforms Eq. (16.223) to

$$ \nabla^2=\frac{\partial^2}{\partial x^2}+\left(\frac{\partial r}{\partial y}\frac{\partial}{\partial r}+\frac{\partial\theta}{\partial y}\frac{\partial}{\partial\theta}\right)\left(\frac{\partial r}{\partial y}\frac{\partial}{\partial r}+\frac{\partial\theta}{\partial y}\frac{\partial}{\partial\theta}\right)+\left(\frac{\partial r}{\partial z}\frac{\partial}{\partial r}+\frac{\partial\theta}{\partial z}\frac{\partial}{\partial\theta}\right)\left(\frac{\partial r}{\partial z}\frac{\partial}{\partial r}+\frac{\partial\theta}{\partial z}\frac{\partial}{\partial\theta}\right), \tag{16.224} $$


since Eqs. (16.160) and (16.161) indicate that y ≡ y{r, θ} and z ≡ z{r, θ}; insertion of Eqs. (16.203)–(16.206) yields

$$ \nabla^2=\frac{\partial^2}{\partial x^2}+\left(\cos\theta\frac{\partial}{\partial r}-\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right)\left(\cos\theta\frac{\partial}{\partial r}-\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right)+\left(\sin\theta\frac{\partial}{\partial r}+\frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\right)\left(\sin\theta\frac{\partial}{\partial r}+\frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\right). \tag{16.225} $$

One may now proceed with application of the distributive property to get

$$
\begin{aligned}
\nabla^2&=\frac{\partial^2}{\partial x^2}-\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(\cos\theta\frac{\partial}{\partial r}\right)+\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right)+\cos\theta\frac{\partial}{\partial r}\left(\cos\theta\frac{\partial}{\partial r}\right)-\cos\theta\frac{\partial}{\partial r}\left(\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right)\\
&\quad+\frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial r}\right)+\frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\left(\frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\right)+\sin\theta\frac{\partial}{\partial r}\left(\sin\theta\frac{\partial}{\partial r}\right)+\sin\theta\frac{\partial}{\partial r}\left(\frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\right)
\end{aligned}
\tag{16.226}
$$

from Eq. (16.225); since factors only in r behave as constants when differentiating with regard to θ, and vice versa, Eq. (16.226) degenerates to ∇2 =

∂2 sin θ ∂ ∂ sin θ ∂ ∂ sin θ cosθ + 2 − 2 r ∂θ ∂θ r ∂θ ∂r ∂x − sin θ cosθ

∂ 1 ∂ ∂ ∂ + cos2 θ ∂r r ∂θ ∂r ∂r

cos θ ∂ ∂ cosθ ∂ ∂ cosθ sin θ + 2 + r ∂θ ∂θ r ∂θ ∂r + sin θ cosθ

,

16 227

∂ 1 ∂ ∂ ∂ + sin2 θ ∂r r ∂θ ∂r ∂r

where $\partial\left(r^{-1}\partial/\partial\theta\right)/\partial r$ and $\partial\left(\partial/\partial r\right)/\partial r$ will factor out as

$$
\begin{aligned}
\nabla^2&=\frac{\partial^2}{\partial x^2}+\frac{\sin\theta}{r^2}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right)-\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(\cos\theta\frac{\partial}{\partial r}\right)+\frac{\cos\theta}{r^2}\frac{\partial}{\partial\theta}\left(\cos\theta\frac{\partial}{\partial\theta}\right)+\frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial r}\right)\\
&\quad+\left(\sin\theta\cos\theta-\sin\theta\cos\theta\right)\frac{\partial}{\partial r}\left(\frac1r\frac{\partial}{\partial\theta}\right)+\left(\sin^2\theta+\cos^2\theta\right)\frac{\partial}{\partial r}\left(\frac{\partial}{\partial r}\right).
\end{aligned}
\tag{16.228}
$$


Application of the fundamental relationship of trigonometry, complemented by cancellation of symmetrical terms and the definition of a second-order derivative, converts Eq. (16.228) to

$$
\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\sin\theta}{r^2}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right)-\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(\cos\theta\frac{\partial}{\partial r}\right)+\frac{\cos\theta}{r^2}\frac{\partial}{\partial\theta}\left(\cos\theta\frac{\partial}{\partial\theta}\right)+\frac{\cos\theta}{r}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial r}\right)+\frac{\partial^2}{\partial r^2};
\tag{16.229}
$$

the rule of differentiation of a product may now be invoked to write

$$
\begin{aligned}
\nabla^2&=\frac{\partial^2}{\partial x^2}+\frac{\sin\theta}{r^2}\left(\cos\theta\frac{\partial}{\partial\theta}+\sin\theta\frac{\partial^2}{\partial\theta^2}\right)-\frac{\sin\theta}{r}\left(-\sin\theta\frac{\partial}{\partial r}+\cos\theta\frac{\partial^2}{\partial\theta\,\partial r}\right)\\
&\quad+\frac{\cos\theta}{r^2}\left(-\sin\theta\frac{\partial}{\partial\theta}+\cos\theta\frac{\partial^2}{\partial\theta^2}\right)+\frac{\cos\theta}{r}\left(\cos\theta\frac{\partial}{\partial r}+\sin\theta\frac{\partial^2}{\partial\theta\,\partial r}\right)+\frac{\partial^2}{\partial r^2},
\end{aligned}
\tag{16.230}
$$

whereas elimination of parentheses readily produces

$$
\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\sin\theta\cos\theta}{r^2}\frac{\partial}{\partial\theta}+\frac{\sin^2\theta}{r^2}\frac{\partial^2}{\partial\theta^2}+\frac{\sin^2\theta}{r}\frac{\partial}{\partial r}-\frac{\sin\theta\cos\theta}{r}\frac{\partial^2}{\partial\theta\,\partial r}-\frac{\sin\theta\cos\theta}{r^2}\frac{\partial}{\partial\theta}+\frac{\cos^2\theta}{r^2}\frac{\partial^2}{\partial\theta^2}+\frac{\cos^2\theta}{r}\frac{\partial}{\partial r}+\frac{\sin\theta\cos\theta}{r}\frac{\partial^2}{\partial\theta\,\partial r}+\frac{\partial^2}{\partial r^2}.
\tag{16.231}
$$

When common terms are factored out, Eq. (16.231) becomes

$$
\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{1}{r^2}\left(\sin\theta\cos\theta-\sin\theta\cos\theta\right)\frac{\partial}{\partial\theta}+\frac{1}{r^2}\left(\sin^2\theta+\cos^2\theta\right)\frac{\partial^2}{\partial\theta^2}+\frac1r\left(\sin^2\theta+\cos^2\theta\right)\frac{\partial}{\partial r}+\frac1r\left(\sin\theta\cos\theta-\sin\theta\cos\theta\right)\frac{\partial^2}{\partial\theta\,\partial r}+\frac{\partial^2}{\partial r^2},
\tag{16.232}
$$

so application of the fundamental relationship of trigonometry is again in order, viz.

$$
\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{1}{r^2}\frac{\partial^2}{\partial\theta^2}+\frac1r\frac{\partial}{\partial r}+\frac{\partial^2}{\partial r^2},
\tag{16.233}
$$

along with cancellation of sin θ cos θ with its negative; once 1/r has been factored out, Eq. (16.233) turns to

$$
\nabla^2=\frac{\partial^2}{\partial x^2}+\frac1r\left(\frac{\partial}{\partial r}+r\frac{\partial^2}{\partial r^2}\right)+\frac{1}{r^2}\frac{\partial^2}{\partial\theta^2},
\tag{16.234}
$$

which is equivalent to

$$
\nabla^2=\frac{\partial^2}{\partial x^2}+\frac1r\frac{\partial}{\partial r}\left(r\frac{\partial}{\partial r}\right)+\frac{1}{r^2}\frac{\partial^2}{\partial\theta^2}
\tag{16.235}
$$

in view of the derivative of a product – so a Laplacian becomes available that resorts directly to cylindrical coordinates.
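The cylindrical Laplacian of Eq. (16.235) can be cross-checked against its Cartesian counterpart on a concrete test field. The sketch below is illustrative only (sympy assumed); in the book's convention (y = r cos θ, z = r sin θ, x unchanged), the hypothetical field f = x²(y² − z²)(y² + z²) reads x² r⁴ cos 2θ in cylindrical coordinates:

```python
# Spot-check of Eq. (16.235) against the Cartesian Laplacian; sympy assumed,
# test field chosen so that both forms are polynomial/trigonometric.
import sympy as sp

x, y, z = sp.symbols('x y z')
r, th = sp.symbols('r theta', positive=True)

f_cart = x**2 * (y**2 - z**2) * (y**2 + z**2)  # Cartesian form of the field
f_cyl = x**2 * r**4 * sp.cos(2 * th)           # same field in (x, r, theta)

lap_cart = sum(sp.diff(f_cart, v, 2) for v in (x, y, z))
# Eq. (16.235): d2/dx2 + (1/r) d/dr (r d/dr) + (1/r^2) d2/dtheta2
lap_cyl = (sp.diff(f_cyl, x, 2)
           + sp.diff(r * sp.diff(f_cyl, r), r) / r
           + sp.diff(f_cyl, th, 2) / r**2)

# map the Cartesian result to (r, theta) and compare
diff_ = lap_cart.subs({y: r * sp.cos(th), z: r * sp.sin(th)}) - lap_cyl
assert sp.simplify(diff_) == 0
print("Eq. (16.235) verified for the test field")
```

Both routes yield 2r⁴ cos 2θ + 12x²r² cos 2θ, so the difference simplifies to zero; an analogous substitution confirms the divergence formula of Eq. (16.216).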


16.3 Spherical Coordinates

16.3.1 Definition and Representation

When differential equations on (scalar) ψ or (vector) u describe problems characterized by spherical geometry, one may resort directly to grad or lap of ψ, or else div or curl of u, in spherical polar coordinates instead of using Cartesian coordinates – since the resulting mathematical manipulations will turn out simpler. Remember that the position of a point P, characterized by rectangular coordinates (x,y,z), can alternatively be determined by coordinates (r,θ,ϕ) – where r denotes distance to the origin of the axes, θ denotes the angle formed with the (positive) z-axis by straight segment [0P], and ϕ denotes the angle formed with the (positive) x-axis by straight segment [0P∗∗], with P∗∗ denoting the projection of P onto the x0y plane – as highlighted in Fig. 16.3. As happened with the unit vectors relevant for cylindrical coordinates, kr, kθ, and kϕ are applied on point P itself, rather than on the (fixed) origin of the Cartesian axes; kr has the direction of the straight line defined by points 0 and P, whereas kθ has the direction tangent to the circumference passing through point P on the plane containing P and the z-axis, and kϕ has the direction tangent to the circumference passing through point P∗∗ on plane x0y. In view of Eqs. (2.288) and (2.290) applied twice, one may write

$$ x\equiv r\sin\theta\cos\phi, \tag{16.236} $$

$$ y\equiv r\sin\theta\sin\phi, \tag{16.237} $$

and

$$ z\equiv r\cos\theta \tag{16.238} $$

based on Fig. 16.3; after squaring both sides of Eqs. (16.236)–(16.238), and adding the result afterward, one obtains

$$ x^2+y^2+z^2=r^2\sin^2\theta\cos^2\phi+r^2\sin^2\theta\sin^2\phi+r^2\cos^2\theta, \tag{16.239} $$

which may be algebraically rearranged to read

$$ x^2+y^2+z^2=r^2\left[\sin^2\theta\left(\sin^2\phi+\cos^2\phi\right)+\cos^2\theta\right] \tag{16.240} $$

Figure 16.3 Graphical representation of a general point P, via spherical coordinates (r,θ,ϕ) in the three-dimensional space, and its projections P∗ on the y0z plane and P∗∗ on the x0y plane – with indication of the canonical base of unit, mutually orthogonal vectors for said system of coordinates, i.e. (kr, kθ, kϕ).


once $r^2$, and then $\sin^2\theta$, have been factored out. Recalling Eq. (2.442), one may simplify Eq. (16.240) to

$$ x^2+y^2+z^2=r^2\left(\sin^2\theta+\cos^2\theta\right)=r^2 \tag{16.241} $$

upon application thereof twice, or else

$$ r\equiv\sqrt{x^2+y^2+z^2} \tag{16.242} $$

after taking square roots of both sides. If one had added only Eqs. (16.236) and (16.237) after squaring both sides, the result would have read

$$ x^2+y^2=r^2\sin^2\theta\cos^2\phi+r^2\sin^2\theta\sin^2\phi=r^2\sin^2\theta\left(\sin^2\phi+\cos^2\phi\right), \tag{16.243} $$

where $r^2$ and $\sin^2\theta$ were meanwhile factored out; Eq. (16.243) becomes

$$ x^2+y^2=r^2\sin^2\theta \tag{16.244} $$

once Eq. (2.442) is again taken on board. Ordered division of Eq. (16.244) by Eq. (16.238), after squaring both sides of the latter, unfolds

$$ \frac{x^2+y^2}{z^2}=\frac{r^2\sin^2\theta}{r^2\cos^2\theta} \tag{16.245} $$

– where cancellation of common factors between numerator and denominator leads to

$$ \frac{x^2+y^2}{z^2}=\left(\frac{\sin\theta}{\cos\theta}\right)^2; \tag{16.246} $$

Eq. (2.299) allows conversion of Eq. (16.246) to

$$ \tan^2\theta=\frac{x^2+y^2}{z^2}, \tag{16.247} $$

or else

$$ \tan\theta=\frac{\sqrt{x^2+y^2}}{z} \tag{16.248} $$

should square roots be taken of both sides. One finally obtains

$$ \theta\equiv\tan^{-1}\frac{\sqrt{x^2+y^2}}{z}, \tag{16.249} $$

based on application of the inverse tangent to both sides of Eq. (16.248). If ordered division of Eq. (16.237) by Eq. (16.236) had taken place, then one would have gotten

$$ \frac{y}{x}=\frac{r\sin\theta\sin\phi}{r\sin\theta\cos\phi}, \tag{16.250} $$

or else

$$ \frac{y}{x}=\frac{\sin\phi}{\cos\phi}=\tan\phi \tag{16.251} $$

– obtained after dropping $r\sin\theta$ from both numerator and denominator, followed by application of the definition of tangent; its inverse function may again be applied to both sides of Eq. (16.251) to generate

$$ \phi\equiv\tan^{-1}\frac{y}{x}. \tag{16.252} $$
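The forward map of Eqs. (16.236)–(16.238) and the inverse relations of Eqs. (16.242), (16.249), and (16.252) can be exercised numerically as a round trip. The sketch below uses only the Python standard library; note that atan2 is used as a quadrant-safe stand-in for the tan⁻¹ of the text, which by itself only covers a half-plane:

```python
# Numerical round-trip for the spherical-coordinate relations; pure stdlib.
import math

def to_cartesian(r, theta, phi):
    """Eqs. (16.236)-(16.238): forward map (r, theta, phi) -> (x, y, z)."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def to_spherical(x, y, z):
    """Eqs. (16.242), (16.249), (16.252); atan2 stands in for tan^-1."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(math.hypot(x, y), z)  # tan(theta) = sqrt(x^2+y^2)/z
    phi = math.atan2(y, x)                   # tan(phi) = y/x
    return r, theta, phi

r0, t0, p0 = 2.5, 0.8, 1.2
back = to_spherical(*to_cartesian(r0, t0, p0))
assert all(math.isclose(a, b, rel_tol=1e-12) for a, b in zip((r0, t0, p0), back))
print("round trip recovers (r, theta, phi)")
```

The round trip recovers (r, θ, ϕ) to machine precision for any r > 0, 0 < θ < π.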

The set of unit vectors kr, kθ, and kϕ has been defined so as to be mutually orthogonal (for mathematical convenience) – as was the case of ix, iy, and iz pertaining to a Cartesian system; note that kr, kθ, and kϕ accompany point P (and thus change direction) when said point moves relative to the original Cartesian axes, similarly to jr and jθ in the case of cylindrical coordinates. Using plain trigonometric arguments, one may now proceed to

$$ \mathbf{k}_r\equiv\mathbf{i}_x\sin\theta\cos\phi+\mathbf{i}_y\sin\theta\sin\phi+\mathbf{i}_z\cos\theta, \tag{16.253} $$

$$ \mathbf{k}_\theta\equiv\mathbf{i}_x\cos\theta\cos\phi+\mathbf{i}_y\cos\theta\sin\phi-\mathbf{i}_z\sin\theta, \tag{16.254} $$

and

$$ \mathbf{k}_\phi\equiv-\mathbf{i}_x\sin\phi+\mathbf{i}_y\cos\phi \tag{16.255} $$

– stemming from Eqs. (16.236)–(16.238), respectively. While Eq. (16.253) is a mere result of the orthogonal projections onto the x-, y-, and z-axes, one may resort to the scalar product of kr by kθ, as given by Eq. (16.254), to get

$$ \mathbf{k}_r\cdot\mathbf{k}_\theta=\sin\theta\cos\phi\,\cos\theta\cos\phi+\sin\theta\sin\phi\,\cos\theta\sin\phi+\cos\theta\left(-\sin\theta\right) \tag{16.256} $$

using Eqs. (16.253) and (16.254) – where Eq. (3.95) was taken advantage of; upon lumping factors alike, and factoring sin θ cos θ out afterward, Eq. (16.256) becomes

$$ \mathbf{k}_r\cdot\mathbf{k}_\theta=\sin\theta\cos\theta\cos^2\phi+\sin\theta\cos\theta\sin^2\phi-\sin\theta\cos\theta=\left(\sin^2\phi+\cos^2\phi-1\right)\sin\theta\cos\theta=\left(1-1\right)\sin\theta\cos\theta=0, \tag{16.257} $$

again with the aid of Eq. (2.442). Equation (3.56) may then be invoked to conclude that kr and kθ are perpendicular to each other, since $\mathbf{k}_r\neq\mathbf{0}$ and $\mathbf{k}_\theta\neq\mathbf{0}$ as per hypothesis. By the same token, one realizes that

$$ \mathbf{k}_\theta\cdot\mathbf{k}_\phi=\cos\theta\cos\phi\left(-\sin\phi\right)+\cos\theta\sin\phi\cos\phi+\left(-\sin\theta\right)\cdot0 \tag{16.258} $$

based on Eqs. (16.254) and (16.255), where elimination of parentheses and factoring out of sin ϕ cos ϕ unfold

$$ \mathbf{k}_\theta\cdot\mathbf{k}_\phi=\left(\cos\theta-\cos\theta\right)\sin\phi\cos\phi+0=0 \tag{16.259} $$

– so kθ and kϕ are also orthogonal to each other; while

$$ \mathbf{k}_r\cdot\mathbf{k}_\phi=\sin\theta\cos\phi\left(-\sin\phi\right)+\sin\theta\sin\phi\cos\phi+\cos\theta\cdot0, \tag{16.260} $$

stemming from Eqs. (16.253) and (16.255), degenerates to

$$ \mathbf{k}_r\cdot\mathbf{k}_\phi=\left(\sin\phi-\sin\phi\right)\sin\theta\cos\phi+0=0 \tag{16.261} $$

upon factoring out sin θ cos ϕ – thus assuring orthogonality also between vectors kr and kϕ. On the other hand,

$$ \left|\mathbf{k}_r\right|=\sqrt{\sin^2\theta\cos^2\phi+\sin^2\theta\sin^2\phi+\cos^2\theta} \tag{16.262} $$


based on Eqs. (3.8) and (16.253); elementary algebraic rearrangement unfolds

$$ \left|\mathbf{k}_r\right|=\sqrt{\left(\sin^2\phi+\cos^2\phi\right)\sin^2\theta+\cos^2\theta}=\sqrt{\sin^2\theta+\cos^2\theta}=\sqrt1=1, \tag{16.263} $$

together with sequential application of the fundamental theorem of trigonometry. One similarly realizes that

$$ \left|\mathbf{k}_\theta\right|=\sqrt{\cos^2\theta\cos^2\phi+\cos^2\theta\sin^2\phi+\left(-\sin\theta\right)^2} \tag{16.264} $$

in view of Eqs. (3.8) and (16.254) – which degenerates to

$$ \left|\mathbf{k}_\theta\right|=\sqrt{\left(\sin^2\phi+\cos^2\phi\right)\cos^2\theta+\sin^2\theta}=\sqrt{\cos^2\theta+\sin^2\theta}=\sqrt1=1 \tag{16.265} $$

– once cos²θ is factored out, and then Eq. (2.442) is applied with regard to ϕ, and with regard to θ afterward. Finally, Eq. (16.255) gives rise to

$$ \left|\mathbf{k}_\phi\right|=\sqrt{\left(-\sin\phi\right)^2+\cos^2\phi} \tag{16.266} $$

again in view of Eq. (3.8), which readily leads to

$$ \left|\mathbf{k}_\phi\right|=\sqrt{\sin^2\phi+\cos^2\phi}=\sqrt1=1 \tag{16.267} $$

again at the expense of Eq. (2.442). Based on Eqs. (16.263), (16.265), and (16.267), one confirms that (kr, kθ, kϕ) constitutes indeed a set of unit vectors – further to being orthogonal to each other, as per Eqs. (16.257), (16.259), and (16.261). Cramer's rule, as conveyed by Eq. (7.59), may now be applied to the set of Eqs. (16.253)–(16.255), on ix, iy, and iz as unknowns, and kr, kθ, and kϕ as independent terms – in attempts to obtain the unit vectors in rectangular coordinates as functions of the new unit vectors, according to

$$ \mathbf{i}_x=\frac{\begin{vmatrix}\mathbf{k}_r&\sin\theta\sin\phi&\cos\theta\\ \mathbf{k}_\theta&\cos\theta\sin\phi&-\sin\theta\\ \mathbf{k}_\phi&\cos\phi&0\end{vmatrix}}{\begin{vmatrix}\sin\theta\cos\phi&\sin\theta\sin\phi&\cos\theta\\ \cos\theta\cos\phi&\cos\theta\sin\phi&-\sin\theta\\ -\sin\phi&\cos\phi&0\end{vmatrix}}, \tag{16.268} $$

$$ \mathbf{i}_y=\frac{\begin{vmatrix}\sin\theta\cos\phi&\mathbf{k}_r&\cos\theta\\ \cos\theta\cos\phi&\mathbf{k}_\theta&-\sin\theta\\ -\sin\phi&\mathbf{k}_\phi&0\end{vmatrix}}{\begin{vmatrix}\sin\theta\cos\phi&\sin\theta\sin\phi&\cos\theta\\ \cos\theta\cos\phi&\cos\theta\sin\phi&-\sin\theta\\ -\sin\phi&\cos\phi&0\end{vmatrix}}, \tag{16.269} $$


and

$$ \mathbf{i}_z=\frac{\begin{vmatrix}\sin\theta\cos\phi&\sin\theta\sin\phi&\mathbf{k}_r\\ \cos\theta\cos\phi&\cos\theta\sin\phi&\mathbf{k}_\theta\\ -\sin\phi&\cos\phi&\mathbf{k}_\phi\end{vmatrix}}{\begin{vmatrix}\sin\theta\cos\phi&\sin\theta\sin\phi&\cos\theta\\ \cos\theta\cos\phi&\cos\theta\sin\phi&-\sin\theta\\ -\sin\phi&\cos\phi&0\end{vmatrix}}. \tag{16.270} $$

The above third-order determinants can be ascertained via Laplace's theorem labeled as Eq. (6.42) – developed to advantage along the last column, since it has a nil term in all but one case; this leads to

$$ \mathbf{i}_x=\frac{\cos\theta\begin{vmatrix}\mathbf{k}_\theta&\cos\theta\sin\phi\\ \mathbf{k}_\phi&\cos\phi\end{vmatrix}-\left(-\sin\theta\right)\begin{vmatrix}\mathbf{k}_r&\sin\theta\sin\phi\\ \mathbf{k}_\phi&\cos\phi\end{vmatrix}}{\cos\theta\begin{vmatrix}\cos\theta\cos\phi&\cos\theta\sin\phi\\ -\sin\phi&\cos\phi\end{vmatrix}-\left(-\sin\theta\right)\begin{vmatrix}\sin\theta\cos\phi&\sin\theta\sin\phi\\ -\sin\phi&\cos\phi\end{vmatrix}}, \tag{16.271} $$

$$ \mathbf{i}_y=\frac{\cos\theta\begin{vmatrix}\cos\theta\cos\phi&\mathbf{k}_\theta\\ -\sin\phi&\mathbf{k}_\phi\end{vmatrix}-\left(-\sin\theta\right)\begin{vmatrix}\sin\theta\cos\phi&\mathbf{k}_r\\ -\sin\phi&\mathbf{k}_\phi\end{vmatrix}}{\cos\theta\begin{vmatrix}\cos\theta\cos\phi&\cos\theta\sin\phi\\ -\sin\phi&\cos\phi\end{vmatrix}-\left(-\sin\theta\right)\begin{vmatrix}\sin\theta\cos\phi&\sin\theta\sin\phi\\ -\sin\phi&\cos\phi\end{vmatrix}}, \tag{16.272} $$

and

$$ \mathbf{i}_z=\frac{\mathbf{k}_r\begin{vmatrix}\cos\theta\cos\phi&\cos\theta\sin\phi\\ -\sin\phi&\cos\phi\end{vmatrix}-\mathbf{k}_\theta\begin{vmatrix}\sin\theta\cos\phi&\sin\theta\sin\phi\\ -\sin\phi&\cos\phi\end{vmatrix}+\mathbf{k}_\phi\begin{vmatrix}\sin\theta\cos\phi&\sin\theta\sin\phi\\ \cos\theta\cos\phi&\cos\theta\sin\phi\end{vmatrix}}{\cos\theta\begin{vmatrix}\cos\theta\cos\phi&\cos\theta\sin\phi\\ -\sin\phi&\cos\phi\end{vmatrix}-\left(-\sin\theta\right)\begin{vmatrix}\sin\theta\cos\phi&\sin\theta\sin\phi\\ -\sin\phi&\cos\phi\end{vmatrix}} \tag{16.273} $$

based on Eqs. (16.268), (16.269), and (16.270), respectively. Recalling the definition of a second-order determinant as conveyed by Eq. (1.10), one can transform Eqs. (16.271)–(16.273) to

$$ \mathbf{i}_x=\frac{\cos\theta\left(\mathbf{k}_\theta\cos\phi-\mathbf{k}_\phi\cos\theta\sin\phi\right)+\sin\theta\left(\mathbf{k}_r\cos\phi-\mathbf{k}_\phi\sin\theta\sin\phi\right)}{\cos\theta\left(\cos\theta\cos^2\phi+\cos\theta\sin^2\phi\right)+\sin\theta\left(\sin\theta\cos^2\phi+\sin\theta\sin^2\phi\right)}, \tag{16.274} $$

$$ \mathbf{i}_y=\frac{\cos\theta\left(\mathbf{k}_\phi\cos\theta\cos\phi+\mathbf{k}_\theta\sin\phi\right)+\sin\theta\left(\mathbf{k}_\phi\sin\theta\cos\phi+\mathbf{k}_r\sin\phi\right)}{\cos\theta\left(\cos\theta\cos^2\phi+\cos\theta\sin^2\phi\right)+\sin\theta\left(\sin\theta\cos^2\phi+\sin\theta\sin^2\phi\right)}, \tag{16.275} $$

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

and

$$i_z = \frac{k_r\left(\cos\theta\cos^2\phi + \cos\theta\sin^2\phi\right) - k_\theta\left(\sin\theta\cos^2\phi + \sin\theta\sin^2\phi\right) + k_\phi\left(\sin\theta\cos\theta\sin\phi\cos\phi - \sin\theta\cos\theta\sin\phi\cos\phi\right)}{\cos\theta\left(\cos\theta\cos^2\phi + \cos\theta\sin^2\phi\right) + \sin\theta\left(\sin\theta\cos^2\phi + \sin\theta\sin^2\phi\right)}, \quad(16.276)$$

respectively; common terms may now be factored out in denominator, and unit vectors in numerator – as well as parentheses eliminated in numerator, to yield

$$i_x = \frac{k_\theta\cos\theta\cos\phi - k_\phi\cos^2\theta\sin\phi + k_r\sin\theta\cos\phi - k_\phi\sin^2\theta\sin\phi}{\cos^2\theta\left(\sin^2\phi + \cos^2\phi\right) + \sin^2\theta\left(\sin^2\phi + \cos^2\phi\right)}, \quad(16.277)$$

$$i_y = \frac{k_\phi\cos^2\theta\cos\phi + k_\theta\cos\theta\sin\phi + k_\phi\sin^2\theta\cos\phi + k_r\sin\theta\sin\phi}{\cos^2\theta\left(\sin^2\phi + \cos^2\phi\right) + \sin^2\theta\left(\sin^2\phi + \cos^2\phi\right)}, \quad(16.278)$$

and

$$i_z = \frac{k_r\cos\theta\left(\sin^2\phi + \cos^2\phi\right) - k_\theta\sin\theta\left(\sin^2\phi + \cos^2\phi\right)}{\cos^2\theta\left(\sin^2\phi + \cos^2\phi\right) + \sin^2\theta\left(\sin^2\phi + \cos^2\phi\right)} \quad(16.279)$$

from Eqs. (16.274)–(16.276), respectively – where symmetrical terms meanwhile cancelled out. Application of the fundamental relationship of trigonometry, along with factoring out of kϕ sin ϕ, kϕ cos ϕ, or sin²ϕ + cos²ϕ, transforms Eqs. (16.277)–(16.279) to

$$i_x = \frac{k_r\sin\theta\cos\phi + k_\theta\cos\theta\cos\phi - k_\phi\sin\phi\left(\sin^2\theta + \cos^2\theta\right)}{\sin^2\theta + \cos^2\theta}, \quad(16.280)$$

$$i_y = \frac{k_r\sin\theta\sin\phi + k_\theta\cos\theta\sin\phi + k_\phi\cos\phi\left(\sin^2\theta + \cos^2\theta\right)}{\sin^2\theta + \cos^2\theta}, \quad(16.281)$$

and

$$i_z = \frac{\left(k_r\cos\theta - k_\theta\sin\theta\right)\left(\sin^2\phi + \cos^2\phi\right)}{\sin^2\theta + \cos^2\theta}, \quad(16.282)$$

respectively – whereas application once more of said relationship permits further simplification to

$$i_x = k_r\sin\theta\cos\phi + k_\theta\cos\theta\cos\phi - k_\phi\sin\phi, \quad(16.283)$$

$$i_y = k_r\sin\theta\sin\phi + k_\theta\cos\theta\sin\phi + k_\phi\cos\phi, \quad(16.284)$$

and

$$i_z = k_r\cos\theta - k_\theta\sin\theta, \quad(16.285)$$

respectively – thus generating ix ≡ ix{kr, kθ, kϕ}, iy ≡ iy{kr, kθ, kϕ}, and iz ≡ iz{kr, kθ}. Consider now a vector u, defined as in Eq. (16.1); following insertion of Eqs. (16.283)–(16.285), one obtains

$$u = \left(k_r\sin\theta\cos\phi + k_\theta\cos\theta\cos\phi - k_\phi\sin\phi\right)u_x + \left(k_r\sin\theta\sin\phi + k_\theta\cos\theta\sin\phi + k_\phi\cos\phi\right)u_y + \left(k_r\cos\theta - k_\theta\sin\theta\right)u_z \quad(16.286)$$


or, after factoring kr, kθ, and kϕ out,

$$u = k_r\left(u_x\sin\theta\cos\phi + u_y\sin\theta\sin\phi + u_z\cos\theta\right) + k_\theta\left(u_x\cos\theta\cos\phi + u_y\cos\theta\sin\phi - u_z\sin\theta\right) + k_\phi\left(-u_x\sin\phi + u_y\cos\phi\right). \quad(16.287)$$

Equation (16.287) may be reformulated as

$$u \equiv k_r u_r + k_\theta u_\theta + k_\phi u_\phi, \quad(16.288)$$

provided that

$$u_r \equiv u_x\sin\theta\cos\phi + u_y\sin\theta\sin\phi + u_z\cos\theta, \quad(16.289)$$

$$u_\theta \equiv u_x\cos\theta\cos\phi + u_y\cos\theta\sin\phi - u_z\sin\theta, \quad(16.290)$$

and

$$u_\phi \equiv -u_x\sin\phi + u_y\cos\phi \quad(16.291)$$

hold as definitions of the new coordinates ur, uθ, and uϕ of vector u – thus echoing Eqs. (16.253)–(16.255), after replacing kζ by uζ and iξ by uξ. As anticipated, a similar rationale departing from Eqs. (16.283)–(16.285) will give rise to

$$u_x = u_r\sin\theta\cos\phi + u_\theta\cos\theta\cos\phi - u_\phi\sin\phi, \quad(16.292)$$

$$u_y = u_r\sin\theta\sin\phi + u_\theta\cos\theta\sin\phi + u_\phi\cos\phi, \quad(16.293)$$

and

$$u_z = u_r\cos\theta - u_\theta\sin\theta \quad(16.294)$$

– thus allowing ux, uy, and uz, respectively, to be written as functions of ur, uθ, and uϕ. Once in possession of Eqs. (16.242), (16.249), and (16.252), one may calculate the corresponding partial derivatives as

$$\frac{\partial r}{\partial x} = \frac{2x}{2\sqrt{x^2+y^2+z^2}}, \quad(16.295)$$

$$\frac{\partial r}{\partial y} = \frac{2y}{2\sqrt{x^2+y^2+z^2}}, \quad(16.296)$$

$$\frac{\partial r}{\partial z} = \frac{2z}{2\sqrt{x^2+y^2+z^2}}, \quad(16.297)$$

$$\frac{\partial \theta}{\partial x} = \frac{\dfrac{2x}{2z\sqrt{x^2+y^2}}}{1+\left(\dfrac{\sqrt{x^2+y^2}}{z}\right)^2}, \quad(16.298)$$

$$\frac{\partial \theta}{\partial y} = \frac{\dfrac{2y}{2z\sqrt{x^2+y^2}}}{1+\left(\dfrac{\sqrt{x^2+y^2}}{z}\right)^2}, \quad(16.299)$$

$$\frac{\partial \theta}{\partial z} = \frac{-\dfrac{\sqrt{x^2+y^2}}{z^2}}{1+\left(\dfrac{\sqrt{x^2+y^2}}{z}\right)^2}, \quad(16.300)$$

$$\frac{\partial \phi}{\partial x} = \frac{-\dfrac{y}{x^2}}{1+\left(\dfrac{y}{x}\right)^2}, \quad(16.301)$$

and

$$\frac{\partial \phi}{\partial y} = \frac{\dfrac{1}{x}}{1+\left(\dfrac{y}{x}\right)^2} \quad(16.302)$$

– with ∂ϕ/∂z being nil (and thus redundant) in view of ϕ ≡ ϕ{x, y}, as per Eq. (16.252). Cancellation of 2 between numerator and denominator of Eqs. (16.295)–(16.297) yields

$$\frac{\partial r}{\partial x} = \frac{x}{\sqrt{x^2+y^2+z^2}}, \quad(16.303)$$

$$\frac{\partial r}{\partial y} = \frac{y}{\sqrt{x^2+y^2+z^2}}, \quad(16.304)$$

and

$$\frac{\partial r}{\partial z} = \frac{z}{\sqrt{x^2+y^2+z^2}}, \quad(16.305)$$

respectively; a similar procedure, complemented by calculation of the stated squares in the denominator, transforms Eqs. (16.298) and (16.299) to

$$\frac{\partial \theta}{\partial x} = \frac{\dfrac{x}{z\sqrt{x^2+y^2}}}{1+\dfrac{x^2+y^2}{z^2}} \quad(16.306)$$

and

$$\frac{\partial \theta}{\partial y} = \frac{\dfrac{y}{z\sqrt{x^2+y^2}}}{1+\dfrac{x^2+y^2}{z^2}}, \quad(16.307)$$

respectively – whereas further multiplication of numerator and denominator of Eqs. (16.306) and (16.307) by z² produces

$$\frac{\partial \theta}{\partial x} = \frac{xz}{\left(x^2+y^2+z^2\right)\sqrt{x^2+y^2}} \quad(16.308)$$

and

$$\frac{\partial \theta}{\partial y} = \frac{yz}{\left(x^2+y^2+z^2\right)\sqrt{x^2+y^2}}, \quad(16.309)$$


respectively. Finally, calculation of the stated squares in the denominator of Eqs. (16.300)–(16.302) generates

$$\frac{\partial \theta}{\partial z} = -\frac{\dfrac{\sqrt{x^2+y^2}}{z^2}}{1+\dfrac{x^2+y^2}{z^2}}, \quad(16.310)$$

$$\frac{\partial \phi}{\partial x} = -\frac{\dfrac{y}{x^2}}{1+\dfrac{y^2}{x^2}}, \quad(16.311)$$

and

$$\frac{\partial \phi}{\partial y} = \frac{\dfrac{1}{x}}{1+\dfrac{y^2}{x^2}}, \quad(16.312)$$

respectively; a further multiplication of numerator and denominator of Eq. (16.310) by z² leads to

$$\frac{\partial \theta}{\partial z} = -\frac{\sqrt{x^2+y^2}}{x^2+y^2+z^2}, \quad(16.313)$$

while multiplication of numerator and denominator of Eqs. (16.311) and (16.312) by x² gives rise to

$$\frac{\partial \phi}{\partial x} = -\frac{y}{x^2+y^2} \quad(16.314)$$

and

$$\frac{\partial \phi}{\partial y} = \frac{x}{x^2+y^2}, \quad(16.315)$$

respectively. Recalling now Eqs. (16.236) and (16.242), one gets

$$\frac{\partial r}{\partial x} = \frac{r\sin\theta\cos\phi}{r} \quad(16.316)$$

from Eq. (16.303) or, after cancelling r between numerator and denominator,

$$\frac{\partial r}{\partial x} = \sin\theta\cos\phi; \quad(16.317)$$

Eqs. (16.237) and (16.242) allow, in turn, transformation of Eq. (16.304) to

$$\frac{\partial r}{\partial y} = \frac{r\sin\theta\sin\phi}{r}, \quad(16.318)$$

where similar factors may drop out as

$$\frac{\partial r}{\partial y} = \sin\theta\sin\phi. \quad(16.319)$$


Equation (16.305) becomes, in turn,

$$\frac{\partial r}{\partial z} = \frac{r\cos\theta}{r} \quad(16.320)$$

with the aid of Eqs. (16.238) and (16.242), where dropping of r twice yields merely

$$\frac{\partial r}{\partial z} = \cos\theta; \quad(16.321)$$

on the other hand, Eqs. (16.236), (16.242), and (16.248) give rise to

$$\frac{\partial \theta}{\partial x} = \frac{zr\sin\theta\cos\phi}{r^2 z\tan\theta} \quad(16.322)$$

stemming from Eq. (16.308), where cancellation of z and r between numerator and denominator, and explicitation of tan θ as per Eq. (2.299) lead to

$$\frac{\partial \theta}{\partial x} = \frac{\sin\theta\cos\phi}{r\,\dfrac{\sin\theta}{\cos\theta}} \quad(16.323)$$

– which further reduces to

$$\frac{\partial \theta}{\partial x} = \frac{\cos\theta\cos\phi}{r} \quad(16.324)$$

upon algebraic rearrangement. Insertion of Eqs. (16.237), (16.242), and (16.248) converts Eq. (16.309) to

$$\frac{\partial \theta}{\partial y} = \frac{zr\sin\theta\sin\phi}{r^2 z\tan\theta}, \quad(16.325)$$

where r and z cancel out between numerator and denominator, and the definition of tangent may be retrieved to get

$$\frac{\partial \theta}{\partial y} = \frac{\sin\theta\sin\phi}{r\,\dfrac{\sin\theta}{\cos\theta}} \quad(16.326)$$

– or, equivalently,

$$\frac{\partial \theta}{\partial y} = \frac{\cos\theta\sin\phi}{r} \quad(16.327)$$

upon multiplication of both numerator and denominator by cos θ, followed by division thereof by sin θ; Eqs. (16.242) and (16.248) allow, in turn, transformation of Eq. (16.313) to

$$\frac{\partial \theta}{\partial z} = -\frac{z\tan\theta}{r^2}, \quad(16.328)$$

where Eqs. (2.299) and (16.238) allow further transformation to

$$\frac{\partial \theta}{\partial z} = -\frac{r\cos\theta\,\dfrac{\sin\theta}{\cos\theta}}{r^2} \quad(16.329)$$

– or, upon cancellation of cos θ between numerator and denominator,

$$\frac{\partial \theta}{\partial z} = -\frac{\sin\theta}{r}. \quad(16.330)$$

With regard to Eq. (16.314), one gets

$$\frac{\partial \phi}{\partial x} = -\frac{r\sin\theta\sin\phi}{z^2\tan^2\theta} \quad(16.331)$$

at the expense of Eqs. (16.237) and (16.247), where Eq. (16.238) and explicitation of tan θ through basic trigonometric functions support

$$\frac{\partial \phi}{\partial x} = -\frac{r\sin\theta\sin\phi}{r^2\cos^2\theta\,\dfrac{\sin^2\theta}{\cos^2\theta}}; \quad(16.332)$$

after dropping factors alike, Eq. (16.332) reduces to

$$\frac{\partial \phi}{\partial x} = -\frac{\sin\phi}{r\sin\theta}. \quad(16.333)$$

Finally, Eqs. (16.236) and (16.247) may to advantage be used to transform Eq. (16.315) to

$$\frac{\partial \phi}{\partial y} = \frac{r\sin\theta\cos\phi}{z^2\tan^2\theta}, \quad(16.334)$$

where Eq. (16.238) may again be called upon, along with the definition of tangent, to write

$$\frac{\partial \phi}{\partial y} = \frac{r\sin\theta\cos\phi}{r^2\cos^2\theta\,\dfrac{\sin^2\theta}{\cos^2\theta}}; \quad(16.335)$$

Eq. (16.335) breaks down to

$$\frac{\partial \phi}{\partial y} = \frac{\cos\phi}{r\sin\theta}, \quad(16.336)$$

following cancellation of r sin θ and cos²θ between numerator and denominator.
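The chain-rule partials just obtained, Eqs. (16.317)–(16.336), lend themselves to a quick symbolic double check – e.g. via the sketch below (not part of the original text; the sympy library is assumed available). It verifies that the matrix of claimed derivatives of (r, θ, ϕ) with respect to (x, y, z) is the inverse of the Jacobian of Eqs. (16.236)–(16.238), and that the coefficient matrix of Eqs. (16.253)–(16.255) is orthogonal – which is why its inverse, as produced by Cramer's rule in Eqs. (16.283)–(16.285), coincides with its transpose.

```python
# Sketch (not from the book): symbolic check of Eqs. (16.317)-(16.336)
# and of the basis rotation behind Eqs. (16.283)-(16.285), using sympy.
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

# Eqs. (16.236)-(16.238): rectangular coordinates in terms of spherical ones
x = r*sp.sin(th)*sp.cos(ph)
y = r*sp.sin(th)*sp.sin(ph)
z = r*sp.cos(th)

# Jacobian d(x,y,z)/d(r,theta,phi)
J = sp.Matrix([x, y, z]).jacobian([r, th, ph])

# claimed derivatives of (r,theta,phi) with respect to (x,y,z), per
# Eqs. (16.317), (16.319), (16.321), (16.324), (16.327), (16.330),
# (16.333), and (16.336) - with d(phi)/dz = 0 as per Eq. (16.252)
K = sp.Matrix([
    [sp.sin(th)*sp.cos(ph),      sp.sin(th)*sp.sin(ph),      sp.cos(th)],
    [sp.cos(th)*sp.cos(ph)/r,    sp.cos(th)*sp.sin(ph)/r,    -sp.sin(th)/r],
    [-sp.sin(ph)/(r*sp.sin(th)), sp.cos(ph)/(r*sp.sin(th)),  0],
])

# the chain rule demands K*J = identity
assert (K*J - sp.eye(3)).applyfunc(sp.simplify) == sp.zeros(3, 3)

# coefficient matrix of Eqs. (16.253)-(16.255); orthogonality makes its
# inverse equal its transpose, i.e. exactly Eqs. (16.283)-(16.285)
R = sp.Matrix([
    [sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)],
    [sp.cos(th)*sp.cos(ph), sp.cos(th)*sp.sin(ph), -sp.sin(th)],
    [-sp.sin(ph),           sp.cos(ph),            0],
])
assert (R*R.T).applyfunc(sp.simplify) == sp.eye(3)
```

The determinant of J, namely r² sin θ, is the familiar volume element of spherical coordinates; the determinant of R equals unity, which is why the denominators in Eqs. (16.274)–(16.276) collapse to 1.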

16.3.2 Redefinition of Nabla Operator, ∇

The divergent of a vector function u, conveyed by Eq. (16.14), may also be expressed as a function solely of spherical coordinates; expansion of the derivatives therein with regard to x, y, and z, via the chain (partial) differentiation rule using r, θ, and ϕ as intermediate variables, will be useful in such a quest, viz.

$$\nabla\cdot u = \frac{\partial r}{\partial x}\frac{\partial u_x}{\partial r} + \frac{\partial \theta}{\partial x}\frac{\partial u_x}{\partial \theta} + \frac{\partial \phi}{\partial x}\frac{\partial u_x}{\partial \phi} + \frac{\partial r}{\partial y}\frac{\partial u_y}{\partial r} + \frac{\partial \theta}{\partial y}\frac{\partial u_y}{\partial \theta} + \frac{\partial \phi}{\partial y}\frac{\partial u_y}{\partial \phi} + \frac{\partial r}{\partial z}\frac{\partial u_z}{\partial r} + \frac{\partial \theta}{\partial z}\frac{\partial u_z}{\partial \theta} + \frac{\partial \phi}{\partial z}\frac{\partial u_z}{\partial \phi}. \quad(16.337)$$

Upon combination with Eqs. (16.317), (16.319), (16.321), (16.324), (16.327), (16.330), (16.333), and (16.336), one can transform Eq. (16.337) to

$$\nabla\cdot u = \sin\theta\cos\phi\frac{\partial u_x}{\partial r} + \frac{\cos\theta\cos\phi}{r}\frac{\partial u_x}{\partial \theta} - \frac{\sin\phi}{r\sin\theta}\frac{\partial u_x}{\partial \phi} + \sin\theta\sin\phi\frac{\partial u_y}{\partial r} + \frac{\cos\theta\sin\phi}{r}\frac{\partial u_y}{\partial \theta} + \frac{\cos\phi}{r\sin\theta}\frac{\partial u_y}{\partial \phi} + \cos\theta\frac{\partial u_z}{\partial r} - \frac{\sin\theta}{r}\frac{\partial u_z}{\partial \theta} + 0\,\frac{\partial u_z}{\partial \phi}, \quad(16.338)$$

where ∂ϕ/∂z = 0 for ϕ ≡ ϕ{x, y}, as per Eq. (16.252) and as stressed before; in operator form, Eq. (16.338) will look like

$$\nabla\cdot u = \left(\sin\theta\cos\phi\frac{\partial}{\partial r} + \frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial \theta} - \frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial \phi}\right)u_x + \left(\sin\theta\sin\phi\frac{\partial}{\partial r} + \frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial \theta} + \frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial \phi}\right)u_y + \left(\cos\theta\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\frac{\partial}{\partial \theta}\right)u_z. \quad(16.339)$$

On the other hand, Eqs. (16.292)–(16.294) support transformation of Eq. (16.339) to

$$\begin{aligned}\nabla\cdot u = {}& \left(\sin\theta\cos\phi\frac{\partial}{\partial r} + \frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial\theta} - \frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\left(u_r\sin\theta\cos\phi + u_\theta\cos\theta\cos\phi - u_\phi\sin\phi\right)\\ &+ \left(\sin\theta\sin\phi\frac{\partial}{\partial r} + \frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial\theta} + \frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\left(u_r\sin\theta\sin\phi + u_\theta\cos\theta\sin\phi + u_\phi\cos\phi\right)\\ &+ \left(\cos\theta\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right)\left(u_r\cos\theta - u_\theta\sin\theta\right),\end{aligned} \quad(16.340)$$


where removal of parentheses generates

$$\begin{aligned}\nabla\cdot u = {}& \sin\theta\cos\phi\frac{\partial}{\partial r}\left(u_r\sin\theta\cos\phi\right) + \sin\theta\cos\phi\frac{\partial}{\partial r}\left(u_\theta\cos\theta\cos\phi\right) - \sin\theta\cos\phi\frac{\partial}{\partial r}\left(u_\phi\sin\phi\right)\\ &+ \frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial\theta}\left(u_r\sin\theta\cos\phi\right) + \frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial\theta}\left(u_\theta\cos\theta\cos\phi\right) - \frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial\theta}\left(u_\phi\sin\phi\right)\\ &- \frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(u_r\sin\theta\cos\phi\right) - \frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(u_\theta\cos\theta\cos\phi\right) + \frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(u_\phi\sin\phi\right)\\ &+ \sin\theta\sin\phi\frac{\partial}{\partial r}\left(u_r\sin\theta\sin\phi\right) + \sin\theta\sin\phi\frac{\partial}{\partial r}\left(u_\theta\cos\theta\sin\phi\right) + \sin\theta\sin\phi\frac{\partial}{\partial r}\left(u_\phi\cos\phi\right)\\ &+ \frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial\theta}\left(u_r\sin\theta\sin\phi\right) + \frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial\theta}\left(u_\theta\cos\theta\sin\phi\right) + \frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial\theta}\left(u_\phi\cos\phi\right)\\ &+ \frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(u_r\sin\theta\sin\phi\right) + \frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(u_\theta\cos\theta\sin\phi\right) + \frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(u_\phi\cos\phi\right)\\ &+ \cos\theta\frac{\partial}{\partial r}\left(u_r\cos\theta\right) - \cos\theta\frac{\partial}{\partial r}\left(u_\theta\sin\theta\right) - \frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(u_r\cos\theta\right) + \frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(u_\theta\sin\theta\right);\end{aligned} \quad(16.341)$$

after realizing that the derivative with regard to ξ of a function that depends only on ζ and χ as independent variables is nil, Eq. (16.341) becomes

$$\begin{aligned}\nabla\cdot u = {}& \sin^2\theta\cos^2\phi\frac{\partial u_r}{\partial r} + \sin\theta\cos\theta\cos^2\phi\frac{\partial u_\theta}{\partial r} - \sin\theta\sin\phi\cos\phi\frac{\partial u_\phi}{\partial r}\\ &+ \frac{\cos\theta\cos^2\phi}{r}\frac{\partial}{\partial\theta}\left(u_r\sin\theta\right) + \frac{\cos\theta\cos^2\phi}{r}\frac{\partial}{\partial\theta}\left(u_\theta\cos\theta\right) - \frac{\cos\theta\sin\phi\cos\phi}{r}\frac{\partial u_\phi}{\partial\theta}\\ &- \frac{\sin\theta\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(u_r\cos\phi\right) - \frac{\cos\theta\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(u_\theta\cos\phi\right) + \frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(u_\phi\sin\phi\right)\\ &+ \sin^2\theta\sin^2\phi\frac{\partial u_r}{\partial r} + \sin\theta\cos\theta\sin^2\phi\frac{\partial u_\theta}{\partial r} + \sin\theta\sin\phi\cos\phi\frac{\partial u_\phi}{\partial r}\\ &+ \frac{\cos\theta\sin^2\phi}{r}\frac{\partial}{\partial\theta}\left(u_r\sin\theta\right) + \frac{\cos\theta\sin^2\phi}{r}\frac{\partial}{\partial\theta}\left(u_\theta\cos\theta\right) + \frac{\cos\theta\sin\phi\cos\phi}{r}\frac{\partial u_\phi}{\partial\theta}\\ &+ \frac{\sin\theta\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(u_r\sin\phi\right) + \frac{\cos\theta\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(u_\theta\sin\phi\right) + \frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(u_\phi\cos\phi\right)\\ &+ \cos^2\theta\frac{\partial u_r}{\partial r} - \sin\theta\cos\theta\frac{\partial u_\theta}{\partial r} - \frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(u_r\cos\theta\right) + \frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(u_\theta\sin\theta\right),\end{aligned} \quad(16.342)$$


also with the aid of Eq. (10.120). Partial differentiation of the outstanding products in Eq. (16.342) will now follow as

$$\begin{aligned}\nabla\cdot u = {}& \sin^2\theta\cos^2\phi\frac{\partial u_r}{\partial r} + \sin\theta\cos\theta\cos^2\phi\frac{\partial u_\theta}{\partial r} - \sin\theta\sin\phi\cos\phi\frac{\partial u_\phi}{\partial r}\\ &+ \frac{\cos\theta\cos^2\phi}{r}\left(\sin\theta\frac{\partial u_r}{\partial\theta} + u_r\cos\theta\right) + \frac{\cos\theta\cos^2\phi}{r}\left(\cos\theta\frac{\partial u_\theta}{\partial\theta} - u_\theta\sin\theta\right) - \frac{\cos\theta\sin\phi\cos\phi}{r}\frac{\partial u_\phi}{\partial\theta}\\ &- \frac{\sin\theta\sin\phi}{r\sin\theta}\left(\cos\phi\frac{\partial u_r}{\partial\phi} - u_r\sin\phi\right) - \frac{\cos\theta\sin\phi}{r\sin\theta}\left(\cos\phi\frac{\partial u_\theta}{\partial\phi} - u_\theta\sin\phi\right) + \frac{\sin\phi}{r\sin\theta}\left(\sin\phi\frac{\partial u_\phi}{\partial\phi} + u_\phi\cos\phi\right)\\ &+ \sin^2\theta\sin^2\phi\frac{\partial u_r}{\partial r} + \sin\theta\cos\theta\sin^2\phi\frac{\partial u_\theta}{\partial r} + \sin\theta\sin\phi\cos\phi\frac{\partial u_\phi}{\partial r}\\ &+ \frac{\cos\theta\sin^2\phi}{r}\left(\sin\theta\frac{\partial u_r}{\partial\theta} + u_r\cos\theta\right) + \frac{\cos\theta\sin^2\phi}{r}\left(\cos\theta\frac{\partial u_\theta}{\partial\theta} - u_\theta\sin\theta\right) + \frac{\cos\theta\sin\phi\cos\phi}{r}\frac{\partial u_\phi}{\partial\theta}\\ &+ \frac{\sin\theta\cos\phi}{r\sin\theta}\left(\sin\phi\frac{\partial u_r}{\partial\phi} + u_r\cos\phi\right) + \frac{\cos\theta\cos\phi}{r\sin\theta}\left(\sin\phi\frac{\partial u_\theta}{\partial\phi} + u_\theta\cos\phi\right) + \frac{\cos\phi}{r\sin\theta}\left(\cos\phi\frac{\partial u_\phi}{\partial\phi} - u_\phi\sin\phi\right)\\ &+ \cos^2\theta\frac{\partial u_r}{\partial r} - \sin\theta\cos\theta\frac{\partial u_\theta}{\partial r} - \frac{\sin\theta}{r}\left(\cos\theta\frac{\partial u_r}{\partial\theta} - u_r\sin\theta\right) + \frac{\sin\theta}{r}\left(\sin\theta\frac{\partial u_\theta}{\partial\theta} + u_\theta\cos\theta\right).\end{aligned} \quad(16.343)$$

Elimination of parentheses transforms Eq. (16.343) to

$$\begin{aligned}\nabla\cdot u = {}& \sin^2\theta\cos^2\phi\frac{\partial u_r}{\partial r} + \sin\theta\cos\theta\cos^2\phi\frac{\partial u_\theta}{\partial r} - \sin\theta\sin\phi\cos\phi\frac{\partial u_\phi}{\partial r}\\ &+ \frac{\sin\theta\cos\theta\cos^2\phi}{r}\frac{\partial u_r}{\partial\theta} + \frac{\cos^2\theta\cos^2\phi}{r}u_r + \frac{\cos^2\theta\cos^2\phi}{r}\frac{\partial u_\theta}{\partial\theta} - \frac{\sin\theta\cos\theta\cos^2\phi}{r}u_\theta - \frac{\cos\theta\sin\phi\cos\phi}{r}\frac{\partial u_\phi}{\partial\theta}\\ &- \frac{\sin\theta\sin\phi\cos\phi}{r\sin\theta}\frac{\partial u_r}{\partial\phi} + \frac{\sin\theta\sin^2\phi}{r\sin\theta}u_r - \frac{\cos\theta\sin\phi\cos\phi}{r\sin\theta}\frac{\partial u_\theta}{\partial\phi} + \frac{\cos\theta\sin^2\phi}{r\sin\theta}u_\theta + \frac{\sin^2\phi}{r\sin\theta}\frac{\partial u_\phi}{\partial\phi} + \frac{\sin\phi\cos\phi}{r\sin\theta}u_\phi\\ &+ \sin^2\theta\sin^2\phi\frac{\partial u_r}{\partial r} + \sin\theta\cos\theta\sin^2\phi\frac{\partial u_\theta}{\partial r} + \sin\theta\sin\phi\cos\phi\frac{\partial u_\phi}{\partial r}\\ &+ \frac{\sin\theta\cos\theta\sin^2\phi}{r}\frac{\partial u_r}{\partial\theta} + \frac{\cos^2\theta\sin^2\phi}{r}u_r + \frac{\cos^2\theta\sin^2\phi}{r}\frac{\partial u_\theta}{\partial\theta} - \frac{\sin\theta\cos\theta\sin^2\phi}{r}u_\theta + \frac{\cos\theta\sin\phi\cos\phi}{r}\frac{\partial u_\phi}{\partial\theta}\\ &+ \frac{\sin\theta\sin\phi\cos\phi}{r\sin\theta}\frac{\partial u_r}{\partial\phi} + \frac{\sin\theta\cos^2\phi}{r\sin\theta}u_r + \frac{\cos\theta\sin\phi\cos\phi}{r\sin\theta}\frac{\partial u_\theta}{\partial\phi} + \frac{\cos\theta\cos^2\phi}{r\sin\theta}u_\theta + \frac{\cos^2\phi}{r\sin\theta}\frac{\partial u_\phi}{\partial\phi} - \frac{\sin\phi\cos\phi}{r\sin\theta}u_\phi\\ &+ \cos^2\theta\frac{\partial u_r}{\partial r} - \sin\theta\cos\theta\frac{\partial u_\theta}{\partial r} - \frac{\sin\theta\cos\theta}{r}\frac{\partial u_r}{\partial\theta} + \frac{\sin^2\theta}{r}u_r + \frac{\sin^2\theta}{r}\frac{\partial u_\theta}{\partial\theta} + \frac{\sin\theta\cos\theta}{r}u_\theta;\end{aligned} \quad(16.344)$$

Eq. (16.344) may be reshaped via factoring out of ur, uθ, uϕ, ∂ur/∂r, ∂ur/∂θ, ∂ur/∂ϕ, ∂uθ/∂r, ∂uθ/∂θ, ∂uθ/∂ϕ, ∂uϕ/∂r, ∂uϕ/∂θ, and ∂uϕ/∂ϕ to yield

$$\begin{aligned}\nabla\cdot u = {}& \left(\frac{\cos^2\theta\cos^2\phi}{r} + \frac{\sin^2\phi}{r} + \frac{\cos^2\theta\sin^2\phi}{r} + \frac{\cos^2\phi}{r} + \frac{\sin^2\theta}{r}\right)u_r\\ &+ \left(-\frac{\sin\theta\cos\theta\cos^2\phi}{r} + \frac{\cos\theta\sin^2\phi}{r\sin\theta} - \frac{\sin\theta\cos\theta\sin^2\phi}{r} + \frac{\cos\theta\cos^2\phi}{r\sin\theta} + \frac{\sin\theta\cos\theta}{r}\right)u_\theta\\ &+ \left(\frac{\sin\phi\cos\phi}{r\sin\theta} - \frac{\sin\phi\cos\phi}{r\sin\theta}\right)u_\phi + \left(\sin^2\theta\cos^2\phi + \sin^2\theta\sin^2\phi + \cos^2\theta\right)\frac{\partial u_r}{\partial r}\\ &+ \left(\sin\theta\cos\theta\cos^2\phi + \sin\theta\cos\theta\sin^2\phi - \sin\theta\cos\theta\right)\frac{\partial u_\theta}{\partial r} + \left(-\sin\theta\sin\phi\cos\phi + \sin\theta\sin\phi\cos\phi\right)\frac{\partial u_\phi}{\partial r}\\ &+ \left(\frac{\sin\theta\cos\theta\cos^2\phi}{r} + \frac{\sin\theta\cos\theta\sin^2\phi}{r} - \frac{\sin\theta\cos\theta}{r}\right)\frac{\partial u_r}{\partial\theta} + \left(\frac{\cos^2\theta\cos^2\phi}{r} + \frac{\cos^2\theta\sin^2\phi}{r} + \frac{\sin^2\theta}{r}\right)\frac{\partial u_\theta}{\partial\theta}\\ &+ \left(-\frac{\cos\theta\sin\phi\cos\phi}{r} + \frac{\cos\theta\sin\phi\cos\phi}{r}\right)\frac{\partial u_\phi}{\partial\theta} + \left(-\frac{\sin\theta\sin\phi\cos\phi}{r\sin\theta} + \frac{\sin\theta\sin\phi\cos\phi}{r\sin\theta}\right)\frac{\partial u_r}{\partial\phi}\\ &+ \left(-\frac{\cos\theta\sin\phi\cos\phi}{r\sin\theta} + \frac{\cos\theta\sin\phi\cos\phi}{r\sin\theta}\right)\frac{\partial u_\theta}{\partial\phi} + \left(\frac{\sin^2\phi}{r\sin\theta} + \frac{\cos^2\phi}{r\sin\theta}\right)\frac{\partial u_\phi}{\partial\phi},\end{aligned} \quad(16.345)$$


along with topical cancellation of common factors between numerator and denominator (when appropriate). A further effort of factoring out supports conversion of Eq. (16.345) to

$$\begin{aligned}\nabla\cdot u = {}& \left(\frac{\cos^2\theta}{r}\left(\sin^2\phi + \cos^2\phi\right) + \frac{1}{r}\left(\sin^2\phi + \cos^2\phi\right) + \frac{\sin^2\theta}{r}\right)u_r\\ &+ \left(-\frac{\sin\theta\cos\theta}{r}\left(\sin^2\phi + \cos^2\phi\right) + \frac{\cos\theta}{r\sin\theta}\left(\sin^2\phi + \cos^2\phi\right) + \frac{\sin\theta\cos\theta}{r}\right)u_\theta\\ &+ \left(\sin^2\theta\left(\sin^2\phi + \cos^2\phi\right) + \cos^2\theta\right)\frac{\partial u_r}{\partial r} + \left(\sin\theta\cos\theta\left(\sin^2\phi + \cos^2\phi\right) - \sin\theta\cos\theta\right)\frac{\partial u_\theta}{\partial r}\\ &+ \left(\frac{\sin\theta\cos\theta}{r}\left(\sin^2\phi + \cos^2\phi\right) - \frac{\sin\theta\cos\theta}{r}\right)\frac{\partial u_r}{\partial\theta} + \left(\frac{\cos^2\theta}{r}\left(\sin^2\phi + \cos^2\phi\right) + \frac{\sin^2\theta}{r}\right)\frac{\partial u_\theta}{\partial\theta}\\ &+ \frac{1}{r\sin\theta}\left(\sin^2\phi + \cos^2\phi\right)\frac{\partial u_\phi}{\partial\phi},\end{aligned} \quad(16.346)$$

while sin ϕ cos ϕ/r sin θ, sin θ sin ϕ cos ϕ, cos θ sin ϕ cos ϕ/r, sin θ sin ϕ cos ϕ/r sin θ, and cos θ sin ϕ cos ϕ/r sin θ meanwhile cancelled out with their negatives; application of the fundamental law of trigonometry permits simplification of Eq. (16.346) to

$$\nabla\cdot u = \left(\frac{\sin^2\theta}{r} + \frac{\cos^2\theta}{r} + \frac{1}{r}\right)u_r + \left(-\frac{\sin\theta\cos\theta}{r} + \frac{\sin\theta\cos\theta}{r} + \frac{\cos\theta}{r\sin\theta}\right)u_\theta + \left(\sin^2\theta + \cos^2\theta\right)\frac{\partial u_r}{\partial r} + \left(\sin\theta\cos\theta - \sin\theta\cos\theta\right)\frac{\partial u_\theta}{\partial r} + \left(\frac{\sin\theta\cos\theta}{r} - \frac{\sin\theta\cos\theta}{r}\right)\frac{\partial u_r}{\partial\theta} + \left(\frac{\sin^2\theta}{r} + \frac{\cos^2\theta}{r}\right)\frac{\partial u_\theta}{\partial\theta} + \frac{1}{r\sin\theta}\frac{\partial u_\phi}{\partial\phi} \quad(16.347)$$

– while cancellation of symmetrical terms, followed by application of Eq. (2.442), transforms Eq. (16.347) to merely

$$\nabla\cdot u = \left(\frac{1}{r} + \frac{1}{r}\right)u_r + \frac{\cos\theta}{r\sin\theta}u_\theta + \frac{\partial u_r}{\partial r} + \frac{1}{r}\frac{\partial u_\theta}{\partial\theta} + \frac{1}{r\sin\theta}\frac{\partial u_\phi}{\partial\phi}. \quad(16.348)$$

Equation (16.348) may be algebraically rearranged as

$$\nabla\cdot u = \frac{2}{r}u_r + \frac{\partial u_r}{\partial r} + \frac{1}{r}\frac{\partial u_\theta}{\partial\theta} + \frac{\cos\theta}{r\sin\theta}u_\theta + \frac{1}{r\sin\theta}\frac{\partial u_\phi}{\partial\phi}, \quad(16.349)$$

where factoring out of 1/r² in the first parenthesis and 1/r sin θ in the second one unfolds

$$\nabla\cdot u = \frac{1}{r^2}\left(2r u_r + r^2\frac{\partial u_r}{\partial r}\right) + \frac{1}{r\sin\theta}\left(u_\theta\cos\theta + \sin\theta\frac{\partial u_\theta}{\partial\theta}\right) + \frac{1}{r\sin\theta}\frac{\partial u_\phi}{\partial\phi}; \quad(16.350)$$

a final step of condensation yields

$$\nabla\cdot u = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 u_r\right) + \frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}\left(u_\theta\sin\theta\right) + \frac{1}{r\sin\theta}\frac{\partial u_\phi}{\partial\phi} \quad(16.351)$$

with the aid of the rule of differentiation of a product of functions – thus generating an expression for the divergent of a vector function u that resorts to spherical coordinates only. If the operator nabla itself is of interest, then one should insert Eqs. (16.283)–(16.285) in Eq. (16.12) to get

$$\nabla = \left(k_r\sin\theta\cos\phi + k_\theta\cos\theta\cos\phi - k_\phi\sin\phi\right)\frac{\partial}{\partial x} + \left(k_r\sin\theta\sin\phi + k_\theta\cos\theta\sin\phi + k_\phi\cos\phi\right)\frac{\partial}{\partial y} + \left(k_r\cos\theta - k_\theta\sin\theta\right)\frac{\partial}{\partial z}; \quad(16.352)$$

the outstanding differential operators, in rectangular coordinates, will then appear as

$$\begin{aligned}\nabla = {}& \left(k_r\sin\theta\cos\phi + k_\theta\cos\theta\cos\phi - k_\phi\sin\phi\right)\left(\frac{\partial r}{\partial x}\frac{\partial}{\partial r} + \frac{\partial\theta}{\partial x}\frac{\partial}{\partial\theta} + \frac{\partial\phi}{\partial x}\frac{\partial}{\partial\phi}\right)\\ &+ \left(k_r\sin\theta\sin\phi + k_\theta\cos\theta\sin\phi + k_\phi\cos\phi\right)\left(\frac{\partial r}{\partial y}\frac{\partial}{\partial r} + \frac{\partial\theta}{\partial y}\frac{\partial}{\partial\theta} + \frac{\partial\phi}{\partial y}\frac{\partial}{\partial\phi}\right)\\ &+ \left(k_r\cos\theta - k_\theta\sin\theta\right)\left(\frac{\partial r}{\partial z}\frac{\partial}{\partial r} + \frac{\partial\theta}{\partial z}\frac{\partial}{\partial\theta}\right),\end{aligned} \quad(16.353)$$

via chain (partial) differentiation using r, θ, and ϕ as intermediate variables. One may now resort to Eqs. (16.317), (16.319), (16.321), (16.324), (16.327), (16.330), (16.333), and (16.336) – besides ∂ϕ/∂z = 0 as per Eq. (16.252) – to reformulate Eq. (16.353) to

$$\begin{aligned}\nabla = {}& \left(k_r\sin\theta\cos\phi + k_\theta\cos\theta\cos\phi - k_\phi\sin\phi\right)\left(\sin\theta\cos\phi\frac{\partial}{\partial r} + \frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial\theta} - \frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ \left(k_r\sin\theta\sin\phi + k_\theta\cos\theta\sin\phi + k_\phi\cos\phi\right)\left(\sin\theta\sin\phi\frac{\partial}{\partial r} + \frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial\theta} + \frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ \left(k_r\cos\theta - k_\theta\sin\theta\right)\left(\cos\theta\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right);\end{aligned} \quad(16.354)$$


sequential application of the distributive property of multiplication, with regard to addition of vectors and scalars at a time, allows transformation of Eq. (16.354) to

$$\begin{aligned}\nabla = {}& k_r\left(\sin^2\theta\cos^2\phi\frac{\partial}{\partial r} + \frac{\sin\theta\cos\theta\cos^2\phi}{r}\frac{\partial}{\partial\theta} - \frac{\sin\theta\sin\phi\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ k_\theta\left(\sin\theta\cos\theta\cos^2\phi\frac{\partial}{\partial r} + \frac{\cos^2\theta\cos^2\phi}{r}\frac{\partial}{\partial\theta} - \frac{\cos\theta\sin\phi\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &- k_\phi\left(\sin\theta\sin\phi\cos\phi\frac{\partial}{\partial r} + \frac{\cos\theta\sin\phi\cos\phi}{r}\frac{\partial}{\partial\theta} - \frac{\sin^2\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ k_r\left(\sin^2\theta\sin^2\phi\frac{\partial}{\partial r} + \frac{\sin\theta\cos\theta\sin^2\phi}{r}\frac{\partial}{\partial\theta} + \frac{\sin\theta\sin\phi\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ k_\theta\left(\sin\theta\cos\theta\sin^2\phi\frac{\partial}{\partial r} + \frac{\cos^2\theta\sin^2\phi}{r}\frac{\partial}{\partial\theta} + \frac{\cos\theta\sin\phi\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ k_\phi\left(\sin\theta\sin\phi\cos\phi\frac{\partial}{\partial r} + \frac{\cos\theta\sin\phi\cos\phi}{r}\frac{\partial}{\partial\theta} + \frac{\cos^2\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ k_r\left(\cos^2\theta\frac{\partial}{\partial r} - \frac{\sin\theta\cos\theta}{r}\frac{\partial}{\partial\theta}\right) - k_\theta\left(\sin\theta\cos\theta\frac{\partial}{\partial r} - \frac{\sin^2\theta}{r}\frac{\partial}{\partial\theta}\right).\end{aligned} \quad(16.355)$$

Factoring out of unit vectors is now in order – besides dropping of common factors between numerator and denominator, in attempts to algebraically reorganize Eq. (16.355), viz.

$$\begin{aligned}\nabla = {}& k_r\left[\left(\sin^2\theta\cos^2\phi + \sin^2\theta\sin^2\phi + \cos^2\theta\right)\frac{\partial}{\partial r} + \left(\frac{\sin\theta\cos\theta\cos^2\phi}{r} + \frac{\sin\theta\cos\theta\sin^2\phi}{r} - \frac{\sin\theta\cos\theta}{r}\right)\frac{\partial}{\partial\theta} + \left(-\frac{\sin\phi\cos\phi}{r} + \frac{\sin\phi\cos\phi}{r}\right)\frac{\partial}{\partial\phi}\right]\\ &+ k_\theta\left[\left(\sin\theta\cos\theta\cos^2\phi + \sin\theta\cos\theta\sin^2\phi - \sin\theta\cos\theta\right)\frac{\partial}{\partial r} + \left(\frac{\cos^2\theta\cos^2\phi}{r} + \frac{\cos^2\theta\sin^2\phi}{r} + \frac{\sin^2\theta}{r}\right)\frac{\partial}{\partial\theta} + \left(-\frac{\cos\theta\sin\phi\cos\phi}{r\sin\theta} + \frac{\cos\theta\sin\phi\cos\phi}{r\sin\theta}\right)\frac{\partial}{\partial\phi}\right]\\ &+ k_\phi\left[\left(-\sin\theta\sin\phi\cos\phi + \sin\theta\sin\phi\cos\phi\right)\frac{\partial}{\partial r} + \left(-\frac{\cos\theta\sin\phi\cos\phi}{r} + \frac{\cos\theta\sin\phi\cos\phi}{r}\right)\frac{\partial}{\partial\theta} + \left(\frac{\sin^2\phi}{r\sin\theta} + \frac{\cos^2\phi}{r\sin\theta}\right)\frac{\partial}{\partial\phi}\right];\end{aligned} \quad(16.356)$$


further factoring out wherever possible, followed by cancellation of symmetrical terms, permits simplification to

$$\begin{aligned}\nabla = {}& k_r\left[\left(\sin^2\theta\left(\sin^2\phi + \cos^2\phi\right) + \cos^2\theta\right)\frac{\partial}{\partial r} + \left(\frac{\sin\theta\cos\theta}{r}\left(\sin^2\phi + \cos^2\phi\right) - \frac{\sin\theta\cos\theta}{r}\right)\frac{\partial}{\partial\theta}\right]\\ &+ k_\theta\left[\sin\theta\cos\theta\left(\sin^2\phi + \cos^2\phi - 1\right)\frac{\partial}{\partial r} + \left(\frac{\cos^2\theta}{r}\left(\sin^2\phi + \cos^2\phi\right) + \frac{\sin^2\theta}{r}\right)\frac{\partial}{\partial\theta}\right]\\ &+ k_\phi\frac{1}{r\sin\theta}\left(\sin^2\phi + \cos^2\phi\right)\frac{\partial}{\partial\phi}.\end{aligned} \quad(16.357)$$

In view of the fundamental law of trigonometry, Eq. (16.357) will acquire the form

$$\nabla = k_r\left[\left(\sin^2\theta + \cos^2\theta\right)\frac{\partial}{\partial r} + \left(\frac{\sin\theta\cos\theta}{r} - \frac{\sin\theta\cos\theta}{r}\right)\frac{\partial}{\partial\theta}\right] + k_\theta\left[\sin\theta\cos\theta\left(1 - 1\right)\frac{\partial}{\partial r} + \left(\frac{\sin^2\theta}{r} + \frac{\cos^2\theta}{r}\right)\frac{\partial}{\partial\theta}\right] + k_\phi\frac{1}{r\sin\theta}\frac{\partial}{\partial\phi}; \quad(16.358)$$

symmetrical terms are now due to cancel out, while Eq. (2.442) may be reutilized to obtain

$$\nabla = k_r\frac{\partial}{\partial r} + k_\theta\frac{1}{r}\frac{\partial}{\partial\theta} + k_\phi\frac{1}{r\sin\theta}\frac{\partial}{\partial\phi} \quad(16.359)$$

– an alternative form of del, hereby expressed in spherical coordinates. One is finally in position to calculate the Laplacian operator, after revisiting Eq. (16.223) as

$$\nabla^2 = \frac{\partial}{\partial x}\frac{\partial}{\partial x} + \frac{\partial}{\partial y}\frac{\partial}{\partial y} + \frac{\partial}{\partial z}\frac{\partial}{\partial z} \quad(16.360)$$

– where the rule of chain (partial) differentiation supports

$$\begin{aligned}\nabla^2 = {}& \left(\frac{\partial r}{\partial x}\frac{\partial}{\partial r} + \frac{\partial\theta}{\partial x}\frac{\partial}{\partial\theta} + \frac{\partial\phi}{\partial x}\frac{\partial}{\partial\phi}\right)\left(\frac{\partial r}{\partial x}\frac{\partial}{\partial r} + \frac{\partial\theta}{\partial x}\frac{\partial}{\partial\theta} + \frac{\partial\phi}{\partial x}\frac{\partial}{\partial\phi}\right)\\ &+ \left(\frac{\partial r}{\partial y}\frac{\partial}{\partial r} + \frac{\partial\theta}{\partial y}\frac{\partial}{\partial\theta} + \frac{\partial\phi}{\partial y}\frac{\partial}{\partial\phi}\right)\left(\frac{\partial r}{\partial y}\frac{\partial}{\partial r} + \frac{\partial\theta}{\partial y}\frac{\partial}{\partial\theta} + \frac{\partial\phi}{\partial y}\frac{\partial}{\partial\phi}\right)\\ &+ \left(\frac{\partial r}{\partial z}\frac{\partial}{\partial r} + \frac{\partial\theta}{\partial z}\frac{\partial}{\partial\theta}\right)\left(\frac{\partial r}{\partial z}\frac{\partial}{\partial r} + \frac{\partial\theta}{\partial z}\frac{\partial}{\partial\theta}\right), \quad(16.361)\end{aligned}$$

where (∂ϕ/∂z)(∂/∂ϕ) was omitted because ∂ϕ/∂z is nil as per Eq. (16.252); in view of Eqs. (16.317), (16.319), (16.321), (16.324), (16.327), (16.330), (16.333), and (16.336), one may expand Eq. (16.361) as


$$\begin{aligned}\nabla^2 = {}& \left(\sin\theta\cos\phi\frac{\partial}{\partial r} + \frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial\theta} - \frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\left(\sin\theta\cos\phi\frac{\partial}{\partial r} + \frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial\theta} - \frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ \left(\sin\theta\sin\phi\frac{\partial}{\partial r} + \frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial\theta} + \frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\left(\sin\theta\sin\phi\frac{\partial}{\partial r} + \frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial\theta} + \frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ \left(\cos\theta\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right)\left(\cos\theta\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right).\end{aligned} \quad(16.362)$$

The linearity of the differential operators and the underlying distributive property allow transformation of Eq. (16.362) to

$$\begin{aligned}\nabla^2 = {}& \sin\theta\cos\phi\frac{\partial}{\partial r}\left(\sin\theta\cos\phi\frac{\partial}{\partial r}\right) + \sin\theta\cos\phi\frac{\partial}{\partial r}\left(\frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial\theta}\right) - \sin\theta\cos\phi\frac{\partial}{\partial r}\left(\frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ \frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial\theta}\left(\sin\theta\cos\phi\frac{\partial}{\partial r}\right) + \frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial\theta}\left(\frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial\theta}\right) - \frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial\theta}\left(\frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &- \frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(\sin\theta\cos\phi\frac{\partial}{\partial r}\right) - \frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(\frac{\cos\theta\cos\phi}{r}\frac{\partial}{\partial\theta}\right) + \frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(\frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ \sin\theta\sin\phi\frac{\partial}{\partial r}\left(\sin\theta\sin\phi\frac{\partial}{\partial r}\right) + \sin\theta\sin\phi\frac{\partial}{\partial r}\left(\frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial\theta}\right) + \sin\theta\sin\phi\frac{\partial}{\partial r}\left(\frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ \frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial\theta}\left(\sin\theta\sin\phi\frac{\partial}{\partial r}\right) + \frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial\theta}\left(\frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial\theta}\right) + \frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial\theta}\left(\frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ \frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(\sin\theta\sin\phi\frac{\partial}{\partial r}\right) + \frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(\frac{\cos\theta\sin\phi}{r}\frac{\partial}{\partial\theta}\right) + \frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\left(\frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ \cos\theta\frac{\partial}{\partial r}\left(\cos\theta\frac{\partial}{\partial r}\right) - \cos\theta\frac{\partial}{\partial r}\left(\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right) - \frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(\cos\theta\frac{\partial}{\partial r}\right) + \frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\left(\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right).\end{aligned} \quad(16.363)$$


Definition of second-order derivative, complemented by realization that partial differentiation is concerned with only functions of the independent variable under scrutiny, support conversion of Eq. (16.363) to

$$\begin{aligned}\nabla^2 = {}& \sin^2\theta\cos^2\phi\,\frac{\partial^2}{\partial r^2} + \sin\theta\cos\theta\cos^2\phi\,\frac{\partial}{\partial r}\!\left(\frac{1}{r}\frac{\partial}{\partial\theta}\right) - \sin\phi\cos\phi\,\frac{\partial}{\partial r}\!\left(\frac{1}{r}\frac{\partial}{\partial\phi}\right)\\ &+ \frac{\cos\theta\cos^2\phi}{r}\frac{\partial}{\partial\theta}\!\left(\sin\theta\frac{\partial}{\partial r}\right) + \frac{\cos\theta\cos^2\phi}{r^2}\frac{\partial}{\partial\theta}\!\left(\cos\theta\frac{\partial}{\partial\theta}\right) - \frac{\cos\theta\sin\phi\cos\phi}{r^2}\frac{\partial}{\partial\theta}\!\left(\frac{1}{\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &- \frac{\sin\phi}{r}\frac{\partial}{\partial\phi}\!\left(\cos\phi\frac{\partial}{\partial r}\right) - \frac{\cos\theta\sin\phi}{r^2\sin\theta}\frac{\partial}{\partial\phi}\!\left(\cos\phi\frac{\partial}{\partial\theta}\right) + \frac{\sin\phi}{r^2\sin^2\theta}\frac{\partial}{\partial\phi}\!\left(\sin\phi\frac{\partial}{\partial\phi}\right)\\ &+ \sin^2\theta\sin^2\phi\,\frac{\partial^2}{\partial r^2} + \sin\theta\cos\theta\sin^2\phi\,\frac{\partial}{\partial r}\!\left(\frac{1}{r}\frac{\partial}{\partial\theta}\right) + \sin\phi\cos\phi\,\frac{\partial}{\partial r}\!\left(\frac{1}{r}\frac{\partial}{\partial\phi}\right)\\ &+ \frac{\cos\theta\sin^2\phi}{r}\frac{\partial}{\partial\theta}\!\left(\sin\theta\frac{\partial}{\partial r}\right) + \frac{\cos\theta\sin^2\phi}{r^2}\frac{\partial}{\partial\theta}\!\left(\cos\theta\frac{\partial}{\partial\theta}\right) + \frac{\cos\theta\sin\phi\cos\phi}{r^2}\frac{\partial}{\partial\theta}\!\left(\frac{1}{\sin\theta}\frac{\partial}{\partial\phi}\right)\\ &+ \frac{\cos\phi}{r}\frac{\partial}{\partial\phi}\!\left(\sin\phi\frac{\partial}{\partial r}\right) + \frac{\cos\theta\cos\phi}{r^2\sin\theta}\frac{\partial}{\partial\phi}\!\left(\sin\phi\frac{\partial}{\partial\theta}\right) + \frac{\cos\phi}{r^2\sin^2\theta}\frac{\partial}{\partial\phi}\!\left(\cos\phi\frac{\partial}{\partial\phi}\right)\\ &+ \cos^2\theta\,\frac{\partial^2}{\partial r^2} - \sin\theta\cos\theta\,\frac{\partial}{\partial r}\!\left(\frac{1}{r}\frac{\partial}{\partial\theta}\right) - \frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\!\left(\cos\theta\frac{\partial}{\partial r}\right) + \frac{\sin\theta}{r^2}\frac{\partial}{\partial\theta}\!\left(\sin\theta\frac{\partial}{\partial\theta}\right),\end{aligned} \quad(16.364)$$

where common factors were already dropped from numerator and denominator; the rule of differentiation of a product may now be called upon to reach

$$\begin{aligned}\nabla^2 = {}& \sin^2\theta\cos^2\phi\,\frac{\partial^2}{\partial r^2} + \sin\theta\cos\theta\cos^2\phi\left(-\frac{1}{r^2}\frac{\partial}{\partial\theta} + \frac{1}{r}\frac{\partial^2}{\partial r\partial\theta}\right) - \sin\phi\cos\phi\left(-\frac{1}{r^2}\frac{\partial}{\partial\phi} + \frac{1}{r}\frac{\partial^2}{\partial r\partial\phi}\right)\\ &+ \frac{\cos\theta\cos^2\phi}{r}\left(\cos\theta\frac{\partial}{\partial r} + \sin\theta\frac{\partial^2}{\partial\theta\partial r}\right) + \frac{\cos\theta\cos^2\phi}{r^2}\left(-\sin\theta\frac{\partial}{\partial\theta} + \cos\theta\frac{\partial^2}{\partial\theta^2}\right)\\ &- \frac{\cos\theta\sin\phi\cos\phi}{r^2}\left(-\frac{\cos\theta}{\sin^2\theta}\frac{\partial}{\partial\phi} + \frac{1}{\sin\theta}\frac{\partial^2}{\partial\theta\partial\phi}\right) - \frac{\sin\phi}{r}\left(-\sin\phi\frac{\partial}{\partial r} + \cos\phi\frac{\partial^2}{\partial\phi\partial r}\right)\\ &- \frac{\cos\theta\sin\phi}{r^2\sin\theta}\left(-\sin\phi\frac{\partial}{\partial\theta} + \cos\phi\frac{\partial^2}{\partial\phi\partial\theta}\right) + \frac{\sin\phi}{r^2\sin^2\theta}\left(\cos\phi\frac{\partial}{\partial\phi} + \sin\phi\frac{\partial^2}{\partial\phi^2}\right)\\ &+ \sin^2\theta\sin^2\phi\,\frac{\partial^2}{\partial r^2} + \sin\theta\cos\theta\sin^2\phi\left(-\frac{1}{r^2}\frac{\partial}{\partial\theta} + \frac{1}{r}\frac{\partial^2}{\partial r\partial\theta}\right) + \sin\phi\cos\phi\left(-\frac{1}{r^2}\frac{\partial}{\partial\phi} + \frac{1}{r}\frac{\partial^2}{\partial r\partial\phi}\right)\\ &+ \frac{\cos\theta\sin^2\phi}{r}\left(\cos\theta\frac{\partial}{\partial r} + \sin\theta\frac{\partial^2}{\partial\theta\partial r}\right) + \frac{\cos\theta\sin^2\phi}{r^2}\left(-\sin\theta\frac{\partial}{\partial\theta} + \cos\theta\frac{\partial^2}{\partial\theta^2}\right)\\ &+ \frac{\cos\theta\sin\phi\cos\phi}{r^2}\left(-\frac{\cos\theta}{\sin^2\theta}\frac{\partial}{\partial\phi} + \frac{1}{\sin\theta}\frac{\partial^2}{\partial\theta\partial\phi}\right) + \frac{\cos\phi}{r}\left(\cos\phi\frac{\partial}{\partial r} + \sin\phi\frac{\partial^2}{\partial\phi\partial r}\right)\\ &+ \frac{\cos\theta\cos\phi}{r^2\sin\theta}\left(\cos\phi\frac{\partial}{\partial\theta} + \sin\phi\frac{\partial^2}{\partial\phi\partial\theta}\right) + \frac{\cos\phi}{r^2\sin^2\theta}\left(-\sin\phi\frac{\partial}{\partial\phi} + \cos\phi\frac{\partial^2}{\partial\phi^2}\right)\\ &+ \cos^2\theta\,\frac{\partial^2}{\partial r^2} - \sin\theta\cos\theta\left(-\frac{1}{r^2}\frac{\partial}{\partial\theta} + \frac{1}{r}\frac{\partial^2}{\partial r\partial\theta}\right) - \frac{\sin\theta}{r}\left(-\sin\theta\frac{\partial}{\partial r} + \cos\theta\frac{\partial^2}{\partial\theta\partial r}\right)\\ &+ \frac{\sin\theta}{r^2}\left(\cos\theta\frac{\partial}{\partial\theta} + \sin\theta\frac{\partial^2}{\partial\theta^2}\right).\end{aligned} \quad(16.365)$$

Upon elimination of parentheses – and with the aid of Schwarz's theorem, Eq. (16.365) becomes

$$\begin{aligned}\nabla^2 = {}& \sin^2\theta\cos^2\phi\,\frac{\partial^2}{\partial r^2} - \frac{\sin\theta\cos\theta\cos^2\phi}{r^2}\frac{\partial}{\partial\theta} + \frac{\sin\theta\cos\theta\cos^2\phi}{r}\frac{\partial^2}{\partial r\partial\theta} + \frac{\sin\phi\cos\phi}{r^2}\frac{\partial}{\partial\phi} - \frac{\sin\phi\cos\phi}{r}\frac{\partial^2}{\partial r\partial\phi}\\ &+ \frac{\cos^2\theta\cos^2\phi}{r}\frac{\partial}{\partial r} + \frac{\sin\theta\cos\theta\cos^2\phi}{r}\frac{\partial^2}{\partial\theta\partial r} - \frac{\sin\theta\cos\theta\cos^2\phi}{r^2}\frac{\partial}{\partial\theta} + \frac{\cos^2\theta\cos^2\phi}{r^2}\frac{\partial^2}{\partial\theta^2}\\ &+ \frac{\cos^2\theta\sin\phi\cos\phi}{r^2\sin^2\theta}\frac{\partial}{\partial\phi} - \frac{\cos\theta\sin\phi\cos\phi}{r^2\sin\theta}\frac{\partial^2}{\partial\theta\partial\phi} + \frac{\sin^2\phi}{r}\frac{\partial}{\partial r} - \frac{\sin\phi\cos\phi}{r}\frac{\partial^2}{\partial\phi\partial r}\\ &+ \frac{\cos\theta\sin^2\phi}{r^2\sin\theta}\frac{\partial}{\partial\theta} - \frac{\cos\theta\sin\phi\cos\phi}{r^2\sin\theta}\frac{\partial^2}{\partial\phi\partial\theta} + \frac{\sin\phi\cos\phi}{r^2\sin^2\theta}\frac{\partial}{\partial\phi} + \frac{\sin^2\phi}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\\ &+ \sin^2\theta\sin^2\phi\,\frac{\partial^2}{\partial r^2} - \frac{\sin\theta\cos\theta\sin^2\phi}{r^2}\frac{\partial}{\partial\theta} + \frac{\sin\theta\cos\theta\sin^2\phi}{r}\frac{\partial^2}{\partial r\partial\theta} - \frac{\sin\phi\cos\phi}{r^2}\frac{\partial}{\partial\phi} + \frac{\sin\phi\cos\phi}{r}\frac{\partial^2}{\partial r\partial\phi}\\ &+ \frac{\cos^2\theta\sin^2\phi}{r}\frac{\partial}{\partial r} + \frac{\sin\theta\cos\theta\sin^2\phi}{r}\frac{\partial^2}{\partial\theta\partial r} - \frac{\sin\theta\cos\theta\sin^2\phi}{r^2}\frac{\partial}{\partial\theta} + \frac{\cos^2\theta\sin^2\phi}{r^2}\frac{\partial^2}{\partial\theta^2}\\ &- \frac{\cos^2\theta\sin\phi\cos\phi}{r^2\sin^2\theta}\frac{\partial}{\partial\phi} + \frac{\cos\theta\sin\phi\cos\phi}{r^2\sin\theta}\frac{\partial^2}{\partial\theta\partial\phi} + \frac{\cos^2\phi}{r}\frac{\partial}{\partial r} + \frac{\sin\phi\cos\phi}{r}\frac{\partial^2}{\partial\phi\partial r}\\ &+ \frac{\cos\theta\cos^2\phi}{r^2\sin\theta}\frac{\partial}{\partial\theta} + \frac{\cos\theta\sin\phi\cos\phi}{r^2\sin\theta}\frac{\partial^2}{\partial\phi\partial\theta} - \frac{\sin\phi\cos\phi}{r^2\sin^2\theta}\frac{\partial}{\partial\phi} + \frac{\cos^2\phi}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2}\\ &+ \cos^2\theta\,\frac{\partial^2}{\partial r^2} + \frac{\sin\theta\cos\theta}{r^2}\frac{\partial}{\partial\theta} - \frac{\sin\theta\cos\theta}{r}\frac{\partial^2}{\partial r\partial\theta} + \frac{\sin^2\theta}{r}\frac{\partial}{\partial r} - \frac{\sin\theta\cos\theta}{r}\frac{\partial^2}{\partial\theta\partial r} + \frac{\sin\theta\cos\theta}{r^2}\frac{\partial}{\partial\theta} + \frac{\sin^2\theta}{r^2}\frac{\partial^2}{\partial\theta^2};\end{aligned} \quad(16.366)$$

association of coefficients of identical differential operators is then in order, according to

$$\begin{aligned}\nabla^2 = {}& \left(\frac{\cos^2\theta\cos^2\phi}{r} + \frac{\sin^2\phi}{r} + \frac{\cos^2\theta\sin^2\phi}{r} + \frac{\cos^2\phi}{r} + \frac{\sin^2\theta}{r}\right)\frac{\partial}{\partial r}\\ &+ \left(-\frac{\sin\theta\cos\theta\cos^2\phi}{r^2} - \frac{\sin\theta\cos\theta\cos^2\phi}{r^2} + \frac{\cos\theta\sin^2\phi}{r^2\sin\theta} - \frac{\sin\theta\cos\theta\sin^2\phi}{r^2} - \frac{\sin\theta\cos\theta\sin^2\phi}{r^2} + \frac{\cos\theta\cos^2\phi}{r^2\sin\theta} + \frac{\sin\theta\cos\theta}{r^2} + \frac{\sin\theta\cos\theta}{r^2}\right)\frac{\partial}{\partial\theta}\\ &+ \left(\frac{\sin\phi\cos\phi}{r^2} + \frac{\cos^2\theta\sin\phi\cos\phi}{r^2\sin^2\theta} + \frac{\sin\phi\cos\phi}{r^2\sin^2\theta} - \frac{\sin\phi\cos\phi}{r^2} - \frac{\cos^2\theta\sin\phi\cos\phi}{r^2\sin^2\theta} - \frac{\sin\phi\cos\phi}{r^2\sin^2\theta}\right)\frac{\partial}{\partial\phi}\\ &+ \left(\sin^2\theta\cos^2\phi + \sin^2\theta\sin^2\phi + \cos^2\theta\right)\frac{\partial^2}{\partial r^2}\\ &+ \left(\frac{\sin\theta\cos\theta\cos^2\phi}{r} + \frac{\sin\theta\cos\theta\cos^2\phi}{r} + \frac{\sin\theta\cos\theta\sin^2\phi}{r} + \frac{\sin\theta\cos\theta\sin^2\phi}{r} - \frac{\sin\theta\cos\theta}{r} - \frac{\sin\theta\cos\theta}{r}\right)\frac{\partial^2}{\partial r\partial\theta}\\ &+ \left(-\frac{\sin\phi\cos\phi}{r} - \frac{\sin\phi\cos\phi}{r} + \frac{\sin\phi\cos\phi}{r} + \frac{\sin\phi\cos\phi}{r}\right)\frac{\partial^2}{\partial r\partial\phi} + \left(\frac{\cos^2\theta\cos^2\phi}{r^2} + \frac{\cos^2\theta\sin^2\phi}{r^2} + \frac{\sin^2\theta}{r^2}\right)\frac{\partial^2}{\partial\theta^2}\\ &+ \left(-\frac{\cos\theta\sin\phi\cos\phi}{r^2\sin\theta} - \frac{\cos\theta\sin\phi\cos\phi}{r^2\sin\theta} + \frac{\cos\theta\sin\phi\cos\phi}{r^2\sin\theta} + \frac{\cos\theta\sin\phi\cos\phi}{r^2\sin\theta}\right)\frac{\partial^2}{\partial\theta\partial\phi} + \left(\frac{\sin^2\phi}{r^2\sin^2\theta} + \frac{\cos^2\phi}{r^2\sin^2\theta}\right)\frac{\partial^2}{\partial\phi^2}.\end{aligned} \quad(16.367)$$


After factoring out as far as possible within each parenthesis, and dropping symmetrical terms along the way, Eq. (16.367) degenerates to

$$\begin{aligned}\nabla^2 = {}& \left(\frac{\cos^2\theta}{r}\left(\sin^2\phi + \cos^2\phi\right) + \frac{\sin^2\phi + \cos^2\phi}{r} + \frac{\sin^2\theta}{r}\right)\frac{\partial}{\partial r}\\ &+ \left(-\frac{\sin\theta\cos\theta\left(\sin^2\phi + \cos^2\phi\right)}{r^2} - \frac{\sin\theta\cos\theta\left(\sin^2\phi + \cos^2\phi\right)}{r^2} + \frac{\cos\theta\left(\sin^2\phi + \cos^2\phi\right)}{r^2\sin\theta} + \frac{2\sin\theta\cos\theta}{r^2}\right)\frac{\partial}{\partial\theta}\\ &+ \left(\sin^2\theta\left(\sin^2\phi + \cos^2\phi\right) + \cos^2\theta\right)\frac{\partial^2}{\partial r^2} + \left(\frac{2\sin\theta\cos\theta\left(\sin^2\phi + \cos^2\phi\right)}{r} - \frac{2\sin\theta\cos\theta}{r}\right)\frac{\partial^2}{\partial r\partial\theta}\\ &+ \left(\frac{\cos^2\theta\left(\sin^2\phi + \cos^2\phi\right)}{r^2} + \frac{\sin^2\theta}{r^2}\right)\frac{\partial^2}{\partial\theta^2} + \frac{\sin^2\phi + \cos^2\phi}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2};\end{aligned} \quad(16.368)$$

application of the fundamental law of trigonometry permits simplification of Eq. (16.368) to

$$\begin{aligned}\nabla^2 = {}& \left(\frac{\sin^2\theta + \cos^2\theta}{r} + \frac{1}{r}\right)\frac{\partial}{\partial r} + \left(-\frac{\sin\theta\cos\theta + \sin\theta\cos\theta}{r^2} + \frac{\cos\theta}{r^2\sin\theta} + \frac{2\sin\theta\cos\theta}{r^2}\right)\frac{\partial}{\partial\theta}\\ &+ \left(\sin^2\theta + \cos^2\theta\right)\frac{\partial^2}{\partial r^2} + \frac{2\sin\theta\cos\theta - 2\sin\theta\cos\theta}{r}\frac{\partial^2}{\partial r\partial\theta} + \frac{\sin^2\theta + \cos^2\theta}{r^2}\frac{\partial^2}{\partial\theta^2} + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2},\end{aligned} \quad(16.369)$$

where terms alike were meanwhile collapsed. Application again of the fundamental relationship of trigonometry, followed by cancellation of similar terms, supports further reduction of Eq. (16.369) to

$$\nabla^2 = \left(\frac{1}{r} + \frac{1}{r}\right)\frac{\partial}{\partial r} + \frac{\cos\theta}{r^2\sin\theta}\frac{\partial}{\partial\theta} + \frac{\partial^2}{\partial r^2} + \frac{1}{r^2}\frac{\partial^2}{\partial\theta^2} + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2}, \quad(16.370)$$

where trivial algebraic manipulation unfolds

$$\nabla^2 = \frac{2}{r}\frac{\partial}{\partial r} + \frac{\cos\theta}{r^2\sin\theta}\frac{\partial}{\partial\theta} + \frac{\partial^2}{\partial r^2} + \frac{1}{r^2}\frac{\partial^2}{\partial\theta^2} + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2}; \quad(16.371)$$


1/r² or 1/r² sin θ (as appropriate) may now be factored out as

$$\nabla^2 = \frac{1}{r^2}\left(2r\frac{\partial}{\partial r} + r^2\frac{\partial^2}{\partial r^2}\right) + \frac{1}{r^2\sin\theta}\left(\cos\theta\frac{\partial}{\partial\theta} + \sin\theta\frac{\partial^2}{\partial\theta^2}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2}. \quad(16.372)$$

In view of the rule of differentiation of a product, Eq. (16.372) may be rewritten as

$$\nabla^2 = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial\phi^2} \quad(16.373)$$

– thus providing ∇² ≡ ∇²{r, θ, ϕ}, in condensed form.
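The closed forms just derived – the divergent, Eq. (16.351), and the Laplacian, Eq. (16.373) – can be validated against their Cartesian definitions for sample fields; the sketch below (not part of the original text; the sympy library and arbitrarily chosen test fields are assumed) performs that comparison, with components transformed via Eqs. (16.289)–(16.291).

```python
# Sketch (not from the book): the spherical divergent, Eq. (16.351), and
# Laplacian, Eq. (16.373), checked against the Cartesian definitions.
import sympy as sp

x, y, z = sp.symbols('x y z')
r, th, ph = sp.symbols('r theta phi', positive=True)
to_sph = {x: r*sp.sin(th)*sp.cos(ph),
          y: r*sp.sin(th)*sp.sin(ph),
          z: r*sp.cos(th)}                       # Eqs. (16.236)-(16.238)

# --- Laplacian check, Eq. (16.373) ---
f = x**2*y + z**3                                # arbitrary smooth test function
lap_cart = sum(sp.diff(f, v, 2) for v in (x, y, z)).subs(to_sph)
F = f.subs(to_sph)                               # same function, spherical form
lap_sph = (sp.diff(r**2*sp.diff(F, r), r)/r**2
           + sp.diff(sp.sin(th)*sp.diff(F, th), th)/(r**2*sp.sin(th))
           + sp.diff(F, ph, 2)/(r**2*sp.sin(th)**2))
assert sp.simplify(lap_cart - lap_sph) == 0

# --- divergent check, Eq. (16.351), components per Eqs. (16.289)-(16.291) ---
ux, uy, uz = x*z, y**2, x + z                    # arbitrary test vector field
div_cart = (sp.diff(ux, x) + sp.diff(uy, y) + sp.diff(uz, z)).subs(to_sph)
Ux, Uy, Uz = (c.subs(to_sph) for c in (ux, uy, uz))
ur  = Ux*sp.sin(th)*sp.cos(ph) + Uy*sp.sin(th)*sp.sin(ph) + Uz*sp.cos(th)
uth = Ux*sp.cos(th)*sp.cos(ph) + Uy*sp.cos(th)*sp.sin(ph) - Uz*sp.sin(th)
uph = -Ux*sp.sin(ph) + Uy*sp.cos(ph)
div_sph = (sp.diff(r**2*ur, r)/r**2
           + sp.diff(uth*sp.sin(th), th)/(r*sp.sin(th))
           + sp.diff(uph, ph)/(r*sp.sin(th)))
assert sp.simplify(div_cart - div_sph) == 0
```

Both assertions reduce symbolically to zero, mirroring the derivations that led to Eqs. (16.351) and (16.373) above.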

16.4 Curvature of Three-dimensional Surfaces

Let r denote the radius vector, centered at the origin of a Cartesian coordinate system (x,y,z) associated to unit vectors ix, iy, and iz, which parametrically defines a surface S, via (vector) function f, according to

$$r \equiv f\{u,v\} = i_x\varphi\{u,v\} + i_y\psi\{u,v\} + i_z\chi\{u,v\}; \quad(16.374)$$

here ϕ, ψ, and χ denote (scalar) functions of two parameters, u and v, such that

$$x = \varphi\{u,v\}, \quad(16.375)$$

$$y = \psi\{u,v\}, \quad(16.376)$$

and

$$z = \chi\{u,v\}, \quad(16.377)$$

respectively. If Eqs. (16.375) and (16.376) are solved for u and v, and substituted afterward in Eq. (16.377), then one obtains z ≡ z{x, y} – which is the usual way of representing a three-dimensional surface. The unit vector, t, tangent to such a space curve along direction s, is accordingly defined as

$$t = \frac{dr}{ds}, \quad(16.378)$$

along with

$$|t| = 1; \quad(16.379)$$

the curvature, κ, of said curve is, in turn, defined as

$$\kappa \equiv \left|\frac{dt}{ds}\right|, \quad(16.380)$$

or else

$$\kappa \equiv \left|\frac{d^2r}{ds^2}\right| \quad(16.381)$$

after bringing Eq. (16.378) on board. The principal normal, h, of a curve abides to

$$h \equiv \frac{\dfrac{d^2r}{ds^2}}{\left|\dfrac{d^2r}{ds^2}\right|}, \quad(16.382)$$

and consequently

$$|h| = 1; \quad(16.383)$$

insertion of Eq. (16.381) transforms, in turn, Eq. (16.382) to

$$h = \frac{1}{\kappa}\frac{d^2r}{ds^2}. \quad(16.384)$$
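The definitions in Eqs. (16.378)–(16.384) can be illustrated on a concrete space curve – the circular helix below being a standard worked example (not taken from the book; sympy assumed available, and the radius/pitch values are arbitrary). With arc-length parametrization, t comes out as a unit vector, κ is constant, and h is a unit vector perpendicular to t – anticipating Eq. (16.391) further down.

```python
# Illustrative sketch (helix example, not from the book): unit tangent,
# curvature and principal normal per Eqs. (16.378), (16.381), (16.384).
import sympy as sp

s = sp.symbols('s', real=True)
a, b = 3, 4                        # helix radius and pitch (arbitrary choice)
c = sp.sqrt(a**2 + b**2)           # normalizer making s a true arc length
rv = sp.Matrix([a*sp.cos(s/c), a*sp.sin(s/c), b*s/c])

t = rv.diff(s)                     # Eq. (16.378): unit tangent
assert sp.simplify(t.dot(t)) == 1  # Eq. (16.379): |t| = 1

d2 = rv.diff(s, 2)
kappa = sp.simplify(sp.sqrt(d2.dot(d2)))           # Eq. (16.381)
assert sp.simplify(kappa - sp.Rational(a, a**2 + b**2)) == 0  # a/c**2

h = (d2/kappa).applyfunc(sp.simplify)              # Eq. (16.384)
assert sp.simplify(h.dot(h)) == 1                  # Eq. (16.383): |h| = 1
assert sp.simplify(t.dot(h)) == 0                  # h perpendicular to t
```

For this helix the curvature a/(a² + b²) is constant along the curve, and the principal normal points horizontally toward the helix axis – consistent with the general orthogonality result proved next.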

The scalar product of dr/ds by itself is given by

$$\left|\frac{dr}{ds}\right|^2 = \frac{dr}{ds}\cdot\frac{dr}{ds} = \left|\frac{dr}{ds}\right|\left|\frac{dr}{ds}\right|\cos 0 \quad(16.385)$$

following Eq. (3.53), where combination with Eqs. (16.378) and (16.379) allows simplification to

$$\left|\frac{dr}{ds}\right|^2 = |t|\,|t|\cdot 1 = |t|^2 = 1^2 = 1. \quad(16.386)$$

Differentiation of both sides of Eq. (16.386) with regard to s yields

$$\frac{d}{ds}\left|\frac{dr}{ds}\right|^2 = 0, \quad(16.387)$$

due to constancy of its right-hand side – which then implies

$$\frac{d}{ds}\left(\left|\frac{dr}{ds}\right|\left|\frac{dr}{ds}\right|\cos 0\right) = \frac{d}{ds}\left(\frac{dr}{ds}\cdot\frac{dr}{ds}\right) = 2\frac{dr}{ds}\cdot\frac{d}{ds}\left(\frac{dr}{ds}\right) = 2\frac{dr}{ds}\cdot\frac{d^2r}{ds^2} = 0, \quad(16.388)$$

in view of Eq. (16.385), and since Eqs. (10.29) and (10.205) are satisfied by the scalar product as per Eq. (3.53); upon division by 2, Eq. (16.388) becomes

$$\frac{dr}{ds}\cdot\frac{d^2r}{ds^2} = 0, \quad(16.389)$$

or else

$$\frac{dr}{ds}\cdot\left|\frac{d^2r}{ds^2}\right| h = 0 \quad(16.390)$$


via Eq. (16.382). If both sides are divided by |d²r/ds²| ≠ 0, then, after eliminating said factor, Eq. (16.390) yields merely

$$t\cdot h = 0, \quad(16.391)$$

also with the aid of Eq. (16.378) – so h is normal to t, and thus to the curve at stake. Finally, the unit vector, n, normal to the aforementioned space curve, is defined by

$$n \equiv \frac{\dfrac{\partial r}{\partial u}\times\dfrac{\partial r}{\partial v}}{\left|\dfrac{\partial r}{\partial u}\times\dfrac{\partial r}{\partial v}\right|}, \quad(16.392)$$

and consequently

$$|n| = 1. \quad(16.393)$$

The total differential associated with r ≡ r{u,v}, as per Eq. (16.374), looks like

$$dr = \frac{\partial r}{\partial u}du + \frac{\partial r}{\partial v}dv, \quad(16.394)$$

in general agreement with Eq. (10.6); in view of the definition of t as per Eq. (16.378), one realizes that dr and t share the same direction, so

$$n\cdot dr = 0 \quad(16.395)$$

– since n is normal to t by hypothesis, and thus to all curves lying on the surface drawn through the point selected. On the other hand, Eq. (16.384) may be rearranged to read

$$\frac{d}{ds}\left(\frac{dr}{ds}\right) \equiv \frac{d^2r}{ds^2} = \kappa h \quad(16.396)$$

due to the definition of second-order derivative – while insertion of Eq. (16.378) supports transformation to

$$\frac{dt}{ds} = \kappa h; \quad(16.397)$$

upon scalar multiplication of both sides by n, Eq. (16.397) becomes

$$\frac{dt}{ds}\cdot n = \kappa h\cdot n = \kappa|h|\,|n|\cos\angle\{h,n\}, \quad(16.398)$$

again with the aid of Eq. (3.53) and the assumption that κ > 0. As seen above, h and n are both perpendicular to t, so they share the same direction because no constraint has been imposed on s in Eq. (16.378) – which implies ∠{h, n} = 0; in view of Eqs. (16.383) and (16.393), one concludes

$$\frac{dt}{ds}\cdot n = \kappa\cdot 1\cdot 1\cdot\cos 0 = \kappa \quad(16.399)$$

from Eq. (16.398). For their status of perpendicular unit vectors, t and n abide to

$$t\cdot n = 0, \quad(16.400)$$


so differentiation with regard to s unfolds

$$\frac{d}{ds}\left(t\cdot n\right) = 0; \quad(16.401)$$

application of the general rule of differentiation of a (scalar) product, after taking Eq. (3.53) into account, leads to

$$\frac{d}{ds}\left(|t|\,|n|\cos\angle\{t,n\}\right) = \frac{dt}{ds}\cdot n + t\cdot\frac{dn}{ds} = 0, \quad(16.402)$$

as h and n are collinear, and consequently cos∠{h, n} = cos 0 = 1 is a constant as seen above – which may also appear as

$$t\cdot\frac{dn}{ds} = -\frac{dt}{ds}\cdot n = -\kappa, \quad(16.403)$$

at the expense of Eq. (16.399). According to Euler, the curvature of an arbitrary normal section, at any given point, may be expressed by the curvatures of two orthogonal sections intersecting the surface at that point (often termed principal sections); to prove this claim, one should consider ds1 and ds2 as the arc differentials of the two principal sections, and ds as the arc differential of an arbitrary normal section at an angle θ with ds1 – as depicted in Fig. 16.4. Generally speaking, if Φ{u,v} is a function of u and v, then

$$\Phi\{R\} - \Phi\{P\} = \left(\Phi\{R\} - \Phi\{Q\}\right) + \left(\Phi\{Q\} - \Phi\{P\}\right) \quad(16.404)$$

after adding and subtracting Φ{Q} – where division of both sides by ds, and further multiplication and division of the first parenthesis in the right-hand side by ds1 and of the second parenthesis by ds2, unfold

$$\frac{\Phi\{R\} - \Phi\{P\}}{ds} = \frac{\Phi\{R\} - \Phi\{Q\}}{ds_1}\frac{ds_1}{ds} + \frac{\Phi\{Q\} - \Phi\{P\}}{ds_2}\frac{ds_2}{ds}. \quad(16.405)$$

Figure 16.4 Arc differential, ds, along a (normal) section (PR) of a space curve, and decomposition as two arc differentials, ds1 and ds2, along two principal (or orthogonal) curvature sections, (PQ) and (QR), respectively – with θ denoting the angle between ds and ds1.

Vector Calculus

If point R is sufficiently close to point P, i.e. ds → 0, then Φ{R} − Φ{P} may be replaced by dΦ – and, consequently, ds₁ → 0 and ds₂ → 0 (as concluded from inspection of Fig. 16.4), meaning that Φ{R} − Φ{Q} → dΦ and Φ{Q} − Φ{P} → dΦ as well. Under these circumstances, Eq. (16.405) should be replaced by

$$\frac{d\Phi}{ds} = \frac{d\Phi}{ds_1}\frac{ds_1}{ds} + \frac{d\Phi}{ds_2}\frac{ds_2}{ds} \tag{16.406}$$

– where Eqs. (2.288) and (2.290) allow further transformation to

$$\frac{d\Phi}{ds} = \frac{d\Phi}{ds_1}\cos\theta + \frac{d\Phi}{ds_2}\sin\theta \tag{16.407}$$

On the other hand, Eq. (16.374) supports

$$\frac{d\mathbf{r}}{ds} = \frac{d}{ds}\left(\mathbf{i}_x\,\phi\{u,v\} + \mathbf{i}_y\,\psi\{u,v\} + \mathbf{i}_z\,\chi\{u,v\}\right) = \mathbf{i}_x\frac{d\phi\{u,v\}}{ds} + \mathbf{i}_y\frac{d\psi\{u,v\}}{ds} + \mathbf{i}_z\frac{d\chi\{u,v\}}{ds} \tag{16.408}$$

since ix, iy, and iz are invariant vectors – where insertion of Eq. (16.378), and application of Eq. (16.407) when Φ is set equal to ϕ, ψ, and χ, one at a time, yield

$$\mathbf{t} = \mathbf{i}_x\left(\frac{d\phi}{ds_1}\cos\theta+\frac{d\phi}{ds_2}\sin\theta\right) + \mathbf{i}_y\left(\frac{d\psi}{ds_1}\cos\theta+\frac{d\psi}{ds_2}\sin\theta\right) + \mathbf{i}_z\left(\frac{d\chi}{ds_1}\cos\theta+\frac{d\chi}{ds_2}\sin\theta\right) = \left(\mathbf{i}_x\frac{d\phi}{ds_1}+\mathbf{i}_y\frac{d\psi}{ds_1}+\mathbf{i}_z\frac{d\chi}{ds_1}\right)\cos\theta + \left(\mathbf{i}_x\frac{d\phi}{ds_2}+\mathbf{i}_y\frac{d\psi}{ds_2}+\mathbf{i}_z\frac{d\chi}{ds_2}\right)\sin\theta, \tag{16.409}$$

along with factoring out of cos θ or sin θ (as appropriate); the notation in Eq. (16.409) can be simplified to

$$\mathbf{t} = \mathbf{t}_1\cos\theta + \mathbf{t}_2\sin\theta, \tag{16.410}$$

provided that t₁ and t₂ are defined as

$$\mathbf{t}_1 \equiv \frac{d\mathbf{r}}{ds_1} = \mathbf{i}_x\frac{d\phi}{ds_1}+\mathbf{i}_y\frac{d\psi}{ds_1}+\mathbf{i}_z\frac{d\chi}{ds_1} \tag{16.411}$$

and

$$\mathbf{t}_2 \equiv \frac{d\mathbf{r}}{ds_2} = \mathbf{i}_x\frac{d\phi}{ds_2}+\mathbf{i}_y\frac{d\psi}{ds_2}+\mathbf{i}_z\frac{d\chi}{ds_2}, \tag{16.412}$$

respectively. In parallel to Eqs. (16.378) and (16.410), when Eqs. (16.408), (16.411), and (16.412) are taken on board, one finds that

$$\frac{d\mathbf{n}}{ds} = \frac{d\mathbf{n}}{ds_1}\cos\theta + \frac{d\mathbf{n}}{ds_2}\sin\theta, \tag{16.413}$$

should r be replaced by n; scalar multiplication of Eqs. (16.410) and (16.413) gives rise to

$$\mathbf{t}\cdot\frac{d\mathbf{n}}{ds} = \left(\mathbf{t}_1\cos\theta+\mathbf{t}_2\sin\theta\right)\cdot\left(\frac{d\mathbf{n}}{ds_1}\cos\theta+\frac{d\mathbf{n}}{ds_2}\sin\theta\right) = \mathbf{t}_1\cdot\frac{d\mathbf{n}}{ds_1}\cos^2\theta + \mathbf{t}_2\cdot\frac{d\mathbf{n}}{ds_2}\sin^2\theta + \left(\mathbf{t}_1\cdot\frac{d\mathbf{n}}{ds_2}+\mathbf{t}_2\cdot\frac{d\mathbf{n}}{ds_1}\right)\sin\theta\cos\theta \tag{16.414}$$
using the distributive property as per Eqs. (3.70) and (3.74), as well as Eq. (3.79) – complemented by factoring out of sin θ cos θ, besides recalling Eq. (16.410). In view of Eq. (16.403), one may redo Eq. (16.414) to

$$-\kappa = -\kappa_1\cos^2\theta - \kappa_2\sin^2\theta + \left(\mathbf{t}_1\cdot\frac{d\mathbf{n}}{ds_2}+\mathbf{t}_2\cdot\frac{d\mathbf{n}}{ds_1}\right)\sin\theta\cos\theta \tag{16.415}$$

– with curvature components κ₁ and κ₂ satisfying

$$\kappa_1 \equiv -\,\mathbf{t}_1\cdot\frac{d\mathbf{n}}{ds_1} \tag{16.416}$$

and

$$\kappa_2 \equiv -\,\mathbf{t}_2\cdot\frac{d\mathbf{n}}{ds_2}, \tag{16.417}$$
respectively, and for which Eq. (16.403) served as template. In view of Eq. (16.400) – and after realizing that t lies on the same plane as t₁ and t₂, as per Eq. (16.410) – one concludes that

$$\mathbf{t}_1\cdot\mathbf{n} = 0 \tag{16.418}$$

and likewise

$$\mathbf{t}_2\cdot\mathbf{n} = 0; \tag{16.419}$$

differentiation of Eqs. (16.418) and (16.419) with regard to s₂ and s₁, respectively, leads to

$$\frac{d}{ds_2}\left(\mathbf{t}_1\cdot\mathbf{n}\right) = 0 \tag{16.420}$$

and

$$\frac{d}{ds_1}\left(\mathbf{t}_2\cdot\mathbf{n}\right) = 0 \tag{16.421}$$

– where the rule of differentiation of a scalar product transforms Eq. (16.420) to

$$\mathbf{t}_1\cdot\frac{d\mathbf{n}}{ds_2} + \mathbf{n}\cdot\frac{d\mathbf{t}_1}{ds_2} = 0, \tag{16.422}$$

and Eq. (16.421) likewise to

$$\mathbf{t}_2\cdot\frac{d\mathbf{n}}{ds_1} + \mathbf{n}\cdot\frac{d\mathbf{t}_2}{ds_1} = 0 \tag{16.423}$$

Since directions 1 and 2 were initially taken as orthogonal (see Fig. 16.4), one realizes that t₁ and t₂ must be orthogonal to each other as per Eqs. (16.411) and (16.412) – thus implying that t₁ ≡ t₁{s₁} and t₂ ≡ t₂{s₂}; therefore,

$$\frac{d\mathbf{t}_1}{ds_2} = \mathbf{0}, \tag{16.424}$$

which simplifies Eq. (16.422) to

$$\mathbf{t}_1\cdot\frac{d\mathbf{n}}{ds_2} = 0 \tag{16.425}$$

– and similarly

$$\frac{d\mathbf{t}_2}{ds_1} = \mathbf{0}, \tag{16.426}$$

which reduces Eq. (16.423) to

$$\mathbf{t}_2\cdot\frac{d\mathbf{n}}{ds_1} = 0 \tag{16.427}$$
Insertion of Eqs. (16.425) and (16.427) supports simplification of Eq. (16.415) to

$$\kappa = \kappa_1\cos^2\theta + \kappa_2\sin^2\theta, \tag{16.428}$$

where negatives of both sides were meanwhile taken; a radius of curvature, R, may hereafter be defined as

$$R \equiv \frac{1}{\kappa}, \tag{16.429}$$

for convenience of handling. Consider now another normal direction, at an angle θ + π/2 rad with direction 1 where ds₁ was originally overlaid (see Fig. 16.4); since its angle with the direction of ds will accordingly be π/2 rad, its arc differential will be labeled as ds⊥. Equation (16.428) may then be invoked to write

$$\kappa_\perp = \kappa_1\cos^2\left(\theta+\frac{\pi}{2}\right) + \kappa_2\sin^2\left(\theta+\frac{\pi}{2}\right), \tag{16.430}$$

since κ₁ and κ₂ are independent of θ as per Eqs. (16.416) and (16.417), combined with Eqs. (16.411) and (16.412) – or else

$$\kappa_\perp = \kappa_1\left(-\sin\theta\right)^2 + \kappa_2\left(\cos\theta\right)^2 = \kappa_1\sin^2\theta + \kappa_2\cos^2\theta, \tag{16.431}$$

after recalling the complementarity of sine and cosine; ordered addition of Eqs. (16.428) and (16.431) produces

$$\kappa + \kappa_\perp = \kappa_1\cos^2\theta + \kappa_1\sin^2\theta + \kappa_2\sin^2\theta + \kappa_2\cos^2\theta = \kappa_1\left(\sin^2\theta+\cos^2\theta\right) + \kappa_2\left(\sin^2\theta+\cos^2\theta\right), \tag{16.432}$$

together with lumping of terms alike – where combination with Eq. (2.442) permits simplification to

$$\kappa + \kappa_\perp = \kappa_1 + \kappa_2 \tag{16.433}$$

Inspection of Eq. (16.433) indicates that the sum of curvatures of two orthogonal normal sections is constant, and equal to the sum of curvatures of (any set of) principal sections; this is usually known as Euler's theorem. Insertion of Eq. (16.429) permits reformulation of Eq. (16.433) to

$$\frac{1}{R} + \frac{1}{R_\perp} = \frac{1}{R_1} + \frac{1}{R_2}; \tag{16.434}$$
this form of Euler’s theorem, explicit on curvature radii, is more frequently found in engineering practice.
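Euler's theorem lends itself to a quick numerical check. For a surface z = f{x,y} whose tangent plane at the origin is horizontal, the normal curvature there along the direction at angle θ equals the second directional derivative of f; the sketch below (an illustration, not part of the original text – the function names and finite-difference step are choices of convenience) verifies that κ + κ⊥ stays equal to κ₁ + κ₂ for a paraboloid with prescribed principal curvatures:

```python
import math

def normal_curvature(f, theta, h=1e-4):
    """Second directional derivative of z = f(x, y) at the origin along
    direction theta -- the normal curvature there, since the tangent
    plane at the origin is horizontal."""
    c, s = math.cos(theta), math.sin(theta)
    g = lambda t: f(t * c, t * s)
    return (g(h) - 2.0 * g(0.0) + g(-h)) / h**2

# paraboloid with principal curvatures k1 and k2 at the origin
k1, k2 = 3.0, 0.5
f = lambda x, y: 0.5 * (k1 * x**2 + k2 * y**2)

for theta in (0.2, 0.9, 1.4):
    k = normal_curvature(f, theta)
    k_perp = normal_curvature(f, theta + math.pi / 2)
    # Euler's theorem, Eq. (16.433): the sum is independent of theta
    print(round(k + k_perp, 4))  # ~ k1 + k2 = 3.5 for every theta
```

The invariance of the printed sum with θ is exactly the content of Eq. (16.433).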

In the case of a plane curve, defined parametrically by

$$x = x\{w\} \tag{16.435}$$

and

$$y = y\{w\} \tag{16.436}$$

en lieu of Eqs. (16.375)–(16.377), one realizes that Eq. (16.380) may be rewritten as

$$\kappa \equiv \left|\frac{d\Psi}{ds}\right| \tag{16.437}$$

– where Ψ ≡ Ψ{w} denotes the angle formed by evolving vector t as per Eq. (16.380), and s ≡ s{w} (still) denotes arc length; this is consistent with angle θ in Fig. 13.7, formed by the normals to the curve at each point. Recalling Eqs. (2.3) and (13.89), one may redo Eq. (16.437) as

$$\kappa = \frac{\left|d\Psi\right|}{\sqrt{dx^2+dy^2}}, \tag{16.438}$$

where division of both numerator and denominator by dw > 0 unfolds

$$\kappa = \frac{\left|\frac{d\Psi}{dw}\right|}{\sqrt{\left(\frac{dx}{dw}\right)^2+\left(\frac{dy}{dw}\right)^2}} \tag{16.439}$$

– together with placing of dw under the root and modulus signs; furthermore,

$$\tan\Psi \equiv \frac{dy}{dx} = \frac{dy/dw}{dx/dw} \tag{16.440}$$

can be obtained via parametric differentiation following Eq. (10.256), and with the aid of Fig. 10.2. Differentiation of tan Ψ with regard to w, via intermediate variable Ψ, produces

$$\frac{d\tan\Psi}{dw} = \frac{d\tan\Psi}{d\Psi}\,\frac{d\Psi}{dw} = \sec^2\Psi\,\frac{d\Psi}{dw} \tag{16.441}$$

at the expense of Eqs. (10.143) and (10.205) – where isolation of dΨ/dw, coupled with Eq. (2.471), leads to

$$\frac{d\Psi}{dw} = \frac{1}{\sec^2\Psi}\,\frac{d\tan\Psi}{dw} = \frac{1}{1+\tan^2\Psi}\,\frac{d\tan\Psi}{dw}; \tag{16.442}$$

combination with Eq. (16.440) then transforms Eq. (16.442) to

$$\frac{d\Psi}{dw} = \frac{1}{1+\left(\frac{dy/dw}{dx/dw}\right)^2}\,\frac{d\tan\Psi}{dw} \tag{16.443}$$

Alternatively, differentiation of both sides of Eq. (16.440) with regard to w entails

$$\frac{d\tan\Psi}{dw} = \frac{d}{dw}\left(\frac{dy/dw}{dx/dw}\right) = \frac{\frac{dx}{dw}\frac{d^2y}{dw^2}-\frac{dy}{dw}\frac{d^2x}{dw^2}}{\left(\frac{dx}{dw}\right)^2} \tag{16.444}$$

along the lines set forth by Eq. (10.138); insertion of Eq. (16.444) supports transformation of Eq. (16.443) to

$$\frac{d\Psi}{dw} = \frac{1}{1+\left(\frac{dy/dw}{dx/dw}\right)^2}\;\frac{\frac{dx}{dw}\frac{d^2y}{dw^2}-\frac{dy}{dw}\frac{d^2x}{dw^2}}{\left(\frac{dx}{dw}\right)^2} = \frac{\frac{dx}{dw}\frac{d^2y}{dw^2}-\frac{dy}{dw}\frac{d^2x}{dw^2}}{\left(\frac{dx}{dw}\right)^2+\left(\frac{dy}{dw}\right)^2}, \tag{16.445}$$

together with lumping of fractions. After Eq. (16.445) has been inserted, Eq. (16.439) becomes

$$\kappa = \frac{\left|\frac{dx}{dw}\frac{d^2y}{dw^2}-\frac{dy}{dw}\frac{d^2x}{dw^2}\right|}{\left[\left(\frac{dx}{dw}\right)^2+\left(\frac{dy}{dw}\right)^2\right]\sqrt{\left(\frac{dx}{dw}\right)^2+\left(\frac{dy}{dw}\right)^2}} = \frac{\left|\frac{dx}{dw}\frac{d^2y}{dw^2}-\frac{dy}{dw}\frac{d^2x}{dw^2}\right|}{\left[\left(\frac{dx}{dw}\right)^2+\left(\frac{dy}{dw}\right)^2\right]^{3/2}}, \tag{16.446}$$

where the denominators were pooled together – or else

$$\kappa = \frac{\left|\frac{dx}{dw}\frac{d^2y}{dw^2}-\frac{dy}{dw}\frac{d^2x}{dw^2}\right|\Big/\left|\frac{dx}{dw}\right|^3}{\left[1+\left(\frac{dy/dw}{dx/dw}\right)^2\right]^{3/2}}, \tag{16.447}$$

after division of both numerator and denominator by |dx/dw|³; Eq. (16.447) degenerates to Eq. (13.156), as expected, after taking Eqs. (10.256) and (10.262) into account.
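Eq. (16.446) can be exercised numerically; the sketch below (an illustration, not part of the original text, with finite-difference derivatives standing in for the analytical ones) recovers the curvature 1/r of a circle of radius r described parametrically:

```python
import math

def curvature(x, y, w, h=1e-4):
    """Curvature of the parametric plane curve (x{w}, y{w}) via Eq. (16.446),
    with first and second derivatives taken by central finite differences."""
    dx  = (x(w + h) - x(w - h)) / (2 * h)
    dy  = (y(w + h) - y(w - h)) / (2 * h)
    d2x = (x(w + h) - 2 * x(w) + x(w - h)) / h**2
    d2y = (y(w + h) - 2 * y(w) + y(w - h)) / h**2
    return abs(dx * d2y - dy * d2x) / (dx**2 + dy**2) ** 1.5

# circle of radius r = 2: curvature should be 1/r = 0.5 everywhere
r = 2.0
print(curvature(lambda w: r * math.cos(w), lambda w: r * math.sin(w), 0.7))  # ~0.5
```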

16.5 Three-dimensional Integration

One should finally mention the important result conveyed by Gauss and Ostrogradskii's (divergence) theorem: if V is a closed region in space, bounded by surface S, then

$$\iiint_V \nabla\cdot\mathbf{F}\,dV = \iint_S \mathbf{n}\cdot\mathbf{F}\,dS, \tag{16.448}$$

where F denotes a general (continuous) vector function and n denotes the outwardly directed normal unit vector at each position on surface S. Equation (16.448) basically states that the integral of the divergence of a vector field, throughout some (closed) volume, equals the integral of the normal flux through the closed surface surrounding said volume. In its full generality, the above theorem is not easy to prove; however, most domains V of practical interest in process engineering are convex – i.e. for every two points belonging to V, the straight segment connecting them contains only points belonging also to V. Under such circumstances – and supposing that F may be represented as

$$\mathbf{F} \equiv \mathbf{i}_x F_x + \mathbf{i}_y F_y + \mathbf{i}_z F_z \tag{16.449}$$

using Eq. (16.1) as template, where Fx, Fy, and Fz denote the x-, y-, and z-components of F – the above theorem may be proven for each of the three vector fields separately (yet in a similar fashion); for instance, Eq. (16.448) looks like

$$\iiint_V \frac{\partial F_z}{\partial z}\,dV = \iint_S \left(\mathbf{n}\cdot\mathbf{i}_z\right) F_z\,dS \tag{16.450}$$

in view of Eqs. (16.14) and (16.449), pertaining to the vector field in the z-direction. Owing to its hypothesized convex nature, V can be defined as

$$V \equiv \left\{(x,y,z)\;\middle|\;f_{1,z}\{x,y\}\le z\le f_{2,z}\{x,y\},\;(x,y)\in R\right\}; \tag{16.451}$$

the projection of its outer bounding surface, S, onto the x0y plane will hereafter be designated as R. Therefore, a line erected from within R, with direction given by unit vector iz, intersects S at two points – one on a lower surface S₁ (with n₁ playing the role of unit vector normal thereto) and the other on the upper surface S₂ (with n₂ playing then the role of unit vector normal to the latter); such surfaces are formally defined as

$$S_1 \equiv \left\{(x,y,z)\in V\;\middle|\;z=f_{1,z}\{x,y\},\;(x,y)\in R\right\} \tag{16.452}$$

and

$$S_2 \equiv \left\{(x,y,z)\in V\;\middle|\;z=f_{2,z}\{x,y\},\;(x,y)\in R\right\}, \tag{16.453}$$

consistent with Eq. (16.451) – while abiding by

$$S_1\cup S_2 = S \tag{16.454}$$

and

$$S_1\cap S_2 = \varnothing \tag{16.455}$$

After redoing the left-hand side of Eq. (16.450) as

$$\iiint_V \frac{\partial F_z}{\partial z}\,dV = \iiint_V \frac{\partial F_z}{\partial z}\,dz\,dy\,dx = \iint_R\left(\int_{f_{1,z}}^{f_{2,z}}\frac{\partial F_z}{\partial z}\,dz\right)dy\,dx, \tag{16.456}$$

one may retrieve Eq. (11.160) to write

$$\iiint_V \frac{\partial F_z}{\partial z}\,dV = \iint_R \left.F_z\right|_{f_{1,z}}^{f_{2,z}}\,dy\,dx = \iint_R\left(F_z\{x,y,f_{2,z}\{x,y\}\}-F_z\{x,y,f_{1,z}\{x,y\}\}\right)dy\,dx; \tag{16.457}$$

Eq. (11.102) supports, in turn, transformation of Eq. (16.457) to

$$\iiint_V \frac{\partial F_z}{\partial z}\,dV = \iint_R F_z\{x,y,f_{2,z}\{x,y\}\}\,dy\,dx - \iint_R F_z\{x,y,f_{1,z}\{x,y\}\}\,dy\,dx, \tag{16.458}$$
upon plain extrapolation from one to two dimensions. For the upper surface S₂, one realizes that

$$dy\,dx = \cos\theta_2\,dS_2 = \left(\mathbf{i}_z\cdot\mathbf{n}_2\right)dS_2 \tag{16.459}$$

serves as descriptor of its projection onto R – in view of |iz| = |n₂| = 1 coupled with Eq. (3.53), since normal vector n₂ makes an (acute) angle θ₂ with iz; by the same token,

$$dy\,dx = -\cos\theta_1\,dS_1 = -\left(\mathbf{i}_z\cdot\mathbf{n}_1\right)dS_1 \tag{16.460}$$

pertains to the lower surface S₁ – as normal vector n₁ makes, in this case, an (obtuse) angle θ₁ with iz. Consequently, Eq. (16.458) may be reformulated to

$$\iiint_V \frac{\partial F_z}{\partial z}\,dV = \iint_{S_2} F_z\left(\mathbf{i}_z\cdot\mathbf{n}_2\right)dS_2 - \iint_{S_1} F_z\left(-\,\mathbf{i}_z\cdot\mathbf{n}_1\right)dS_1 = \iint_{S_2} F_z\left(\mathbf{i}_z\cdot\mathbf{n}_2\right)dS_2 + \iint_{S_1} F_z\left(\mathbf{i}_z\cdot\mathbf{n}_1\right)dS_1, \tag{16.461}$$

after taking Eqs. (16.459) and (16.460) into account; in view of Eqs. (3.58), (11.220), and (16.454), one may reformulate Eq. (16.461) to

$$\iiint_V \frac{\partial F_z}{\partial z}\,dV = \iint_{S_1+S_2} F_z\left(\mathbf{i}_z\cdot\mathbf{n}\right)dS = \iint_S F_z\left(\mathbf{i}_z\cdot\mathbf{n}\right)dS = \iint_S \left(\mathbf{n}\cdot\mathbf{i}_z\right)F_z\,dS \tag{16.462}$$

– thus guaranteeing validity of Eq. (16.450). A similar rationale may now be applied to the other directions, viz.

$$\iiint_V \frac{\partial F_y}{\partial y}\,dV = \iint_S \left(\mathbf{n}\cdot\mathbf{i}_y\right)F_y\,dS \tag{16.463}$$

encompassing the y-direction, as well as

$$\iiint_V \frac{\partial F_x}{\partial x}\,dV = \iint_S \left(\mathbf{n}\cdot\mathbf{i}_x\right)F_x\,dS \tag{16.464}$$

for the x-direction. Ordered addition of Eqs. (16.450), (16.463), and (16.464) gives rise to

$$\iiint_V \frac{\partial F_x}{\partial x}\,dV + \iiint_V \frac{\partial F_y}{\partial y}\,dV + \iiint_V \frac{\partial F_z}{\partial z}\,dV = \iint_S \left(\mathbf{n}\cdot\mathbf{i}_x\right)F_x\,dS + \iint_S \left(\mathbf{n}\cdot\mathbf{i}_y\right)F_y\,dS + \iint_S \left(\mathbf{n}\cdot\mathbf{i}_z\right)F_z\,dS, \tag{16.465}$$

where the double integrals in the right-hand side may be collapsed as per Eq. (11.218), and the triple integrals in the left-hand side may, by analogy, also be pooled together to yield

$$\iiint_V \left(\frac{\partial F_x}{\partial x}+\frac{\partial F_y}{\partial y}+\frac{\partial F_z}{\partial z}\right)dV = \iint_S \left[\left(\mathbf{n}\cdot\mathbf{i}_x\right)F_x+\left(\mathbf{n}\cdot\mathbf{i}_y\right)F_y+\left(\mathbf{n}\cdot\mathbf{i}_z\right)F_z\right]dS \tag{16.466}$$

– in view of a common domain of integration; the distributive property of the scalar product conveyed by Eq. (3.70) permits one to write

$$\iiint_V \nabla\cdot\mathbf{F}\,dV = \iint_S \mathbf{n}\cdot\left(\mathbf{i}_x F_x+\mathbf{i}_y F_y+\mathbf{i}_z F_z\right)dS \tag{16.467}$$
with the aid also of Eqs. (16.14), (16.16), and (16.449) to redo the left-hand side – whereas supplementary insertion of Eq. (16.449) finally retrieves Eq. (16.448). If only one dimension is under scrutiny, then ∇·F reduces to dF/dx, and the triple integral over volume V reduces to a single integral along, say, the x-axis – whereas the double integral over the outer surface reduces to the two extreme points, say, x₁ and x₂; Eq. (16.448) accordingly simplifies to

$$\iiint_V \nabla\cdot\mathbf{F}\,dV = \int_{x_1}^{x_2}\left(\nabla\cdot\mathbf{F}\right)_x dx = \int_{x_1}^{x_2}\frac{dF}{dx}\,dx = \int_{F\{x_1\}}^{F\{x_2\}} dF = F\{x_2\}-F\{x_1\}, \tag{16.468}$$

with the aid of Eq. (11.21) – which is equivalent to Eq. (11.160). On the other hand, Gauss' theorem can be extended to surfaces such that lines parallel to the coordinate axes meet them in more than two points – by subdividing the region into subregions, with bounding surfaces satisfying the convexity condition.
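The divergence theorem can also be checked numerically on a simple convex domain. In the sketch below (an illustration, not part of the original text – the field and the midpoint-rule helpers are choices of convenience), both sides of Eq. (16.448) are approximated for F = (xy, yz, zx) on the unit cube, whose flat faces make the surface integral easy to split:

```python
def midpoint_2d(g, n=100):
    """Midpoint-rule approximation of the integral of g over the unit square."""
    h = 1.0 / n
    return sum(g((i + 0.5) * h, (j + 0.5) * h)
               for i in range(n) for j in range(n)) * h * h

def midpoint_3d(g, n=40):
    """Midpoint-rule approximation of the integral of g over the unit cube."""
    h = 1.0 / n
    return sum(g((i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h)
               for i in range(n) for j in range(n) for k in range(n)) * h**3

# F = (x*y, y*z, z*x) on the unit cube: div F = y + z + x
volume_side = midpoint_3d(lambda x, y, z: x + y + z)
# outward flux: the faces at x = 0, y = 0, z = 0 carry F.n = 0, so only
# the three far faces contribute
flux_side = (midpoint_2d(lambda y, z: y)    # x = 1 face: F.n = Fx = 1*y
             + midpoint_2d(lambda x, z: z)  # y = 1 face: F.n = Fy = 1*z
             + midpoint_2d(lambda x, y: x)) # z = 1 face: F.n = Fz = 1*x
print(volume_side, flux_side)  # both ~ 3/2, as Eq. (16.448) demands
```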

17 Numerical Approaches to Integration

In numerical analysis, numerical integration entails a broad family of algorithms aimed at calculating the numerical value of a definite integral – and, by extension, at finding the solution of a differential equation. The term numerical quadrature (often abbreviated to quadrature) is essentially a synonym for numerical integration, especially when one-dimensional integrals are under scrutiny; by the same token, numerical integration in more than one dimension often responds to the term cubature. Numerical integration as such appeared first in 1915, by the hand of David Gibb, a Scottish mathematician and astronomer; quadrature is a historical mathematical term referring to calculation of an area. In fact, mathematicians in Ancient Greece understood calculation of an area as the process of geometrically constructing a square having the same area – thus justifying the said etymology. Quadrature of a parabola segment was achieved by Archimedes, while the ancient Babylonians resorted to the trapezoidal rule to integrate the motion of Jupiter along its ecliptic.

There are several reasons to carry out numerical integration: (i) the integrand may be known only at certain points, as is typically the case of (discrete) experimental sampling; (ii) a continuous formula for the integrand may be known, but the corresponding integral cannot be found; and (iii) it may be possible to find the corresponding integral, but it is much easier to perform numerical integration than to compute the (otherwise) exact integral.
17.1 Calculation of Definite Integrals

The nuclear problem in numerical integration is to compute an approximate solution to a definite integral, say, $\int_a^b f\{x\}\,dx$, to a predefined level of accuracy; should f{x} behave smoothly over a bounded domain, then several methods are available to approximate the integral to the aforementioned level.

Mathematics for Enzyme Reaction Kinetics and Reactor Performance, First Edition. F. Xavier Malcata. © 2019 John Wiley & Sons Ltd. Published 2019 by John Wiley & Sons Ltd.

Figure 17.1 Graphical description of antique method of quadrature, via square of side c with area equivalent to that of rectangle of sides a and b, with points A, B, and C lying on a semicircle, and points O (center) and D lying on a diameter thereof.

The simplest process of quadrature pertains to a rectangle, with sides a and b, and consists of constructing a square of side √(ab) (i.e. the geometric mean of a and b), with area (√(ab))² equal to the area ab of said rectangle; this may be done geometrically as highlighted in Fig. 17.1. Since points A, B, and C lie, by hypothesis, on the circle of diameter [AC] with length a + b, one realizes that the amplitude of angle ∠ABC is π/2 rad – since this is half of π rad, i.e. the amplitude of ∠AOC; therefore, [ABC] is a right triangle,
besides [ABD] and [BCD] being also right triangles – since segment [BD] is perpendicular to [AC]. Application of Pythagoras' theorem as per Eq. (2.431) unfolds

$$\overline{AD}^2 + \overline{BD}^2 = \overline{AB}^2 \tag{17.1}$$

encompassing triangle [ABD],

$$\overline{BD}^2 + \overline{CD}^2 = \overline{BC}^2 \tag{17.2}$$

pertaining to triangle [BCD], and

$$\overline{AB}^2 + \overline{BC}^2 = \overline{AC}^2 \tag{17.3}$$

referring to triangle [ABC], as per the labels utilized in Fig. 17.1; Eqs. (17.1)–(17.3) may be redone to

$$\overline{AB}^2 = a^2 + \overline{BD}^2, \tag{17.4}$$

$$\overline{BC}^2 = \overline{BD}^2 + b^2, \tag{17.5}$$

and

$$\overline{AB}^2 + \overline{BC}^2 = (a+b)^2, \tag{17.6}$$

respectively – whereas insertion of Eqs. (17.4) and (17.5) transforms Eq. (17.6) to

$$\left(a^2+\overline{BD}^2\right) + \left(\overline{BD}^2+b^2\right) = (a+b)^2 \tag{17.7}$$

Expansion of the right-hand side of Eq. (17.7) with the aid of Eq. (2.237), complemented by condensation of terms alike in the left-hand side, leads to

$$a^2 + 2\overline{BD}^2 + b^2 = a^2 + 2ab + b^2, \tag{17.8}$$

where cancellation of common terms between sides reduces Eq. (17.8) to

$$2\overline{BD}^2 = 2ab; \tag{17.9}$$

isolation of BD finally yields

$$\overline{BD} = \sqrt{ab} \tag{17.10}$$
– i.e. the height of triangle [ABC], or length c of segment [BD], represents the geometric mean of the lengths of segments [AD] and [CD], i.e. a and b, respectively. A similar geometrical construction solves the problem of quadrature for a parallelogram and a triangle – yet quadrature of curvilinear figures is a far more difficult task.

Most methods designed for numerical integration consist of evaluating the integrand at a finite number of points of the integration domain (called integration points), and then calculating a weighted sum of the said values as approximant to the definite integral; their positions and weights depend on the specific method elected, as well as the degree of accuracy envisaged. The behavior of the approximation error as a function of the number of integrand evaluations is, therefore, a critical issue in numerical analysis; a method is considered superior when it requires fewer integration points, because more evaluations of the integrand imply a larger number of arithmetic operations, and consequently a higher (cumulative) roundoff error – besides the longer computation time required thereby. The most frequent quadrature rules utilize interpolating functions that are easy to integrate – thus justifying the choice of polynomials; however, higher-order polynomials tend to oscillate wildly, so selection is often restricted to low-degree polynomials, typically linear or (at most) quadratic ones.

17.1.1 Zeroth Order Interpolation
The simplest approach to quadrature is to let the interpolating function be constant – in which case a zeroth-order (interpolating) polynomial will be at stake, associated with a single quadrature point. The value of the function at one of the extremes of the integration interval will then be required, viz.

$$\int_a^b f\{x\}\,dx \approx (b-a)\,f\{a\} \tag{17.11}$$

– also known as the initial endpoint method; one may instead resort to

$$\int_a^b f\{\zeta\}\,d\zeta \approx (b-a)\,f\{b\}, \tag{17.12}$$

often referred to as the final endpoint method. The two approaches are illustrated in Figs. 17.2a and b, respectively. This rectangular method typically yields poor estimates for the integral, especially if the function varies considerably along its integration interval; this is apparent from the two disparate values for the area of the shaded rectangles in those figures. A more common methodology uses the midpoint rule; evaluation of f{x} should accordingly take place at half the distance between a and b, or (a + b)/2, according to

$$\int_a^b f\{x\}\,dx \approx (b-a)\,f\left\{\frac{a+b}{2}\right\} \tag{17.13}$$

– as illustrated in Fig. 17.2c. The missing area below that of the corresponding trapezoid to the right of (a + b)/2 partially compensates for the extra area above the associated trapezoid on the left – so this method is far more accurate than either rectangular method mentioned so far.
Figure 17.2 Graphical representation of rectangular quadrature of function f{x}, within interval [a,b], using (a) a, (b) b, or (c) (a + b)/2 as (single) quadrature point.
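The three one-point rules above are short to implement; the sketch below (illustrative code, not part of the original text – the function names are choices of convenience) compares their errors for f{x} = eˣ on [0,1], and also shows the midpoint error shrinking roughly eightfold when b − a is halved about a fixed midpoint, consistent with the (b − a)³ order anticipated in Eq. (17.21) versus the (b − a)² order of the endpoint methods:

```python
import math

def endpoint_initial(f, a, b):   # Eq. (17.11)
    return (b - a) * f(a)

def endpoint_final(f, a, b):     # Eq. (17.12)
    return (b - a) * f(b)

def midpoint(f, a, b):           # Eq. (17.13)
    return (b - a) * f((a + b) / 2)

# test integrand: f = exp, whose integral over [0, 1] is e - 1
exact = math.e - 1.0
for rule in (endpoint_initial, endpoint_final, midpoint):
    print(rule.__name__, abs(rule(math.exp, 0.0, 1.0) - exact))

# halving b - a about the same midpoint divides the midpoint error by ~2**3
e1 = abs(midpoint(math.exp, 0.0, 1.0) - exact)
e2 = abs(midpoint(math.exp, 0.25, 0.75) - (math.exp(0.75) - math.exp(0.25)))
print(e1 / e2)  # close to 8
```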

To evaluate the quadrature error of f via the midpoint method, one should take advantage of the primitive function F as per Eq. (11.159), i.e.

$$\left.F\{x\}\right|_{x=b} \equiv F\{b\} = \int_a^b f\{\zeta\}\,d\zeta, \tag{17.14}$$

and expand the integral of f{ζ}, between a and b, via Taylor's series around x = a, truncated after the cubic term – according to

$$\int_a^b f\{\zeta\}\,d\zeta = \left.F\{x\}\right|_a + \left.\frac{dF\{x\}}{dx}\right|_a (b-a) + \left.\frac{d^2F\{x\}}{dx^2}\right|_a \frac{(b-a)^2}{2} + \left.\frac{d^3F\{x\}}{dx^3}\right|_a \frac{(b-a)^3}{6} + O_1\{(b-a)^4\} = \left.f\{x\}\right|_a (b-a) + \left.\frac{df\{x\}}{dx}\right|_a \frac{(b-a)^2}{2} + \left.\frac{d^2f\{x\}}{dx^2}\right|_a \frac{(b-a)^3}{6} + O_1\{(b-a)^4\}, \tag{17.15}$$

since F{x}|ₐ ≡ F{a} = 0, in view of Eqs. (11.133) and (17.14), as well as dF{x}/dx = f{x} as per Eq. (11.159); here O₁{(b − a)⁴} denotes Lagrange's remainder, proportional to (b − a)⁴ in agreement with Eq. (12.57). The value of f{x} evaluated at x = (a + b)/2 may, in turn, be coined as

$$\left.f\{x\}\right|_{\frac{a+b}{2}} = \left.f\{x\}\right|_a + \left.\frac{df\{x\}}{dx}\right|_a \left(\frac{a+b}{2}-a\right) + \left.\frac{d^2f\{x\}}{dx^2}\right|_a \frac{\left(\frac{a+b}{2}-a\right)^2}{2} + O\left\{\left(\frac{a+b}{2}-a\right)^3\right\}, \tag{17.16}$$

or else

$$\left.f\{x\}\right|_{\frac{a+b}{2}} = \left.f\{x\}\right|_a + \left.\frac{df\{x\}}{dx}\right|_a \frac{b-a}{2} + \left.\frac{d^2f\{x\}}{dx^2}\right|_a \frac{(b-a)^2}{8} + O\{(b-a)^3\} \tag{17.17}$$

following Taylor's expansion around x = a, truncated after the quadratic term – with Lagrange's remainder now being of third order, i.e. O{(b − a)³}; multiplication of both sides of Eq. (17.17) by b − a yields

$$(b-a)\left.f\{x\}\right|_{\frac{a+b}{2}} = \left.f\{x\}\right|_a (b-a) + \left.\frac{df\{x\}}{dx}\right|_a \frac{(b-a)^2}{2} + \left.\frac{d^2f\{x\}}{dx^2}\right|_a \frac{(b-a)^3}{8} + O_2\{(b-a)^4\}, \tag{17.18}$$
where (b − a)·O{(b − a)³} will hereafter be denoted as O₂{(b − a)⁴}. Ordered subtraction of Eq. (17.18) from Eq. (17.15) unfolds

$$\varepsilon \equiv \int_a^b f\{\zeta\}\,d\zeta - (b-a)\left.f\{x\}\right|_{\frac{a+b}{2}} = \left.f\{x\}\right|_a(b-a) + \left.\frac{df\{x\}}{dx}\right|_a\frac{(b-a)^2}{2} + \left.\frac{d^2f\{x\}}{dx^2}\right|_a\frac{(b-a)^3}{6} + O_1\{(b-a)^4\} - \left.f\{x\}\right|_a(b-a) - \left.\frac{df\{x\}}{dx}\right|_a\frac{(b-a)^2}{2} - \left.\frac{d^2f\{x\}}{dx^2}\right|_a\frac{(b-a)^3}{8} - O_2\{(b-a)^4\} \tag{17.19}$$

– where ε denotes the error arising from use of (b − a) f{x}|₍ₐ₊ᵦ₎/₂ as estimator of ∫ₐᵇ f{ζ} dζ; Eq. (17.19) breaks down to merely

$$\varepsilon = \left.\frac{d^2f\{x\}}{dx^2}\right|_a\left(\frac{1}{6}-\frac{1}{8}\right)(b-a)^3 + O_1\{(b-a)^4\} - O_2\{(b-a)^4\} = \left.\frac{d^2f\{x\}}{dx^2}\right|_a\frac{(b-a)^3}{24} + O\{(b-a)^4\} \tag{17.20}$$

upon cancellation of symmetrical terms, and lumping of O₁{(b − a)⁴} and O₂{(b − a)⁴} as O{(b − a)⁴}. Inspection of Eq. (17.20) indicates that the underlying error is of the order of (b − a)³, i.e.

$$\varepsilon \approx O\{(b-a)^3\}, \tag{17.21}$$
d2 f x dx2

b− a 3 d 2 f x − 3 dx2

ζ1

ζ2

b −a 3

3

17 22

for consistency with the concept of Lagrange’s remainder as conveyed by Eq. (12.57), with ζ1 (in general) not required to coincide with ζ2 – despite both belonging to [a,b]; unfortunately, this corrected rationale does not permit a trustworthy estimation of the error. In fact, the chief difficulty in estimating the difference between the exact value, b a+b f ζ dζ, and its midpoint rule approximation, b− a f as per Eq. (17.13), arises 2 a from the latter having no integral in its formal expression (unlike the former) – so some improvement will be expected if the second term is replaced by an integral form. Toward this deed, one should first realize that the tangent to curve f{x} at c as midpoint of the interval, i.e. a+b c , 17 23 2 should generally be described by a0 + a1 x

P1 x

17 24

following Eq. (13.1); here parameters a0 and a1 are to be found from simultaneous solution of f x

c

= a0 + a1 c,

17 25

since the point of tangency belongs obviously to the original curve – coupled with a1 =

df x dx

,

17 26

c

accounting for the geometrical definition of derivative as per Fig. 10.2. Insertion of Eq. (17.26) converts Eq. (17.25) to a0 = f x

c



df x dx

17 27

c, c

so combination with Eqs. (17.26) and (17.27) supports reformulation of Eq. (17.24) to P1 x = f x – with

df x dx

c



df x dx

c + c

df x dx

x

17 28

c

being conveniently factored out as c

P1 x = f x

c

+

df x dx

x −c ;

17 29

c

hence, integration of P1{x}, between a and b, unfolds b

b

P1 x dx = f x a

c

dx + a

df x dx

b

x− c dx c

a

17 30

Numerical Approaches to Integration

with the aid of Eq. (11.102) – or else b

df x P1 x dx = f x x + a c dx a =f x

c

= b −a f x

c

c

df x dx

b −a +

b

x2 −cx 2

b

a

b2 a2 − cb − −ca 2 2

c

b2 − a2 −c b −a 2

+

df x dx

17 31

c

upon elementary algebraic rearrangement, where Eq. (11.160) was duly recalled. Equation (17.31) degenerates to b

P1 x dx = b −a f x

c

a

+ b −a

a+b df x −c dx 2

df x = b− a f x + b − a c− c c dx

c

,

= b− a f x c

17 32

c

after realizing that b − a = (b − a)(b + a) and factoring b − a out afterward – besides taking Eq. (17.23) into account. In view of Eqs. (11.102), (17.23), and (17.32), one may revisit Eq. (17.19) as 2

ε=

b

2

f ζ dζ − b −a f x

a

b

=

c

b

f ζ dζ −

a

P1 ζ dζ =

a

b

f ζ −P1 ζ dζ;

a

17 33 f{x} may now be expanded around x = c, rather than around x = a as per Eq. (17.15), and truncated after the linear term according to f x =f x

c

+

df x dx

d2 f x dx2

x− c + c

ζ

x− c 2 , 2

17 34

with a < ζ < b. One therefore obtains f x − P1 x = f x

=

c

1 d2 f x 2 dx2

+

df x dx x− c

x−c + c

d2 f x dx2

ζ

x− c 2

2

− f x

c

+

df x dx

x −c c

2

ζ

17 35 based on Eqs. (17.29) and (17.34), where cancellation of symmetrical terms meanwhile took place; insertion of Eq. (17.35) supports replacement of Eq. (17.33) by b

ε

f x dx −

1 2

b

P1 x dx = a

a

=

b

b

2

d f x 2 a dx

a

f x − P1 x dx =

b

1 d2 f x 2 a 2 dx

x− c 2 dx ζ

,

x− c 2 dx ζ

17 36

747

748

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

as allowed by Eq. (11.102). According to Eq. (11.140), it is possible to find some value ξ between a and b (with ξ ζ, in general) such that b

d2 f x 2 a dx

x− c 2 dx = ζ

d2 f x dx2

b

x− c 2 dx,

17 37

ξ a

where a convenient change from x to η ≡ x − c as integration variable permits further transformation to b

d2 f x 2 a dx

x− c 2 dx = ζ

b− c

d2 f x dx2

ξ a −c

η2 dη

17 38

– since dη coincides with dx, as per the constancy of (additive) –c; straight application of Eq. (11.160) then yields b

d2 f x 2 a dx

x− c 2 dx = ζ

η3 ξ3

d2 f x dx2

b− c

= a −c

d2 f x dx2

ξ

b − c 3 − a −c 3

3

17 39

In view of Eq. (2.239), one may write b −c 3 − a− c 3 = b3 − 3b2 c + 3bc2 −c3 − a3 + 3a2 c− 3ac2 + c3 , 3

17 40

2

where c may cancel out with its negative, and 3c and 3c (as appropriate) may be factored out to get b − c 3 − a −c 3 = b3 −a3 −3 b2 −a2 c + 3 b− a c2 = b − a b2 + ab + a2 − 3 b + a c + 3c2

17 41

= b − a b2 + ab + a2 + 3 c − b + a c – with b – a meanwhile factored out based on Eq. (2.152), and then 3c factored out once more; recalling Eq. (17.23), one can convert Eq. (17.41) to b −c 3 − a− c 3 = b −a

b2 + ab + a2 + 3

a+b − a+b 2

a+b 2

1 b− a 4b2 + 4ab + 4a2 − 3 a + b a + b 4 1 = b− a 4b2 + 4ab + 4a2 − 3a2 −6ab− 3b2 4

,

=

17 42

along with factoring out of ½ twice – which breaks down to b −c 3 − a− c 3 =

1 1 1 b− a b2 − 2ab + a2 = b− a b −a 2 = b −a 4 4 4

3

17 43

upon extra application of Newton’s binomial formula, following condensation of terms alike. Once in possession of Eq. (17.43), one may redo Eq. (17.39) to b

d2 f x 2 a dx

ζ

d2 f x x− c 2 dx = dx2

1 b− a 4 3 ξ

3

=

b −a 3 d 2 f x 12 dx2

; ξ

17 44

Numerical Approaches to Integration

insertion of Eq. (17.44) allows reformulation of Eq. (17.36) to 1 b −a 3 d 2 f x 2 12 dx2 which degenerates to ε=

ε=

b− a 3 d 2 f x 24 dx2

,

17 45

ξ

17 46 ξ

after pooling factors alike. Although the order of magnitude of the error suggested by Eq. (17.21), i.e. (b – a)3, is confirmed, a finer dependence is found that applies to a duly weighed version of the (local) error. One obvious consequence of Eq. (17.46) is a nil error when f {x} is itself linear – since df {x}/dx would then be constant, and thus d2f {x}/dx2 = 0. The aforementioned approach can be extended to any of the endpoint methods, with Eq. (17.36) replaced by b

ε=

f x a

γ

+

df x dx

x −γ −f x

ζ

b

γ

dx =

df x a dx

ζ

x −γ dx

17 47

in parallel to Eq. (17.34), but upon truncation after the zero-th order term – where γ, as expansion point, should be made equal to either a or b (as appropriate), and P1{x} as per a0 = f x

Eq. (17.24) reduces to P0 x leave dfdxx

γ

; the said Taylor’s expansion will accordingly 2

ζ

f x x− γ for Lagrange’s remainder – instead of 12 d dx 2

ζ

x− c 2 , as in Eq. (17.34).

The mean value theorem for integrals may again be invoked to write b

df x a dx

ζ

x− γ dx =

df x dx

b ξ a

x− γ dx,

17 48

with ξ to be found again somewhere within [a,b]; the fundamental theorem of integral calculus supports, in turn, b

x−γ dx = a

x2 −γx 2

b

= a

b2 a2 − γb − − γa 2 2

17 49

Elimination of parentheses, followed by condensation of terms alike convert Eq. (17.49) to b

x−γ dx = a

b2 a2 b2 −a2 − γb− + γa = − γ b− a , 2 2 2

17 50

where b − a can still be factored out to produce b

x−γ dx = b − a

a

b+a −γ ; 2

17 51

when γ ≡ a, Eq. (17.51) degenerates to b a

x−a dx = b − a

b+a b + a− 2a b−a 1 −a = b−a = b− a = b− a 2 , 2 2 2 2 17 52

749

750

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

applying to the initial endpoint method – and γ ≡ b likewise transforms Eq. (17.51) to b

x− b dx = b −a

a

b+a b + a− 2b a− b 1 −b = b − a = b −a = − b −a 2 , 2 2 2 2 17 53

germane for the final endpoint method. Insertion of either Eq. (17.52) or Eq. (17.53) transforms Eq. (17.48) to b a

df x dx

ζ

x −γ dx =

1 df x b− a 2 2 dx

,

17 54

ξ

or else ε =

1 df x b −a 2 2 dx

17 55 ξ

obtained upon combination of Eqs. (17.47) and (17.55) – and valid for either form of the endpoint method of quadrature. As expected, a constant function yields a nil (average) error – since df{x}/dx in Eq. (17.55) would be nil; and the underlying error is one order of magnitude higher than that of its midpoint counterpart, see (b – a)2 in Eq. (17.55) vis-à-vis with (b – a)3 in Eq. (17.46). 17.1.2

First- and Second-Order Interpolation

A more efficient mode of obtaining quadrature is via utilization of (a truly) interpolating polynomial – of order one or above and with two or more interpolating points, bearing the general form n

f xi Li x ;

Pn x

17 56

i=0

here Li{x} denotes Lagrange’s polynomial, abiding to n

Li x j=0 j i

x− xj ; i = 0, 1,…, n xi −xj

17 57

The integral under scrutiny may consequently be approximated by the integral of Pn{x} as per Eq. (17.56), viz. b

f x dx ≈

a

b n

b

Pn x dx = a

n

f xi Li x dx = a i=0

b

f xi i=0

Li x dx,

17 58

a

obtained with the aid of Eq. (11.102). One may redo Eq. (17.58) as n

b

f x dx = a

Ai f xi

17 59

i=0

– where quadrature coefficients, Ai, are consequently given by b

Li x dx

Ai

17 60

a

One advantage of this method comes from independence of the Ai’s on function f {x} undergoing integration, since Eq. (17.57), and thus Eq. (17.60) unfold a functional

Numerical Approaches to Integration

dependence only on integration points, x0, x1, …, xn; however, the quadrature coefficients must be recomputed if the integration (or interpolation) points are changed. If the aforementioned integration points are equally spaced, then Newton and Cotes’ formula is said to arise – with credit given to Sir Isaac Newton, an English mathematician, astronomer, and physicist, and Roger Cotes, another English mathematician; this type of formulae possess the convenient property of nesting – i.e. the corresponding rule, when each interval is subdivided, includes all current points, thus allowing reutilization of the integrand values (if necessary). Should the intervals between interpolation points be allowed to vary, then Gaussian quadrature formulae are said to arise; although typically more accurate than Newton and Cotes’ rule for the same number of evaluations (in the case of smooth integrands, i.e. susceptible of differentiation to sufficiently high order), they are also more demanding in terms of computer programming. Other quadrature methods with varying intervals include Clenshaw and Curtis’ quadrature (also known as Fejér’s quadrature), and Gauss and Kronrod’s method – which do nest. The latter types will be left out of further discussion, due to their limited utilization in practice. 17.1.2.1 Trapezoidal Rule

The simplest Newton and Cotes’ formula for integration within [a,b] encompasses n = 1, thus leading to two interpolation points, viz.

$$x_0 \equiv a, \tag{17.61}$$

and likewise

$$x_1 \equiv b; \tag{17.62}$$

Eq. (17.57) has it that

$$L_0\{x\} = \frac{x-x_1}{x_0-x_1} = \frac{x-b}{a-b} = \frac{b-x}{b-a} \tag{17.63}$$

and

$$L_1\{x\} = \frac{x-x_0}{x_1-x_0} = \frac{x-a}{b-a} \tag{17.64}$$

– both algebraically manipulated with the aid of Eqs. (17.61) and (17.62). One may now retrieve Eq. (17.60) to write

$$A_0 \equiv \int_a^b L_0\{x\}\,dx = \int_a^b \frac{b-x}{b-a}\,dx = -\frac{1}{b-a}\int_a^b (b-x)\,d(b-x) = -\frac{1}{b-a}\,\frac{(b-x)^2}{2}\bigg|_{b-a}^{0} = -\frac{0^2-(b-a)^2}{2(b-a)} = \frac{(b-a)^2}{2(b-a)} = \frac{b-a}{2}, \tag{17.65}$$

with the aid of Eqs. (11.160) and (17.63), as well as realization that dx = −d(b − x), (b − x)|x=a = b − a, and (b − x)|x=b = b − b = 0; one may similarly invoke Eqs. (11.160), (17.60), and (17.64) to obtain

$$A_1 \equiv \int_a^b L_1\{x\}\,dx = \int_a^b \frac{x-a}{b-a}\,dx = \frac{1}{b-a}\int_a^b (x-a)\,dx = \frac{1}{b-a}\left(\frac{x^2}{2}-ax\right)\bigg|_a^b = \frac{b^2-2ab-\left(a^2-2a^2\right)}{2(b-a)} = \frac{b^2-2ab+a^2}{2(b-a)} = \frac{(b-a)^2}{2(b-a)} = \frac{b-a}{2} = A_0 \tag{17.66}$$


– thus delivering a coefficient identical to that labeled as Eq. (17.65). The (approximate) value of the integral will then read

$$\int_a^b f\{x\}\,dx \approx A_0\, f\{x_0\} + A_1\, f\{x_1\} = \frac{b-a}{2}\, f\{a\} + \frac{b-a}{2}\, f\{b\} \tag{17.67}$$

or, after (b − a)/2 is factored out,

$$\int_a^b f\{x\}\,dx \approx \frac{b-a}{2}\left( f\{a\} + f\{b\} \right) \tag{17.68}$$

– in agreement with Eq. (17.59), and constructed with the help of Eqs. (17.61), (17.62), (17.65), and (17.66); the expression labeled as Eq. (17.68) has classically been termed trapezoidal rule, and is graphically represented in Fig. 17.3a. The reason for such a designation of this quadrature method is the base geometrical figure, a trapezoid, as apparent in Fig. 17.3a; it possesses two vertical sides, defined by x = a on the left and x = b on the right – as well as one horizontal (lower) side defined by y = 0, and one sloped (top) side containing points of coordinates (a,f{a}) and (b,f{b}). According to Eq. (17.56), the interpolating polynomial reads

$$P_1\{x\} = f\{x_0\}\,L_0\{x\} + f\{x_1\}\,L_1\{x\} = f\{a\}\,\frac{b-x}{b-a} + f\{b\}\,\frac{x-a}{b-a}, \tag{17.69}$$

along with combination with Eqs. (17.61)–(17.64); the two terms can then be lumped to yield

$$P_1\{x\} = \frac{b\,f\{a\} - f\{a\}\,x + f\{b\}\,x - a\,f\{b\}}{b-a}, \tag{17.70}$$

or else

$$P_1\{x\} = \frac{f\{b\}-f\{a\}}{b-a}\,x + \frac{b\,f\{a\} - a\,f\{b\}}{b-a} \tag{17.71}$$


Figure 17.3 Graphical representation of (a) trapezoidal rule and (b) Simpson’s rule, for interpolation quadrature ( ) of function f{x}, within interval [a,b], using (a) a and b or (b) a, (a + b)/2, and b as quadrature points.
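Eq. (17.68) translates directly into one line of code; the sketch below (function name mine, not from the text) applies the rule to f{x} = x² on [0, 1], whose exact integral is 1/3:

```python
def trapezoid(f, a, b):
    """One-panel trapezoidal rule, Eq. (17.68): (b - a)/2 * (f(a) + f(b))."""
    return (b - a) / 2 * (f(a) + f(b))

f = lambda x: x * x
approx = trapezoid(f, 0.0, 1.0)   # 0.5
exact = 1.0 / 3.0
print(approx, approx - exact)     # overestimates a convex integrand
```

The rule is exact for any straight line, as the error analysis of this subsection makes precise.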


following isolation of zeroth- and first-order terms in x. Note the coincidence of Eq. (17.71) with Eqs. (13.1), (13.6), and (13.7) – after setting x0 ≡ a and x1 ≡ b, and thus y0 ≡ f {x0} ≡ f {a} and y1 ≡ f {x1} ≡ f {b}; this confirms that the straight line serving as upper bound for the trapezoid in Fig. 17.3a, and described by Eq. (17.71), passes indeed through points of coordinates (a, f {a}) and (b, f {b}), in agreement with Fig. 13.1. The local error of interpolation in the case of a linear interpolant has been derived previously, see Eq. (7.210) – so one may proceed directly to

$$\varepsilon = \int_a^b \frac{1}{2!}\left.\frac{d^2 f\{x\}}{dx^2}\right|_{\zeta} (x-a)(x-b)\,dx = \frac{1}{2}\int_a^b \left.\frac{d^2 f\{x\}}{dx^2}\right|_{\zeta} (x-a)(x-b)\,dx, \tag{17.72}$$

in much the same way Eq. (17.36) was constructed (although with c = a or c = b). Equation (11.140) may again be retrieved to simplify the integral in Eq. (17.72) to

$$\int_a^b \left.\frac{d^2 f\{x\}}{dx^2}\right|_{\zeta} (x-a)(x-b)\,dx = \left.\frac{d^2 f\{x\}}{dx^2}\right|_{\xi} \int_a^b (x-a)(x-b)\,dx \tag{17.73}$$

– where d²f{x}/dx²|ξ denotes (as usual) the value of d²f{x}/dx² at some point ξ belonging to [a,b], but not necessarily coinciding with d²f{x}/dx²|ζ in the kernel (even though a ≤ ξ ≤ b and a ≤ ζ ≤ b). The integral in the right-hand side of Eq. (17.73) may be computed after application of the distributive property of multiplication of scalars, viz.

$$\int_a^b (x-a)(x-b)\,dx = \int_a^b \left(x^2 - bx - ax + ab\right)dx = \int_a^b \left(x^2 - (b+a)x + ab\right)dx; \tag{17.74}$$

Eq. (11.160) supports, in turn,

$$\int_a^b (x-a)(x-b)\,dx = \left(\frac{x^3}{3} - (b+a)\frac{x^2}{2} + abx\right)\bigg|_a^b = \left(\frac{b^3}{3} - (b+a)\frac{b^2}{2} + ab\,b\right) - \left(\frac{a^3}{3} - (b+a)\frac{a^2}{2} + ab\,a\right) = \frac{1}{3}\left(b^3-a^3\right) - \frac{1}{2}(b+a)\left(b^2-a^2\right) + ab(b-a), \tag{17.75}$$

with 1/3, −(b + a)/2, or ab (as appropriate) meanwhile factored out. Since b − a is a factor of both b³ − a³ and b² − a², one may redo Eq. (17.75) to

$$\int_a^b (x-a)(x-b)\,dx = (b-a)\left(\frac{1}{3}\left(b^2+ab+a^2\right) - \frac{1}{2}(b+a)(b+a) + ab\right) = \frac{b-a}{6}\left(2\left(b^2+ab+a^2\right) - 3\left(b^2+2ab+a^2\right) + 6ab\right) \tag{17.76}$$

– where elimination of inner parentheses, and lumping of terms alike afterward unfold

$$\int_a^b (x-a)(x-b)\,dx = \frac{b-a}{6}\left(2b^2+2ab+2a^2-3b^2-6ab-3a^2+6ab\right) = \frac{b-a}{6}\left(-a^2+2ab-b^2\right) = -\frac{b-a}{6}\left(a^2-2ab+b^2\right) \tag{17.77}$$


Newton’s formula for the square of a sum may finally be retrieved to get

$$\int_a^b (x-a)(x-b)\,dx = -\frac{b-a}{6}\,(a-b)^2 = -\frac{b-a}{6}\,(b-a)^2 = -\frac{(b-a)^3}{6}, \tag{17.78}$$

thus allowing transformation of Eq. (17.73) to

$$\int_a^b \left.\frac{d^2 f\{x\}}{dx^2}\right|_{\zeta} (x-a)(x-b)\,dx = -\frac{(b-a)^3}{6}\left.\frac{d^2 f\{x\}}{dx^2}\right|_{\xi} \tag{17.79}$$

Combination of Eqs. (17.72) and (17.79) gives finally rise to

$$\varepsilon = -\frac{1}{2}\,\frac{(b-a)^3}{6}\left.\frac{d^2 f\{x\}}{dx^2}\right|_{\xi}, \tag{17.80}$$

or equivalently

$$\varepsilon = -\frac{(b-a)^3}{12}\left.\frac{d^2 f\{x\}}{dx^2}\right|_{\xi} \tag{17.81}$$
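For f{x} = x², the second derivative equals 2 everywhere, so the location of ξ plays no role and the error expressions referred to here — Eq. (17.46) for the midpoint (rectangle) rule and Eq. (17.81) for the trapezoidal rule — should hold with equality; a quick numerical check (helper names are mine):

```python
def midpoint(f, a, b):
    # zeroth-degree rule with the midpoint as single quadrature point
    return (b - a) * f((a + b) / 2)

def trapezoid(f, a, b):
    # first-degree rule, Eq. (17.68)
    return (b - a) / 2 * (f(a) + f(b))

f = lambda x: x * x                      # d2f/dx2 = 2 everywhere
a, b = 0.0, 1.0
exact = 1.0 / 3.0
err_mid = exact - midpoint(f, a, b)      # +(b - a)^3/24 * 2 = +1/12
err_trap = exact - trapezoid(f, a, b)    # -(b - a)^3/12 * 2 = -1/6
print(err_mid, err_trap)
```

The midpoint error is half the magnitude of the trapezoidal one, and of opposite sign — exactly as the text claims.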

If the concavity of f{x} faces downward, then d²f{x}/dx² < 0 irrespective of ξ – so Eq. (17.81) entails a positive value for ε; this is consistent with the area of the trapezoid, corresponding to $\int_a^b P_1\{x\}\,dx$, and equal to the product of its height, b − a, by the mean length of its two bases, (f{a} + f{b})/2, as per Eq. (17.68), being lower than the area below the curve, described by $\int_a^b f\{x\}\,dx$, and depicted in Fig. 17.3a. Although the error associated with the trapezoidal method is of the order of (b − a)³ as per Eq. (17.81), as happened with the midpoint method in view of Eq. (17.46), the proportionality constant is smaller for the rectangle than for the trapezoidal method – i.e. $\varepsilon = \frac{(b-a)^3}{24}\left.\frac{d^2 f\{x\}}{dx^2}\right|_{\xi}$ versus $\varepsilon = -\frac{(b-a)^3}{12}\left.\frac{d^2 f\{x\}}{dx^2}\right|_{\xi}$ – and of opposite sign (so the trapezoidal method is more conservative); once again, a nil error will be witnessed should function f{x} be described by a straight line.

17.1.2.2 Simpson’s Rule

When n = 2 serves as degree for the interpolation polynomial, one will end up with 3 (equidistant) quadrature points, i.e.

$$x_0 \equiv a, \tag{17.82}$$

$$x_1 \equiv \frac{a+b}{2}, \tag{17.83}$$

and

$$x_2 \equiv b; \tag{17.84}$$


the associated Lagrange’s polynomials now read

$$L_0\{x\} \equiv \frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)} = \frac{\left(x-\frac{a+b}{2}\right)(x-b)}{\left(a-\frac{a+b}{2}\right)(a-b)} = \frac{2\left(x-\frac{a+b}{2}\right)(x-b)}{(b-a)^2}, \tag{17.85}$$

$$L_1\{x\} \equiv \frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)} = \frac{(x-a)(x-b)}{\left(\frac{a+b}{2}-a\right)\left(\frac{a+b}{2}-b\right)} = -\frac{4(x-a)(x-b)}{(b-a)^2}, \tag{17.86}$$

and

$$L_2\{x\} \equiv \frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)} = \frac{(x-a)\left(x-\frac{a+b}{2}\right)}{(b-a)\left(b-\frac{a+b}{2}\right)} = \frac{2(x-a)\left(x-\frac{a+b}{2}\right)}{(b-a)^2} \tag{17.87}$$

– all obtained using Eq. (17.57) as template, followed by combination with Eqs. (17.82)–(17.84). On the other hand, Eq. (17.59) allows one to write

$$\int_a^b f\{x\}\,dx = A_0\, f\{a\} + A_1\, f\left\{\frac{a+b}{2}\right\} + A_2\, f\{b\} \tag{17.88}$$

as per Eqs. (17.82)–(17.84) again; said quadrature formula must be exact when f{x} takes the form of a polynomial of degree up to 2 – in much the same way Eq. (17.81) guaranteed a nil error for the trapezoidal rule in the case of any linear function. One such situation encompasses (the simplest) polynomials 1, x, and x² – for which Eq. (17.88) will look like

$$\int_a^b 1\,dx = A_0\cdot 1 + A_1\cdot 1 + A_2\cdot 1, \tag{17.89}$$

$$\int_a^b x\,dx = A_0\,a + A_1\,\frac{a+b}{2} + A_2\,b, \tag{17.90}$$

and

$$\int_a^b x^2\,dx = A_0\,a^2 + A_1\left(\frac{a+b}{2}\right)^2 + A_2\,b^2, \tag{17.91}$$

respectively, once more at the expense of Eqs. (17.82)–(17.84); one may reduce Eq. (17.89) to

$$A_0 + A_1 + A_2 = \int_a^b dx = x\Big|_a^b = b-a, \tag{17.92}$$

whereas Eq. (17.90) can be redone as

$$a\,A_0 + \frac{a+b}{2}\,A_1 + b\,A_2 = \int_a^b x\,dx = \frac{x^2}{2}\bigg|_a^b = \frac{b^2}{2}-\frac{a^2}{2} = \frac{1}{2}\left(b^2-a^2\right), \tag{17.93}$$


and Eq. (17.91) degenerates to

$$a^2\,A_0 + \left(\frac{a+b}{2}\right)^2 A_1 + b^2\,A_2 = \int_a^b x^2\,dx = \frac{x^3}{3}\bigg|_a^b = \frac{b^3}{3}-\frac{a^3}{3} = \frac{1}{3}\left(b^3-a^3\right) \tag{17.94}$$

– all following Eq. (11.160). Cramer’s rule, conveyed by Eq. (7.59), permits prompt solution of the set of Eqs. (17.92)–(17.94) in A0, A1, and A2 as unknowns; one indeed obtains

$$A_0 = \frac{\begin{vmatrix} b-a & 1 & 1 \\ \dfrac{b^2-a^2}{2} & \dfrac{a+b}{2} & b \\ \dfrac{b^3-a^3}{3} & \left(\dfrac{a+b}{2}\right)^2 & b^2 \end{vmatrix}}{\begin{vmatrix} 1 & 1 & 1 \\ a & \dfrac{a+b}{2} & b \\ a^2 & \left(\dfrac{a+b}{2}\right)^2 & b^2 \end{vmatrix}} \equiv \frac{D_0}{D} \tag{17.95}$$

pertaining to A0, complemented by

$$A_1 = \frac{\begin{vmatrix} 1 & b-a & 1 \\ a & \dfrac{b^2-a^2}{2} & b \\ a^2 & \dfrac{b^3-a^3}{3} & b^2 \end{vmatrix}}{\begin{vmatrix} 1 & 1 & 1 \\ a & \dfrac{a+b}{2} & b \\ a^2 & \left(\dfrac{a+b}{2}\right)^2 & b^2 \end{vmatrix}} \equiv \frac{D_1}{D} \tag{17.96}$$

encompassing A1, and likewise

$$A_2 = \frac{\begin{vmatrix} 1 & 1 & b-a \\ a & \dfrac{a+b}{2} & \dfrac{b^2-a^2}{2} \\ a^2 & \left(\dfrac{a+b}{2}\right)^2 & \dfrac{b^3-a^3}{3} \end{vmatrix}}{\begin{vmatrix} 1 & 1 & 1 \\ a & \dfrac{a+b}{2} & b \\ a^2 & \left(\dfrac{a+b}{2}\right)^2 & b^2 \end{vmatrix}} \equiv \frac{D_2}{D} \tag{17.97}$$

regarding A2.
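Before expanding the determinants by hand, the linear system of Eqs. (17.92)–(17.94) can be solved directly; the sketch below (helper name `solve3` is mine) uses exact Gauss–Jordan elimination on the concrete interval [a,b] = [0,1], for which the weights should come out as 1/6, 2/3, and 1/6:

```python
from fractions import Fraction

def solve3(M, v):
    """Gauss-Jordan elimination for a 3x3 system, kept exact with rational arithmetic."""
    A = [[Fraction(x) for x in row] + [Fraction(rhs)] for row, rhs in zip(M, v)]
    for col in range(3):
        piv = next(r for r in range(col, 3) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [x / A[col][col] for x in A[col]]      # normalize pivot row
        for r in range(3):
            if r != col:
                factor = A[r][col]
                A[r] = [x - factor * y for x, y in zip(A[r], A[col])]
    return [A[r][3] for r in range(3)]

a, b = Fraction(0), Fraction(1)
m = (a + b) / 2
M = [[1, 1, 1], [a, m, b], [a * a, m * m, b * b]]        # Eqs. (17.92)-(17.94)
v = [b - a, (b * b - a * a) / 2, (b**3 - a**3) / 3]
print(solve3(M, v))
```

The same three values emerge from the determinant route pursued next, in Eqs. (17.106), (17.114), and (17.122).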


The common determinant, D, in the denominator of Eqs. (17.95)–(17.97) is analogous to the determinant of the matrix in Eq. (6.262) – so Eqs. (6.51) and (6.273) may be taken advantage of to immediately write

$$D = \begin{vmatrix} 1 & 1 & 1 \\ a & \dfrac{a+b}{2} & b \\ a^2 & \left(\dfrac{a+b}{2}\right)^2 & b^2 \end{vmatrix} = \left.\prod_{i=1}^{2}\prod_{j=i+1}^{3}\left(\lambda_j-\lambda_i\right)\right|_{\lambda_1=a,\;\lambda_2=\frac{a+b}{2},\;\lambda_3=b}, \tag{17.98}$$

or else

$$D = \left(\frac{a+b}{2}-a\right)(b-a)\left(b-\frac{a+b}{2}\right) \tag{17.99}$$

after expansion of the extended products; ½ may then be factored out twice, and terms or factors alike condensed afterward to get

$$D = \frac{1}{2}(a+b-2a)\,(b-a)\,\frac{1}{2}(2b-a-b) = \frac{1}{4}(b-a)(b-a)(b-a) = \frac{(b-a)^3}{4} \tag{17.100}$$

On the other hand, Eq. (6.71) supports

$$D_0 = (b-a)\begin{vmatrix} 1 & 1 & 1 \\ \dfrac{b+a}{2} & \dfrac{b+a}{2} & b \\ \dfrac{b^2+ba+a^2}{3} & \dfrac{b^2+2ba+a^2}{4} & b^2 \end{vmatrix} \tag{17.101}$$

after factoring b − a out of the first column (i.e. dividing its elements by b − a), and expanding the square of a binomial; whereas Jacobi’s operation may be performed along columns to get

$$D_0 = (b-a)\begin{vmatrix} 1 & 0 & 0 \\ \dfrac{b+a}{2} & 0 & \dfrac{b-a}{2} \\ \dfrac{b^2+ba+a^2}{3} & \dfrac{-b^2+2ba-a^2}{12} & \dfrac{2b^2-ba-a^2}{3} \end{vmatrix} \tag{17.102}$$

via subtraction of the first column from the second, and then from the third one – as allowed by Eq. (6.99). Application of Laplace’s theorem, as per Eq. (6.41), to the first row of the determinant simplifies Eq. (17.102) to

$$D_0 = (b-a)\,(-1)^{1+1}\begin{vmatrix} 0 & \dfrac{b-a}{2} \\ \dfrac{-b^2+2ba-a^2}{12} & \dfrac{2b^2-ba-a^2}{3} \end{vmatrix}; \tag{17.103}$$

the definition of second-order determinant labeled as Eq. (1.10) may, in turn, be invoked to write

$$D_0 = -(b-a)\,\frac{b-a}{2}\,\frac{-b^2+2ba-a^2}{12} = (b-a)\,\frac{b-a}{2}\,\frac{(b-a)^2}{12} = \frac{(b-a)^4}{24}, \tag{17.104}$$


along with Eq. (2.238) for expansion of the square of a difference – complemented with pooling of factors alike. Insertion of Eqs. (17.100) and (17.104) transforms Eq. (17.95) to

$$A_0 = \frac{\dfrac{(b-a)^4}{24}}{\dfrac{(b-a)^3}{4}}, \tag{17.105}$$

or else

$$A_0 = \frac{b-a}{6} \tag{17.106}$$

after (b − a)³/4 is dropped from both numerator and denominator. With regard to the determinant in the numerator of Eq. (17.96), one has it that

$$D_1 = (b-a)\begin{vmatrix} 1 & 1 & 1 \\ a & \dfrac{b+a}{2} & b \\ a^2 & \dfrac{b^2+ba+a^2}{3} & b^2 \end{vmatrix} \tag{17.107}$$

upon factoring b − a out in the second column; the second column may, in turn, be replaced by its sum with the negative of the first column, and similarly with regard to the third column – to eventually get

$$D_1 = (b-a)\begin{vmatrix} 1 & 0 & 0 \\ a & \dfrac{b-a}{2} & b-a \\ a^2 & \dfrac{b^2+ba-2a^2}{3} & b^2-a^2 \end{vmatrix}, \tag{17.108}$$

where b − a can be factored out in the last column to generate

$$D_1 = (b-a)^2\begin{vmatrix} 1 & 0 & 0 \\ a & \dfrac{b-a}{2} & 1 \\ a^2 & \dfrac{b^2+ba-2a^2}{3} & b+a \end{vmatrix} \tag{17.109}$$

Laplace’s theorem may now support expansion of the determinant in Eq. (17.109) along its first row, i.e.

$$D_1 = (b-a)^2\,(-1)^{1+1}\begin{vmatrix} \dfrac{b-a}{2} & 1 \\ \dfrac{b^2+ba-2a^2}{3} & b+a \end{vmatrix}, \tag{17.110}$$

while a second application of said theorem along the first row unfolds

$$D_1 = (b-a)^2\left(\frac{b-a}{2}\,(b+a) - \frac{b^2+ba-2a^2}{3}\right) = (b-a)^2\left(\frac{b^2-a^2}{2} - \frac{b^2+ba-2a^2}{3}\right) \tag{17.111}$$


– with elimination of the inner parenthesis along the way; condensation of terms alike, together with expansion of the square of a binomial, transform Eq. (17.111) to

$$D_1 = (b-a)^2\,\frac{3b^2-3a^2-2b^2-2ba+4a^2}{6} = (b-a)^2\,\frac{b^2-2ba+a^2}{6} = (b-a)^2\,\frac{(b-a)^2}{6} = \frac{(b-a)^4}{6} \tag{17.112}$$

Combination of Eqs. (17.96), (17.100), and (17.112) eventually generates

$$A_1 = \frac{\dfrac{(b-a)^4}{6}}{\dfrac{(b-a)^3}{4}}, \tag{17.113}$$

which breaks down to just

$$A_1 = \frac{2}{3}(b-a) \tag{17.114}$$

after cancellation of common factors between numerator and denominator. Finally, one may expand the determinant appearing in the numerator of Eq. (17.97) as

$$D_2 = (b-a)\begin{vmatrix} 1 & 1 & 1 \\ a & \dfrac{b+a}{2} & \dfrac{b+a}{2} \\ a^2 & \dfrac{b^2+2ba+a^2}{4} & \dfrac{b^2+ba+a^2}{3} \end{vmatrix} \tag{17.115}$$

– where b − a was factored out of the third column, while the square in the third row and second column was expanded as per Newton’s formula; the second and third elements of the first row may now be made nil via algebraic combination of the second and third columns with the negative of the first column, according to

$$D_2 = (b-a)\begin{vmatrix} 1 & 0 & 0 \\ a & \dfrac{b-a}{2} & \dfrac{b-a}{2} \\ a^2 & \dfrac{b^2+2ba-3a^2}{4} & \dfrac{b^2+ba-2a^2}{3} \end{vmatrix} \tag{17.116}$$

Laplace’s theorem, applied along the first row of the determinant in Eq. (17.116), unfolds

$$D_2 = (b-a)\,(-1)^{1+1}\begin{vmatrix} \dfrac{b-a}{2} & \dfrac{b-a}{2} \\ \dfrac{b^2+2ba-3a^2}{4} & \dfrac{b^2+ba-2a^2}{3} \end{vmatrix}, \tag{17.117}$$

whereas (b − a)/2 can be factored out in the first row as

$$D_2 = (b-a)\,\frac{b-a}{2}\begin{vmatrix} 1 & 1 \\ \dfrac{b^2+2ba-3a^2}{4} & \dfrac{b^2+ba-2a^2}{3} \end{vmatrix}; \tag{17.118}$$


a further application of Laplace’s theorem, now along the first row, produces

$$D_2 = \frac{(b-a)^2}{2}\left(\frac{b^2+ba-2a^2}{3} - \frac{b^2+2ba-3a^2}{4}\right) = \frac{(b-a)^2}{2}\,\frac{4b^2+4ba-8a^2-3b^2-6ba+9a^2}{12} = \frac{(b-a)^2}{2}\,\frac{b^2-2ba+a^2}{12}, \tag{17.119}$$

where elimination of parenthesis and lumping of similar terms took place. Since b² − 2ba + a² coincides with the square of b − a, Eq. (17.119) can be simplified to

$$D_2 = \frac{(b-a)^2}{2}\,\frac{(b-a)^2}{12} = \frac{(b-a)^4}{24}; \tag{17.120}$$

constant A2 may thus be obtained from Eq. (17.97), upon insertion of Eqs. (17.100) and (17.120), viz.

$$A_2 = \frac{\dfrac{(b-a)^4}{24}}{\dfrac{(b-a)^3}{4}}, \tag{17.121}$$

which degenerates to

$$A_2 = \frac{b-a}{6} \tag{17.122}$$

after both numerator and denominator have been multiplied by 4/(b − a)³. In view of Eqs. (17.106), (17.114), and (17.122), one obtains

$$\int_a^b f\{x\}\,dx \approx \frac{b-a}{6}\, f\{a\} + \frac{2}{3}(b-a)\, f\left\{\frac{a+b}{2}\right\} + \frac{b-a}{6}\, f\{b\} \tag{17.123}$$

based on Eq. (17.88), or else

$$\int_a^b f\{x\}\,dx \approx \frac{b-a}{6}\left( f\{a\} + 4\, f\left\{\frac{a+b}{2}\right\} + f\{b\} \right), \tag{17.124}$$

once (b − a)/6 has been factored out. Equation (17.124) has classically been known as Simpson’s rule, and is graphically sketched in Fig. 17.3b. Note the existence of three intercepts of the second-degree interpolating polynomial, P2{x}, with the curve representing f {x}, at the extremes of the interval of integration and at its midpoint – besides the notable closeness between P2{x} and f {x} elsewhere (unlike P1{x} and f {x} in Fig. 17.3a), obtained at the expense of the curvature arising from the quadratic nature of the interpolation. Although this method is credited to Thomas Simpson, a British mathematician who lived in the eighteenth century, it appears that Johannes Kepler, a German mathematician and astronomer, used similar formulae almost one century before.

Before estimating the error accompanying Eq. (17.124), it is convenient to derive a general expression for the error of an interpolating polynomial, Pn{x}, of degree not above n, constructed with the aid of Lagrange’s polynomials – able to interpolate function f{x} at n + 1 distinct points, x0, x1, …, xn, within interval [a,b]; for every x ∈ [a, b], there is indeed a point x = ζ ∈ [a, b] such that

$$f\{x\} - P_n\{x\} = \frac{1}{(n+1)!}\left.\frac{d^{n+1} f\{x\}}{dx^{n+1}}\right|_{\zeta}\prod_{i=0}^{n}\left(x-x_i\right);\quad n = 1, 2, \ldots \tag{17.125}$$
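Eq. (17.125) is easy to probe numerically in its simplest nontrivial case, n = 1: for f{x} = x² the second derivative equals 2 everywhere, so ζ drops out and f{x} − P1{x} must equal (x − x0)(x − x1) exactly (the helper name `p1` is mine):

```python
def p1(f, x0, x1, x):
    """Linear interpolant of f through (x0, f(x0)) and (x1, f(x1)), cf. Eq. (17.71)."""
    return f(x0) + (f(x1) - f(x0)) / (x1 - x0) * (x - x0)

f = lambda x: x * x          # d2f/dx2 = 2 everywhere, so zeta is irrelevant
x0, x1 = 1.0, 3.0
for x in (0.0, 1.7, 2.5, 4.0):
    lhs = f(x) - p1(f, x0, x1, x)
    rhs = (1.0 / 2.0) * 2.0 * (x - x0) * (x - x1)   # (1/(n+1)!) f''(zeta) prod(x - xi)
    assert abs(lhs - rhs) < 1e-12
print("Eq. (17.125) holds exactly for a parabola with n = 1")
```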


Although said formula for the deviation between f{x} and Pn{x} as approximant is somewhat reminiscent of the error term associated with Taylor’s expansion truncated after its nth degree term, see Eq. (12.57), one actually finds that Eq. (17.125) has little to do with said expansion. If x = xi (i = 0, 1, …, n), i.e. one of the nodes of interpolation, then Eq. (17.125) must hold; note that both sides will vanish identically, as per the definition of interpolating polynomial (i.e. f {xi} = Pn{xi}), coupled to the nil value of the product when a nil factor, contained in $\prod_{i=0}^{n}(x-x_i)$, is present. For x ≠ xi, one may for convenience define two auxiliary functions of a generic independent variable u, according to

$$W\{u\} \equiv \prod_{i=0}^{n}\left(u-x_i\right) \tag{17.126}$$

and

$$Y\{u\} \equiv f\{u\} - P_n\{u\} - \frac{f\{x\}-P_n\{x\}}{W\{x\}}\,W\{u\}; \tag{17.127}$$

inspection of Eq. (17.127), complemented with Eq. (17.126) and the definition of interpolating polynomial itself as per Eq. (17.125), indicates that Y{u} vanishes at n + 2 distinct points, i.e. at u = x0, x1, …, xn, x. According to a corollary of Rolle’s theorem, see Eq. (10.269) for f {a} = f {b} = 0, if a differentiable function f{x} has n distinct zeros, then its derivative exhibits, at least, n − 1 zeros – corresponding to the points where the graph of f{x} turns around to recross the x-axis. Therefore, dY{u}/du possesses, at least, n + 1 distinct zeros, d²Y{u}/du² has, at least, n distinct zeros, and so on – until concluding that dⁿ⁺¹Y{u}/duⁿ⁺¹ has, at least, one zero in [a,b], say, u = ζ. Such an ultimate derivative may be expressed at the expense of Eq. (17.127) as

$$\frac{d^{n+1} Y\{u\}}{du^{n+1}} = \frac{d^{n+1}}{du^{n+1}}\left( f\{u\} - P_n\{u\} - \frac{f\{x\}-P_n\{x\}}{W\{x\}}\,W\{u\} \right) = \frac{d^{n+1} f\{u\}}{du^{n+1}} - \frac{d^{n+1} P_n\{u\}}{du^{n+1}} - \frac{f\{x\}-P_n\{x\}}{W\{x\}}\,\frac{d^{n+1} W\{u\}}{du^{n+1}} \tag{17.128}$$

– where Eq. (10.122) was duly taken into account; insertion of Eq. (17.126) gives then rise to

$$\frac{d^{n+1} Y\{u\}}{du^{n+1}} = \frac{d^{n+1} f\{u\}}{du^{n+1}} - \frac{d^{n+1} P_n\{u\}}{du^{n+1}} - \frac{f\{x\}-P_n\{x\}}{W\{x\}}\,\frac{d^{n+1}}{du^{n+1}}\prod_{i=0}^{n}\left(u-x_i\right) = \frac{d^{n+1} f\{u\}}{du^{n+1}} - \frac{d^{n+1} P_n\{u\}}{du^{n+1}} - \frac{f\{x\}-P_n\{x\}}{W\{x\}}\,\frac{d^{n+1}}{du^{n+1}}\left[(u-x_0)(u-x_1)\cdots(u-x_n)\right], \tag{17.129}$$

after making explicit the factors under the extended product. It should be emphasized that the product of factors u − x0, u − x1, …, u − xn in Eq. (17.129) has a single term (and bearing a unit coefficient) in uⁿ⁺¹, so all other terms in uⁿ, uⁿ⁻¹, …, u⁰ exhibit nil (n + 1)th-order derivatives – as per Eq. (12.78), after recalling Eq. (12.76). Therefore, one may redo Eq. (17.129) to

$$\frac{d^{n+1} Y\{u\}}{du^{n+1}} = \frac{d^{n+1} f\{u\}}{du^{n+1}} - \frac{d^{n+1} P_n\{u\}}{du^{n+1}} - \frac{f\{x\}-P_n\{x\}}{W\{x\}}\,(n+1)!; \tag{17.130}$$


for u = ζ, Eq. (17.130) reduces to

$$0 = \left.\frac{d^{n+1} Y\{u\}}{du^{n+1}}\right|_{\zeta} = \left.\frac{d^{n+1} f\{u\}}{du^{n+1}}\right|_{\zeta} - \left.\frac{d^{n+1} P_n\{u\}}{du^{n+1}}\right|_{\zeta} - \frac{f\{x\}-P_n\{x\}}{W\{x\}}\,(n+1)! \tag{17.131}$$

because ζ is (one of ) the zero(s) of dⁿ⁺¹Y{u}/duⁿ⁺¹, as stressed above. Since Pn{x} has (at most) degree n per hypothesis, its (n + 1)th-derivative is necessarily nil, so Eq. (17.131) reduces further to

$$\left.\frac{d^{n+1} f\{u\}}{du^{n+1}}\right|_{\zeta} - \frac{f\{x\}-P_n\{x\}}{W\{x\}}\,(n+1)! = 0, \tag{17.132}$$

which may be rephrased as

$$f\{x\} - P_n\{x\} = \frac{W\{x\}}{(n+1)!}\left.\frac{d^{n+1} f\{u\}}{du^{n+1}}\right|_{\zeta} \tag{17.133}$$

following isolation of f{x} − Pn{x}; insertion of Eq. (17.126) finally retrieves Eq. (17.125), thus proving its validity at large. Note that Eq. (17.125) degenerates to Eq. (7.210) when n = 1 – once x0 and x1 have been replaced by xi and b0, respectively. If f {x} coincides with a second-order polynomial, then the error of replacing f {x} by P2{x} as kernel, for the sake of integration over interval [a,b], will look like

$$\varepsilon \equiv \int_a^b \left(f\{x\}-P_2\{x\}\right)dx = \int_a^b \frac{1}{(2+1)!}\left.\frac{d^{2+1} f\{x\}}{dx^{2+1}}\right|_{\zeta}\prod_{i=0}^{2}\left(x-x_i\right)dx \tag{17.134}$$

with the input of Eq. (17.125), and using Eq. (17.36) as template; simplification becomes possible to

$$\varepsilon = \frac{1}{3!}\left.\frac{d^3 f\{x\}}{dx^3}\right|_{\xi}\int_a^b W\{x\}\,dx \tag{17.135}$$

after taking constant factors off the kernel, recalling Eqs. (11.140) and (17.126) – with ξ (distinct from ζ) being located somewhere between a and b, as inspired by Eq. (17.37). The equally spaced quadrature points x0, x1, and x2 – described by Eqs. (17.82)–(17.84) as per Newton and Cotes’ defining condition, then lead to

$$\int_a^b W\{x\}\,dx = \int_a^b (x-a)\left(x-\frac{a+b}{2}\right)(x-b)\,dx \tag{17.136}$$

pertaining to the integral term in Eq. (17.135), constructed with the aid of Eq. (17.126); to facilitate calculation, it is convenient to first rewrite that integral as

$$\int_a^b W\{x\}\,dx = \frac{1}{2}\int_a^b (x-a)(2x-a-b)(x-b)\,dx, \tag{17.137}$$

after taking ½ off the kernel. One may then set

$$\alpha \equiv 2x-a-b \tag{17.138}$$

as auxiliary variable, which implies

$$x = \frac{\alpha+a+b}{2}, \tag{17.139}$$


$$\alpha\big|_{x=a} = 2a-a-b = a-b, \tag{17.140}$$

$$\alpha\big|_{x=b} = 2b-a-b = b-a, \tag{17.141}$$

and

$$d\alpha = 2\,dx \tag{17.142}$$

– upon straightforward algebraic calculation, and concomitant application of Eq. (10.1); Eqs. (17.138)–(17.142) support transformation of Eq. (17.137) to

$$\int_a^b W\{x\}\,dx = \frac{1}{2}\int_{a-b}^{b-a}\left(\frac{\alpha+a+b}{2}-a\right)\alpha\left(\frac{\alpha+a+b}{2}-b\right)\frac{d\alpha}{2} \tag{17.143}$$

– which becomes

$$\int_a^b W\{x\}\,dx = \frac{1}{2}\left(\frac{1}{2}\right)^3\int_{a-b}^{b-a}\left(\alpha+a+b-2a\right)\alpha\left(\alpha+a+b-2b\right)d\alpha = \frac{1}{16}\int_{a-b}^{b-a}\left(\alpha+b-a\right)\alpha\left(\alpha+a-b\right)d\alpha \tag{17.144}$$

after factoring ½ out three times, and condensing terms and factors alike afterward. Constant b − a may be more easily handled as

$$\kappa \equiv b-a, \tag{17.145}$$

since such an auxiliary variable κ permits simplification of Eq. (17.144) to

$$\int_a^b W\{x\}\,dx = \frac{1}{16}\int_{-\kappa}^{\kappa}(\alpha+\kappa)\,\alpha\,(\alpha-\kappa)\,d\alpha = \frac{1}{32}\int_{-\kappa}^{\kappa} 2\alpha\left(\alpha^2-\kappa^2\right)d\alpha \tag{17.146}$$

– where multiplication and division by 2 was meanwhile effected, as well as lumping of α + κ with its conjugate. A second change of variable to

$$\beta \equiv \alpha^2-\kappa^2 \tag{17.147}$$

is now in order – thus supporting

$$\beta\big|_{\alpha=-\kappa} = (-\kappa)^2-\kappa^2 = \kappa^2-\kappa^2 = 0, \tag{17.148}$$

$$\beta\big|_{\alpha=\kappa} = \kappa^2-\kappa^2 = 0, \tag{17.149}$$

and

$$d\beta = 2\alpha\,d\alpha, \tag{17.150}$$

and allowing conversion of Eq. (17.146) to

$$\int_a^b W\{x\}\,dx = \frac{1}{32}\int_0^0 \beta\,d\beta; \tag{17.151}$$


in view of Eq. (11.133), one merely obtains

$$\int_a^b W\{x\}\,dx = 0 \tag{17.152}$$

from Eq. (17.151). Inspection of Eqs. (17.135) and (17.152) indicates that a second-degree interpolating polynomial, as approximant to the original function given itself by a second-order polynomial, permits exact calculation of the integral of said function – since ε = 0 in Eq. (17.134), consistent with the postulates underlying Eqs. (17.89) and (17.91); when this happens, one can in general add another interpolation point without changing the integral of the interpolant. In fact, if

$$\int_a^b f\{x\}\,dx = \int_a^b P_n\{x\}\,dx \tag{17.153}$$

is taken as departing postulate, where Pn{x} represents an interpolation polynomial based on n + 1 interpolation points, x0, x1, …, xn, then one may design an (n + 1)th-degree polynomial as

$$P_{n+1}\{x\} \equiv P_n\{x\} + \frac{f\{x_{n+1}\}-P_n\{x_{n+1}\}}{W\{x_{n+1}\}}\,W\{x\} \tag{17.154}$$

at the expense of an extra interpolation point xn+1. Note that Pn+1{xn+1} coincides with f {xn+1}, based on Eq. (17.154) after setting x = xn+1 (as expected) – while W{x} would itself be an (n+1)th-degree polynomial in agreement with Eq. (17.126); moreover, Pn+1{xi} = Pn{xi} + ( f {xn+1} − Pn{xn+1})·W{xi}/W{xn+1} = Pn{xi}, for all i = 0, 1, …, n, since W{xi} = 0 due again to the definition of interpolation points. One may therefore rewrite Eq. (17.153) as

$$\int_a^b f\{x\}\,dx = \int_a^b P_n\{x\}\,dx + \frac{f\{x_{n+1}\}-P_n\{x_{n+1}\}}{W\{x_{n+1}\}}\cdot 0 = \int_a^b P_n\{x\}\,dx + \frac{f\{x_{n+1}\}-P_n\{x_{n+1}\}}{W\{x_{n+1}\}}\int_a^b W\{x\}\,dx, \tag{17.155}$$

in view of Eq. (17.152); further transformation of Eq. (17.155) is then possible to

$$\int_a^b f\{x\}\,dx = \int_a^b \left( P_n\{x\} + \frac{f\{x_{n+1}\}-P_n\{x_{n+1}\}}{W\{x_{n+1}\}}\,W\{x\} \right)dx, \tag{17.156}$$

at the expense of Eq. (11.102), or else

$$\int_a^b f\{x\}\,dx = \int_a^b P_{n+1}\{x\}\,dx \tag{17.157}$$

in view of Eq. (17.154). Inspection of Eq. (17.157) vis-à-vis Eq. (17.153) confirms that the original quadrature does not change by adding an extra arbitrary interpolation point, as initially claimed. For instance, if an extra point – equal to (a + b)/2 (for convenience), is considered, then one may replace Eq. (17.136) by

$$\int_a^b W\{x\}\,dx = \int_a^b (x-a)\left(x-\frac{a+b}{2}\right)^2 (x-b)\,dx \tag{17.158}$$


that is consistent with Eq. (17.126), thus permitting

$$\varepsilon = \frac{1}{4!}\left.\frac{d^4 f\{x\}}{dx^4}\right|_{\xi}\int_a^b (x-a)\left(x-\frac{a+b}{2}\right)^2 (x-b)\,dx \tag{17.159}$$

to effectively replace Eq. (17.135); note that a third-degree interpolating polynomial (i.e. n = 3) is now at stake. To calculate the integral in Eq. (17.158), one should first factor ¼ out as

$$\int_a^b W\{x\}\,dx = \frac{1}{4}\int_a^b (x-a)(2x-a-b)^2 (x-b)\,dx, \tag{17.160}$$

and then retrieve Eqs. (17.138)–(17.142) to get

$$\int_a^b W\{x\}\,dx = \frac{1}{4}\int_{a-b}^{b-a}\left(\frac{\alpha+a+b}{2}-a\right)\alpha^2\left(\frac{\alpha+a+b}{2}-b\right)\frac{d\alpha}{2} \tag{17.161}$$

containing auxiliary variable α; this is equivalent to writing

$$\int_a^b W\{x\}\,dx = \frac{1}{4}\left(\frac{1}{2}\right)^3\int_{a-b}^{b-a}\left(\alpha+a+b-2a\right)\alpha^2\left(\alpha+a+b-2b\right)d\alpha = \frac{1}{32}\int_{a-b}^{b-a}\left(\alpha+b-a\right)\alpha^2\left(\alpha+a-b\right)d\alpha \tag{17.162}$$

once ½ is factored out three times, and similar terms in parentheses are pooled together. Equation (17.145) may now be invoked to simplify notation in Eq. (17.162) to

$$\int_a^b W\{x\}\,dx = \frac{1}{32}\int_{-\kappa}^{\kappa}(\alpha+\kappa)\,\alpha^2\,(\alpha-\kappa)\,d\alpha = \frac{1}{32}\int_{-\kappa}^{\kappa}\alpha^2\left(\alpha^2-\kappa^2\right)d\alpha = \frac{1}{32}\int_{-\kappa}^{\kappa}\left(\alpha^4-\kappa^2\alpha^2\right)d\alpha, \tag{17.163}$$

along with application of the distributive property; application of Eq. (11.160) supports transformation to

$$\int_a^b W\{x\}\,dx = \frac{1}{32}\left(\frac{\alpha^5}{5}-\kappa^2\frac{\alpha^3}{3}\right)\bigg|_{-\kappa}^{\kappa} = \frac{1}{32}\left(\left(\frac{\kappa^5}{5}-\kappa^2\frac{\kappa^3}{3}\right) - \left(\frac{(-\kappa)^5}{5}-\kappa^2\frac{(-\kappa)^3}{3}\right)\right) = \frac{1}{32}\left(\frac{\kappa^5}{5}-\frac{\kappa^5}{3}+\frac{\kappa^5}{5}-\frac{\kappa^5}{3}\right) \tag{17.164}$$

Factoring out as appropriate, elimination of parenthesis, and condensation of terms alike convert Eq. (17.164) to

$$\int_a^b W\{x\}\,dx = \frac{1}{32}\left(\frac{2\kappa^5}{5}-\frac{2\kappa^5}{3}\right) = \frac{2\kappa^5}{32}\,\frac{3-5}{15} = -\frac{4}{15}\,\frac{\kappa^5}{32} = -\frac{4}{15}\left(\frac{\kappa}{2}\right)^5 = -\frac{4}{15}\left(\frac{b-a}{2}\right)^5 \tag{17.165}$$


– after taking Eq. (17.145) on board as well. Owing to Eqs. (17.126), (17.160), and (17.165), one may rephrase Eq. (17.159) as

$$\varepsilon = \frac{1}{4!}\left.\frac{d^4 f\{x\}}{dx^4}\right|_{\xi}\int_a^b W\{x\}\,dx = \frac{1}{4\cdot 3\cdot 2}\left.\frac{d^4 f\{x\}}{dx^4}\right|_{\xi}\left(-\frac{4}{15}\right)\left(\frac{b-a}{2}\right)^5 = -\frac{1}{3\cdot 2\cdot 15}\left.\frac{d^4 f\{x\}}{dx^4}\right|_{\xi}\left(\frac{b-a}{2}\right)^5 \tag{17.166}$$

due to cancellation of 4 between numerator and denominator; Eq. (17.166) will finally appear as

$$\varepsilon = -\frac{1}{90}\left(\frac{b-a}{2}\right)^5\left.\frac{d^4 f\{x\}}{dx^4}\right|_{\xi}, \tag{17.167}$$

upon coupling of 3, 2, and 15 as a single factor in the denominator. Inspection of Eq. (17.167) indicates that Simpson’s rule is exact for any function represented by a polynomial up to order 3 (since d⁴f{x}/dx⁴|ξ = 0 in such a case); the reason why Simpson’s rule has gained an extra order, as the error was in principle expected to be proportional to ((b − a)/2)⁴ d³f{x}/dx³|ξ, has to do with the fact that the points where the kernel is evaluated (or quadrature points) are symmetrically distributed throughout interval [a,b]. The reasoning behind derivation of Eq. (17.160) from Eq. (17.136), and thus of Eq. (17.159) from Eq. (17.135), may also be applied with n = 0 – in which case one would have gotten

$$\varepsilon \equiv \int_a^b \left(f\{x\}-P_0\{x\}\right)dx = \int_a^b \frac{1}{(0+1)!}\left.\frac{d^{0+1} f\{x\}}{dx^{0+1}}\right|_{\zeta}\prod_{i=0}^{0}\left(x-x_i\right)dx = \int_a^b \left.\frac{df\{x\}}{dx}\right|_{\zeta}\left(x-x_0\right)dx, \tag{17.168}$$

after resorting to Eqs. (17.125) and (17.134) as template – where Eqs. (11.140) and (17.126) support further transformation to

$$\varepsilon = \left.\frac{df\{x\}}{dx}\right|_{\xi}\int_a^b W\{x\}\,dx, \tag{17.169}$$

with a ≤ ξ ≤ b; if

$$x_0 \equiv \frac{a+b}{2} \tag{17.170}$$

is set as quadrature point, then the integral in Eq. (17.169) turns to

$$\int_a^b W\{x\}\,dx = \int_a^b \left(x-\frac{a+b}{2}\right)dx \tag{17.171}$$

Application of the fundamental theorem of integral calculus to Eq. (17.171) gives rise to

$$\int_a^b W\{x\}\,dx = \left(\frac{x^2}{2}-\frac{a+b}{2}\,x\right)\bigg|_a^b = \left(\frac{b^2}{2}-\frac{a+b}{2}\,b\right) - \left(\frac{a^2}{2}-\frac{a+b}{2}\,a\right) = \frac{1}{2}\left(b^2-a^2\right)-\frac{a+b}{2}(b-a), \tag{17.172}$$


together with factoring out of ½ or (a + b)/2 (as appropriate); if b − a is further factored out, then Eq. (17.172) yields

$$\int_a^b W\{x\}\,dx = \left(\frac{b+a}{2}-\frac{a+b}{2}\right)(b-a) = 0 \tag{17.173}$$

As before, an extra interpolation point may be added – so Eq. (17.173) indicates that a linear function will produce a nil error for the integral associated therewith, when a zeroth-order interpolant is employed. Therefore, the upper bound for the associated error will actually be proportional to (b − a)³ and d²f{x}/dx²|ξ; this is why the midpoint zeroth-degree method, described by Eq. (17.13), actually conveys a higher order error, as per Eq. (17.46), than expected for a typical zeroth-degree method – see Eqs. (17.11) and (17.12), with error given by Eq. (17.55) being proportional to (b − a)² and df{x}/dx|ξ.
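The midpoint rule referred to as Eq. (17.13) — assumed here to read (b − a) f{(a + b)/2}, consistent with Eq. (17.174) below — indeed leaves no error on a linear integrand, as Eq. (17.173) implies (a minimal sketch, function name mine):

```python
def midpoint(f, a, b):
    # zeroth-degree rule with the midpoint as single quadrature point
    return (b - a) * f((a + b) / 2)

g = lambda x: 3 * x + 2              # linear: exact integral over [0, 2] is 10
print(midpoint(g, 0.0, 2.0))         # nil error for any straight line
```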

Finally, it should be emphasized that Simpson’s rule can be derived via an alternative route – directly related to error minimization of a linear combination of simpler rules; in other words, one should look for

$$\int_a^b f\{\zeta\}\,d\zeta = B_0\,(b-a)\, f\left\{\frac{a+b}{2}\right\} + B_1\,\frac{b-a}{2}\left(f\{a\}+f\{b\}\right) \tag{17.174}$$

obtained from Eqs. (17.13) and (17.68), with weighting (constant) factors B0 and B1 to be determined such that the associated (estimated) error is vanishingly small, i.e.

$$\varepsilon = B_0\,\frac{(b-a)^3}{24}\left.\frac{d^2 f\{x\}}{dx^2}\right|_{\xi} + B_1\left(-\frac{(b-a)^3}{12}\left.\frac{d^2 f\{x\}}{dx^2}\right|_{\xi}\right) \approx 0, \tag{17.175}$$

stemming from Eqs. (17.46) and (17.81). After factoring out (b − a)³ and d²f{x}/dx²|ξ, Eq. (17.175) generates

$$\left(\frac{B_0}{24}-\frac{B_1}{12}\right)(b-a)^3\left.\frac{d^2 f\{x\}}{dx^2}\right|_{\xi} = 0 \tag{17.176}$$

ξ

that readily implies B0 B1 − =0 24 12 – since b d2 f x dx2 ξ

17 177

a (otherwise an integration interval with nil amplitude would result) and

0 (otherwise both simpler quadrature rules would be exact only for linear

functions); Eq. (17.177) breaks down to B0 = 2B1

17 178

After realizing that the weight factors for the values of the function at the quadrature points are to add up to unity, i.e. B0 + B1 = 1, one concludes that 1 B1 = 3

17 179

17 180


from insertion of Eq. (17.178) in Eq. (17.179), followed by isolation of B1 – and, consequently,

$$B_0 = \frac{2}{3} \tag{17.181}$$

once Eq. (17.180) is inserted back in Eq. (17.178). Combination with Eqs. (17.180) and (17.181) transforms Eq. (17.174) to

$$\int_a^b f\{\zeta\}\,d\zeta = \frac{2}{3}(b-a)\, f\left\{\frac{a+b}{2}\right\} + \frac{1}{3}\,\frac{b-a}{2}\left(f\{a\}+f\{b\}\right) = (b-a)\left(\frac{2}{3}\, f\left\{\frac{a+b}{2}\right\} + \frac{1}{6}\left(f\{a\}+f\{b\}\right)\right) = (b-a)\left(\frac{1}{6}\, f\{a\} + \frac{4}{6}\, f\left\{\frac{a+b}{2}\right\} + \frac{1}{6}\, f\{b\}\right), \tag{17.182}$$

where b − a was factored out to advantage and the distributive property was recalled; a final factoring out of 1/6 will retrieve Eq. (17.124). One therefore concludes that Simpson’s rule – based on approximation of the original function by a second-order interpolation polynomial, may be seen as a linear combination of midpoint and trapezoidal rules, based on zeroth- and first-order interpolation polynomials, respectively; hence, it is obviously expected to bring about a much smaller error.

17.1.2.3 Higher Order Interpolation

where b − a was factored out to advantage and the distributive property was recalled; a final factoring out of 1/6 will retrieve Eq. (17.124). One therefore concludes that Simpson’s rule – based on approximation of the original function by a second-order interpolation polynomial, may be seen as a linear combination of midpoint and trapezoidal rules, based on zeroth- and first-order interpolation polynomials, respectively; hence, it is obviously expected to bring about a much smaller error. 17.1.2.3 Higher Order Interpolation

Following approaches similar to those leading to the trapezoidal and Simpson’s rules, it is possible to derive formulae for higher order quadrature approaches – typically of the Newton and Cotes’ type; the underlying expressions read b

f x dx ≈

a

b−a n j Hn, j + 1 f a + b −a αn j = 0 n

17 183

in general, with integer coefficients, αn and Hn,j, satisfying H n, j −1 n− j = αn j n − j n

An, j

η η− 1 η −2 … η −n dη; j = 0,1, …,n η −j 0 n

17 184
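Eq. (17.184) lends itself to exact evaluation, since the kernel is a polynomial in η; the sketch below (the helper name `newton_cotes_coefficient` is mine) expands the product over all factors except η − j, integrates it over [0, n] with rational arithmetic, and applies the prefactor — reproducing the trapezoidal and Simpson weights computed next:

```python
from fractions import Fraction
from math import factorial

def newton_cotes_coefficient(n, j):
    """A_{n,j} of Eq. (17.184), as an exact fraction of (b - a)."""
    # expand prod_{k = 0..n, k != j} (eta - k), lowest-degree coefficient first
    poly = [Fraction(1)]
    for k in range(n + 1):
        if k == j:
            continue
        new = [Fraction(0)] * (len(poly) + 1)
        for p, c in enumerate(poly):
            new[p] += -k * c
            new[p + 1] += c
        poly = new
    integral = sum(c * Fraction(n)**(p + 1) / (p + 1) for p, c in enumerate(poly))
    return Fraction((-1)**(n - j), factorial(j) * factorial(n - j) * n) * integral

print([newton_cotes_coefficient(1, j) for j in (0, 1)])     # trapezoidal: 1/2, 1/2
print([newton_cotes_coefficient(2, j) for j in (0, 1, 2)])  # Simpson: 1/6, 2/3, 1/6
```

By Eq. (17.190), the coefficients for any n sum to unity — a convenient sanity check on the routine.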

For instance, the coefficients of the trapezoidal rule, corresponding to n = 1, become accessible as

$$A_{1,0} = \frac{(-1)^{1-0}}{0!\,(1-0)!\,1}\int_0^1 \frac{\eta(\eta-1)}{\eta-0}\,d\eta = -\int_0^1 (\eta-1)\,d\eta = -\left(\frac{\eta^2}{2}-\eta\right)\bigg|_0^1 = -\left(\left(\frac{1^2}{2}-1\right)-\left(\frac{0^2}{2}-0\right)\right) = -\left(-\frac{1}{2}\right) = \frac{1}{2} \tag{17.185}$$

and

$$A_{1,1} = \frac{(-1)^{1-1}}{1!\,(1-1)!\,1}\int_0^1 \frac{\eta(\eta-1)}{\eta-1}\,d\eta = \int_0^1 \eta\,d\eta = \frac{\eta^2}{2}\bigg|_0^1 = \frac{1^2}{2}-\frac{0^2}{2} = \frac{1}{2}, \tag{17.186}$$

for j = 0 and j = 1, respectively – consistent with Eq. (17.68), as expected, and readily obtained with the aid of the fundamental theorem of integral calculus; while


a + 0(b − a)/1 = a and a + 1(b − a)/1 = a + b − a = b as per Eq. (17.183) serve as interpolation points. By the same token, the coefficients of Simpson’s rule, corresponding to n = 2, may be calculated as

$$A_{2,0} = \frac{(-1)^{2-0}}{0!\,(2-0)!\,2}\int_0^2 \frac{\eta(\eta-1)(\eta-2)}{\eta-0}\,d\eta = \frac{1}{4}\int_0^2 (\eta-1)(\eta-2)\,d\eta = \frac{1}{4}\int_0^2 \left(\eta^2-3\eta+2\right)d\eta = \frac{1}{4}\left(\frac{\eta^3}{3}-3\frac{\eta^2}{2}+2\eta\right)\bigg|_0^2 = \frac{1}{4}\left(\frac{2^3}{3}-3\frac{2^2}{2}+2\cdot 2\right) = \frac{1}{4}\,\frac{16-36+24}{6} = \frac{1}{4}\,\frac{4}{6} = \frac{1}{6} \tag{17.187}$$

for the first coefficient (i.e. j = 0),

$$A_{2,1} = \frac{(-1)^{2-1}}{1!\,(2-1)!\,2}\int_0^2 \frac{\eta(\eta-1)(\eta-2)}{\eta-1}\,d\eta = -\frac{1}{2}\int_0^2 \eta(\eta-2)\,d\eta = -\frac{1}{2}\int_0^2 \left(\eta^2-2\eta\right)d\eta = -\frac{1}{2}\left(\frac{\eta^3}{3}-\eta^2\right)\bigg|_0^2 = -\frac{1}{2}\left(\frac{2^3}{3}-2^2\right) = -\frac{1}{2}\,\frac{8-12}{3} = \frac{2}{3} \tag{17.188}$$

for the second coefficient (i.e. j = 1), and

$$A_{2,2} = \frac{(-1)^{2-2}}{2!\,(2-2)!\,2}\int_0^2 \frac{\eta(\eta-1)(\eta-2)}{\eta-2}\,d\eta = \frac{1}{4}\int_0^2 \eta(\eta-1)\,d\eta = \frac{1}{4}\int_0^2 \left(\eta^2-\eta\right)d\eta = \frac{1}{4}\left(\frac{\eta^3}{3}-\frac{\eta^2}{2}\right)\bigg|_0^2 = \frac{1}{4}\left(\frac{2^3}{3}-\frac{2^2}{2}\right) = \frac{1}{4}\,\frac{8-6}{3} = \frac{1}{4}\,\frac{2}{3} = \frac{1}{6} \tag{17.189}$$

for the third coefficient (i.e. j = 2) – all consistent with Eq. (17.124), further to a + 0(b − a)/2 = a, a + 1(b − a)/2 = (2a + b − a)/2 = (a + b)/2, and a + 2(b − a)/2 = a + b − a = b for interpolating points. Other values of the Hn,j’s and αn are tabulated in Table 17.1 for n up to 10. Inspection of this table, with the aid of Eq. (17.184), unfolds

$$\sum_{j=0}^{n} A_{n,j} = \sum_{j=0}^{n} \frac{H_{n,j}}{\alpha_n} = \frac{\sum_{j=0}^{n} H_{n,j}}{\alpha_n} = \frac{\alpha_n}{\alpha_n} = 1 \tag{17.190}$$

as general property relating the germane coefficients – while (b − a)/n is oftentimes referred to as step size, h. One further realizes that the values of the coefficients at the quadrature points are the same when symmetrically located, i.e.

$$A_{n,j} = A_{n,n-j}, \tag{17.191}$$

and that a negative sign always accompanies the expression for ε, due to reasons presented previously for the trapezoidal rule – but similarly valid for every rule, in view of the even nature of the order of the derivative at stake. Furthermore, all Hn,j are positive up to n = 7, whereas negative values appear for n = 8 and n ≥ 10. Equation (17.184) first


Table 17.1 List of (integer) coefficients Hn,j and αn ≡ Σ(j=0..n) Hn,j, each associated to the jth interpolation point of an nth-degree quadrature rule, and error ε of the associated integral of f{x} over interval [a,b] – with ξ ∈ [a,b].

n    Hn,0, Hn,1, …, Hn,n                                                                            αn          ε
1    1, 1                                                                                           2           −(1/12) (b − a)³ (d²f{x}/dx²)|ξ
2    1, 4, 1                                                                                        6           −(1/90) ((b − a)/2)⁵ (d⁴f{x}/dx⁴)|ξ
3    1, 3, 3, 1                                                                                     8           −(3/80) ((b − a)/3)⁵ (d⁴f{x}/dx⁴)|ξ
4    14, 64, 24, 64, 14                                                                             180         −(8/945) ((b − a)/4)⁷ (d⁶f{x}/dx⁶)|ξ
5    95, 375, 250, 250, 375, 95                                                                     1440        −(275/12 096) ((b − a)/5)⁷ (d⁶f{x}/dx⁶)|ξ
6    41, 216, 27, 272, 27, 216, 41                                                                  840         −(9/1400) ((b − a)/6)⁹ (d⁸f{x}/dx⁸)|ξ
7    5257, 25 039, 9261, 20 923, 20 923, 9261, 25 039, 5257                                         120 960     −(8183/518 400) ((b − a)/7)⁹ (d⁸f{x}/dx⁸)|ξ
8    3956, 23 552, −3712, 41 984, −18 160, 41 984, −3712, 23 552, 3956                              113 400     −(2368/467 775) ((b − a)/8)¹¹ (d¹⁰f{x}/dx¹⁰)|ξ
9    25 713, 141 669, 9720, 174 096, 52 002, 52 002, 174 096, 9720, 141 669, 25 713                806 400     −(173/14 620) ((b − a)/9)¹¹ (d¹⁰f{x}/dx¹⁰)|ξ
10   80 335, 531 500, −242 625, 1 362 000, −1 302 750, 2 136 840, −1 302 750, 1 362 000, −242 625, 531 500, 80 335   2 993 760   −(1 346 350/326 918 592) ((b − a)/10)¹³ (d¹²f{x}/dx¹²)|ξ

Numerical Approaches to Integration

appeared in a letter from Newton to Leibniz, back in 1676; another Newton and Cotes’-type rule that is occasionally encountered reads

$$\int_a^b f\{x\}\,dx \approx \frac{3}{10}\frac{b-a}{6}\left(f\{a\} + 5f\left\{a+\frac{b-a}{6}\right\} + f\left\{a+\frac{2(b-a)}{6}\right\} + 6f\left\{a+\frac{3(b-a)}{6}\right\} + f\left\{a+\frac{4(b-a)}{6}\right\} + 5f\left\{a+\frac{5(b-a)}{6}\right\} + f\{b\}\right) \tag{17.192}$$

– and is known as Weddle’s rule. Despite their higher accuracy, as apparent from inspection of the last column of Table 17.1, it is preferable to use a larger number of composite steps (to be discussed next) – each resorting to a lower order rule, for a given total number of calculations; this is so because higher order polynomials tend to oscillate wildly, thus deviating from the (typically) smooth functions encountered in practice.

17.1.3 Composite Methods

When the amplitude b − a of the interval of integration is, in some sense, small, Simpson’s rule suffices to provide an adequate approximation to the exact integral – thus justifying its popularity; here small means that the kernel behaves in a smooth fashion throughout said interval. When the function is highly oscillatory, or lacks (finite) derivative(s) at some point(s), Simpson’s rule may, however, yield poor results. One common way to circumvent this issue is by breaking up the interval into a number of smaller subintervals – with application of Simpson’s rule (or the like) to each subinterval; the results are then summed up, to eventually produce an approximation to the integral over the entirety of the interval. In the aforementioned approach, termed composite quadrature, the interval of integration is accordingly divided into N subintervals, i.e.

$$\int_a^b f\{x\}\,dx = \sum_{i=1}^{N}\int_{x_{i-1}}^{x_i} f\{x\}\,dx, \tag{17.193}$$

in agreement with Eq. (11.124) – where xi denotes a quadrature point; an integration rule of choice is then applied to each such subinterval [xi−1,xi]. By default, the original interval is split uniformly, thus generating subintervals of identical length, (b − a)/N; although more complex from a numerical point of view, it is sometimes advantageous to resort to subintervals of disparate lengths, so as to concentrate computation effort in regions where the kernel is more ill-behaved – via the so-called adaptive approaches. The simplest composite method encompasses the rectangular rule, namely, Eq. (17.11) leading to

$$\int_a^b f\{x\}\,dx \approx \sum_{i=1}^{N}\left(x_i - x_{i-1}\right) f\{x_{i-1}\}, \tag{17.194}$$

or instead Eq. (17.12) conveying

$$\int_a^b f\{x\}\,dx \approx \sum_{i=1}^{N}\left(x_i - x_{i-1}\right) f\{x_i\}; \tag{17.195}$$


Mathematics for Enzyme Reaction Kinetics and Reactor Performance

equidistance between quadrature points means that

$$x_i - x_{i-1} = \frac{b-a}{N};\quad i = 1,2,\ldots,N, \tag{17.196}$$

and consequently

$$x_i = a + i\,\frac{b-a}{N};\quad i = 0,1,\ldots,N. \tag{17.197}$$

In view of Eqs. (17.196) and (17.197), one may redo Eq. (17.195) as

$$\int_a^b f\{x\}\,dx \approx \sum_{i=1}^{N}\frac{b-a}{N}\,f\left\{a+i\frac{b-a}{N}\right\} = \frac{b-a}{N}\sum_{i=1}^{N} f\left\{a+i\frac{b-a}{N}\right\}, \tag{17.198}$$

where (b − a)/N was factored out to advantage; when N is very large, the approximate equality in Eq. (17.198) becomes a true equality, and Eq. (11.90) is recovered in full – as Riemann’s definition of definite integral. Therefore, the endpoint (rectangular) rule becomes exact if each subinterval of integration is sufficiently narrow. In the case of the midpoint rule, one obtains

$$\int_a^b f\{x\}\,dx \approx \sum_{i=1}^{N}\left(x_i - x_{i-1}\right) f\left\{\frac{x_{i-1}+x_i}{2}\right\} \tag{17.199}$$

stemming from Eq. (17.13), which degenerates to

$$\int_a^b f\{x\}\,dx = \sum_{i=1}^{N}\frac{b-a}{N}\,f\left\{\frac{a+(i-1)\frac{b-a}{N}+a+i\frac{b-a}{N}}{2}\right\} = \frac{b-a}{N}\sum_{i=1}^{N} f\left\{a+\frac{2i-1}{2}\frac{b-a}{N}\right\} = \frac{b-a}{N}\sum_{i=1}^{N} f\left\{a+\left(i-\frac{1}{2}\right)\frac{b-a}{N}\right\} \tag{17.200}$$

at the expense of Eqs. (17.196) and (17.197), and upon factoring out (b − a)/N and lumping terms alike. By the same token, the trapezoidal rule supports

$$\int_a^b f\{x\}\,dx \approx \sum_{i=1}^{N}\left(x_i - x_{i-1}\right)\frac{f\{x_{i-1}\}+f\{x_i\}}{2} \tag{17.201}$$

as inspired by Eq. (17.68), which may be rewritten as

$$\int_a^b f\{x\}\,dx = \sum_{i=1}^{N}\frac{b-a}{N}\,\frac{f\left\{a+(i-1)\frac{b-a}{N}\right\}+f\left\{a+i\frac{b-a}{N}\right\}}{2} = \frac{b-a}{2N}\sum_{i=1}^{N}\left(f\left\{a+(i-1)\frac{b-a}{N}\right\}+f\left\{a+i\frac{b-a}{N}\right\}\right) \tag{17.202}$$


again with the aid of Eqs. (17.196) and (17.197) – complemented with factoring out of (b − a)/2N; Eq. (17.202) may be further manipulated as

$$\begin{aligned}\int_a^b f\{x\}\,dx &= \frac{b-a}{2N}\left(f\left\{a+(1-1)\frac{b-a}{N}\right\} + \sum_{i=2}^{N} f\left\{a+(i-1)\frac{b-a}{N}\right\} + \sum_{i=1}^{N-1} f\left\{a+i\frac{b-a}{N}\right\} + f\left\{a+N\frac{b-a}{N}\right\}\right)\\ &= \frac{b-a}{2N}\left(f\{a\} + \sum_{j=1}^{N-1} f\left\{a+j\frac{b-a}{N}\right\} + \sum_{i=1}^{N-1} f\left\{a+i\frac{b-a}{N}\right\} + f\{b\}\right)\\ &= \frac{b-a}{2N}\left(f\{a\} + 2\sum_{i=1}^{N-1} f\left\{a+i\frac{b-a}{N}\right\} + f\{b\}\right)\end{aligned} \tag{17.203}$$

after making the first term explicit in the first summation and the last term explicit in the second summation – and eventually collapsing the summations thus generated, once the counting variable of the first summation has been reduced by unity (i.e. j ≡ i – 1). Composite Simpson’s rule (and higher-order rules) require extra attention because interior points to each subinterval are required, i.e. b

$$\int_a^b f\{x\}\,dx \approx \sum_{i=1}^{N}\frac{x_i - x_{i-1}}{6}\left(f\{x_{i-1,0}\} + 4f\{x_{i-1,1}\} + f\{x_{i-1,2}\}\right), \tag{17.204}$$

following insertion of Eq. (17.124) in Eq. (17.193); here xi−1,j abides to

$$x_{i-1,j} - x_{i-1,j-1} = \frac{\frac{b-a}{N}}{n} = \frac{b-a}{nN};\quad i = 1,2,\ldots,N;\; j = 1,2,\ldots,n, \tag{17.205}$$

and thus

$$x_{i-1,j} = x_{i-1,0} + j\,\frac{b-a}{nN};\quad i = 1,2,\ldots,N;\; j = 0,1,\ldots,n \tag{17.206}$$

as interpolation point of a generic nth-order composite rule, with x_{i−1,0} ≡ x_{i−1}. Insertion of Eqs. (17.196), (17.197), and (17.206) in Eq. (17.204), with n = 2, gives rise to

$$\begin{aligned}\int_a^b f\{x\}\,dx &= \sum_{i=1}^{N}\frac{b-a}{6N}\left(f\left\{a+(i-1)\frac{b-a}{N}+0\frac{b-a}{2N}\right\} + 4f\left\{a+(i-1)\frac{b-a}{N}+1\frac{b-a}{2N}\right\} + f\left\{a+(i-1)\frac{b-a}{N}+2\frac{b-a}{2N}\right\}\right)\\ &= \frac{b-a}{6N}\sum_{i=1}^{N}\left(f\left\{a+(i-1)\frac{b-a}{N}\right\} + 4f\left\{a+\left(i-\frac{1}{2}\right)\frac{b-a}{N}\right\} + f\left\{a+i\frac{b-a}{N}\right\}\right)\end{aligned} \tag{17.207}$$


where condensation of terms alike meanwhile took place. After taking the summation sign into the parenthesis, Eq. (17.207) becomes

$$\begin{aligned}\int_a^b f\{x\}\,dx &= \frac{b-a}{6N}\left(f\left\{a+(1-1)\frac{b-a}{N}\right\} + \sum_{i=2}^{N} f\left\{a+(i-1)\frac{b-a}{N}\right\} + 4\sum_{i=1}^{N} f\left\{a+\left(i-\frac{1}{2}\right)\frac{b-a}{N}\right\} + \sum_{i=1}^{N-1} f\left\{a+i\frac{b-a}{N}\right\} + f\left\{a+N\frac{b-a}{N}\right\}\right)\\ &= \frac{b-a}{6N}\left(f\{a\} + \sum_{j=1}^{N-1} f\left\{a+j\frac{b-a}{N}\right\} + 4\sum_{i=1}^{N} f\left\{a+\left(i-\frac{1}{2}\right)\frac{b-a}{N}\right\} + \sum_{i=1}^{N-1} f\left\{a+i\frac{b-a}{N}\right\} + f\{b\}\right)\end{aligned} \tag{17.208}$$

– where the first term of the first summation and the last term of the third summation were spelled out once more, and the counting variable of the first summation, i, was replaced by j ≡ i − 1 for mathematical convenience; Eq. (17.208) may be further condensed to

$$\int_a^b f\{x\}\,dx = \frac{b-a}{6N}\left(f\{a\} + 4\sum_{i=1}^{N-1} f\left\{a+\left(i-\frac{1}{2}\right)\frac{b-a}{N}\right\} + 2\sum_{i=1}^{N-1} f\left\{a+i\frac{b-a}{N}\right\} + 4f\left\{a+\left(N-\frac{1}{2}\right)\frac{b-a}{N}\right\} + f\{b\}\right) \tag{17.209}$$

after collapsing the two summations on f{a + i(b − a)/N}, and making the last term of the intermediate summation explicit. The errors associated to an n-stage quadrature method, when applied to the whole interval [a,b], are all of the form

$$\varepsilon_n = \beta_n\{n,a,b\}\left(\frac{b-a}{n}\right)^p, \tag{17.210}$$

as inductively derived from inspection of Table 17.1; here βn denotes a proportionality constant – which depends on a and b in view of a ≤ ξ ≤ b, since the (p − 1)th-order derivative is evaluated at ξ. If the original interval, of amplitude b − a, is split into N equal-sized subintervals, then the error of the associated composite quadrature method, E, should look like

$$E \equiv \sum_{i=1}^{N}\varepsilon_{n,i}, \tag{17.211}$$

where εn,i denotes the error of each substep as per Eq. (17.210). Insertion of Eq. (17.210) supports indeed transformation of Eq. (17.211) to

$$E = \sum_{i=1}^{N}\beta_{n,i}\left(\frac{b-a}{nN}\right)^p, \tag{17.212}$$

because each subinterval has now amplitude (b − a)/N; if the maximum of all the βn,i's were denoted βmax, according to

$$\beta_{max} \equiv \max\left\{\beta_{n,i};\; i = 1,2,\ldots,N\right\}, \tag{17.213}$$

then one would conclude that

$$E \le \sum_{i=1}^{N}\beta_{max}\left(\frac{b-a}{nN}\right)^p = \beta_{max}\, N\left(\frac{b-a}{nN}\right)^p \tag{17.214}$$

because the function originally under the summation would not depend on (counting) variable i – or else

$$E\big|_{[a,b]} \le \beta_{max}\,\frac{(b-a)^p}{n^p\, N^{p-1}}. \tag{17.215}$$

Comparison between Eqs. (17.210) and (17.215) indicates that the order of accuracy, p, of any given quadrature rule is kept when constructing a composite rule therefrom; furthermore, the overall error increases with the amplitude of the original interval, b − a, but decreases with the order n of the method. On the other hand, for a given total number nN > b − a of calculations of the kernel function, a higher order of accuracy p ≥ 1 leads to a hyperbolic-type decrease in E as per Eq. (17.214); however, it is better to decrease n at the expense of increasing N, for a given nN, so as to guarantee a smoother interpolating polynomial (as mentioned before) – even though (1/n)^p decreases faster with an increase in n than (1/N)^(p−1) with a similar increase in N, see Eq. (17.215). Subdivision of the interval [a,b] is recommended in terms of lower overall error only when p > 1, whereas p = 1 would make E independent of N in Eq. (17.215).
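The composite trapezoidal and Simpson rules of Eqs. (17.203) and (17.209), together with the error scaling just discussed, can be exercised numerically; the Python sketch below is our own illustration (the smooth test kernel e^x, with exact integral e − 1 over [0,1], is an arbitrary choice), and estimates the empirical order of each rule by halving the subinterval width.

```python
import math

def composite_trapezoid(f, a, b, N):
    """Composite trapezoidal rule, Eq. (17.203)."""
    h = (b - a) / N
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, N)))

def composite_simpson(f, a, b, N):
    """Composite Simpson rule, Eq. (17.209): n = 2 inside each of N subintervals."""
    h = (b - a) / N
    ends = sum(f(a + i * h) for i in range(1, N))            # interior endpoints
    mids = sum(f(a + (i - 0.5) * h) for i in range(1, N + 1))  # subinterval midpoints
    return h / 6.0 * (f(a) + 2.0 * ends + 4.0 * mids + f(b))

exact = math.e - 1.0  # int_0^1 e^x dx
for rule in (composite_trapezoid, composite_simpson):
    e10 = abs(rule(math.exp, 0.0, 1.0, 10) - exact)
    e20 = abs(rule(math.exp, 0.0, 1.0, 20) - exact)
    # doubling N divides the error by ~2^2 (trapezoid) and ~2^4 (Simpson)
    print(rule.__name__, round(math.log2(e10 / e20), 2))
```

The observed orders (≈2 and ≈4 in the exponent of 1/N) match the N-dependence anticipated by Eq. (17.215).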

17.1.4

Infinite and Multidimensional Integrals

Although a number of numerical methods have been developed specifically for quadrature when (at least one of) the limits of integration is infinite, conversion to an integral with finite bounds is recommended as a preliminary approach – followed by application of one of the standard methods discussed so far. Along these lines, one may recall the rule of change of variable of a definite integral, i.e. Eq. (11.178), and define

$$x \equiv \frac{\xi}{1-\xi^2}, \tag{17.216}$$

and thus

$$dx = \frac{\left(1-\xi^2\right)-\xi(-2\xi)}{\left(1-\xi^2\right)^2}\,d\xi = \frac{1+\xi^2}{\left(1-\xi^2\right)^2}\,d\xi \tag{17.217}$$

upon differentiation of both sides following Eqs. (10.1) and (10.138); furthermore, Eq. (17.216) entails

$$\lim_{\xi\to -1^+} x = \frac{-1}{1-\left(-1^+\right)^2} = -\frac{1}{1-1^-} = -\frac{1}{0^+} = -\infty \tag{17.218}$$

and likewise

$$\lim_{\xi\to 1^-} x = \frac{1}{1-\left(1^-\right)^2} = \frac{1}{1-1^-} = \frac{1}{0^+} = \infty; \tag{17.219}$$


use of Eqs. (17.216)–(17.219) supports

$$\int_{-\infty}^{\infty} f\{x\}\,dx = \int_{-1}^{1}\frac{1+\xi^2}{\left(1-\xi^2\right)^2}\,f\left\{\frac{\xi}{1-\xi^2}\right\}d\xi, \tag{17.220}$$

where both lower and upper limits of integration are now finite. If only the upper integration limit is infinite, one should instead resort to

$$x \equiv a + \frac{\zeta}{1-\zeta}, \tag{17.221}$$

which implies

$$\lim_{\zeta\to 0^+} x = a + \frac{0}{1-0} = a + \frac{0}{1} = a + 0 = a \tag{17.222}$$

as well as

$$\lim_{\zeta\to 1^-} x = a + \frac{1}{1-1^-} = a + \frac{1}{0^+} = a + \infty = \infty; \tag{17.223}$$

the differential of x may also be obtained from Eq. (17.221) as

$$dx = \frac{(1-\zeta)-\zeta(-1)}{(1-\zeta)^2}\,d\zeta = \frac{1}{(1-\zeta)^2}\,d\zeta, \tag{17.224}$$

following again Eqs. (10.1) and (10.138) – so combination of Eqs. (17.221)–(17.224) leads to

$$\int_a^{\infty} f\{x\}\,dx = \int_0^1 f\left\{a+\frac{\zeta}{1-\zeta}\right\}\frac{d\zeta}{(1-\zeta)^2}, \tag{17.225}$$

assuming that a is finite. The remaining case pertains to a definite integral with unbounded lower limit, i.e. −∞, but finite upper limit a; after setting

$$x \equiv a - \frac{1-\chi}{\chi}, \tag{17.226}$$

application of the theorems on limits unfolds

$$\lim_{\chi\to 0^+} x = a - \frac{1-0}{0^+} = a - \frac{1}{0^+} = a - \infty = -\infty, \tag{17.227}$$

complemented with

$$\lim_{\chi\to 1^-} x = a - \frac{1-1}{1} = a - \frac{0}{1} = a - 0 = a. \tag{17.228}$$

Upon application of the differential operator to both sides, Eq. (17.226) becomes, in turn,

$$dx = -\frac{(-1)\chi-(1-\chi)}{\chi^2}\,d\chi = \frac{1}{\chi^2}\,d\chi, \tag{17.229}$$

so Eqs. (17.226)–(17.229) prompt

$$\int_{-\infty}^{a} f\{x\}\,dx = \int_0^1 f\left\{a-\frac{1-\chi}{\chi}\right\}\frac{d\chi}{\chi^2} \tag{17.230}$$

with a finite range of integration being once more justified. The quadrature rules discussed previously are all designed to compute one-dimensional integrals; if multiple dimensions are at stake, one possible approach is to phrase the multiple integral as repeated one-dimensional integrals – via application of Fubini’s theorem, see Eq. (11.250). However, the number of function evaluations increases exponentially with the number of dimensions, owing to the underlying multiplicative nature – so specific methods have been designed to circumvent such severe limitations; these include Monte Carlo methods, Smolyak’s sparse grid method, and Bayesian quadrature (all of which are out of the scope of this book, though).
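The change-of-variable strategy of Eqs. (17.221)–(17.225) combines naturally with the composite midpoint rule of Eq. (17.200), since the latter never evaluates the transformed kernel at ζ = 1; the sketch below is our own illustration (the test integral ∫₀^∞ e^(−x) dx = 1 is an arbitrary choice, not from the text).

```python
import math

def integral_a_to_inf(f, a, N=2000):
    """Approximate int_a^infinity f(x) dx by mapping to [0,1] via
    x = a + z/(1-z), dx = dz/(1-z)^2 -- Eqs. (17.221)-(17.225) -- and then
    applying the composite midpoint rule of Eq. (17.200)."""
    h = 1.0 / N
    total = 0.0
    for i in range(1, N + 1):
        z = (i - 0.5) * h                      # midpoint of i-th subinterval
        total += f(a + z / (1.0 - z)) / (1.0 - z) ** 2
    return h * total

print(round(integral_a_to_inf(lambda x: math.exp(-x), 0.0), 5))  # → 1.0
```

Entirely analogous drivers follow from Eqs. (17.220) and (17.230) for the doubly infinite and lower-unbounded cases, respectively.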

17.2 Integration of Differential Equations

Differential equations can describe nearly all systems undergoing change; this is why they appear ubiquitously as tools for mathematical simulation in science and engineering. The problem of building a function that satisfies an initial value problem, as described by an ordinary differential equation, can however be reduced to that of evaluating an integral; in fact,

$$\frac{dF\{x\}}{dx} = f\{x\}\;\wedge\;F\{x\}\big|_{x=a} = F\{a\}\;\Rightarrow\;F\{x\} = F\{a\} + \int_a^x f\{\xi\}\,d\xi, \tag{17.231}$$

based on the rule of differentiation of an integral as per Eq. (11.295) – and in full agreement with the first fundamental theorem of integral calculus, see Eq. (11.159). Therefore, the methods developed before for quadrature should, in principle, be usable in solving the associated ordinary differential equation. Nevertheless, the most frequent problems encompass f ≡ f{x, F} rather than merely f ≡ f{x} – so dedicated methods are in order, as explored below. The most important techniques to solve differential equations, based on numerical approximation, were developed prior to the advent of programmable computers; hence, they are intrinsically based on extrapolation of a function using its slope – and often require more than one step, so that the extrapolation hypothesis remains reasonable. With the current widespread use of computers for numerical calculation, the time taken to produce a solution has been steadily decreasing – yet the time taken to decide whether said solution is right remains long. Numerical approximation to the solution of a differential equation is a must when a symbolic calculation (or analytical solution) does not exist, cannot be found at all, or an approximation suffices for the intended goal; the typical problem consists of a first-order differential equation set forth as an initial value problem, say,

$$\frac{dy}{dx} = f\{x,y\} \tag{17.232}$$

coupled with

$$y\big|_{x=0} = y_0, \tag{17.233}$$


which somehow mimics Eq. (17.231) except for the multivariate nature of f. If a boundary condition is available at x = a, then a change of independent variable x to x − a should be done in advance; the left-hand side of Eq. (17.232) would accordingly be kept, since d(x − a) = dx, while its right-hand side would be recoined as f{a + (x − a), y}, and Eq. (17.233) would read y|x − a = 0 = y0. Higher-order differential equations, viz.

$$f\left\{x, y, \frac{dy}{dx}, \frac{d^2y}{dx^2}, \frac{d^3y}{dx^3}, \ldots, \frac{d^{n-1}y}{dx^{n-1}}, \frac{d^ny}{dx^n}\right\} = 0, \tag{17.234}$$

with f representing a number of algebraic operations, may in general be handled after constructing a set of n first-order differential equations in n dependent variables, y, z1, z2, z3, …, zn−1, according to

$$z_1 = \frac{dy}{dx},\quad z_2 = \frac{dz_1}{dx},\quad z_3 = \frac{dz_2}{dx},\quad\ldots,\quad z_{n-1} = \frac{dz_{n-2}}{dx},\quad f\left\{x, y, z_1, z_2, \ldots, z_{n-1}, \frac{dz_{n-1}}{dx}\right\} = 0, \tag{17.235}$$

complemented with n boundary conditions, say,

$$y\big|_{x=0} = y_0,\quad z_1\big|_{x=0} = \left.\frac{dy}{dx}\right|_{x=0},\quad z_2\big|_{x=0} = \left.\frac{d^2y}{dx^2}\right|_{x=0},\quad z_3\big|_{x=0} = \left.\frac{d^3y}{dx^3}\right|_{x=0},\quad\ldots,\quad z_{n-1}\big|_{x=0} = \left.\frac{d^{n-1}y}{dx^{n-1}}\right|_{x=0}. \tag{17.236}$$
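The reduction conveyed by Eqs. (17.235) and (17.236) can be sketched concretely; the example below is our own (hypothetical) choice, y″ + y = 0 with y{0} = 0 and dy/dx|₀ = 1 (exact solution sin x), marched with the simple first-order update discussed in Section 17.2.1, purely for illustration.

```python
import math

# Set z1 = dy/dx, so y'' + y = 0 becomes the n = 2 first-order system
#   dy/dx = z1   and   dz1/dx = -y,
# integrated simultaneously from the boundary conditions of Eq. (17.236).
def solve(h, n_steps):
    y, z1 = 0.0, 1.0
    for _ in range(n_steps):
        # tuple assignment evaluates both right-hand sides with the OLD values
        y, z1 = y + h * z1, z1 + h * (-y)
    return y

approx = solve(1.0 / 10000, 10000)      # integrate from x = 0 to x = 1
print(round(approx, 3), round(math.sin(1.0), 3))
```

Higher-order derivatives would simply add further z's to the tuple, one per extra order of Eq. (17.234).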

Simultaneous integration would then be required of all equations labeled as Eq. (17.235), with the aid of the relationships conveyed by Eq. (17.236). There are basically three types of numerical methods suitable to solve ordinary first-order differential equations: linear single- or multistep (also known as Adams’) methods, and single-step, multistage (also known as Runge and Kutta’s) methods. For either type, both implicit and explicit methods exist, with the former being more general, and thus more complex to solve, yet faster to converge (as will be addressed below). For an interval [a,b] of interest in terms of y{x}, subdivision in N subintervals is normally recommended; the amplitude of the latter is often uniform (as assumed by default, if not otherwise stated), and thus equal to (b − a)/N. As will be seen shortly, the numerical methods of practical

interest are typically characterized by a local error εi, at x = a + i(b − a)/N with 0 ≤ i ≤ N, proportional (via constant αi) to said amplitude raised to some (integer) exponent p, i.e.

$$\varepsilon_i = \alpha_i\{N,p,a,b\}\left(\frac{b-a}{N}\right)^p \tag{17.237}$$

that resembles Eq. (17.210); the overall error will then be given by

$$E \equiv \sum_{i=1}^{N}\varepsilon_i = \sum_{i=1}^{N}\alpha_i\left(\frac{b-a}{N}\right)^p, \tag{17.238}$$

at the expense of Eq. (17.237), and consistent with Eqs. (17.211) and (17.212). If the maximum of all the αi's is denoted αmax, i.e.

$$\alpha_{max} \equiv \max\left\{\alpha_i;\; i = 1,2,\ldots,N\right\}, \tag{17.239}$$

then one obtains

$$E \le \sum_{i=1}^{N}\alpha_{max}\left(\frac{b-a}{N}\right)^p = \left(\frac{b-a}{N}\right)^p\sum_{i=1}^{N}\alpha_{max} = \alpha_{max}\, N\left(\frac{b-a}{N}\right)^p \tag{17.240}$$

based on Eq. (17.238), and in view of the definition of multiplication – or simply

$$E\big|_{[a,b]} \le \alpha_{max}\,\frac{(b-a)^p}{N^{p-1}}, \tag{17.241}$$

once powers of N are lumped; note the resemblance between Eqs. (17.215) and (17.241). Therefore, the average error of the numerical solution of an ordinary differential equation decreases in a hyperbolic fashion with the number N of subintervals (for a given p) – as long as p > 1, otherwise it would become insensitive to N; but decreases in an exponential fashion when the order p of the method increases, for N > b − a. Therefore, preference for higher-order methods is justified, with enhanced accuracy for a given N.

17.2.1

Single-step Methods

The simplest approach to integration of a first-order differential equation, as per Eq. (17.232), is to replace the derivative by a difference, according to

$$\frac{dy}{dx} \approx \frac{y\{x+h\}-y\{x\}}{h}, \tag{17.242}$$

thus assuming that function y{x} behaves linearly throughout said interval; when h → 0, Eq. (17.242) degenerates to Eq. (10.21), i.e. the true definition of derivative. Upon elimination of denominators, Eq. (17.242) becomes

$$h\frac{dy}{dx} = y\{x+h\}-y\{x\}, \tag{17.243}$$

where dy/dx may, in turn, be removed at the expense of Eq. (17.232), i.e.

$$hf\{x,y\} = y\{x+h\}-y\{x\}; \tag{17.244}$$

isolation of y{x + h} finally leads to

$$y\{x+h\} = y\{x\} + hf\{x,y\}. \tag{17.245}$$


Equation (17.245) may be rephrased as

$$y_{i+1} = y_i + hf\{x_i, y_i\};\quad i = 0,1,\ldots, \tag{17.246}$$

provided that

$$y_i \equiv y\{ih\}, \tag{17.247}$$

and thus

$$y_{i+1} \equiv y\{(i+1)h\} \tag{17.248}$$

– for a uniform-size progression along the independent variable, i.e.

$$x_i \equiv ih; \tag{17.249}$$

for i = 0, Eq. (17.247) becomes

$$y_0 \equiv y\{0\}, \tag{17.250}$$

compatible with the boundary condition labeled as Eq. (17.233). Application of Eq. (17.246) starts with selection of step size h, thus leading to the arithmetic sequence 0, h, 2h, 3h, … of values for x in agreement with Eq. (17.249) – coupled with retrieval of the boundary condition yielding y0, corresponding to x = x0 = 0 (and thus i = 0); this allows y1 to be obtained via Eq. (17.246) from y0 and f{0,y0}. One then sets i = 1, so x = x1 = h, while Eq. (17.246) permits calculation of y2 at the expense of y1, and the value of f evaluated at x = x1 and y = y1; one should proceed to i = 2 afterward, accordingly set x = x2 = 2h, and once again resort to Eq. (17.246) to obtain y3 using y2 and f{x2,y2} – and so on, as deemed necessary. This iterative process is illustrated in Fig. 17.4. As expected, the approximation to the true solution, obtained as N → ∞, improves when the number N of subdivisions increases – because the hypothesis underlying safe extrapolation becomes better and better; note the coincidence in slope of the first extrapolation in Fig. 17.4, irrespective of N, although holding for a shorter range of x as N increases. Equation (17.246) consubstantiates Euler’s method – named after Leonhard Euler, who described it in 1768; it is an explicit method, because the unknown value, yi+1, is obtained using only the (already available) value of yi, besides obviously xi.
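The marching scheme of Eq. (17.246) can be sketched directly; the test problem below, dy/dx = −2xy with y{0} = 1 (exact solution e^(−x²)), is our own hypothetical choice for illustration.

```python
import math

def euler(f, y0, h, n_steps):
    """Euler's explicit method, Eq. (17.246): y_{i+1} = y_i + h f{x_i, y_i},
    starting from (x_0, y_0) = (0, y0) and marching along x_i = i h."""
    x, y = 0.0, y0
    history = [y]
    for i in range(n_steps):
        y = y + h * f(x, y)      # explicit update: only x_i and y_i are needed
        x = (i + 1) * h          # x_i = i h, cf. Eq. (17.249)
        history.append(y)
    return history

ys = euler(lambda x, y: -2.0 * x * y, 1.0, 0.001, 1000)
print(ys[-1], math.exp(-1.0))    # approximation at x = 1 vs exact e^{-1}
```

Halving h roughly halves the final error, consistent with the first-order (p = 1 in the exponent of 1/N) character discussed above.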

Figure 17.4 Solution of differential equation as y ≡ y{x}, within given range of independent variable x, for various sizes of increment in x corresponding to N subdivisions of said range, namely, 3, 5, or 10 – overlaid on exact solution (corresponding to N → ∞).




An alternative definition of derivative is possible, viz.

$$\frac{dy}{dx} \approx \frac{y\{x\}-y\{x-h\}}{h}, \tag{17.251}$$

to be used en lieu of Eq. (17.242) – in which case backward Euler’s method would be at stake; this leads to

$$h\frac{dy}{dx} = y\{x\}-y\{x-h\} \tag{17.252}$$

after getting rid of the denominator, while insertion again of Eq. (17.232) unfolds

$$hf\{x,y\} = y\{x\}-y\{x-h\}. \tag{17.253}$$

One may algebraically rearrange Eq. (17.253) to read

$$y\{x\} = y\{x-h\} + hf\{x,y\}, \tag{17.254}$$

which may in turn appear as

$$y_{i+1} = y_i + hf\{x_{i+1}, y_{i+1}\};\quad i = 0,1,\ldots \tag{17.255}$$

for consistency with Eqs. (17.246)–(17.249) – i.e. yi+1 ≡ y{x} and yi ≡ y{x − h}. Equation (17.255) materializes an implicit method, known as Euler’s implicit method – meaning that an equation (not necessarily linear, depending on the functional form of f) has still to be solved to find yi+1, once in possession of yi. Despite the longer time required to solve said equation, an implicit method is normally more stable – a critical deed when attempting to solve a stiff differential equation. Remember that a stiff equation exhibits intrinsic instability due to quite different dynamic scales in the original problem – as often happens with the time scales of chemical kinetics and external control; this was first pointed out by Charles Curtiss and Joseph O. Hirschfelder in the 1950s. Therefore, a larger step size h may be used, thus speeding up computation due to a lower number of steps, which makes up for the extra computation time required to solve the underlying implicit equation. Notwithstanding its simplicity in conceptual and implementation terms, Euler’s method is often not accurate enough, due to its intrinsic first-order nature – which causes deterioration in predicting ability, as a linear model hardly predicts an increasing curvature of the original function. To alleviate this shortcoming, the amplitude of the interval must be kept relatively small – knowing that a larger number of steps demands more calculations; hence, the increase in accuracy is obtained at the expense of larger roundoff error and longer computation time. This calls for subdivision of the target interval in smaller subintervals – an iterative procedure already followed when designing composite methods of quadrature. An alternative approach to numerical integration of an ordinary differential equation when f ≡ f{y} assumes that f can be split into a linear portion, with L constant, and a nonlinear portion, N{y} – in which case Eq. (17.232) would putatively take the form

$$\frac{dy}{dx} = f\{y\} = -Ly + N\{y\}; \tag{17.256}$$

upon multiplication of both sides of Eq. (17.256) by e^(−Lx) dx, i.e.

$$e^{-Lx}\,dy = -\left(Ly - N\{y\}\right)e^{-Lx}\,dx, \tag{17.257}$$


one may proceed to integration between xi and xi+1 = xi + h, according to

$$e^{-Lx_i}\int_{y_i}^{y_{i+1}} dy = -\left(Ly_i - N\{y_i\}\right)\int_{x_i}^{x_{i+1}} e^{-Lx}\,dx. \tag{17.258}$$

This step entails varying y between yi and yi+1 in the left-hand side of Eq. (17.258), while keeping the x-dependent function constant at xi; and varying the x-dependent function in the right-hand side between xi and xi+1, while keeping the y-dependent function constant at yi. The fundamental theorem of integral calculus supports transformation of Eq. (17.258) to

$$e^{-Lx_i}\, y\Big|_{y_i}^{y_{i+1}} = -\left(Ly_i - N\{y_i\}\right)\left.\frac{e^{-Lx}}{-L}\right|_{x_i}^{x_{i+1}} \tag{17.259}$$

that degenerates to

$$e^{-Lx_i}\left(y_{i+1}-y_i\right) = \frac{Ly_i - N\{y_i\}}{L}\left(e^{-Lx_{i+1}} - e^{-Lx_i}\right); \tag{17.260}$$

division of both sides by e^(−Lx_i), and simplification of the first factor in the right-hand side then yield

$$y_{i+1}-y_i = \left(y_i - \frac{N\{y_i\}}{L}\right)\left(\frac{e^{-Lx_{i+1}}}{e^{-Lx_i}} - 1\right), \tag{17.261}$$

which eventually becomes

$$y_{i+1} = y_i + \left(\frac{N\{y_i\}}{L} - y_i\right)\left(1 - e^{-L\left(x_{i+1}-x_i\right)}\right) \tag{17.262}$$

N yi N yi −Lh − e − yi + yi e −Lh , L L

17 263

or else yi + 1 = e −Lh yi +

1 −e −Lh N yi L

17 264

upon cancelling symmetrical terms and factoring N{yi}/L out afterward. Practical experience has indicated that Eq. (17.264) is quite robust in terms of stability (besides being explicit); this is due to the different weights ascribed to the linear portion (i.e. yi) and the nonlinear portion (i.e. N{yi}) – via e^(−Lh) and (1 − e^(−Lh))/L, instead of −L and 1, respectively, as in Eq. (17.256). The underlying rationale for this approach comes from the integrating factor method, conceived to exactly solve a first-order ordinary differential equation, see Eqs. (15.23) and (15.29); this also justifies why Eq. (17.264) is known as the exponential integrator method.

17.2.2 Multistep Methods

Conceptually speaking, a linear numerical method for integration of an ordinary differential equation starts from a (given) initial point, and then takes a (putatively short) step forward to find the next solution point; the process is then iterated, with subsequent steps intended to map out the whole solution. Single-step methods – as is the case of


Euler’s method introduced before, refer to only one previous point and its derivative to find the current value; as will be seen next, Runge and Kutta’s methods evolve through a number of intermediate stages to produce a higher order method – but discard all previous information before taking the next step. Multistep methods attempt to gain efficiency by keeping and using the information from previous steps, rather than neglecting it; such methods refer indeed to more than one previous point and derivative value, and are normally linear – meaning that a linear combination of values of those functions and derivatives is taken. In a typical s-step method, the current value yi+s is calculated as a linear combination of (previous) yi, yi+1, …, yi+s−1, as well as f{xi,yi}, f{xi+1,yi+1}, …, f{xi+s−1,yi+s−1} on account of the corresponding derivatives as per Eq. (17.232); this corresponds to stating

$$y_{i+s} + \sum_{j=1}^{s} a_{s-j}\, y_{i+s-j} = h\sum_{j=0}^{s} b_{s-j}\, f\left\{x_{i+s-j}, y_{i+s-j}\right\}, \tag{17.265}$$

of which Eq. (17.246) is a particular case. Each method is thus specified by 2s + 1 coefficients, i.e. a0, a1, …, as−1, b0, b1, …, bs; choice among them usually aims at a balanced compromise between good approximation and ease of implementation (thus justifying why many coefficients are often taken as nil). A division may again be made between explicit and implicit methods. The former type enforces bs = 0, so one may obtain yi+s via straightforward solution of Eq. (17.265); otherwise, yi+s will also appear as argument in f{xi+s, yi+s}, thus requiring solution of a nonlinear equation with regard to yi+s, often via Newton and Raphson’s method as per Eq. (7.262). The most common explicit approaches, known as Adams and Bashforth’s methods, are based on polynomial interpolation; the associated coefficients are tabulated in Table 17.2. To determine the aforementioned coefficients, one should proceed to polynomial interpolation and accordingly find a polynomial Ps−1{x}, of degree s − 1, such that

$$P_{s-1}\left\{x_{i+j}\right\} = f\left\{x_{i+j}, y_{i+j}\right\};\quad j = 0,1,\ldots,s-1; \tag{17.266}$$

Table 17.2 Characteristic coefficients, as−j (j = 1, 2, …, s) and bs−j (j = 0, 1, …, s), of Adams and Bashforth’s linear s-step (explicit) methods of integration of an ordinary differential equation.

s    a(s−1)  a(s−2)  a(s−3)  a(s−4)  a(s−5)      b(s)   b(s−1)      b(s−2)       b(s−3)     b(s−4)      b(s−5)
1    −1                                          0      1
2    −1      0                                   0      3/2         −1/2
3    −1      0       0                           0      23/12       −4/3         5/12
4    −1      0       0       0                   0      55/24       −59/24       37/24      −3/8
5    −1      0       0       0       0           0      1901/720    −1387/360    109/30     −637/360    251/720






the best approach is again Lagrange’s polynomials as per Eqs. (17.56) and (17.57), i.e.

$$P_{s-1}\{x\} \equiv \sum_{j=0}^{s-1} f\left\{x_{i+j}, y_{i+j}\right\}\prod_{\substack{k=0\\k\ne j}}^{s-1}\frac{x-x_{i+k}}{x_{i+j}-x_{i+k}}, \tag{17.267}$$

which may algebraically be rearranged to

$$P_{s-1}\{x\} = \sum_{j=0}^{s-1}\frac{f\left\{x_{i+j},y_{i+j}\right\}\displaystyle\prod_{\substack{k=0\\k\ne j}}^{s-1}\left(x-x_{i+k}\right)}{\displaystyle\prod_{\substack{k=0\\k\ne j}}^{s-1}\left(x_{i+j}-x_{i+k}\right)} = \sum_{j=0}^{s-1}\frac{f\left\{x_{i+j},y_{i+j}\right\}\displaystyle\prod_{\substack{k=0\\k\ne j}}^{s-1}\left(x-x_{i+k}\right)}{\displaystyle\prod_{k=0}^{j-1}\left(x_{i+j}-x_{i+k}\right)\prod_{k=j+1}^{s-1}\left(x_{i+j}-x_{i+k}\right)} \tag{17.268}$$

– after splitting the original (single) extended product, and then the extended product of xi+j − xi+k between positive and negative factors (knowing that xi+j+1 > xi+j). When xi+j+1 − xi+j = h for every j (as usually taken), one finds that

$$x_{i+j}-x_{i+k} = (j-k)h \tag{17.269}$$

– so one may redo Eq. (17.268) to

$$P_{s-1}\{x\} = \sum_{j=0}^{s-1}\frac{f\left\{x_{i+j},y_{i+j}\right\}\displaystyle\prod_{\substack{k=0\\k\ne j}}^{s-1}\left(x-x_{i+k}\right)}{\displaystyle\prod_{k=0}^{j-1}(j-k)h\prod_{k=j+1}^{s-1}\left(-(k-j)h\right)} = \sum_{j=0}^{s-1}\frac{f\left\{x_{i+j},y_{i+j}\right\}\displaystyle\prod_{\substack{k=0\\k\ne j}}^{s-1}\left(x-x_{i+k}\right)}{(-1)^{s-j-1}\displaystyle\prod_{k=0}^{j-1}(j-k)\prod_{k=j+1}^{s-1}(k-j)\prod_{\substack{k=0\\k\ne j}}^{s-1}h} \tag{17.270}$$

– where the concept of power and a convenient change of counting variables to m in the extended products appearing in the denominator permit simplification to

$$P_{s-1}\{x\} = \sum_{j=0}^{s-1}\frac{f\left\{x_{i+j},y_{i+j}\right\}\displaystyle\prod_{\substack{k=0\\k\ne j}}^{s-1}\left(x-x_{i+k}\right)}{(-1)^{s-j-1}\displaystyle\prod_{m=1}^{j}m\prod_{m=1}^{s-1-j}m\;h^{j}\,h^{s-1-j}}, \tag{17.271}$$


or else

$$P_{s-1}\{x\} = \sum_{j=0}^{s-1}\frac{(-1)^{s-j-1} f\left\{x_{i+j},y_{i+j}\right\}\displaystyle\prod_{\substack{k=0\\k\ne j}}^{s-1}\left(x-x_{i+k}\right)}{j!\,(s-j-1)!\;h^{s-1}} \tag{17.272}$$

after recalling the definition of factorial and lumping h^j with h^(s−1−j). The polynomial conveyed by Eq. (17.272) is locally a good approximation of the right-hand side of Eq. (17.232) due to Eq. (17.266), so one may replace the former by the approximate equation

$$\frac{dy}{dx} \approx P_{s-1}\{x\} \tag{17.273}$$

– subjected to

$$y\big|_{x=x_{i+s-1}} = y_{i+s-1} \tag{17.274}$$

and

$$y\big|_{x=x_{i+s}} = y_{i+s}; \tag{17.275}$$

integration of Eq. (17.273) by separation of variables – between the limits labeled as Eqs. (17.274) and (17.275), unfolds

$$\int_{y_{i+s-1}}^{y_{i+s}} dy = \int_{x_{i+s-1}}^{x_{i+s}} P_{s-1}\{x\}\,dx. \tag{17.276}$$

Following insertion of Eq. (17.272), one obtains

$$y\Big|_{y_{i+s-1}}^{y_{i+s}} = \int_{x_{i+s-1}}^{x_{i+s}}\sum_{j=0}^{s-1}\frac{(-1)^{s-j-1} f\left\{x_{i+j},y_{i+j}\right\}\displaystyle\prod_{\substack{k=0\\k\ne j}}^{s-1}\left(x-x_{i+k}\right)}{j!\,(s-j-1)!\,h^{s-1}}\,dx \tag{17.277}$$

from Eq. (17.276) – with the aid of the fundamental theorem of integral calculus; the linearity of both integral and summation operators permits their exchange as

$$\begin{aligned} y_{i+s}-y_{i+s-1} &= \sum_{j=0}^{s-1}\frac{(-1)^{s-j-1} f\left\{x_{i+j},y_{i+j}\right\}}{j!\,(s-j-1)!\,h^{s-1}}\int_{x_{i+s-1}}^{x_{i+s}}\prod_{\substack{k=0\\k\ne j}}^{s-1}\left(x-x_{i+k}\right)dx\\ &= h\sum_{j=0}^{s-1}\frac{f\left\{x_{i+j},y_{i+j}\right\}}{(-1)^{s-j-1}\,j!\,(s-j-1)!}\int_{x_{i+s-1}}^{x_{i+s}}\prod_{\substack{k=0\\k\ne j}}^{s-1}\left(\frac{x-x_{i+s-1}}{h}+\frac{x_{i+s-1}-x_{i+k}}{h}\right)d\frac{x-x_{i+s-1}}{h}\end{aligned} \tag{17.278}$$

together with multiplication and division by h, addition of x_{i+s−1} and its negative to the numerator, and realization that (−1)^(s−j−1) = 1/(−1)^(s−j−1) and $h^{s-1} = \prod_{k=0,\,k\ne j}^{s-1} h$. Definition of a new integration variable ξ as


$$\xi \equiv \frac{x-x_{i+s-1}}{h} \tag{17.279}$$

leads, upon differentiation of both sides as per Eq. (10.1), to

$$d\xi = \frac{d\left(x-x_{i+s-1}\right)}{h} = \frac{dx}{h} \tag{17.280}$$

– and, in terms of asymptotic trends, to

$$\lim_{x\to x_{i+s-1}}\xi = \frac{x_{i+s-1}-x_{i+s-1}}{h} = 0 \tag{17.281}$$

as well as

$$\lim_{x\to x_{i+s}}\xi = \frac{x_{i+s}-x_{i+s-1}}{h} = \frac{h}{h} = 1 \tag{17.282}$$

due to the aforementioned constant spacing, h, between adjacent nodes, x_{i+s−1} and x_{i+s}, of x. Furthermore, one finds that

$$x_{i+s-1}-x_{i+k} = (s-1-k)h \tag{17.283}$$

in general agreement with Eq. (17.269) should k < s − 1, which implies

$$\prod_{\substack{k=0\\k\ne j}}^{s-1}\left(\frac{x-x_{i+s-1}}{h}+\frac{x_{i+s-1}-x_{i+k}}{h}\right) = \prod_{\substack{k=0\\k\ne j}}^{s-1}\left(\xi+\frac{(s-1-k)h}{h}\right) = \prod_{\substack{k=0\\k\ne j}}^{s-1}\left(\xi+s-k-1\right) = \prod_{\substack{l=0\\l\ne s-j-1}}^{s-1}\left(\xi+l\right) \tag{17.284}$$

after coupling with Eq. (17.279) – and intermediate change from (dummy) variable k to l, defined as

$$l \equiv s-k-1; \tag{17.285}$$

reformulation of Eq. (17.278) is now in order, according to

$$y_{i+s} + (-1)\,y_{i+s-1} = y_{i+s}-y_{i+s-1} = h\sum_{j=0}^{s-1}\frac{f\left\{x_{i+j},y_{i+j}\right\}}{(-1)^{s-j-1}\,j!\,(s-j-1)!}\int_0^1\prod_{\substack{l=0\\l\ne s-j-1}}^{s-1}\left(\xi+l\right)d\xi = h\sum_{j=0}^{s-1}\frac{(-1)^{s-j-1}}{j!\,(s-j-1)!}\,f\left\{x_{i+j},y_{i+j}\right\}\int_0^1\prod_{\substack{l=0\\l\ne s-j-1}}^{s-1}\left(\xi+l\right)d\xi \tag{17.286}$$

with the aid of Eqs. (17.280)–(17.282) and (17.284). Inspection of Eq. (17.286) vis-à-vis with Eq. (17.265) enforces

$$a_{s-1} = -1 \tag{17.287}$$

Numerical Approaches to Integration

and

\[
a_{s-j} = 0;\ j = 2, 3, \ldots, s, \tag{17.288}
\]

as promptly grasped in Table 17.2 for a_{s−j} when j = 1, 2, …, s; as well as

\[
b_s = 0, \tag{17.289}
\]

complemented with

\[
b_{s-j-1} = \frac{(-1)^{j}}{j!\,(s-j-1)!} \int_{0}^{1} \prod_{\substack{k=0 \\ k \neq j}}^{s-1} \left( k + \xi \right) d\xi;\ j = 0, 1, \ldots, s-1 \tag{17.290}
\]

– obtained after recovering the original variable k en lieu of l as per Eq. (17.285), and proceeding likewise with regard to j outside the integral sign, i.e. replacing s − j − 1 by j (and vice versa). For each s, Eqs. (17.289) and (17.290) will generate the corresponding b's in Table 17.2, associated with j = 0, 1, …, s at a time. These nominal methods were initially designed by John C. Adams to numerically solve a differential equation simulating capillary action, as previously proposed by Francis Bashforth in 1883. Adams and Moulton's methods constitute the implicit counterparts of Adams and Bashforth's methods – and hold the same values for the a's, but not for the b's, as per inspection of Table 17.3. The procedure to calculate the b's in this table is similar to that developed for Adams and Bashforth's method; however, the interpolating polynomial uses not only x_{i−1}, x_{i−2}, …, x_{i−s}, but also x_i itself. Therefore, the result looks like

\[
b_{s-j} = \frac{(-1)^{j}}{j!\,(s-j)!} \int_{0}^{1} \prod_{\substack{k=0 \\ k \neq j}}^{s} \left( k - 1 + \xi \right) d\xi;\ j = 0, 1, \ldots, s, \tag{17.291}
\]

which may be obtained directly from Eq. (17.290) after consistent replacement of s − 1 by s, and thus of k by k − 1, duly based on Eq. (17.285). Even though this method was due solely to John C. Adams, the name of Forest R. Moulton became associated thereto, since he was the first to realize its useful combination in tandem with Adams and Bashforth's method – in the form of a predictor/corrector pair.

Table 17.3 Characteristic coefficients, a_{s−j} (j = 1, 2, …, s) and b_{s−j} (j = 0, 1, …, s), of Adams and Moulton's linear s-step (implicit) methods of integration of ordinary differential equations.

s    a_{s−1}  a_{s−2}  a_{s−3}  a_{s−4}  |  b_s      b_{s−1}  b_{s−2}    b_{s−3}  b_{s−4}
1    −1                                  |  1/2      1/2
2    −1       0                          |  5/12     2/3      −1/12
3    −1       0        0                 |  3/8      19/24    −5/24      1/24
4    −1       0        0        0        |  251/720  646/720  −264/720   106/720  −19/720
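Eq. (17.291) lends itself to exact evaluation with rational arithmetic; the short sketch below (function names are illustrative, not from the text) reproduces the b's of Table 17.3:

```python
from fractions import Fraction
from math import factorial

def poly_mul(p, q):
    # multiply two polynomials stored as coefficient lists (lowest degree first)
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def adams_moulton_b(s):
    # characteristic coefficients [b_s, b_{s-1}, ..., b_0] as per Eq. (17.291)
    bs = []
    for j in range(s + 1):
        p = [Fraction(1)]                      # product over k != j of (k - 1 + xi)
        for k in range(s + 1):
            if k != j:
                p = poly_mul(p, [Fraction(k - 1), Fraction(1)])
        integral = sum(c / (n + 1) for n, c in enumerate(p))   # integral over [0, 1]
        bs.append(Fraction((-1) ** j, factorial(j) * factorial(s - j)) * integral)
    return bs

print(adams_moulton_b(1))  # [1/2, 1/2] -> trapezoidal rule
print(adams_moulton_b(3))  # [3/8, 19/24, -5/24, 1/24], third row of Table 17.3
```

The call with s = 4 likewise returns 251/720, 646/720, −264/720, 106/720, and −19/720, in agreement with the last row of the table.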


In attempts to ascertain the error incurred by either type of method, one should remember that the original basis was interpolation via Lagrange's polynomials following Eq. (17.267) – i.e. f{x,y} was actually replaced by polynomial P_{s−1}{x,y}, with degree s − 1, laid out as in Eq. (17.272). After defining the error of the associated approximation, ε_{s−1}{x,y}, by

\[
\varepsilon_{s-1}\{x,y\} \equiv f\{x,y\} - P_{s-1}\{x,y\}, \tag{17.292}
\]

one realizes that ε_{s−1}{x,y} may putatively be written as

\[
\varepsilon_{s-1}\{x,y\} = \prod_{k=0}^{s-1} \left( x - x_{i+k} \right) g\{x,y\} \tag{17.293}
\]

– because the error is nil at x = x_{i+j} (j = 0, 1, …, s − 1) by virtue of Eq. (17.266), with g{x,y} denoting a function (still to be determined) that takes non-nil values at x ≠ x_{i+j}. One may now construct an auxiliary function, Ω{x,y,ξ}, according to

\[
\Omega\{x,y,\xi\} \equiv f\{\xi,y\} - P_{s-1}\{\xi,y\} - \prod_{k=0}^{s-1} \left( \xi - x_{i+k} \right) g\{x,y\}; \tag{17.294}
\]

besides being nil at ξ = x_{i+k} (k = 0, 1, 2, …, s − 1) – due to Eq. (17.266) encompassing the first two terms, and ξ − x_{i+k} = 0 encompassing the generic factor of the extended product – one realizes that Ω{x,y,ξ} is nil also at ξ = x, as per replacement of the resulting f{ξ,y}|_{ξ=x} − P_{s−1}{ξ,y}|_{ξ=x} by ε_{s−1}{x,y} given by Eq. (17.292), which may in turn be replaced by ∏_{k=0}^{s−1} (x − x_{i+k}) g{x,y} as per Eq. (17.293) – so Ω{x,y,ξ} holds s + 1 zeros in all. Inspection of Eqs. (17.292)–(17.294) indicates that Ω{x,y,ξ} is a continuous function; according to Rolle's theorem as conveyed by Eq. (10.269) for f{a} = f{b}, ∂Ω{x,y,ξ}/∂ξ necessarily holds (at least) one root between each pair of the aforementioned zeros of Ω{x,y,ξ} – thus summing up s zeros for such a derivative. A second application of Rolle's theorem, between every two consecutive zeros of ∂Ω{x,y,ξ}/∂ξ, guarantees existence of at least s − 1 zeros for ∂²Ω{x,y,ξ}/∂ξ² – and this process may be carried out until realizing that ∂^sΩ{x,y,ξ}/∂ξ^s possesses at least one zero, say, ξ = ζ, within (the original) interval [x_i, x_{i+s−1}], i.e.

\[
\left. \frac{\partial^{s} \Omega\{x,y,\xi\}}{\partial \xi^{s}} \right|_{\xi=\zeta} = 0 \tag{17.295}
\]

On the other hand, the sth-order derivative of Ω after Eq. (17.294) looks like

\[
\frac{\partial^{s} \Omega\{x,y,\xi\}}{\partial \xi^{s}} = \frac{\partial^{s} f\{\xi,y\}}{\partial \xi^{s}} - \frac{\partial^{s} P_{s-1}\{\xi,y\}}{\partial \xi^{s}} - g\{x,y\}\, \frac{\partial^{s}}{\partial \xi^{s}} \prod_{k=0}^{s-1} \left( \xi - x_{i+k} \right), \tag{17.296}
\]

following direct application of Eqs. (10.106) and (10.120); this simplifies to

\[
\frac{\partial^{s} \Omega\{x,y,\xi\}}{\partial \xi^{s}} = \frac{\partial^{s} f\{\xi,y\}}{\partial \xi^{s}} - 0 - g\{x,y\}\, s! = \frac{\partial^{s} f\{\xi,y\}}{\partial \xi^{s}} - s!\, g\{x,y\} \tag{17.297}
\]

– because P_{s−1} having degree s − 1 on ξ enforces ∂^s P_{s−1}/∂ξ^s = 0, while ∏_{k=0}^{s−1} (ξ − x_{i+k}) represents a polynomial with ξ^s as highest exponent term, holding a unit coefficient as well. When ξ is set equal to ζ, Eq. (17.297) becomes

\[
\left. \frac{\partial^{s} \Omega\{x,y,\xi\}}{\partial \xi^{s}} \right|_{\xi=\zeta} = \left. \frac{\partial^{s} f\{\xi,y\}}{\partial \xi^{s}} \right|_{\xi=\zeta} - s!\, g\{x,y\}, \tag{17.298}
\]


so insertion of Eq. (17.295) yields

\[
\left. \frac{\partial^{s} f\{\xi,y\}}{\partial \xi^{s}} \right|_{\xi=\zeta} - s!\, g\{x,y\} = 0 \tag{17.299}
\]

Function g{x,y} will accordingly be defined by

\[
g\{x,y\} = \frac{1}{s!} \left. \frac{\partial^{s} f\{\xi,y\}}{\partial \xi^{s}} \right|_{\xi=\zeta}, \tag{17.300}
\]

as a direct consequence of Eq. (17.299); combination with Eq. (17.300) transforms Eq. (17.293) to

\[
\varepsilon_{s-1}\{x,y\} = \frac{1}{s!} \left. \frac{\partial^{s} f\{\xi,y\}}{\partial \xi^{s}} \right|_{\xi=\zeta} \prod_{k=0}^{s-1} \left( x - x_{i+k} \right), \tag{17.301}
\]

where x_i < ζ < x_{i+s−1} as seen above. On the other hand,

\[
x_i < x < x_{i+s-1} \tag{17.302}
\]

spans the interval under scrutiny, which implies

\[
-kh = x_i - x_{i+k} < x - x_{i+k} < x_{i+s-1} - x_{i+k} = (s-1-k)\,h \tag{17.303}
\]

– obtained after subtracting x_{i+k} from all sides, and recalling Eq. (17.269) with 0 ≤ k ≤ s − 1. Once the absolute value is taken of all sides, Eq. (17.303) turns to

\[
0 < \left| x - x_{i+k} \right| < \max\{kh,\, (s-1-k)\,h\} = h \max\{k,\, s-1-k\},
\]

(17.304)

so Eq. (17.301) may be redone as

\[
\left| \varepsilon_{s-1}\{x,y\} \right| < \frac{h^{s}}{s!} \left| \left. \frac{\partial^{s} f\{\xi,y\}}{\partial \xi^{s}} \right|_{\xi=\zeta} \right| \prod_{k=0}^{s-1} \max\{k,\, s-1-k\} \tag{17.305}
\]

[…]

For explicit s-stage Runge and Kutta's methods of order of accuracy p ≥ 5, the minimum number of stages obeys s_min > p (see Table 17.4). This is one strong reason for the popularity of the fourth-order method – because it exhibits the highest p for a given number of stages, and associated number of computer evaluations. For an implicit s-stage Runge and Kutta's method, it has been found that p ≤ 2s – so a higher efficiency becomes apparent, yet at the expense of much more complex calculations at each stage. In attempts to derive the aforementioned Runge and Kutta's explicit fourth-order method, one should resort to Eq. (17.333) rewritten for s = 4, viz.

\[
y_{i+1} = y_i + h \left( b_1 \kappa_1 + b_2 \kappa_2 + b_3 \kappa_3 + b_4 \kappa_4 \right), \tag{17.342}
\]

where

\[
\kappa_1 \equiv f\{x_i, y_i\}, \tag{17.343}
\]
\[
\kappa_2 \equiv f\{x_i + c_2 h,\ y_i + h\, a_{2,1} \kappa_1\}, \tag{17.344}
\]
\[
\kappa_3 \equiv f\{x_i + c_3 h,\ y_i + h \left( a_{3,1} \kappa_1 + a_{3,2} \kappa_2 \right)\}, \tag{17.345}
\]

and

\[
\kappa_4 \equiv f\{x_i + c_4 h,\ y_i + h \left( a_{4,1} \kappa_1 + a_{4,2} \kappa_2 + a_{4,3} \kappa_3 \right)\} \tag{17.346}
\]

– in full agreement with Eqs. (17.334), (17.339), and (17.341); therefore, a total of 13 coefficients are to be determined, i.e. a_{2,1}, a_{3,1}, a_{3,2}, a_{4,1}, a_{4,2}, a_{4,3}, b_1, b_2, b_3, b_4, c_2, c_3, and c_4. According to Eq. (17.340),

\[
c_2 = a_{2,1}, \tag{17.347}
\]
\[
c_3 = a_{3,1} + a_{3,2}, \tag{17.348}
\]

Table 17.4 List of order of accuracy, p, and associated minimum number of stages, s_min, in Runge and Kutta's methods.

Order, p                          1  2  3  4  5  6  7  8   9                10
Minimum number of stages, s_min   1  2  3  4  6  7  9  11  12 ≤ s_min ≤ 17  13 ≤ s_min ≤ 17


and

\[
c_4 = a_{4,1} + a_{4,2} + a_{4,3} \tag{17.349}
\]

– where Eq. (17.339) was already taken into account; hence, the overall number of unknowns has been reduced to 10. In order to produce a balanced staging within each step, one may postulate

\[
c_2 = \frac{1}{2} \tag{17.350}
\]

that enforces

\[
a_{2,1} = \frac{1}{2} \tag{17.351}
\]

as per Eq. (17.347); as well as

\[
c_3 = \frac{1}{2} \tag{17.352}
\]

and

\[
c_4 = 1 \tag{17.353}
\]

One may further postulate

\[
a_{3,2} = \frac{1}{2}, \tag{17.354}
\]

which implies

\[
a_{3,1} = 0 \tag{17.355}
\]

upon combination with Eqs. (17.348), (17.352), and (17.354); and finally set

\[
a_{4,2} = 0 \tag{17.356}
\]

and

\[
a_{4,3} = 1, \tag{17.357}
\]

which permit

\[
a_{4,1} = 0 \tag{17.358}
\]

be obtained from Eqs. (17.349) and (17.353). Although being the result of an arbitrary choice, the settings a_{3,1} = a_{4,1} = a_{4,2} = 0 greatly simplify derivation of the fourth-order method under scrutiny – since κ_3 ≡ κ_3{κ_2} and κ_4 ≡ κ_4{κ_3}, besides κ_2 ≡ κ_2{κ_1}, as per Eqs. (17.344)–(17.346). The settings labeled as Eqs. (17.350), (17.352)–(17.354), (17.356), and (17.357) are thus far from unique – yet some such choice is required, otherwise the problem would remain undetermined. The outstanding 4 degrees of freedom, associated with b_1, b_2, b_3, and b_4, can now be removed based on Taylor's expansion, by enforcing a nil error up to terms in h⁵ – thus generating exactly the four extra equations necessary (pertaining to terms in h, h², h³, and h⁴). As will be seen shortly, a_{j,j−1} = c_j – so all κ_j's end up sharing the general form

\[
\kappa_j \equiv f\{x + c_j h,\ y + c_j h\, \kappa_{j-1}\};\ j = 1, 2, 3, 4 \tag{17.359}
\]


for the fourth-order Runge and Kutta's method – with Eq. (17.343) conveying a trivial case; hence, Taylor's bivariate linear expansion of κ_j should read

\[
\kappa_j \approx f\{x,y\} + \frac{\partial f\{x,y\}}{\partial x} \left( x + c_j h - x \right) + \frac{\partial f\{x,y\}}{\partial y} \left( y + c_j h \kappa_{j-1} - y \right) = f\{x,y\} + c_j h \left( \frac{\partial f\{x,y\}}{\partial x} + \kappa_{j-1} \frac{\partial f\{x,y\}}{\partial y} \right), \tag{17.360}
\]

along with cancellation of symmetrical terms, followed by factoring out of c_j h. For j = 2, one obtains

\[
\kappa_2 = f\{x,y\} + c_2 h \left( \frac{\partial f\{x,y\}}{\partial x} + \kappa_1 \frac{\partial f\{x,y\}}{\partial y} \right) = f\{x,y\} + c_2 h \left( \frac{\partial f\{x,y\}}{\partial x} + f\{x,y\} \frac{\partial f\{x,y\}}{\partial y} \right) = f\{x,y\} + c_2 h \left( \frac{\partial f\{x,y\}}{\partial x} + \frac{\partial f\{x,y\}}{\partial y} \frac{dy}{dx} \right) \tag{17.361}
\]

from Eqs. (17.320) and (17.360), where Eq. (17.232) was meanwhile taken into account – or else

\[
\kappa_2 = f\{x,y\} + c_2 h \frac{df\{x,y\}}{dx} = f\{x,y\} + c_2 h \frac{d\kappa_1}{dx} \tag{17.362}
\]

as per the chain partial differentiation rule entailed by Eq. (17.310), coupled again with the definition of κ_1 as per Eq. (17.320). For the subsequent stages, one may use Eqs. (17.320) and (17.361) as template to write

\[
\kappa_j = f\{x,y\} + c_j h \left( \left. \frac{\partial \kappa_{j-1}}{\partial x} \right|_{x,y} + \left. \frac{\partial \kappa_{j-1}}{\partial y} \right|_{x,y} f\{x + c_{j-1} h,\ y + c_{j-1} h \kappa_{j-2}\} \right), \tag{17.363}
\]

with the aid also of Eq. (17.359); under this model, y{x} is assumed to behave linearly, so dy/dx = f should remain constant within interval [x + c_{j−1}h, x + c_j h]. By the same token, one may claim that

\[
f\{x + c_{j-1} h,\ y + c_{j-1} h \kappa_{j-2}\} \approx \frac{dy}{dx} \tag{17.364}
\]

– which supports transformation of Eq. (17.363) to

\[
\kappa_j \approx f\{x,y\} + c_j h \left( \left. \frac{\partial \kappa_{j-1}}{\partial x} \right|_{x,y} + \left. \frac{\partial \kappa_{j-1}}{\partial y} \right|_{x,y} \frac{dy}{dx} \right); \tag{17.365}
\]

Eq. (17.365) may finally be reformulated as

\[
\kappa_j = f\{x,y\} + c_j h \frac{d\kappa_{j-1}}{dx};\ j = 2, 3, 4, \tag{17.366}
\]


again at the expense of the chain partial differentiation rule – while providing a simpler notation for Taylor's bivariate expansion of κ_j. One should now depart from Eqs. (17.350) and (17.351) to rewrite Eq. (17.344) as

\[
\kappa_2 = f\left\{ x + \frac{h}{2},\ y + \frac{h}{2} \kappa_1 \right\}; \tag{17.367}
\]

since a_{2,1} = c_2 in view of Eq. (17.347), one may resort to Eq. (17.359) or, equivalently, write

\[
\kappa_2 = f\{x,y\} + \frac{h}{2} \frac{d\kappa_1}{dx} = f\{x,y\} + \frac{h}{2} \frac{df\{x,y\}}{dx} \tag{17.368}
\]

based directly on Eq. (17.366), after taking Eqs. (17.343) and (17.351) on board – while simplifying notation to merely x instead of x_i, as well as y en lieu of y_i. By the same token, one may write

\[
\kappa_3 = f\left\{ x + \frac{h}{2},\ y + \frac{h}{2} \kappa_2 \right\}, \tag{17.369}
\]

upon combination of Eqs. (17.345), (17.352), (17.354), and (17.355); after resorting to Eq. (17.366), a_{3,2} = c_3 as per Eqs. (17.352) and (17.354) assures validity of Eq. (17.359), so one finds

\[
\kappa_3 = f\{x,y\} + \frac{h}{2} \frac{d\kappa_2}{dx} = f\{x,y\} + \frac{h}{2} \frac{d}{dx}\left( f\{x,y\} + \frac{h}{2} \frac{df\{x,y\}}{dx} \right) = f\{x,y\} + \frac{h}{2} \frac{df\{x,y\}}{dx} + \frac{h^2}{4} \frac{d^2 f\{x,y\}}{dx^2} \tag{17.370}
\]

with the aid of Eq. (17.368), complemented by the rule of differentiation of a sum of functions. With regard to κ_4, one may retrieve Eq. (17.346) as

\[
\kappa_4 = f\{x + h,\ y + h \kappa_3\} \tag{17.371}
\]

in view of Eqs. (17.353) and (17.356)–(17.358), now with a_{4,3} = c_4 for validity of Eq. (17.359) as departure point; inspection of Eqs. (17.367), (17.369), and (17.371) confirms the functional form of Eq. (17.359). One may thus jump immediately to Eq. (17.365), and state

\[
\kappa_4 = f\{x,y\} + h \frac{d\kappa_3}{dx} = f\{x,y\} + h \frac{d}{dx}\left( f\{x,y\} + \frac{h}{2} \frac{df\{x,y\}}{dx} + \frac{h^2}{4} \frac{d^2 f\{x,y\}}{dx^2} \right) = f\{x,y\} + h \frac{df\{x,y\}}{dx} + \frac{h^2}{2} \frac{d^2 f\{x,y\}}{dx^2} + \frac{h^3}{4} \frac{d^3 f\{x,y\}}{dx^3} \tag{17.372}
\]

with the aid of Eqs. (17.353) and (17.370), coupled with Eq. (10.122) and the definition of third-order derivative. One is finally in a position to revisit Eq. (17.342) together with Eqs. (17.320), (17.368), (17.370), and (17.372), and accordingly obtain

\[
y_{i+1} = y_i + h \left( b_1 f\{x,y\} + b_2 \left( f\{x,y\} + \frac{h}{2} \frac{df\{x,y\}}{dx} \right) + b_3 \left( f\{x,y\} + \frac{h}{2} \frac{df\{x,y\}}{dx} + \frac{h^2}{4} \frac{d^2 f\{x,y\}}{dx^2} \right) + b_4 \left( f\{x,y\} + h \frac{df\{x,y\}}{dx} + \frac{h^2}{2} \frac{d^2 f\{x,y\}}{dx^2} + \frac{h^3}{4} \frac{d^3 f\{x,y\}}{dx^3} \right) \right); \tag{17.373}
\]

condensation of terms alike yields

\[
\begin{aligned}
y_{i+1} - y_i &= h \left( \left( b_1 + b_2 + b_3 + b_4 \right) f\{x,y\} + \left( \frac{b_2}{2} + \frac{b_3}{2} + b_4 \right) h \frac{df\{x,y\}}{dx} + \left( \frac{b_3}{4} + \frac{b_4}{2} \right) h^2 \frac{d^2 f\{x,y\}}{dx^2} + \frac{b_4}{4} h^3 \frac{d^3 f\{x,y\}}{dx^3} \right) \\
&= \left( b_1 + b_2 + b_3 + b_4 \right) h f\{x,y\} + \left( \frac{b_2}{2} + \frac{b_3}{2} + b_4 \right) h^2 \frac{df\{x,y\}}{dx} + \left( \frac{b_3}{4} + \frac{b_4}{2} \right) h^3 \frac{d^2 f\{x,y\}}{dx^2} + \frac{b_4}{4} h^4 \frac{d^3 f\{x,y\}}{dx^3},
\end{aligned} \tag{17.374}
\]

along with elimination of the outer parenthesis. On the other hand, one may express y{x + h} via Taylor's expansion around y{x}, according to

\[
y\{x+h\} = y\{x\} + \frac{dy}{dx} h + \frac{d^2 y}{dx^2} \frac{h^2}{2!} + \frac{d^3 y}{dx^3} \frac{h^3}{3!} + \frac{d^4 y}{dx^4} \frac{h^4}{4!} + O\{h^5\}, \tag{17.375}
\]

with truncation after the fourth-order term; recalling Eq. (17.232) and simplifying notation, Eq. (17.375) becomes

\[
y\{x+h\} - y\{x\} = h f\{x,y\} + \frac{1}{2} h^2 \frac{df\{x,y\}}{dx} + \frac{1}{6} h^3 \frac{d^2 f\{x,y\}}{dx^2} + \frac{1}{24} h^4 \frac{d^3 f\{x,y\}}{dx^3}, \tag{17.376}
\]

where the definition of factorial was also taken into account. Unlike previously done with linear approximants being utilized within each of four sequential stages, y{x + h} is now undergoing expansion about y{x} only (i.e. the lower bound of the whole interval constituted by said four stages); this is why it is more reasonable to increase the degree of accuracy of Taylor's expansion, by including also second- and third-order derivatives. Since y_{i+1} and y{x + h} are equivalent to each other, as well as y_i and y{x}, they may be eliminated between Eqs. (17.374) and (17.376) to consequently enforce

\[
b_1 + b_2 + b_3 + b_4 = 1, \tag{17.377}
\]
\[
\frac{b_2}{2} + \frac{b_3}{2} + b_4 = \frac{1}{2}, \tag{17.378}
\]
\[
\frac{b_3}{4} + \frac{b_4}{2} = \frac{1}{6}, \tag{17.379}
\]

and

\[
\frac{b_4}{4} = \frac{1}{24} \tag{17.380}
\]


– thus guaranteeing equality of the corresponding right-hand sides, irrespective of h, f{x,y}, df{x,y}/dx, d²f{x,y}/dx², and d³f{x,y}/dx³. Equation (17.380) trivially yields

\[
b_4 = \frac{1}{6}, \tag{17.381}
\]

which can be inserted in Eq. (17.379) to get

\[
\frac{b_3}{4} + \frac{1}{2}\,\frac{1}{6} = \frac{1}{6}; \tag{17.382}
\]

Eq. (17.382) is equivalent to

\[
b_3 = \frac{\dfrac{1}{6} - \dfrac{1}{12}}{\dfrac{1}{4}} = \frac{\dfrac{2-1}{12}}{\dfrac{1}{4}} = \frac{4}{12} = \frac{1}{3}, \tag{17.383}
\]

upon isolation of b_3. Equations (17.381) and (17.383) may then be inserted in Eq. (17.378), to convert it to

\[
\frac{b_2}{2} + \frac{1}{2}\,\frac{1}{3} + \frac{1}{6} = \frac{1}{2} \tag{17.384}
\]

that generates

\[
b_2 = \frac{\dfrac{1}{2} - \dfrac{1}{6} - \dfrac{1}{6}}{\dfrac{1}{2}} = \frac{\dfrac{3-1-1}{6}}{\dfrac{1}{2}} = \frac{2}{6} = \frac{1}{3}; \tag{17.385}
\]

while combination with Eqs. (17.381), (17.383), and (17.385) transforms Eq. (17.377) to

\[
b_1 + \frac{1}{3} + \frac{1}{3} + \frac{1}{6} = 1 \tag{17.386}
\]

or, equivalently,

\[
b_1 = 1 - \frac{1}{3} - \frac{1}{3} - \frac{1}{6} = \frac{6-2-2-1}{6} = \frac{1}{6} \tag{17.387}
\]

After bringing Eqs. (17.339), (17.341), (17.350)–(17.358), (17.381), (17.383), (17.385), and (17.387) together, one obtains the associated Butcher's tableau, viz.

\[
B_4 = \begin{array}{c|cccc}
0 & 0 & 0 & 0 & 0 \\
\frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\
\frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 \\
1 & 0 & 0 & 1 & 0 \\
\hline
 & \frac{1}{6} & \frac{1}{3} & \frac{1}{3} & \frac{1}{6}
\end{array}, \tag{17.388}
\]

using Eq. (17.335) as template; therefore, Eq. (17.342) takes the final form

\[
y_{i+1} = y_i + h \left( \frac{\kappa_1}{6} + \frac{\kappa_2}{3} + \frac{\kappa_3}{3} + \frac{\kappa_4}{6} \right), \tag{17.389}
\]

upon direct utilization of Eqs. (17.381), (17.383), (17.385), and (17.387).
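Eqs. (17.343)–(17.346) and (17.389) translate directly into code; a minimal sketch follows, where the test problem dy/dx = y is an illustrative choice, not taken from the text:

```python
import math

def rk4_step(f, x, y, h):
    # one step of the classical fourth-order Runge-Kutta method,
    # Eqs. (17.343)-(17.346) combined as per Eq. (17.389)
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 / 6 + k2 / 3 + k3 / 3 + k4 / 6)

# integrate dy/dx = y from y{0} = 1 up to x = 1; the exact solution is e^x
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda x, y: y, x, y, h)
    x += h
print(abs(y - math.e))  # error well below 1e-5, consistent with fourth-order accuracy
```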


There are countless possibilities for Runge and Kutta's fourth-order methods, because 10 degrees of freedom existed from the very beginning, after consideration of Eqs. (17.347)–(17.349); if one sets instead

\[
c_2 = \frac{1}{3} \tag{17.390}
\]

and

\[
c_3 = \frac{2}{3}, \tag{17.391}
\]

while retaining Eq. (17.353) for c_4 – and further sets

\[
a_{3,2} = 1, \tag{17.392}
\]

while retaining a_{4,2} and a_{4,3} as per Eqs. (17.356) and (17.357), then one obtains

\[
B_{3/8} = \begin{array}{c|cccc}
0 & 0 & 0 & 0 & 0 \\
\frac{1}{3} & \frac{1}{3} & 0 & 0 & 0 \\
\frac{2}{3} & -\frac{1}{3} & 1 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 \\
\hline
 & \frac{1}{8} & \frac{3}{8} & \frac{3}{8} & \frac{1}{8}
\end{array} \tag{17.393}
\]

as alternative Butcher's tableau. Note that Eqs. (17.343)–(17.346) still apply, but not Eqs. (17.377)–(17.380) – so the b's are obtained via a distinct set of four algebraic equations, thus justifying the differences between B_4 and B_{3/8} as given by Eqs. (17.388) and (17.393), respectively. Equation (17.393) describes the so-called 3/8 rule – which holds as primary advantage the fact that almost all error coefficients are smaller; however, it requires more floating-point operations, because κ_j ≡ κ_j{κ_{j−1}, κ_{j−2}, …, κ_1} – unlike Eqs. (17.367), (17.369), and (17.371), where κ_j ≡ κ_j{κ_{j−1}}.
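Any explicit tableau of this family can be exercised with a single generic stepper; the sketch below (illustrative names, not from the text) encodes B_4 of Eq. (17.388), and swapping in the 3/8-rule coefficients of Eq. (17.393) merely requires a different A, b, and c:

```python
import math

def rk_explicit_step(f, x, y, h, A, b, c):
    # one step of a generic explicit Runge-Kutta method, given its Butcher tableau (A, b, c)
    kappa = []
    for j in range(len(b)):
        yj = y + h * sum(A[j][m] * kappa[m] for m in range(j))
        kappa.append(f(x + c[j] * h, yj))
    return y + h * sum(bj * kj for bj, kj in zip(b, kappa))

# tableau B4 of Eq. (17.388)
A4 = [[0, 0, 0, 0], [1/2, 0, 0, 0], [0, 1/2, 0, 0], [0, 0, 1, 0]]
b4 = [1/6, 1/3, 1/3, 1/6]
c4 = [0, 1/2, 1/2, 1]

y1 = rk_explicit_step(lambda x, y: y, 0.0, 1.0, 0.1, A4, b4, c4)
print(abs(y1 - math.exp(0.1)))  # one step of dy/dx = y; error on the order of 1e-7
```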

17.2.4 Integral Versus Differential Equation

As pointed out above, there is a close connection between numerical calculation of (definite) integrals and numerical integration of (first-order ordinary) differential equations; in fact, one may retrieve Eq. (17.232), separate variables, and (symbolically) integrate between x and x + h – and thus between y{x} and y{x + h} – to get

\[
\int_{y\{x\}}^{y\{x+h\}} d\zeta = \int_{x}^{x+h} f\{\xi, y\{\xi\}\}\, d\xi, \tag{17.394}
\]

or, equivalently,

\[
y\{x+h\} - y\{x\} = \left. \zeta \right|_{y\{x\}}^{y\{x+h\}} = \int_{x}^{x+h} f\{\xi, y\{\xi\}\}\, d\xi \tag{17.395}
\]

with the aid of Eq. (11.160).


A number of quadrature methods to numerically calculate the outstanding integral in Eq. (17.395) have been reviewed above; for instance, the initial endpoint rectangular method labeled as Eq. (17.11) yields

\[
\int_{x}^{x+h} f\{\xi, y\{\xi\}\}\, d\xi = \left( x + h - x \right) f\{x, y\{x\}\} = h f\{x, y\{x\}\} \tag{17.396}
\]

Inspection of Eq. (17.396) vis-à-vis with Eq. (17.246) indicates coincidence between the initial endpoint method of quadrature and Euler's method of solution of ordinary differential equations. Insertion of Eq. (17.396) transforms Eq. (17.395) to

\[
y\{x+h\} - y\{x\} = h f\{x, y\{x\}\} \tag{17.397}
\]

or, equivalently,

\[
y_{i+1} - y_i = h f\{x_i, y_i\} \tag{17.398}
\]

based on the convention of nomenclature chosen so far; on the other hand, s = 1 transforms Eq. (17.265) to

\[
y_{i+1} + \sum_{j=1}^{1} a_{1-j}\, y_{i+1-j} = h \sum_{j=0}^{1} b_{1-j}\, f\{x_{i+1-j}, y_{i+1-j}\} \tag{17.399}
\]

that breaks down to just

\[
y_{i+1} + a_{1-1}\, y_{i+1-1} = h \left( b_{1-0}\, f\{x_{i+1-0}, y_{i+1-0}\} + b_{1-1}\, f\{x_{i+1-1}, y_{i+1-1}\} \right) \tag{17.400}
\]

– or else

\[
y_{i+1} + a_0 y_i = h \left( b_1 f\{x_{i+1}, y_{i+1}\} + b_0 f\{x_i, y_i\} \right) \tag{17.401}
\]

According to the first row of Table 17.2, one finds that a_0 = a_{1−1} = −1, b_1 = b_{1−0} = 0, and b_0 = b_{1−1} = 1 – which, upon insertion in Eq. (17.401), give rise to

\[
y_{i+1} + (-1)\, y_i = y_{i+1} - y_i = h \left( 0 \cdot f\{x_{i+1}, y_{i+1}\} + 1 \cdot f\{x_i, y_i\} \right) = h f\{x_i, y_i\} \tag{17.402}
\]

that coincides with Eq. (17.398); therefore, the 1-step Adams and Bashforth's method degenerates to (single-stage) Euler's method. If, on the other hand, b_1 = 1 and b_2 = 0, then Eq. (17.326) becomes

\[
y\{x+h\} = y\{x\} + 1 \cdot h f\{x,y\} + 0 \cdot h \left( f\{x,y\} + c_2 h \frac{\partial f\{x,y\}}{\partial x} + a_{2,1} h f\{x,y\} \frac{\partial f\{x,y\}}{\partial y} \right) \tag{17.403}
\]

as per the first equality in Eq. (17.320) – which degenerates once more to Euler's method as per Eq. (17.246); however, b_2 c_2 = 0 ≠ ½ and a_{2,1} b_2 = 0 ≠ ½ as requested by Eqs. (17.330) and (17.331), respectively, so the outcome does not exhibit second-order accuracy as conveyed by Eq. (17.328). Consider now the trapezoidal rule as per Eq. (17.68) – again with the hypothesis of identical spacing between consecutive points, x_i and x_{i+1}; one may thus write

\[
\int_{x}^{x+h} f\{\xi, y\{\xi\}\}\, d\xi = \left( x + h - x \right) \left( \frac{1}{2} f\{x, y\{x\}\} + \frac{1}{2} f\{x+h, y\{x+h\}\} \right) = h \left( \frac{1}{2} f\{x, y\{x\}\} + \frac{1}{2} f\{x+h, y\{x+h\}\} \right) \tag{17.404}
\]


After insertion in Eq. (17.401) of the second row of Table 17.3, i.e. a_0 = a_{1−1} = −1, b_1 = b_{1−0} = ½, and b_0 = b_{1−1} = ½, one gets

\[
y_{i+1} + (-1)\, y_i = h \left( \frac{1}{2} f\{x_{i+1}, y_{i+1}\} + \frac{1}{2} f\{x_i, y_i\} \right); \tag{17.405}
\]

since y_{i+1} − y_i matches y{x + h} − y{x}, and consequently ∫_x^{x+h} f{ξ, y{ξ}} dξ due to Eq. (17.395) – corresponding to s = 1 associated with Eq. (17.401) – one concludes from Eqs. (17.404) and (17.405) that the trapezoidal rule of quadrature is but Adams and Moulton's 1-step (implicit) method of integration of an ordinary differential equation. If y_{i+1} in the argument of f{x_{i+1}, y_{i+1}} is replaced by the value of y_i + h f{x_i, y_i} conveyed by Euler's method labeled as Eq. (17.246), then Eq. (17.405) can be transformed to

\[
y_{i+1} = y_i + h \left( \frac{1}{2} f\{x_{i+1},\ y_i + h f\{x_i, y_i\}\} + \frac{1}{2} f\{x_i, y_i\} \right) = y_i + h \left( \frac{1}{2} f\{x_i, y_i\} + \frac{1}{2} f\{x_i + h,\ y_i + h f\{x_i, y_i\}\} \right) \tag{17.406}
\]

– which coincides with Eq. (17.318), after realizing that x_{i+1} = x_i + h = x + h, y_{i+1} = y{x + h}, and y_i = y{x}; therefore, the trapezoidal rule of calculation of definite integrals leads to Runge and Kutta's second-order method of integration of an ordinary differential equation, once combined with Euler's method of quadrature. One may instead revisit Eq. (17.265) with s = 0, viz.

\[
y_{i+0} + \sum_{j=1}^{0} a_{0-j}\, y_{i+0-j} = h \sum_{j=0}^{0} b_{0-j}\, f\{x_{i+0-j}, y_{i+0-j}\}, \tag{17.407}
\]

thus leading to the unique situation portrayed as

\[
y_i + \left. a_{-j}\, y_{i-j} \right|_{j=1} = h\, b_{0-0}\, f\{x_{i+0-0}, y_{i+0-0}\} \tag{17.408}
\]

– which degenerates to

\[
y_i + a_{-1}\, y_{i-1} = h\, b_0\, f\{x_i, y_i\}; \tag{17.409}
\]

when the first row of Table 17.3 is taken into account, i.e. a_{−1} = a_{0−1} = −1 and b_0 = b_{0−0} = 1, Eq. (17.409) turns to

\[
y_i - y_{i-1} = h f\{x_i, y_i\} \tag{17.410}
\]

After replacing dummy (counting) variable i by i + 1 in Eq. (17.410), one gets

\[
y_{i+1} - y_i = h f\{x_{i+1}, y_{i+1}\} \tag{17.411}
\]

that mimics Eq. (17.255) – so Adams and Moulton's 1-step (implicit) method for integration of ordinary differential equations is but Euler's implicit method of quadrature. Recall the midpoint integration rule, labeled as Eq. (17.13); one may revisit it here as

\[
\int_{x}^{x+h} f\{\xi, y\{\xi\}\}\, d\xi = \left( x + h - x \right) f\left\{ \frac{x + x + h}{2} \right\} = h f\left\{ \frac{2x+h}{2} \right\} = h f\left\{ x + \frac{h}{2} \right\} = h f\left\{ x + \frac{h}{2},\ y\left\{ x + \frac{h}{2} \right\} \right\} \tag{17.412}
\]


upon straightforward algebraic rearrangement – and since f{x, y{x}} evaluated at x + h/2 coincides with f{x + h/2, y{x + h/2}}. Based on Taylor's expansion of y{x + h/2} around x, truncated after the linear term, one finds that

\[
y\left\{ x + \frac{h}{2} \right\} \approx y\{x\} + \left. \frac{dy}{dx} \right|_{x} \frac{h}{2}, \tag{17.413}
\]

or else

\[
y\left\{ x + \frac{h}{2} \right\} = y\{x\} + \frac{h}{2} f\{x,y\} \tag{17.414}
\]

in view of Eq. (17.232); insertion of Eq. (17.414) supports transformation of Eq. (17.412) to

\[
\int_{x}^{x+h} f\{\xi, y\{\xi\}\}\, d\xi = h f\left\{ x + \frac{h}{2},\ y\{x\} + \frac{h}{2} f\{x,y\} \right\}, \tag{17.415}
\]

whereas combination with Eq. (17.395) unfolds, in turn,

\[
y\{x+h\} = y\{x\} + h f\left\{ x + \frac{h}{2},\ y\{x\} + \frac{h}{2} f\{x,y\} \right\} \tag{17.416}
\]

– with replacement of h by an equivalent auxiliary variable H ≡ h/2 leading to

\[
y\{x + 2H\} = y\{x\} + 2H f\left\{ x + \frac{1}{2}\, 2H,\ y\{x\} + \frac{1}{2}\, 2H f\{x,y\} \right\} \tag{17.417}
\]

A functional form similar to Eqs. (17.320)–(17.322) has just been delivered by Eq. (17.417) – provided that b_1 = 0 and b_2 = 1 in Eq. (17.322) satisfy Eq. (17.329), besides c_2 = ½ and a_{2,1} = ½ as per Eq. (17.323) satisfying Eqs. (17.330) and (17.331). Therefore, the midpoint integration rule of quadrature coincides with Runge and Kutta's second-order method, described by

\[
B_2^{\#} = \begin{array}{c|cc}
0 & 0 & 0 \\
\frac{1}{2} & \frac{1}{2} & 0 \\
\hline
 & 0 & 1
\end{array} \tag{17.418}
\]

in terms of Butcher's tableau, instead of

\[
B_2 = \begin{array}{c|cc}
0 & 0 & 0 \\
1 & 1 & 0 \\
\hline
 & \frac{1}{2} & \frac{1}{2}
\end{array} \tag{17.419}
\]

associated with Eq. (17.319). One may finally refer to Simpson's rule to write

\[
\int_{x}^{x+h} f\{\xi, y\{\xi\}\}\, d\xi = \frac{x + h - x}{6} \left( f\{x\} + 4 f\left\{ \frac{x + x + h}{2} \right\} + f\{x+h\} \right), \tag{17.420}
\]


based on Eq. (17.124); after lumping terms in x and factoring 6 in, Eq. (17.420) becomes

\[
\int_{x}^{x+h} f\{\xi, y\{\xi\}\}\, d\xi = h \left( \frac{1}{6} f\{x\} + \frac{2}{3} f\left\{ \frac{2x+h}{2} \right\} + \frac{1}{6} f\{x+h\} \right) = h \left( \frac{1}{6} f\{x, y\{x\}\} + \frac{2}{3} f\left\{ x + \frac{h}{2},\ y\left\{ x + \frac{h}{2} \right\} \right\} + \frac{1}{6} f\{x+h, y\{x+h\}\} \right) \tag{17.421}
\]

– once again pending realization that f being explicit on x and y{x} implies f ≡ f{x}, and vice versa. One may now proceed to Taylor's linear expansion of y{x + h/2} and y{x + h} around x, according to Eq. (17.413) and

\[
y\{x+h\} \approx y\{x\} + \left. \frac{dy}{dx} \right|_{x} h, \tag{17.422}
\]

respectively; replacement of dy/dx by f{x,y}, in agreement with Eq. (17.232), leads to Eq. (17.414) and

\[
y\{x+h\} = y\{x\} + h f\{x,y\}, \tag{17.423}
\]

respectively. Insertion of Eqs. (17.414) and (17.423) is then in order, since it allows transformation of Eq. (17.421) to

\[
\int_{x}^{x+h} f\{\xi, y\{\xi\}\}\, d\xi = h \left( \frac{1}{6} f\{x, y\{x\}\} + \frac{2}{3} f\left\{ x + \frac{h}{2},\ y\{x\} + \frac{h}{2} f\{x,y\} \right\} + \frac{1}{6} f\{x + h,\ y\{x\} + h f\{x,y\}\} \right); \tag{17.424}
\]

after recalling Eq. (17.320) that defines κ_1, one may set s = 2 and c_2 = a_{2,1} = ½ in Eq. (17.334) to get

\[
\kappa_2 \equiv f\left\{ x + \frac{1}{2} h,\ y + \frac{1}{2} h \kappa_1 \right\}, \tag{17.425}
\]

with the aid also of Eq. (17.339) to justify a_{2,2} = 0 – while s = 3, c_3 = a_{3,1} = ½, and a_{3,2} = 0, further to a_{3,3} = 0 as per Eq. (17.339), lead to

\[
\kappa_3 \equiv f\left\{ x + \frac{1}{2} h,\ y + \frac{1}{2} h \kappa_1 \right\} = \kappa_2 \tag{17.426}
\]

analogous to Eq. (17.425), and likewise

\[
\kappa_4 \equiv f\{x + h,\ y + h \kappa_1\} \tag{17.427}
\]

can be written based again on Eq. (17.334) for s = 4 and Eq. (17.339), but after setting c_4 = a_{4,1} = 1 and a_{4,2} = a_{4,3} = 0. A final setting of 2b_1 = b_2 = b_3 = 2b_4 = 1/3 permits Eq. (17.333) be rewritten as

\[
y_{i+1} = y_i + h \left( \frac{1}{6} \kappa_1 + \frac{1}{3} \kappa_2 + \frac{1}{3} \kappa_3 + \frac{1}{6} \kappa_4 \right) \tag{17.428}
\]


for s = 4, where insertion of Eq. (17.426) produces

\[
y_{i+1} = y_i + h \left( \frac{1}{6} \kappa_1 + \frac{1}{3} \kappa_2 + \frac{1}{3} \kappa_2 + \frac{1}{6} \kappa_4 \right) = y_i + h \left( \frac{1}{6} \kappa_1 + \frac{2}{3} \kappa_2 + \frac{1}{6} \kappa_4 \right) \tag{17.429}
\]

via collapse of similar terms; since the functional form of Eq. (17.429) matches that of Eq. (17.424), with the aid of Eqs. (17.320) and (17.425)–(17.427), one concludes that Simpson's rule of quadrature is equivalent to the modified fourth-order method – characterized by

\[
B_4^{\#} = \begin{array}{c|cccc}
0 & 0 & 0 & 0 & 0 \\
\frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\
\frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 \\
\hline
 & \frac{1}{6} & \frac{2}{3} & 0 & \frac{1}{6}
\end{array} \tag{17.430}
\]

for Butcher's tableau, where Eq. (17.341) was also brought on board. This tableau is distinct from that associated with the standard fourth-order Runge and Kutta's method as per Eq. (17.388) – in terms of b and A, but identical in terms of c; however, it still belongs to the very same family of numerical methods, designed for solution of ordinary differential equations.


Part 3 Basic Concepts of Statistics

Although this may seem a paradox, all exact science is dominated by the idea of approximation. Bertrand Russell


18 Continuous Probability Functions

A random variable is used to express the results of a random experiment – where a sample, of a given size, is taken as representative of an original population. A random variable is often denoted by a capital letter, say, X, whereas a specific value taken thereby is denoted by the corresponding lowercase letter, x; the probability of occurrence of such a value is, in turn, denoted by P. Continuous distributions of probability are of particular interest in engineering practice – and may appear as a probability density function, say, D{X}, or in its cumulative version, i.e. P{X < x}; in other words,

\[
\left. D\{X\} \right|_{x} \equiv \frac{dP\{x < X < x + dx\}}{dx} \tag{18.1}
\]

or, equivalently,

\[
D\{X\} = \frac{dP}{dx} \tag{18.2}
\]

will serve as definition of D{X} – while

\[
P\{X < x\} = \int_{-\infty}^{x} dP = \int_{-\infty}^{x} D\{x\}\, dx \tag{18.3}
\]

and

\[
P\{-\infty < X < \infty\} = \int_{-\infty}^{\infty} D\{x\}\, dx = 1 \tag{18.4}
\]

[…]

\[
P\{X \geq x\} = 1 - P\{X < x\}, \tag{18.6}
\]

at the expense of Eqs. (18.3) and (18.4). Gauss' normal distribution has become the universal reference for classical statistics – not only due to its intrinsic adherence to physical (experimental) evidence, but also because it represents the asymptotic limit for all other continuous distributions. Besides the directly related lognormal distribution, Student's t-distribution, the χ²-distribution, and Fisher's F-distribution also play quite relevant roles in practice – and may be mathematically derived from said normal distribution, under specific sets of assumptions. Before addressing each such distribution in detail, the most common statistical descriptors will be introduced – to facilitate understanding and analysis thereafter.

18.1 Basic Statistical Descriptors

There are several relevant statistical descriptors of a random variable; the simplest is its expected value – denoted as E{X}, and defined as

\[
\mathrm{Avg}\{X\} \equiv E\{X\} \equiv \mu \equiv \int_{-\infty}^{\infty} x D\{x\}\, dx; \tag{18.7}
\]

hence, E{X} amounts, in essence, to a location indicator. After retrieving Eq. (18.4), one may redo Eq. (18.7) as

\[
E\{X\} = \int_{-\infty}^{\infty} x D\{x\}\, dx = \frac{\displaystyle \int_{-\infty}^{\infty} x D\{x\}\, dx}{1} = \frac{\displaystyle \int_{-\infty}^{\infty} x D\{x\}\, dx}{\displaystyle \int_{-\infty}^{\infty} D\{x\}\, dx} \equiv \mathrm{Avg}\{X\}; \tag{18.8}
\]

this justifies why the expected value of X is often referred to as the average, Avg, value of X, as already emphasized in Eq. (18.7), or else as the mean, μ; remember Lagrange's theorem in this particular, with μ playing the role of c in Eq. (10.274). It is instructive to pick a point x = η of the domain of D{X}, and then enforce the condition that such a point minimizes the square of the distance to every other point throughout the entirety of that domain – weighed by the underlying probability density; the associated necessary condition reads

\[
\frac{\partial}{\partial \eta} \int_{-\infty}^{\infty} \left( x - \eta \right)^2 D\{x\}\, dx = 0 \tag{18.9}
\]

The rule of differentiation of an integral allows one to write

\[
\int_{-\infty}^{\infty} 2 \left( x - \eta \right) \left( -1 \right) D\{x\}\, dx = -2 \int_{-\infty}^{\infty} \left( x - \eta \right) D\{x\}\, dx = 0, \tag{18.10}
\]

because η appears explicitly only in the kernel of Eq. (18.9); elimination of the constant term prior to the integral in view of the nil right-hand side, followed by decomposition of the integral as per Eq. (11.124), yields

\[
\int_{-\infty}^{\infty} x D\{x\}\, dx - \eta \int_{-\infty}^{\infty} D\{x\}\, dx = 0, \tag{18.11}
\]

with advantage also taken of the constancy of η. Isolation of η in Eq. (18.11) leads to

\[
\eta = \frac{\displaystyle \int_{-\infty}^{\infty} x D\{x\}\, dx}{\displaystyle \int_{-\infty}^{\infty} D\{x\}\, dx} \equiv \mathrm{Avg}\{X\}, \tag{18.12}
\]

where Eq. (18.8) was taken into account. A further differentiation of the left-hand side of Eq. (18.9) unfolds

\[
\frac{\partial^2}{\partial \eta^2} \int_{-\infty}^{\infty} \left( x - \eta \right)^2 D\{x\}\, dx \equiv \frac{\partial}{\partial \eta} \left( -2 \int_{-\infty}^{\infty} \left( x - \eta \right) D\{x\}\, dx \right) = -2 \left( -1 \right) \int_{-\infty}^{\infty} D\{x\}\, dx = 2, \tag{18.13}
\]

with the aid of Eqs. (18.4) and (18.10) – so one promptly realizes that

\[
\left. \frac{\partial^2}{\partial \eta^2} \int_{-\infty}^{\infty} \left( x - \eta \right)^2 D\{x\}\, dx \right|_{\frac{\partial}{\partial \eta} \int_{-\infty}^{\infty} \left( x - \eta \right)^2 D\{x\}\, dx\, =\, 0} = 2 > 0, \tag{18.14}
\]

as typical descriptor of a minimum; therefore, E{X} as per Eq. (18.7) is indeed the value of x for which the square distance to all other points is minimized – taking their frequency of appearance into account. A second relevant statistical descriptor is the variance of X – denoted as Var{X}, and also known as second-order centered moment; its exact definition is

\[
\mathrm{Var}\{X\} \equiv E\left\{ \left( X - E\{X\} \right)^2 \right\} \equiv \sigma^2 \equiv \int_{-\infty}^{\infty} \left( x - E\{X\} \right)^2 D\{x\}\, dx = \int_{-\infty}^{\infty} \left( x - \mu \right)^2 D\{x\}\, dx; \tag{18.15}
\]

hence, it basically stands as a dispersion indicator. Sometimes, the square root of Var{X} is preferred, viz.

\[
\mathrm{Std}\{X\} \equiv \sqrt{\mathrm{Var}\{X\}} \equiv \sigma \tag{18.16}
\]

– where Std{X} is usually termed standard deviation of X; note its dimensions, similar to those of E{X} (or X, for that matter). A third relevant descriptor is the skewness of X – denoted by Skw{X}, and also known as normalized third-order centered moment; it is defined as

\[
\mathrm{Skw}\{X\} \equiv E\left\{ \frac{\left( X - E\{X\} \right)^3}{\mathrm{Std}^3\{X\}} \right\} = \frac{\displaystyle \int_{-\infty}^{\infty} \left( x - E\{X\} \right)^3 D\{x\}\, dx}{\mathrm{Var}^{3/2}\{X\}} = \frac{\displaystyle \int_{-\infty}^{\infty} \left( x - \mu \right)^3 D\{x\}\, dx}{\left( \sigma^2 \right)^{3/2}} \tag{18.17}
\]

at the expense of Eq. (18.16), and essentially entails an asymmetry indicator. Many other indicators may be postulated; these include the ith-order absolute moment, μ_i, abiding to

\[
\mu_i \equiv E\{X^i\} \equiv \int_{-\infty}^{\infty} x^i D\{x\}\, dx;\ i = 1, 2, \ldots \tag{18.18}
\]
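For a concrete density, these descriptors are readily approximated by numerical quadrature; the sketch below uses the exponential density D{x} = λe^{−λx} – an illustrative choice, not taken from the text – whose mean, variance, and skewness are known to be 1/λ, 1/λ², and 2:

```python
import math

def integrate(g, a, b, n=100_000):
    # composite trapezoidal quadrature of g over [a, b]
    h = (b - a) / n
    return h * (g(a) / 2 + g(b) / 2 + sum(g(a + k * h) for k in range(1, n)))

lam = 2.0
D = lambda x: lam * math.exp(-lam * x)   # exponential density (illustrative choice)

mu  = integrate(lambda x: x * D(x), 0.0, 50.0)                          # E{X},   Eq. (18.7)
var = integrate(lambda x: (x - mu) ** 2 * D(x), 0.0, 50.0)              # Var{X}, Eq. (18.15)
skw = integrate(lambda x: (x - mu) ** 3 * D(x), 0.0, 50.0) / var ** 1.5 # Skw{X}, Eq. (18.17)
print(mu, var, skw)  # ~0.5, ~0.25, ~2.0 for lambda = 2
```

Truncating the infinite domain at x = 50 is harmless here, since the density decays as e^{−2x}.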


Comparison of Eq. (18.18) with Eq. (18.7) indicates that

\[
\mu_1 \equiv E\{X\}, \tag{18.19}
\]

so the expected value of X coincides with the first-order absolute moment thereof. Centered moments, μ_{i,ctr}, are likely to be of greater interest, and abide to

\[
\mu_{i,\mathrm{ctr}} \equiv \int_{-\infty}^{\infty} \left( x - E\{X\} \right)^i D\{x\}\, dx;\ i = 1, 2, \ldots; \tag{18.20}
\]

see the functional form of the kernel of the integral in Eq. (18.20), vis-à-vis with that in Eq. (18.15). When i is set equal to unity, Eq. (18.20) degenerates to

\[
\mu_{1,\mathrm{ctr}} = \int_{-\infty}^{\infty} \left( x - E\{X\} \right) D\{x\}\, dx = \int_{-\infty}^{\infty} x D\{x\}\, dx - \int_{-\infty}^{\infty} E\{X\} D\{x\}\, dx = \int_{-\infty}^{\infty} x D\{x\}\, dx - E\{X\} \int_{-\infty}^{\infty} D\{x\}\, dx, \tag{18.21}
\]

after splitting the integral and realizing that E{X} is independent of the integration variable; insertion of Eqs. (18.4) and (18.7) produces the trivial result

\[
\mu_{1,\mathrm{ctr}} = E\{X\} - E\{X\} \cdot 1 = E\{X\} - E\{X\} = 0, \tag{18.22}
\]

as expected for a first-order centered moment. If i = 2, then Eq. (18.20) becomes

\[
\mu_{2,\mathrm{ctr}} \equiv \int_{-\infty}^{\infty} \left( x - E\{X\} \right)^2 D\{x\}\, dx \equiv \mathrm{Var}\{X\} \tag{18.23}
\]

– describing the second-order centered moment, which coincides with the variance as given by Eq. (18.15); Newton's binomial may be invoked to transform Eq. (18.23) to

\[
\mu_{2,\mathrm{ctr}} = \int_{-\infty}^{\infty} \left( x^2 - 2 x E\{X\} + E^2\{X\} \right) D\{x\}\, dx = \int_{-\infty}^{\infty} x^2 D\{x\}\, dx - 2 \int_{-\infty}^{\infty} x E\{X\} D\{x\}\, dx + \int_{-\infty}^{\infty} E^2\{X\} D\{x\}\, dx, \tag{18.24}
\]

where integral decomposition was performed once again. In view of the constancy of E{X}, and also of its square, as enforced by Eq. (18.7), they may be taken off the corresponding kernels – so Eq. (18.24) transforms to

\[
\mu_{2,\mathrm{ctr}} = \int_{-\infty}^{\infty} x^2 D\{x\}\, dx - 2 E\{X\} \int_{-\infty}^{\infty} x D\{x\}\, dx + E^2\{X\} \int_{-\infty}^{\infty} D\{x\}\, dx; \tag{18.25}
\]

after recalling Eqs. (18.4) and (18.19), besides Eq. (18.18) for i = 2, one may write

\[
\mu_{2,\mathrm{ctr}} = \mu_2 - 2 \mu_1 \int_{-\infty}^{\infty} x D\{x\}\, dx + \mu_1^2 \cdot 1, \tag{18.26}
\]

whereas Eq. (18.7) permits further simplification to

\[
\mu_{2,\mathrm{ctr}} = \mu_2 - 2 \mu_1 \mu_1 + \mu_1^2 = \mu_2 - \mu_1^2 \tag{18.27}
\]

Equation (18.27) is equivalent to

\[
\mathrm{Var}\{X\} = \mu_2 - \mu_1^2, \tag{18.28}
\]

after bringing Eq. (18.23) on board; therefore, variance may easily be expressed as a binomial combination of the first- and second-order absolute moments. In the case of i = 3, Newton's binomial may be recalled once again to write

\[
\mu_{3,\mathrm{ctr}} \equiv \int_{-\infty}^{\infty} \left( x - E\{X\} \right)^3 D\{x\}\, dx = \int_{-\infty}^{\infty} \left( x^3 - 3 x^2 E\{X\} + 3 x E^2\{X\} - E^3\{X\} \right) D\{x\}\, dx, \tag{18.29}
\]

departing from Eq. (18.20); after subdividing the kernel into four terms, one gets
$$\mu_{3,ctr} = \int_{-\infty}^{\infty}x^{3}D\{x\}\,dx - 3\int_{-\infty}^{\infty}x^{2}E\{X\}D\{x\}\,dx + 3\int_{-\infty}^{\infty}xE^{2}\{X\}D\{x\}\,dx - \int_{-\infty}^{\infty}E^{3}\{X\}D\{x\}\,dx \tag{18.30}$$

– where factors independent of integration variable x may again be taken off the associated kernels as
$$\mu_{3,ctr} = \int_{-\infty}^{\infty}x^{3}D\{x\}\,dx - 3E\{X\}\int_{-\infty}^{\infty}x^{2}D\{x\}\,dx + 3E^{2}\{X\}\int_{-\infty}^{\infty}xD\{x\}\,dx - E^{3}\{X\}\int_{-\infty}^{\infty}D\{x\}\,dx. \tag{18.31}$$

Upon combination with Eq. (18.18) for i = 3 and Eq. (18.19), one gets
$$\mu_{3,ctr} = \mu_3 - 3\mu_1\int_{-\infty}^{\infty}x^{2}D\{x\}\,dx + 3\mu_1^{2}\int_{-\infty}^{\infty}xD\{x\}\,dx - \mu_1^{3}\int_{-\infty}^{\infty}D\{x\}\,dx \tag{18.32}$$
from Eq. (18.31), where a further insertion of Eqs. (18.4) and (18.7), as well as Eq. (18.18) for i = 2, leads to
$$\mu_{3,ctr} = \mu_3 - 3\mu_1\mu_2 + 3\mu_1^{2}\mu_1 - \mu_1^{3}\cdot 1 = \mu_3 - 3\mu_1\left(\mu_2 - \mu_1^{2}\right) - \mu_1^{3}; \tag{18.33}$$
owing to Eqs. (18.15) and (18.28), one finally obtains
$$\mu_{3,ctr} = \mu_3 - 3\mu_1\sigma^{2} - \mu_1^{3}. \tag{18.34}$$

After revisiting Eq. (18.17) as
$$\mathrm{Skw}\{X\} = \int_{-\infty}^{\infty}\left(\frac{x - E\{X\}}{\mathrm{Std}\{X\}}\right)^{3}D\{x\}\,dx = \frac{\mu_{3,ctr}}{\left(\sigma^{2}\right)^{3/2}} \tag{18.35}$$


Mathematics for Enzyme Reaction Kinetics and Reactor Performance

– based on the constancy of Std{X} relative to the (dummy) variable of integration, coupled with Eqs. (18.16) and (18.29) – one finally finds
$$\mathrm{Skw}\{X\} = \frac{\mu_3 - 3\mu_1\sigma^{2} - \mu_1^{3}}{\sigma^{3}} \tag{18.36}$$

at the expense of Eq. (18.34). Higher-order moments, involving higher powers of X, are less interesting than lower moments, and are to be used with caution – not only because such moments are harder to calculate, but also because their meaning is more difficult to grasp. The probability density functions of most continuous distributions of practical interest exhibit a maximum – with associated necessary condition reading
$$\left.\frac{\partial D\{x\}}{\partial x}\right|_{x=\mathrm{Mod}} = 0, \tag{18.37}$$
and sufficient condition given by
$$\left.\frac{\partial^{2} D\{x\}}{\partial x^{2}}\right|_{x=\mathrm{Mod}} < 0; \tag{18.38}$$
the point where such a maximum is attained is termed the mode, Mod – a useful descriptor in practice, because it is easily pinpointed by plain visual observation. A final descriptor of interest is the median, Med – defined as the value of x such that
$$P\{X < \mathrm{Med}\} = P\{X > \mathrm{Med}\}; \tag{18.39}$$

this means that the median divides the domain of D{X} exactly in half, in terms of (cumulative) probability of occurrence. Consider, in this regard, a point x = λ picked from the domain of D{X}, described by the condition of minimum (absolute) distance to every other point within said domain; the associated necessary condition looks like
$$\frac{\partial}{\partial\lambda}\int_{-\infty}^{\infty}\sqrt{\left(x-\lambda\right)^{2}}\,D\{x\}\,dx = \frac{\partial}{\partial\lambda}\int_{-\infty}^{\infty}\left|x-\lambda\right|D\{x\}\,dx = 0, \tag{18.40}$$

where one should again resort to the first-order derivative – although λ will, in general, differ from η as given by Eq. (18.9). The integral in Eq. (18.40) may, for convenience, be split as two integrals – one corresponding to ]−∞, λ], i.e. the portion of the domain below λ (for which the modulus of x − λ is given by its negative), and the other corresponding to [λ, ∞[, i.e. the portion of the domain above λ (for which |x − λ| coincides with its argument); this gives rise to
$$\frac{\partial}{\partial\lambda}\left(\int_{-\infty}^{\lambda}\left(\lambda-x\right)D\{x\}\,dx + \int_{\lambda}^{\infty}\left(x-\lambda\right)D\{x\}\,dx\right) = \frac{\partial}{\partial\lambda}\int_{-\infty}^{\lambda}\left(\lambda-x\right)D\{x\}\,dx + \frac{\partial}{\partial\lambda}\int_{\lambda}^{\infty}\left(x-\lambda\right)D\{x\}\,dx = 0, \tag{18.41}$$

where the linearity of the differential operator was taken into account. Performance of the differentiation as stated transforms Eq. (18.41) to
$$\int_{-\infty}^{\lambda}D\{x\}\,dx + \left.\left(\lambda-x\right)D\{x\}\right|_{x=\lambda}\frac{d\lambda}{d\lambda} + \int_{\lambda}^{\infty}\left(-1\right)D\{x\}\,dx - \left.\left(x-\lambda\right)D\{x\}\right|_{x=\lambda}\frac{d\lambda}{d\lambda} = \int_{-\infty}^{\lambda}D\{x\}\,dx + \left(\lambda-\lambda\right)D\{\lambda\} - \int_{\lambda}^{\infty}D\{x\}\,dx - \left(\lambda-\lambda\right)D\{\lambda\} = 0, \tag{18.42}$$
in general agreement with Eq. (11.295) – since both the upper limit of the first integral and the lower limit of the second integral depend on λ, besides the kernel itself; Eq. (18.42) breaks down to
$$\int_{-\infty}^{\lambda}D\{x\}\,dx - \int_{\lambda}^{\infty}D\{x\}\,dx = 0 \tag{18.43}$$

upon removal of nil terms. On the other hand, one may state that
$$\int_{-\infty}^{\lambda}D\{x\}\,dx + \int_{\lambda}^{\infty}D\{x\}\,dx = \int_{-\infty}^{\infty}D\{x\}\,dx = 1, \tag{18.44}$$
in view of Eqs. (11.124) and (18.4), so ordered addition of the negative of Eq. (18.43) to Eq. (18.44) unfolds
$$2\int_{\lambda}^{\infty}D\{x\}\,dx = 1; \tag{18.45}$$
Eq. (18.45) is equivalent to
$$\int_{\lambda}^{\infty}D\{x\}\,dx = \frac{1}{2} = \int_{-\infty}^{\lambda}D\{x\}\,dx, \tag{18.46}$$

where Eq. (18.43) was also taken into account. In other words, λ divides the cumulative probability function P{X < x} into exactly two portions – each with the same overall probability of ½. Since Eq. (18.46) mimics Eq. (18.39), one concludes that λ coincides with Med; furthermore,
$$\frac{\partial^{2}}{\partial\lambda^{2}}\int_{-\infty}^{\infty}\left|x-\lambda\right|D\{x\}\,dx = \frac{\partial}{\partial\lambda}\left(\int_{-\infty}^{\lambda}D\{x\}\,dx - \int_{\lambda}^{\infty}D\{x\}\,dx\right), \tag{18.47}$$
based on Eqs. (18.40) and (18.43) – where the derivative reduces, in this case, to
$$\frac{\partial^{2}}{\partial\lambda^{2}}\int_{-\infty}^{\infty}\left|x-\lambda\right|D\{x\}\,dx = \left.D\{x\}\right|_{x=\lambda}\frac{d\lambda}{d\lambda} - \left(-\left.D\{x\}\right|_{x=\lambda}\frac{d\lambda}{d\lambda}\right) = D\{\lambda\} + D\{\lambda\} = 2D\{\lambda\} > 0, \tag{18.48}$$
since D{λ} is, by definition, positive. Equation (18.48) therefore confirms that the critical point λ abiding by Eq. (18.46) is actually a minimum; the median may accordingly be regarded as the point of the domain that minimizes the (absolute) distance to every other point of the domain – weighted by the corresponding probability density function.
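The chain from Eqs. (18.28) and (18.36) down to the median condition of Eq. (18.46) can be exercised numerically. The sketch below is a minimal check in Python – plain midpoint quadrature, with a unit-rate exponential density chosen merely because its variance (1), skewness (2), and median (ln 2) are known in closed form:

```python
import math

D = lambda x: math.exp(-x)          # unit-rate exponential density on [0, inf[

def quad(f, lo=0.0, hi=25.0, n=50_000):
    # plain midpoint quadrature -- accurate enough for these smooth kernels
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

mu1 = quad(lambda x: x * D(x))                        # first raw moment -> 1
mu2 = quad(lambda x: x * x * D(x))                    # second raw moment -> 2
mu3 = quad(lambda x: x ** 3 * D(x))                   # third raw moment -> 6
var = mu2 - mu1 ** 2                                  # Eq. (18.28) -> 1
skw = (mu3 - 3 * mu1 * var - mu1 ** 3) / var ** 1.5   # Eq. (18.36) -> 2

# Eqs. (18.40)-(18.48): the minimiser of E{|X - lam|} is the median, here ln 2
xs  = [(k + 0.5) * 0.002 for k in range(10_000)]      # midpoints on ]0, 20]
wts = [math.exp(-x) * 0.002 for x in xs]              # D{x} dx
mad = lambda lam: sum(abs(x - lam) * w for x, w in zip(xs, wts))
med = min((0.55 + 0.002 * k for k in range(150)), key=mad)
print(var, skw, med)                                  # ~1, ~2, ~0.693
```

The grid bounds and the exponential test case are choices made here for illustration, not taken from the text.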

18.2 Normal Distribution

Among the many statistical distributions of the continuous type that can be devised (or have been studied to date), the normal distribution stands on its own – due to its universal applicability to phenomena with associated intrinsic variability arising from


pure chance. It is usually applied when many elementary phenomena contribute additively – each in a minor fashion and with random magnitude – to the observed (overall) result. If such contributions are instead of a multiplicative nature, then a lognormal distribution should be considered – corresponding to a normal distribution of the logarithms of the said elementary contributions. If the size of the population/sample is large enough, then the actual distribution of the data becomes immaterial – since the normal distribution will be asymptotically reached, as guaranteed by the central limit theorem. Fundamental derivation and justification of this seminal distribution will thus be presented below; a standard universal form will eventually be produced, for both density and cumulative probability functions, as well as the underlying moment-generating function – for a single population, and for more than one population.
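The additive mechanism invoked above is easy to visualize numerically; in the sketch below (standard library only; the choice of 48 uniform contributions per observation is arbitrary), each observation is a sum of many small independent terms, and the resulting sample displays the spread and the ~68% one-sigma coverage expected of a near-normal variate:

```python
import math
import random
import statistics

random.seed(42)

# each observation: the sum of 48 small, independent uniform contributions
def observation(k=48):
    return sum(random.uniform(-0.5, 0.5) for _ in range(k))

sample = [observation() for _ in range(20_000)]
m = statistics.fmean(sample)
s = statistics.pstdev(sample)
print(m, s, math.sqrt(48 / 12))   # sample mean ~ 0; std ~ sqrt(48/12) = 2

# fraction of the sample within one standard deviation of the mean;
# a normal variate would give about 0.6827
frac = sum(abs(x - m) < s for x in sample) / len(sample)
print(frac)
```

The seed is fixed only to make the sketch reproducible.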

18.2.1 Derivation

Consider a sequence of happenings of a given event, subject to a large number of (essentially unknown) causative variables – which, for illustrative purposes, may occur on a Cartesian two-dimensional plane; for instance, the throwing of darts onto a dartboard on the wall. Although aiming at the center, landing at a random relative position (x, y) will actually be observed – with deviations from the center independent of the location/orientation of the coordinate system, errors in perpendicular directions independent of each other, and large deviations less likely than small deviations. In an attempt to quantitate the above postulates, the probability of a dart falling in a vertical strip, from x to x + dx, will be represented by P{x}dx; and P{y}dy will likewise represent the probability of a dart falling in a horizontal strip, from y to y + dy. As a consequence of the aforementioned independence assumptions, the density of hits, D (still to be determined), of a dart in a region of elementary area dxdy should be given by the product of P{x}dx by P{y}dy. Furthermore, the said density of hits is to be rotationally invariant, in agreement with the above orientation hypothesis; and should decay with distance, consistent with the above distance-dependent frequency assumption. In other words, the distribution of where a dart lands will depend on its distance to the center, r, but not on its angular position, θ, according to
$$D\{r\}\,dx\,dy = \left(P\{x\}\,dx\right)\left(P\{y\}\,dy\right), \tag{18.49}$$
or else
$$D\{r\} = P\{x\}P\{y\} \tag{18.50}$$
after dropping dx and dy from both sides. Upon differentiation of both sides with regard to θ, Eq. (18.50) becomes
$$\frac{\partial D\{r\}}{\partial\theta} = P\{y\}\frac{\partial P\{x\}}{\partial\theta} + P\{x\}\frac{\partial P\{y\}}{\partial\theta} = 0, \tag{18.51}$$
which breaks down to
$$P\{y\}\frac{dP\{x\}}{dx}\frac{\partial x}{\partial\theta} + P\{x\}\frac{dP\{y\}}{dy}\frac{\partial y}{\partial\theta} = 0 \tag{18.52}$$
via application of the chain (partial) differentiation rule – coupled with D{r} being obviously independent of θ, i.e. ∂D{r}/∂θ = 0. In view of the basic relationship between

rectangular and polar coordinates as per Eqs. (16.160) and (16.161), one may redo Eq. (18.52) to
$$P\{y\}\frac{dP\{x\}}{dx}\frac{\partial\left(r\cos\theta\right)}{\partial\theta} + P\{x\}\frac{dP\{y\}}{dy}\frac{\partial\left(r\sin\theta\right)}{\partial\theta} = -P\{y\}\frac{dP\{x\}}{dx}r\sin\theta + P\{x\}\frac{dP\{y\}}{dy}r\cos\theta = 0 \tag{18.53}$$
– or, after retrieving the original polar coordinate form,
$$xP\{x\}\frac{dP\{y\}}{dy} - yP\{y\}\frac{dP\{x\}}{dx} = 0. \tag{18.54}$$
Equation (18.54) may be transformed via division of both its sides by xyP{x}P{y}, viz.
$$\frac{1}{yP\{y\}}\frac{dP\{y\}}{dy} - \frac{1}{xP\{x\}}\frac{dP\{x\}}{dx} = 0, \tag{18.55}$$
which is to hold irrespective of the actual values of x and y; this implies
$$\frac{1}{xP\{x\}}\frac{dP\{x\}}{dx} = \frac{1}{yP\{y\}}\frac{dP\{y\}}{dy} \equiv \kappa_3, \tag{18.56}$$
where κ3 denotes some (yet unknown) constant – since the left-hand side is only a function of x, and the middle side is only a function of y. After recalling the equality linking left- and right-hand sides of Eq. (18.56), i.e.
$$\frac{1}{xP\{x\}}\frac{dP\{x\}}{dx} = \kappa_3, \tag{18.57}$$

one may proceed to integration via separation of variables as
$$\frac{dP\{x\}}{P\{x\}} = \kappa_3\,x\,dx \tag{18.58}$$
– which degenerates to
$$\ln P\{x\} = \kappa_3\frac{x^{2}}{2} + \kappa_0, \tag{18.59}$$
where κ0 stands for an arbitrary integration constant; once exponentials are taken of both sides, Eq. (18.59) turns to
$$P\{x\} = \exp\left(\kappa_0 + \kappa_3\frac{x^{2}}{2}\right) = e^{\kappa_0}\exp\left(\kappa_3\frac{x^{2}}{2}\right), \tag{18.60}$$
or else
$$P\{x\} = \kappa_1\exp\left(\frac{\kappa_3}{2}x^{2}\right) \tag{18.61}$$
as long as constant κ1 abides by


$$\kappa_1 \equiv e^{\kappa_0}. \tag{18.62}$$
Since large deviations were hypothesized to be less likely than small deviations, constant κ3 in Eq. (18.61) must be negative – so one may to advantage redo the latter to
$$P\{x\} = \kappa_1\exp\left(-\frac{\kappa_2}{2}x^{2}\right), \tag{18.63}$$
after setting
$$\kappa_2 \equiv -\kappa_3 \tag{18.64}$$
and guaranteeing that κ2 > 0. If integration departed instead from the second equality labeled as Eq. (18.56), a result identical to that conveyed by Eq. (18.63) would have been obtained – yet pertaining to y, i.e.
$$P\{y\} = \kappa_1\exp\left(-\frac{\kappa_2}{2}y^{2}\right), \tag{18.65}$$

thus implying
$$D\{r\} = \kappa_1\exp\left(-\frac{\kappa_2}{2}x^{2}\right)\kappa_1\exp\left(-\frac{\kappa_2}{2}y^{2}\right) = \kappa_1^{2}\exp\left(-\frac{\kappa_2}{2}x^{2} - \frac{\kappa_2}{2}y^{2}\right) = \kappa_1^{2}\exp\left(-\frac{\kappa_2}{2}\left(x^{2}+y^{2}\right)\right) \tag{18.66}$$
upon insertion of Eqs. (18.63) and (18.65) in Eq. (18.50), followed by elementary algebraic rearrangement. Note that Eq. (18.66) may be rephrased as
$$D\{r\} = D\left\{\sqrt{x^{2}+y^{2}}\right\} = \kappa_1^{2}\exp\left(-\frac{\kappa_2}{2}\left(\sqrt{x^{2}+y^{2}}\right)^{2}\right), \tag{18.67}$$
which reduces to
$$D\{r\} = \kappa_1^{2}\exp\left(-\frac{\kappa_2}{2}r^{2}\right) \tag{18.68}$$
owing to the definition of radial distance, i.e. $r \equiv \sqrt{x^{2}+y^{2}}$. The functional form of Eq. (18.68) echoes the hypothesis that the aim for the darts reduces to a point of radial coordinate r = 0 – with r referring to the distance to said point; however, in the more realistic situation of the aim being characterized by (finite) radius m, Eq. (18.68) should be recoined as
$$D\{r\} = \kappa_1^{2}\exp\left(-\frac{\kappa_2}{2}\left(r-m\right)^{2}\right) \tag{18.69}$$

– where the distance of interest is now computed using the circle of (small) radius m as reference. Remember the intrinsic feature of a probability density function, expressed by
$$\int_{-\infty}^{\infty}D\{r\}\,dr = 1, \tag{18.70}$$
consistent with Eq. (18.4) – or, equivalently,

$$\kappa_1^{2}\int_{-\infty}^{\infty}\exp\left(-\frac{\kappa_2}{2}\left(r-m\right)^{2}\right)dr = 1 \tag{18.71}$$
following insertion of Eq. (18.69); as will be proven in due course,
$$\int_{-\infty}^{\infty}\exp\left(-\frac{\xi^{2}}{2}\right)d\xi = \sqrt{2\pi}. \tag{18.72}$$
Comparative inspection of Eqs. (18.71) and (18.72) suggests definition of an auxiliary variable as
$$\xi \equiv \sqrt{\kappa_2}\left(r-m\right), \tag{18.73}$$
and consequently
$$\lim_{r\to-\infty}\xi = -\infty \tag{18.74}$$
and
$$\lim_{r\to\infty}\xi = \infty \tag{18.75}$$
– in which case Eq. (18.72) becomes
$$\int_{-\infty}^{\infty}\exp\left(-\frac{\kappa_2}{2}\left(r-m\right)^{2}\right)d\left(\sqrt{\kappa_2}\left(r-m\right)\right) = \sqrt{2\pi}; \tag{18.76}$$
√κ2 may now be taken off the differential to get
$$\sqrt{\kappa_2}\int_{-\infty}^{\infty}\exp\left(-\frac{\kappa_2}{2}\left(r-m\right)^{2}\right)dr = \sqrt{2\pi}, \tag{18.77}$$
where lumping of constants yields
$$\int_{-\infty}^{\infty}\exp\left(-\frac{\kappa_2}{2}\left(r-m\right)^{2}\right)dr = \sqrt{\frac{2\pi}{\kappa_2}}. \tag{18.78}$$
Insertion of the result conveyed by Eq. (18.78) allows transformation of Eq. (18.71) to
$$\kappa_1^{2}\sqrt{\frac{2\pi}{\kappa_2}} = 1, \tag{18.79}$$
where isolation of κ1² unfolds
$$\kappa_1^{2} = \sqrt{\frac{\kappa_2}{2\pi}}; \tag{18.80}$$
Eq. (18.69) may accordingly be reformulated to
$$D\{r\} = \sqrt{\frac{\kappa_2}{2\pi}}\exp\left(-\frac{\kappa_2}{2}\left(r-m\right)^{2}\right), \tag{18.81}$$

where just one arbitrary constant remains. The mean μ may now be calculated via its definition as provided by Eq. (18.7), viz.


$$\begin{aligned}\mu &\equiv \int_{-\infty}^{\infty}rD\{r\}\,dr = \sqrt{\frac{\kappa_2}{2\pi}}\int_{-\infty}^{\infty}r\exp\left(-\frac{\kappa_2}{2}\left(r-m\right)^{2}\right)dr = \sqrt{\frac{\kappa_2}{2\pi}}\int_{-\infty}^{\infty}\left(\left(r-m\right)+m\right)\exp\left(-\frac{\kappa_2}{2}\left(r-m\right)^{2}\right)dr\\ &= \sqrt{\frac{\kappa_2}{2\pi}}\int_{-\infty}^{\infty}\left(r-m\right)\exp\left(-\frac{\kappa_2}{2}\left(r-m\right)^{2}\right)d\left(r-m\right) + m\sqrt{\frac{\kappa_2}{2\pi}}\int_{-\infty}^{\infty}\exp\left(-\frac{\kappa_2}{2}\left(r-m\right)^{2}\right)dr\end{aligned} \tag{18.82}$$
– upon insertion of Eq. (18.81), and realization that d(r − m) = dr and r − m → ±∞ when r → ±∞; addition and subtraction of m in the kernel, followed by splitting of the resulting integral, were meanwhile performed for mathematical convenience. Direct application of the fundamental theorem of integral calculus, coupled with Eq. (18.78), supports simplification of Eq. (18.82) to
$$\mu = \sqrt{\frac{\kappa_2}{2\pi}}\left[\frac{\exp\left(-\frac{\kappa_2}{2}\left(r-m\right)^{2}\right)}{-\kappa_2}\right]_{-\infty}^{\infty} + m\sqrt{\frac{\kappa_2}{2\pi}}\sqrt{\frac{2\pi}{\kappa_2}} \tag{18.83}$$
or, equivalently,
$$\mu = \frac{e^{-\infty} - e^{-\infty}}{-\sqrt{2\pi\kappa_2}} + m = \frac{0-0}{-\sqrt{2\pi\kappa_2}} + m = m. \tag{18.84}$$

Since m represents the mean as per Eq. (18.84), a more meaningful form of Eq. (18.81) looks like
$$D\{r\} = \sqrt{\frac{\kappa_2}{2\pi}}\exp\left(-\frac{\kappa_2}{2}\left(r-\mu\right)^{2}\right); \tag{18.85}$$

calculation of the variance of D{r} may now proceed via
$$\sigma^{2} \equiv \int_{-\infty}^{\infty}\left(r-\mu\right)^{2}D\{r\}\,dr = \sqrt{\frac{\kappa_2}{2\pi}}\int_{-\infty}^{\infty}\left(r-\mu\right)^{2}\exp\left(-\frac{\kappa_2}{2}\left(r-\mu\right)^{2}\right)dr = \sqrt{\frac{\kappa_2}{2\pi}}\int_{-\infty}^{\infty}\zeta\,\zeta\exp\left(-\frac{\kappa_2}{2}\zeta^{2}\right)d\zeta \tag{18.86}$$
as suggested by Eq. (18.15) – where the auxiliary variable ζ, defined as
$$\zeta \equiv r - \mu, \tag{18.87}$$
was introduced to advantage, in view of dζ = dr and ζ|_{r→±∞} = ±∞. Integration by parts transforms Eq. (18.86) to
$$\sigma^{2} = \sqrt{\frac{\kappa_2}{2\pi}}\left(\left[\zeta\frac{\exp\left(-\frac{\kappa_2}{2}\zeta^{2}\right)}{-\kappa_2}\right]_{-\infty}^{\infty} - \int_{-\infty}^{\infty}\frac{\exp\left(-\frac{\kappa_2}{2}\zeta^{2}\right)}{-\kappa_2}\,d\zeta\right), \tag{18.88}$$

which turns to
$$\sigma^{2} = \frac{1}{\kappa_2}\sqrt{\frac{\kappa_2}{2\pi}}\left(-\left[\frac{\zeta}{\exp\left(\frac{\kappa_2}{2}\zeta^{2}\right)}\right]_{-\infty}^{\infty} + \int_{-\infty}^{\infty}\exp\left(-\frac{\kappa_2}{2}\zeta^{2}\right)d\zeta\right) \tag{18.89}$$
after factoring 1/κ2 out. The unknown quantities arising from straightforward application of the classical theorems on limits to the first two terms in parentheses may be circumvented by resorting to l'Hôpital's rule, according to

$$\begin{aligned}\sigma^{2} &= \frac{1}{\kappa_2}\sqrt{\frac{\kappa_2}{2\pi}}\left(-\left(\lim_{\zeta\to\infty}\frac{1}{\kappa_2\,\zeta\exp\left(\frac{\kappa_2}{2}\zeta^{2}\right)} - \lim_{\zeta\to-\infty}\frac{1}{\kappa_2\,\zeta\exp\left(\frac{\kappa_2}{2}\zeta^{2}\right)}\right) + \sqrt{\frac{2\pi}{\kappa_2}}\right)\\ &= \frac{1}{\kappa_2}\sqrt{\frac{\kappa_2}{2\pi}}\left(-\left(0-0\right) + \sqrt{\frac{2\pi}{\kappa_2}}\right) = \frac{1}{\kappa_2}\sqrt{\frac{\kappa_2}{2\pi}}\sqrt{\frac{2\pi}{\kappa_2}} = \frac{1}{\kappa_2},\end{aligned} \tag{18.90}$$
with the result conveyed by Eq. (18.78) taken advantage of once more, after setting ζ ≡ r − m (and thus dζ = dr); this eventually leads to
$$\kappa_2 = \frac{1}{\sigma^{2}} \tag{18.91}$$
upon solving for κ2. Combination with Eq. (18.91) allows final transformation of Eq. (18.85) to

$$D\{r\} = \sqrt{\frac{1/\sigma^{2}}{2\pi}}\exp\left(-\frac{1/\sigma^{2}}{2}\left(r-\mu\right)^{2}\right) = \frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{1}{2}\frac{\left(r-\mu\right)^{2}}{\sigma^{2}}\right) \tag{18.92}$$
or else
$$D\{r\} = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{1}{2}\left(\frac{r-\mu}{\sigma}\right)^{2}\right) \tag{18.93}$$

after straightforward algebraic simplification; Eq. (18.93) is known as Gauss' distribution – in honor of the German mathematician and physicist Johann C. F. Gauss, who lived in the nineteenth century.

18.2.2 Justification

When a probability is to be assigned to an event, and there is in principle no reason for one outcome to occur more often than any other, then all outcomes should be assigned identical probabilities; this is called the principle of insufficient reason, or principle of indifference – and dates back to the eighteenth century, by the hand of the influential French scholar Pierre-Simon, marquis de Laplace. If extra information is (or becomes) available during an experimental process about the nonuniformity of the outcomes that suggests a


certain class of probability distribution, then the principle of maximum entropy – an extension of the principle of insufficient reason – indicates how to proceed; in simple terms, if little is known about a probability distribution except that it putatively belongs to a certain class, then the distribution with the largest entropy should be chosen by default. The underlying rationale is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physicochemical systems tend to move toward maximal-entropy configurations over time, as per the second law of thermodynamics. More specifically, the entropy, H{X}, of a random variable, X, characterized by a continuous probability density function, D{X}, is defined as
$$H\{X\} = -\int_{-\infty}^{\infty}D\{x\}\ln D\{x\}\,dx, \tag{18.94}$$

according to the concept pioneered by Claude E. Shannon and Warren Weaver in 1949. This definition is but the continuous analogue of the formula for entropy put forward by Josiah W. Gibbs (an American scientist) in 1878; he claimed that the entropy, S, associated with the macroscopic state of a classical thermodynamic system – i.e. a collection of discrete particles taking a discrete set of N nonequiprobable microstates – reads
$$S = -k_b\sum_{i=1}^{N}P_i\ln P_i \tag{18.95}$$

– where Pi denotes the probability of the i-th microstate, and kb denotes Boltzmann's constant. Physically speaking, systems are expected to evolve into states of higher entropy as they approach equilibrium; H{X} may thus be viewed as a measure of the information carried by X – with higher entropy meaning less information, i.e. more uncertainty, or a more severe lack of information. Using Eq. (18.94) as a template, the entropy of the normal distribution, with mean μ and variance σ², will be given by
$$H_N\{X\} = -\int_{-\infty}^{\infty}\frac{\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right)}{\sqrt{2\pi}\,\sigma}\left(-\ln\left(\sqrt{2\pi}\,\sigma\right) - \frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right)dx, \tag{18.96}$$
where Eq. (18.93) was retrieved and logarithms taken of both its sides; after setting
$$y \equiv \frac{x-\mu}{\sigma} \tag{18.97}$$
as auxiliary variable – which implies
$$dy = \frac{dx}{\sigma} \tag{18.98}$$
and
$$\lim_{x\to\pm\infty}y = \pm\infty, \tag{18.99}$$

Eq. (18.96) becomes
$$H_N\{X\} = \int_{-\infty}^{\infty}\left(\ln\left(\sqrt{2\pi}\,\sigma\right) + \frac{y^{2}}{2}\right)\frac{\exp\left(-\frac{y^{2}}{2}\right)}{\sqrt{2\pi}\,\sigma}\,\sigma\,dy = \frac{\ln\left(\sqrt{2\pi}\,\sigma\right)}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left(-\frac{y^{2}}{2}\right)dy - \frac{1}{2\sqrt{2\pi}}\int_{-\infty}^{\infty}y\left(-y\exp\left(-\frac{y^{2}}{2}\right)\right)dy, \tag{18.100}$$

where σ was dropped from both numerator and denominator of the kernel, constant factors were taken off the kernel, and the resulting integral was split as appropriate. Insertion of Eq. (18.72), coupled with the rule of integration by parts, permits simplification of Eq. (18.100) to
$$H_N\{X\} = \frac{\ln\left(\sqrt{2\pi}\,\sigma\right)}{\sqrt{2\pi}}\sqrt{2\pi} - \frac{1}{2\sqrt{2\pi}}\left(\left[y\exp\left(-\frac{y^{2}}{2}\right)\right]_{-\infty}^{\infty} - \int_{-\infty}^{\infty}\exp\left(-\frac{y^{2}}{2}\right)dy\right); \tag{18.101}$$

after realizing that blind application of the classical theorems on limits would produce
$$\lim_{y\to\pm\infty}y\exp\left(-\frac{y^{2}}{2}\right) = \lim_{y\to\pm\infty}\frac{y}{\exp\left(\frac{y^{2}}{2}\right)} = \frac{\pm\infty}{\infty}, \tag{18.102}$$
that is but an unknown quantity, one should resort to l'Hôpital's rule to instead get
$$\lim_{y\to\pm\infty}y\exp\left(-\frac{y^{2}}{2}\right) = \lim_{y\to\pm\infty}\frac{1}{y\exp\left(\frac{y^{2}}{2}\right)} = \frac{1}{\pm\infty} = 0. \tag{18.103}$$

Equation (18.103) permits simplification of Eq. (18.101) to
$$H_N\{X\} = \ln\left(\sqrt{2\pi}\,\sigma\right) + \frac{1}{2\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left(-\frac{y^{2}}{2}\right)dy = \frac{1}{2}\ln\left(2\pi\sigma^{2}\right) + \frac{1}{2}\frac{\sqrt{2\pi}}{\sqrt{2\pi}} = \frac{1}{2}\ln\left(2\pi\sigma^{2}\right) + \frac{1}{2}, \tag{18.104}$$
where Eq. (18.72) was again utilized, and √(2π) cancelled out between numerator and denominator – together with application of the operational features of a logarithm; one finally gets
$$H_N\{X\} = \frac{1}{2}\left(1 + \ln\left(2\pi\sigma^{2}\right)\right), \tag{18.105}$$
after factoring ½ out. Note that the mean μ does not enter the final formula for HN{X}, as conveyed by Eq. (18.105) – so all normal distributions with the same variance share the same entropy; this was somewhat expected, because entropy is associated with random events (or mobility) around the mean (or center of mass) – described by σ², rather than μ. The principle of maximum entropy as a method for statistical inference – originally formulated in 1957 by Edwin T. Jaynes (an American physicist, known for his work on statistical mechanics) – indicates that, when seeking a probability density function subject to certain constraints (e.g. given μ and σ²), the function satisfying those constraints that exhibits the highest entropy should be selected. The basic idea behind this principle entails choice of a probability density function consistent with prior knowledge, but introducing no unwarranted information; hence, any probability density function satisfying the given constraints that has smaller entropy will necessarily contain more information (i.e. less uncertainty), and so will claim something stronger than can actually be assumed. Stated differently, the probability density function with maximum entropy (satisfying whatever constraints are imposed a priori) should turn out to be the least surprising


in terms of its predictions. Therefore, the principle of maximum entropy guides one to the best probability distribution reflecting current knowledge, while indicating what to do if experimental data do not agree with predictions based on the distribution chosen: understand why the phenomenon under scrutiny behaves in an unexpected way (i.e. find a previously unsuspected constraint), and maximize entropy over the distributions that satisfy all constraints already identified (including the new one). A proper appreciation of the principle of maximum entropy actually reflects a certain attitude toward the interpretation of probability distributions – viewed either as predictors of frequencies of outcomes over repeated trials, or as quantitative measures of the plausibility that some individual situation develops under certain conditions. Sometimes the former (or frequency) point of view is meaningless, so only the latter (subjective) interpretation of probability will make sense. Before proceeding any further, it is convenient to realize that
$$y - y\ln y \le x - y\ln x, \tag{18.106}$$
for any x > 0 and y > 0 – with the equality sign holding if and only if x = y; in fact, one gets
$$y\ln x - y\ln y \le x - y \tag{18.107}$$
after isolation of the two logarithms in the left-hand side of Eq. (18.106), or else
$$y\ln\frac{x}{y} \le x - y \tag{18.108}$$
upon factoring out y in the left-hand side, complemented with the operational features of a logarithmic function. Division of both sides by y > 0 transforms Eq. (18.108) to
$$\ln\frac{x}{y} \le \frac{x}{y} - 1; \tag{18.109}$$
the curves representing ln{x/y} and x/y − 1 are plotted in Fig. 18.1. It should be emphasized that the curve ln{x/y} always lies below the straight line x/y − 1 within the whole positive domain of x/y – except at x/y = 1, where they touch each other – thus fully justifying Eq. (18.109); this guarantees, in turn, the validity of Eq. (18.106).
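A quick numerical scan confirms this claim, assuming nothing beyond Eq. (18.109) itself: the gap between x/y − 1 and ln{x/y} never goes negative on the positive axis, and vanishes exactly at x/y = 1:

```python
import math

# gap(t) = (t - 1) - ln t, with t standing for the ratio x/y > 0
gap = lambda t: (t - 1.0) - math.log(t)

ts = [k / 1000.0 for k in range(1, 5000)]   # t from 0.001 to 4.999
gmin = min(gap(t) for t in ts)
print(gmin)           # smallest gap found, attained at t = 1
print(gap(1.0))       # equality case of Eq. (18.109)
```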

Figure 18.1 Graphical representation of the logarithm of x/y, and of x/y minus unity, both as functions of x/y > 0.
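Both Eq. (18.105), and the maximal-entropy property it feeds into, can be previewed numerically. In the sketch below (midpoint quadrature; the same-variance Laplace density is an arbitrary competitor chosen here for illustration, not taken from the text), the entropy of the normal matches the closed form and exceeds that of the competitor:

```python
import math

def entropy(density, lo, hi, n=100_000):
    # H{X} = -integral of D ln D, Eq. (18.94), by midpoint quadrature
    h, s = (hi - lo) / n, 0.0
    for k in range(n):
        d = density(lo + (k + 0.5) * h)
        if d > 0.0:
            s -= d * math.log(d) * h
    return s

mu, sigma = 1.0, 0.8
normal = lambda x: math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)
b = sigma / math.sqrt(2)                       # Laplace scale with 2*b**2 = sigma**2
laplace = lambda x: math.exp(-abs(x - mu) / b) / (2 * b)

hn = entropy(normal, mu - 12, mu + 12)
hl = entropy(laplace, mu - 12, mu + 12)
print(hn, 0.5 * (1 + math.log(2 * math.pi * sigma ** 2)))   # Eq. (18.105)
print(hl < hn)   # same variance, yet less entropy than the normal
```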


Consider two continuous probability density functions of x, say f{x} and g{x} – both necessarily positive over the entire real domain; according to Eq. (18.106), one may write
$$f\{x\} - f\{x\}\ln f\{x\} \le g\{x\} - f\{x\}\ln g\{x\} \tag{18.110}$$
once x is replaced by g{x} and y by f{x}, so integration of both sides over ]−∞, ∞[ yields
$$\int_{-\infty}^{\infty}\left(f\{x\} - f\{x\}\ln f\{x\}\right)dx \le \int_{-\infty}^{\infty}\left(g\{x\} - f\{x\}\ln g\{x\}\right)dx \tag{18.111}$$

– owing to Riemann's concept of definite integral, complemented by Eq. (11.134). Upon decomposition of the integrals on both sides, Eq. (18.111) becomes
$$\int_{-\infty}^{\infty}f\{x\}\,dx - \int_{-\infty}^{\infty}f\{x\}\ln f\{x\}\,dx \le \int_{-\infty}^{\infty}g\{x\}\,dx - \int_{-\infty}^{\infty}f\{x\}\ln g\{x\}\,dx; \tag{18.112}$$
since, by hypothesis, both f{x} and g{x} denote continuous probability density functions, Eq. (18.4) applies – so Eq. (18.112) simplifies to
$$1 - \int_{-\infty}^{\infty}f\{x\}\ln f\{x\}\,dx \le 1 - \int_{-\infty}^{\infty}f\{x\}\ln g\{x\}\,dx, \tag{18.113}$$
which is equivalent to
$$-\int_{-\infty}^{\infty}f\{x\}\ln f\{x\}\,dx \le -\int_{-\infty}^{\infty}f\{x\}\ln g\{x\}\,dx \tag{18.114}$$

after dropping unity from both sides (where the equality obviously holds only when f{x} coincides with g{x}). Consider now a probability density function D{X; μ, σ²}, with mean μ and variance σ² (note that the former must exist, otherwise the variance cannot be defined at all); if N{X; μ, σ²} denotes the normal distribution – with the very same mean and variance – then Eq. (18.114) yields
$$-\int_{-\infty}^{\infty}D\{X;\mu,\sigma^{2}\}\ln D\{X;\mu,\sigma^{2}\}\,dx \le -\int_{-\infty}^{\infty}D\{X;\mu,\sigma^{2}\}\ln N\{X;\mu,\sigma^{2}\}\,dx \tag{18.115}$$

after setting f{X} ≡ D{X} and g{X} ≡ N{X}. Furthermore, the right-hand side of Eq. (18.115) is given by
$$-\int_{-\infty}^{\infty}D\{X;\mu,\sigma^{2}\}\ln N\{X;\mu,\sigma^{2}\}\,dx = -\int_{-\infty}^{\infty}D\{X;\mu,\sigma^{2}\}\left(-\ln\left(\sqrt{2\pi}\,\sigma\right) - \frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right)dx = \int_{-\infty}^{\infty}D\{X;\mu,\sigma^{2}\}\left(\frac{1}{2}\ln\left(2\pi\sigma^{2}\right) + \frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right)dx, \tag{18.116}$$


in agreement with Eq. (18.93), and because $\ln\left(\sqrt{2\pi}\,\sigma\right) = \ln\left(\left(2\pi\sigma^{2}\right)^{1/2}\right) = \frac{1}{2}\ln\left(2\pi\sigma^{2}\right)$; after splitting the integral and taking constants off the kernels, Eq. (18.116) becomes
$$-\int_{-\infty}^{\infty}D\{X;\mu,\sigma^{2}\}\ln N\{X;\mu,\sigma^{2}\}\,dx = \frac{1}{2}\ln\left(2\pi\sigma^{2}\right)\int_{-\infty}^{\infty}D\{X;\mu,\sigma^{2}\}\,dx + \frac{1}{2}\int_{-\infty}^{\infty}D\{X;\mu,\sigma^{2}\}\left(\frac{x-\mu}{\sigma}\right)^{2}dx, \tag{18.117}$$
where Eq. (18.4) can be taken advantage of to write
$$-\int_{-\infty}^{\infty}D\{X;\mu,\sigma^{2}\}\ln N\{X;\mu,\sigma^{2}\}\,dx = \frac{1}{2}\ln\left(2\pi\sigma^{2}\right) + \frac{1}{2\sigma^{2}}\int_{-\infty}^{\infty}\left(x-\mu\right)^{2}D\{X;\mu,\sigma^{2}\}\,dx \tag{18.118}$$

– also after taking σ² off the kernel in the outstanding integral. The integral in the right-hand side of Eq. (18.118) is but the definition of the variance of D{X; μ, σ²} as per Eq. (18.15), so one gets merely
$$-\int_{-\infty}^{\infty}D\{X;\mu,\sigma^{2}\}\ln N\{X;\mu,\sigma^{2}\}\,dx = \frac{1}{2}\ln\left(2\pi\sigma^{2}\right) + \frac{1}{2\sigma^{2}}\sigma^{2} = \frac{1}{2}\ln\left(2\pi\sigma^{2}\right) + \frac{1}{2} = \frac{1}{2}\left(1+\ln\left(2\pi\sigma^{2}\right)\right) = H_N\{X\}, \tag{18.119}$$
along with straightforward algebraic manipulation and retrieval of Eq. (18.105). Combination of Eqs. (18.115) and (18.119) finally leads to
$$-\int_{-\infty}^{\infty}D\{X;\mu,\sigma^{2}\}\ln D\{X;\mu,\sigma^{2}\}\,dx \le H_N\{X\} \tag{18.120}$$
or, in view of the meaning of the left-hand side as per Eq. (18.94),
$$H_D\{X\} \le H_N\{X\}. \tag{18.121}$$
Therefore, the normal probability density function, with given mean and variance, exhibits the highest entropy, HN, when compared with the entropy, HD, of every other continuous probability distribution D{X} with the same mean and variance; this provides a fundamental justification for the use hereafter of the normal distribution as statistical reference.

18.2.3 Operational Features

Owing to its shape and properties, the normal distribution is the one most frequently utilized to describe phenomena that are well modeled by a continuous random variable, say, X. As seen before, this distribution has two constitutive parameters: μ, which may take any real value, and σ, which takes only positive values (as the positive square root of the variance, σ²); it is often abbreviated to N{X; μ, σ}, and the underlying probability density function reads
$$N\{x;\mu,\sigma\} \equiv \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right) \tag{18.122}$$

in parallel to Eq. (18.93). The aforementioned distribution may also be coined, in differential form, as
$$dN\{x;\mu,\sigma\} = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right)dx; \tag{18.123}$$

dN accordingly denotes the probability that X lies in the elementary interval [x, x + dx], where x denotes a particular value of the variable X – in full agreement with Eq. (18.1). In order to check whether Eq. (18.122) can represent a distribution of probability, one should calculate its integral over the whole real axis, viz.
$$\int_{-\infty}^{\infty}N\{x;\mu,\sigma\}\,dx = \int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right)dx \tag{18.124}$$
– or, after taking the constant factor off the integration sign,
$$\int_{-\infty}^{\infty}N\{x;\mu,\sigma\}\,dx = \frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{\infty}\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right)dx; \tag{18.125}$$

one may then revisit Eq. (18.97) as
$$x = \mu + \sigma y, \tag{18.126}$$
which entails
$$dx = \sigma\,dy \tag{18.127}$$
as differential form, in line with Eq. (18.98). Insertion of Eqs. (18.97) and (18.127) transforms Eq. (18.125) to
$$\int_{-\infty}^{\infty}N\{x;\mu,\sigma\}\,dx = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\frac{1}{\sigma}\exp\left(-\frac{y^{2}}{2}\right)\sigma\,dy, \tag{18.128}$$
which simplifies to
$$\int_{-\infty}^{\infty}N\{x;\mu,\sigma\}\,dx = \frac{\Psi}{\sqrt{2\pi}} \tag{18.129}$$
after dropping σ inside the kernel – provided that Ψ is defined as
$$\Psi \equiv \int_{-\infty}^{\infty}\exp\left(-\frac{y^{2}}{2}\right)dy. \tag{18.130}$$

The square of Ψ, as conveyed by Eq. (18.130), may be calculated as
$$\Psi^{2} = \left(\int_{-\infty}^{\infty}\exp\left(-\frac{y^{2}}{2}\right)dy\right)\left(\int_{-\infty}^{\infty}\exp\left(-\frac{w^{2}}{2}\right)dw\right), \tag{18.131}$$
where w denotes a second variable of integration – analogous to y; since the limits of integration in both cases do not depend on either integration variable y or w, Fubini's theorem allows one to write
$$\Psi^{2} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp\left(-\frac{y^{2}}{2}\right)\exp\left(-\frac{w^{2}}{2}\right)dy\,dw, \tag{18.132}$$


which is equivalent to
$$\Psi^{2} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp\left(-\frac{y^{2}+w^{2}}{2}\right)dy\,dw \tag{18.133}$$

after having collapsed the exponential functions. Upon introduction of polar coordinates, i.e.
$$y \equiv r\cos\theta \tag{18.134}$$
and
$$w \equiv r\sin\theta \tag{18.135}$$
– with r denoting radial position and θ denoting angular position – one easily obtains
$$y^{2} + w^{2} = r^{2}\cos^{2}\theta + r^{2}\sin^{2}\theta = r^{2}\left(\sin^{2}\theta + \cos^{2}\theta\right) \tag{18.136}$$
once squares are taken of both sides of Eqs. (18.134) and (18.135), followed by their ordered addition and factoring out of r² as appropriate; in view of the fundamental relationship of trigonometry, Eq. (18.136) reduces to
$$y^{2} + w^{2} = r^{2}. \tag{18.137}$$

A change of integration variables in Eq. (18.133), from y and w to r and θ, respectively, is now in order, according to
$$\Psi^{2} = \int_{0}^{2\pi}\int_{0}^{\infty}\exp\left(-\frac{r^{2}}{2}\right)\left|J\right|dr\,d\theta \tag{18.138}$$
– where advantage was taken of Eq. (18.137); note that the new limits of integration encompass θ covering the whole circle amplitude, [0, 2π], and r spanning all radii, [0, ∞[, thus accounting for the whole r0θ plane – in the same way it was already spanned by y and w, when they were both (independently) varied between −∞ and ∞. Remember that |J| denotes the absolute value of the determinant of the underlying Jacobian matrix, given by
$$J \equiv \begin{vmatrix}\dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial\theta}\\[2mm] \dfrac{\partial w}{\partial r} & \dfrac{\partial w}{\partial\theta}\end{vmatrix} \tag{18.139}$$
in parallel to Eq. (10.481); following combination with Eqs. (18.134) and (18.135), one may redo Eq. (18.139) to
$$J = \begin{vmatrix}\cos\theta & -r\sin\theta\\ \sin\theta & r\cos\theta\end{vmatrix}. \tag{18.140}$$

Calculation of the second-order determinant associated with J, as per Eq. (18.140), yields
$$J = \begin{vmatrix}\cos\theta & -r\sin\theta\\ \sin\theta & r\cos\theta\end{vmatrix} = r\cos^{2}\theta - \left(-r\sin^{2}\theta\right) = r\cos^{2}\theta + r\sin^{2}\theta = r\left(\sin^{2}\theta + \cos^{2}\theta\right), \tag{18.141}$$
where the fundamental relationship of trigonometry and the definition of absolute value may be invoked to write
$$\left|J\right| = \left|r\right| = r. \tag{18.142}$$

Combination of Eqs. (18.138) and (18.142) leads to
$$\Psi^{2} = \int_{0}^{2\pi}\int_{0}^{\infty}\exp\left(-\frac{r^{2}}{2}\right)r\,dr\,d\theta = -\int_{0}^{2\pi}\int_{0}^{\infty}\exp\left(-\frac{r^{2}}{2}\right)\left(-\frac{2r}{2}\right)dr\,d\theta, \tag{18.143}$$
along with multiplication and division by −2; integration in r yields
$$\Psi^{2} = -\int_{0}^{2\pi}\left[\exp\left(-\frac{r^{2}}{2}\right)\right]_{0}^{\infty}d\theta = \int_{0}^{2\pi}\left(e^{0} - e^{-\infty}\right)d\theta, \tag{18.144}$$

which degenerates to just
$$\Psi^{2} = \int_{0}^{2\pi}\left(1 - 0\right)d\theta = \int_{0}^{2\pi}d\theta. \tag{18.145}$$
Integration in θ then gives rise to
$$\Psi^{2} = \left[\theta\right]_{0}^{2\pi} = 2\pi \tag{18.146}$$
or, after taking square roots of both sides,
$$\Psi = \sqrt{2\pi}. \tag{18.147}$$
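The value Ψ = √(2π) just obtained can be cross-checked numerically by evaluating the double integral both ways – Cartesian, where it factorizes as in Eq. (18.131), and polar, where the r-integration is elementary as in Eqs. (18.143)–(18.145). A minimal sketch follows, with a finite truncation half-width L standing in for ∞ (a choice made here, not in the text):

```python
import math

# Cartesian route: Psi^2 = (∫ exp(-y^2/2) dy)^2, midpoint rule on [-L, L]
n, L = 2000, 8.0
h = 2 * L / n
col = sum(math.exp(-0.5 * (-L + (k + 0.5) * h) ** 2) for k in range(n)) * h
psi2_cart = col * col                 # the double integral factorises

# polar route: ∫0^{2π} ∫0^L exp(-r^2/2) r dr dθ = 2π (1 - e^{-L^2/2})
psi2_polar = 2 * math.pi * (1 - math.exp(-0.5 * L ** 2))

print(psi2_cart, psi2_polar, 2 * math.pi)   # all three nearly coincide
```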

Combination of Eqs. (18.129) and (18.147) finally leads to
$$\int_{-\infty}^{\infty}N\{x;\mu,\sigma\}\,dx = \frac{\sqrt{2\pi}}{\sqrt{2\pi}}, \tag{18.148}$$
where trivial algebraic simplification unfolds
$$\int_{-\infty}^{\infty}N\{x;\mu,\sigma\}\,dx = 1; \tag{18.149}$$

hence, N{X; μ, σ} satisfies the basic condition for a probability density function, as set forth by Eq. (18.4).

18.2.4 Moment-generating Function

18.2.4.1 Single Variable

Besides providing insight into the shape of a probability density function, the moments are also useful in comparing two given distributions; in fact, a necessary condition for two distributions to be identical is that they share the same moment sequence. However, this is not a sufficient condition – as identical values for the various moments do not unequivocally guarantee that the originating distributions coincide; a sufficient condition implies the same moment-generating function, G{s} – defined as
$$G\{s\} \equiv E\left\{e^{sX}\right\} \tag{18.150}$$


for variable X – which must exist and be finite, besides being continuous and differentiable (should it exist). According to Eq. (12.149), one may replace Eq. (18.150) by
$$G\{s\} = E\left\{1 + sX + \frac{\left(sX\right)^{2}}{2!} + \cdots + \frac{\left(sX\right)^{i}}{i!} + \cdots\right\} \tag{18.151}$$
following expansion around sX = 0; the above series (holding an infinite number of terms) is convergent, irrespective of the actual value taken by sX – see Eq. (12.151). Once Eq. (18.7) is recalled, one may redo Eq. (18.151) as
$$G\{s\} = \int_{-\infty}^{\infty}\left(1 + sx + \frac{s^{2}}{2!}x^{2} + \cdots + \frac{s^{i}}{i!}x^{i} + \cdots\right)D\{x\}\,dx, \tag{18.152}$$

where the distributive property supports transformation to
$$G\{s\} = \int_{-\infty}^{\infty}\left(D\{x\} + sxD\{x\} + \frac{s^{2}}{2!}x^{2}D\{x\} + \cdots + \frac{s^{i}}{i!}x^{i}D\{x\} + \cdots\right)dx; \tag{18.153}$$
Eq. (11.101) may now be revisited to write
$$G\{s\} = \int_{-\infty}^{\infty}D\{x\}\,dx + \int_{-\infty}^{\infty}sxD\{x\}\,dx + \int_{-\infty}^{\infty}\frac{s^{2}}{2!}x^{2}D\{x\}\,dx + \cdots + \int_{-\infty}^{\infty}\frac{s^{i}}{i!}x^{i}D\{x\}\,dx + \cdots, \tag{18.154}$$

si i x D x dx + …, −∞ i 18 154

where the constancy of s with regard to integration variable x prompts conversion to ∞

G s =

−∞



D x dx + s

−∞

xD x dx +

s2 2

∞ −∞

x2 D x dx + … +

si i

∞ −∞

x i D x dx + … 18 155

Differentiation of both sides of Eq. (18.155) with regard to s gives rise to
$$\frac{dG\{s\}}{ds} = \int_{-\infty}^{\infty}xD\{x\}\,dx + \frac{2s}{2!}\int_{-\infty}^{\infty}x^{2}D\{x\}\,dx + \cdots + \frac{is^{i-1}}{i!}\int_{-\infty}^{\infty}x^{i}D\{x\}\,dx + \cdots, \tag{18.156}$$

which simplifies to
$$\frac{dG\{s\}}{ds} = \int_{-\infty}^{\infty}xD\{x\}\,dx + s\int_{-\infty}^{\infty}x^{2}D\{x\}\,dx + \cdots + \frac{s^{i-1}}{\left(i-1\right)!}\int_{-\infty}^{\infty}x^{i}D\{x\}\,dx + \cdots; \tag{18.157}$$
a second differentiation with regard to s likewise produces
$$\frac{d^{2}G\{s\}}{ds^{2}} \equiv \frac{d}{ds}\left(\frac{dG\{s\}}{ds}\right) = \int_{-\infty}^{\infty}x^{2}D\{x\}\,dx + \cdots + \frac{s^{i-2}}{\left(i-2\right)!}\int_{-\infty}^{\infty}x^{i}D\{x\}\,dx + \cdots. \tag{18.158}$$

When s = 0, Eq. (18.157) reduces to
$$\left.\frac{dG\{s\}}{ds}\right|_{s=0} = \int_{-\infty}^{\infty}xD\{x\}\,dx \equiv \mu_1 = E\{X\} = E\left\{\left.\frac{de^{sX}}{ds}\right|_{s=0}\right\}, \tag{18.159}$$
with the aid of Eq. (18.7); by the same token, s = 0 turns Eq. (18.158) to
$$\left.\frac{d^{2}G\{s\}}{ds^{2}}\right|_{s=0} = \int_{-\infty}^{\infty}x^{2}D\{x\}\,dx \equiv \mu_2 = E\left\{X^{2}\right\} = E\left\{\left.\frac{d^{2}e^{sX}}{ds^{2}}\right|_{s=0}\right\}, \tag{18.160}$$

in agreement with Eq. (18.18) after setting i = 2. Extension of the above reasoning allows one to conclude, in general, that μi

E Xi =

d iG s ds i

s=0

di E e sX ds i

=E s=0

d i e sX ds i

; i = 1,2, …, s=0

18 161 consistent with Eq. (18.18) – thus justifying the designation of moment-generating function ascribed to G{s}. Once in possession of Eqs. (18.7) and (18.150), one may easily write the momentgenerating function for a normal distribution as ∞

GN s

−∞

e sx N x; μ, σ dx,

18 162

where further insertion of Eq. (18.122) gives rise to

$$ G_N\{s\} = \int_{-\infty}^{\infty}e^{sx}\,\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left\{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right\}dx; \tag{18.163} $$

after taking out constant factors and condensing exponential terms, Eq. (18.163) becomes

$$ G_N\{s\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left\{sx - \frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right\}\frac{dx}{\sigma}. \tag{18.164} $$

In view of Eqs. (18.97), (18.98), and (18.126), one may redo Eq. (18.164) to

$$ G_N\{s\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left\{\mu s + \sigma s y - \frac{y^2}{2}\right\}dy, \tag{18.165} $$

where the (infinite) limits of integration were kept after taking Eq. (18.99) in consideration. Equation (18.165) may be rewritten as

$$ G_N\{s\} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{\mu s}\exp\left\{\sigma s y - \frac{y^2}{2}\right\}dy = \frac{e^{\mu s}}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left\{\sigma s y - \frac{y^2}{2}\right\}dy, \tag{18.166} $$

after splitting the composite exponential function and taking $e^{\mu s}$ off the kernel – or, upon lumping the two terms in the argument of the exponential function of the kernel,

$$ G_N\{s\} = \frac{e^{\mu s}}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left\{\frac{2\sigma s y - y^2}{2}\right\}dy; \tag{18.167} $$


addition and subtraction of (σs)² in said argument, followed by splitting of the result give

$$ G_N\{s\} = \frac{e^{\mu s}}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left\{\frac{-(\sigma s)^2 + 2\sigma s y - y^2 + (\sigma s)^2}{2}\right\}dy = \frac{e^{\mu s}}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left\{\frac{(\sigma s)^2}{2}\right\}\exp\left\{\frac{2\sigma s y - y^2 - (\sigma s)^2}{2}\right\}dy \tag{18.168} $$

– whereas factoring out of the constant factor turns Eq. (18.168) to

$$ G_N\{s\} = \frac{e^{\mu s}\exp\left\{\frac{(\sigma s)^2}{2}\right\}}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left\{-\frac{y^2 - 2\sigma s y + (\sigma s)^2}{2}\right\}dy. \tag{18.169} $$

Newton’s binomial formula permits condensation of Eq. (18.169) as

$$ G_N\{s\} = \frac{1}{\sqrt{2\pi}}\exp\left\{\mu s + \frac{1}{2}(\sigma s)^2\right\}\int_{-\infty}^{\infty}\exp\left\{-\frac{(y-\sigma s)^2}{2}\right\}dy, \tag{18.170} $$

where the exponential functions prior to the integral were meanwhile collapsed. For convenience, −σs may now be made apparent under the differential sign of Eq. (18.170), viz.

$$ G_N\{s\} = \frac{1}{\sqrt{2\pi}}\exp\left\{\mu s + \frac{1}{2}(\sigma s)^2\right\}\int_{-\infty}^{\infty}\exp\left\{-\frac{(y-\sigma s)^2}{2}\right\}d(y-\sigma s), \tag{18.171} $$

due to its constancy – without need to change the limits of integration, as −σs is finite; the integral in Eq. (18.171) is equivalent to the left-hand side of Eq. (18.72), as long as y − σs replaces ξ, so one gets merely

$$ G_N\{s\} = \frac{1}{\sqrt{2\pi}}\exp\left\{\mu s + \frac{1}{2}(\sigma s)^2\right\}\sqrt{2\pi}. \tag{18.172} $$

After having canceled $\sqrt{2\pi}$ in numerator and denominator, Eq. (18.172) breaks down to

$$ G_N\{s\} = \exp\left\{\mu s + \frac{1}{2}(\sigma s)^2\right\} \tag{18.173} $$

as final form for the moment-generating function of N{X; μ, σ}; Eq. (18.173) often appears as

$$ G_N\{s\} = \exp\left\{\mu s + \frac{\sigma^2}{2}s^2\right\}, \tag{18.174} $$

thus outlining its intrinsic functionality as exponential of a second-order polynomial in s. After combining Eqs. (18.159) and (18.173), one obtains

$$ \mu_{1,N} = \left.\frac{dG_N\{s\}}{ds}\right|_{s=0} = \left.\left(\mu + \frac{1}{2}\,2(\sigma s)\sigma\right)\exp\left\{\mu s + \frac{1}{2}(\sigma s)^2\right\}\right|_{s=0} = \left.(\mu + \sigma^2 s)\exp\left\{\mu s + \frac{1}{2}(\sigma s)^2\right\}\right|_{s=0} = (\mu + 0)\exp\{0 + 0\}; \tag{18.175} $$


Eq. (18.175) simplifies to

$$ \left.E\{X\}\right|_{N\{X;\mu,\sigma\}} \equiv \mu_{1,N} = \mu \tag{18.176} $$

– as expected, in view of the original definition of μ as mean of the normal distribution N{X; μ, σ}. Further differentiation of Eq. (18.175) – after recalling Eq. (18.160) – gives rise to

$$ \mu_{2,N} = \left.\frac{d^2G_N\{s\}}{ds^2}\right|_{s=0} \equiv \left.\frac{d}{ds}\frac{dG_N\{s\}}{ds}\right|_{s=0} = \left.\frac{d}{ds}\left((\mu + \sigma^2 s)\exp\left\{\mu s + \frac{1}{2}(\sigma s)^2\right\}\right)\right|_{s=0} = \left.\sigma^2\exp\left\{\mu s + \frac{1}{2}(\sigma s)^2\right\} + (\mu + \sigma^2 s)\exp\left\{\mu s + \frac{1}{2}(\sigma s)^2\right\}\left(\mu + \frac{1}{2}\,2(\sigma s)\sigma\right)\right|_{s=0} = \left.\sigma^2\exp\left\{\mu s + \frac{1}{2}(\sigma s)^2\right\} + (\mu + \sigma^2 s)^2\exp\left\{\mu s + \frac{1}{2}(\sigma s)^2\right\}\right|_{s=0}, \tag{18.177} $$

based on the classical theorems of differentiation of a product, a sum, an exponential, and a composite function, coupled with lumping of factors alike; Eq. (18.177) reduces to

$$ \mu_{2,N} = \sigma^2\exp\{0+0\} + (\mu + 0)^2\exp\{0+0\} = \sigma^2 + \mu^2. \tag{18.178} $$
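The chain from Eq. (18.161) to Eq. (18.178) can be checked numerically: differentiating the closed-form G_N{s} of Eq. (18.173) at s = 0 by finite differences should return μ and σ² + μ². The Python sketch below is illustrative only and not part of the original text; the values μ = 1.5 and σ = 2 are arbitrary.

```python
import math

def G_N(s, mu=1.5, sigma=2.0):
    """Moment-generating function of N{X; mu, sigma}, Eq. (18.173);
    mu and sigma are arbitrary example parameters."""
    return math.exp(mu * s + 0.5 * (sigma * s) ** 2)

h = 1e-5
# first derivative at s = 0 (central difference) -> mu_1 = mu, Eq. (18.176)
mu1 = (G_N(h) - G_N(-h)) / (2 * h)
# second derivative at s = 0 -> mu_2 = sigma^2 + mu^2, Eq. (18.178)
mu2 = (G_N(h) - 2 * G_N(0.0) + G_N(-h)) / h ** 2

print(mu1)  # ~ 1.5
print(mu2)  # ~ 2.0**2 + 1.5**2 = 6.25
```

Central differences suffice here because G_N{s} is smooth; the step h trades truncation error against round-off.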

In view of Eqs. (18.28), (18.176), and (18.178), one may write

$$ Var\{X\} = \mu_{2,N} - \mu_{1,N}^2 = \sigma^2 + \mu^2 - \mu^2 \tag{18.179} $$

that reduces to

$$ \left.Var\{X\}\right|_{N\{X;\mu,\sigma\}} = \sigma^2; \tag{18.180} $$

this result was again anticipated, given the definition of σ² as variance of the normal distribution N{X; μ, σ} in the first place. Before proceeding to the next stage, it is convenient to apply Eq. (18.7) to the situation where X is replaced by (constant) k, i.e.

$$ E\{k\} \equiv \int_{-\infty}^{\infty}kD\{x\}dx = k\int_{-\infty}^{\infty}D\{x\}dx = k, \tag{18.181} $$

at the expense of Eq. (18.4); the result above is not affected by the form of D{X}, so the expected value of a constant coincides with itself. By the same token,

$$ E\{kX\} \equiv \int_{-\infty}^{\infty}kxD\{x\}dx = k\int_{-\infty}^{\infty}xD\{x\}dx = kE\{X\}, \tag{18.182} $$

where Eq. (18.7) was taken into account, and advantage further taken of the constancy of k; Eqs. (18.181) and (18.182) confirm the linearity of operator E. If the original random variable X – bearing

$$ G_X\{s\} \equiv E\{e^{sX}\} \tag{18.183} $$


as associated moment-generating function in line with Eq. (18.150) – undergoes a linear transformation to generate a new random variable Y, defined as

$$ Y \equiv a + bX, \tag{18.184} $$

then Eq. (18.150) may again be invoked to write

$$ G_Y\{s\} \equiv \left.G_X\{s\}\right|_{X \to a+bX} = \left.E\{e^{sX}\}\right|_{X \to a+bX}; \tag{18.185} $$

here a and b denote (given) constants. Equation (18.185) may undergo algebraic rearrangement to read

$$ G_Y\{s\} = E\{e^{s(a+bX)}\} = E\{e^{as+bsX}\} = E\{e^{as}e^{bsX}\}, \tag{18.186} $$

at the expense of the operational features of an exponential function; since $e^{as}$ is a constant, Eq. (18.182) may be used to transform Eq. (18.186) to

$$ G_Y\{s\} = e^{as}E\{e^{(bs)X}\} \tag{18.187} $$

– or, equivalently,

$$ G_{a+bX}\{s\} = e^{as}G_X\{bs\}, \tag{18.188} $$

using Eq. (18.150) as template, and writing $E\{e^{(bs)X}\}$ as $G_X\{bs\} \equiv \left.G_X\{s\}\right|_{s\to bs}$. Equation (18.188) applies no matter which statistical distribution is at stake – and, in particular, it will read

$$ G_{a+bN}\{s\} = e^{as}G_N\{bs\} \equiv e^{as}\left.G_N\{s\}\right|_{s\to bs} \tag{18.189} $$

for a normal distribution; insertion of Eq. (18.173) permits transformation of Eq. (18.189) to

$$ G_{a+bN}\{s\} = e^{as}\left.\exp\left\{\mu s + \frac{1}{2}(\sigma s)^2\right\}\right|_{s\to bs} = \exp\left\{as + \mu(bs) + \frac{1}{2}(\sigma bs)^2\right\} = \exp\left\{(a+b\mu)s + \frac{1}{2}(b\sigma)^2s^2\right\}. \tag{18.190} $$

Upon comparative inspection with Eq. (18.173), one may reformulate Eq. (18.190) to

$$ G_{a+bN}\{s\} = \left.\exp\left\{\mu s + \frac{1}{2}\sigma^2s^2\right\}\right|_{\substack{\mu\to a+b\mu\\ \sigma\to b\sigma}} = \left.G_N\{s\}\right|_{\substack{\mu\to a+b\mu\\ \sigma\to b\sigma}}; \tag{18.191} $$

consequently, one may jump to Eq. (18.176) and write

$$ E\{a + bN\} = a + b\mu = a + bE\{N\} \tag{18.192} $$

as analogue applying to the mean after μ has been replaced by a + bμ, as well as

$$ Var\{a + bN\} = b^2\sigma^2 = b^2Var\{N\} \tag{18.193} $$


that resorts to Eq. (18.180) as template, after replacing σ by bσ. The above reasoning may logically be extended so as to encompass any number M of terms, i.e.

$$ E\left\{\sum_{i=1}^{M}(a_i + b_iN_i)\right\} = \sum_{i=1}^{M}a_i + \sum_{i=1}^{M}b_iE\{N_i\}, \tag{18.194} $$

stemming from Eq. (18.192) – after replacing a by $\sum_{i=1}^{M}a_i$ and bE{N} by $\sum_{i=1}^{M}b_iE\{N_i\}$; and likewise

$$ Var\left\{\sum_{i=1}^{M}(a_i + b_iN_i)\right\} = \sum_{i=1}^{M}b_i^2Var\{N_i\}, \tag{18.195} $$

based on Eq. (18.193) – upon swapping $b^2Var\{N\}$ for $\sum_{i=1}^{M}b_i^2Var\{N_i\}$, valid irrespective of the number and actual values of the $b_i$’s.

18.2.4.2 Multiple Variables

The concept of moment-generating function, associated with a single random variable, may be generalized to the joint moment-generating function, pertaining to a vector X of N random variables, viz.

$$ \boldsymbol{X} \equiv \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_N \end{bmatrix}, \tag{18.196} $$

according to

$$ G_{\boldsymbol{X}}\{\boldsymbol{s}\} \equiv E\{\exp\{\boldsymbol{s}^T\boldsymbol{X}\}\}. \tag{18.197} $$

Equation (18.150) was utilized as template to produce Eq. (18.197), with s denoting an (N × 1) column vector defined as

$$ \boldsymbol{s} \equiv \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_N \end{bmatrix}; \tag{18.198} $$

the $G_{X_i}$’s are taken as finite, continuous, and differentiable functions within the range of interest of $s_i$, and the $X_i$’s are independent from (or uncorrelated to) each other. In view of the algorithm of multiplication of a row vector (i.e. $\boldsymbol{s}^T$) by a column vector (i.e. $\boldsymbol{X}$) as a particular case of Eq. (4.47), coupled with the mathematical properties of an exponential function, Eq. (18.197) may be rewritten as

$$ G_{\boldsymbol{X}}\{\boldsymbol{s}\} = E\left\{\exp\left\{\sum_{i=1}^{N}s_iX_i\right\}\right\} = E\left\{\prod_{i=1}^{N}e^{s_iX_i}\right\}. \tag{18.199} $$


On the other hand, one realizes that

$$ E\{Z_1Z_2\} \equiv \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}z_1z_2D_{1,2}\{Z_1,Z_2\}\,dz_1dz_2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}z_1z_2D_1\{Z_1\}D_2\{Z_2\}\,dz_1dz_2 \tag{18.200} $$

by definition of joint probability function as extrapolation of Eq. (18.7) – where application of Fubini’s theorem unfolds

$$ E\{Z_1Z_2\} = \int_{-\infty}^{\infty}z_1D_1\{Z_1\}dz_1\int_{-\infty}^{\infty}z_2D_2\{Z_2\}dz_2 = E\{Z_1\}E\{Z_2\}, \tag{18.201} $$

along with Eq. (18.7) once more; here Z₁ and Z₂ are taken as independent random variables, so that the joint probability density function D₁,₂{Z₁,Z₂} relates to the individual probability density functions, D₁{Z₁} and D₂{Z₂}, via their product, viz. D₁,₂{Z₁,Z₂} = D₁{Z₁}D₂{Z₂}. Since the result conveyed by Eq. (18.201) can be readily extended to N random variables, Z₁, Z₂, …, Z_N, i.e.

$$ E\{Z_1Z_2\cdots Z_N\} = E\{Z_1\}E\{Z_2\}\cdots E\{Z_N\}, \tag{18.202} $$

one may convert Eq. (18.199) to

$$ G_{\boldsymbol{X}}\{\boldsymbol{s}\} = \prod_{i=1}^{N}E\{e^{s_iX_i}\} \tag{18.203} $$

– after setting $Z_i \equiv e^{s_iX_i}$, since $e^{s_iX_i}$ is a random variable when $X_i$ is itself a random variable; Eq. (18.203) is equivalent to writing

$$ G_{\boldsymbol{X}}\{\boldsymbol{s}\} = \prod_{i=1}^{N}G_{X_i}\{s_i\}, \tag{18.204} $$

in view of Eq. (18.150). If all $X_i$’s follow independent normal distributions – each characterized by mean $\mu_i$ and variance $\sigma_i^2$ – then one may rewrite Eq. (18.204) as

$$ G_N\{\boldsymbol{s}\} = \prod_{i=1}^{N}\exp\left\{\mu_is_i + \frac{1}{2}(\sigma_is_i)^2\right\}, \tag{18.205} $$

following (consecutive) insertion of Eq. (18.173) in Eq. (18.204); the associated mean vector and covariance matrix should consequently read

$$ \boldsymbol{\mu} \equiv \begin{bmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_N \end{bmatrix} \tag{18.206} $$

and

$$ \boldsymbol{\Sigma} \equiv \begin{bmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_N^2 \end{bmatrix}, \tag{18.207} $$


respectively – where the diagonal nature of the (N × N) matrix, i.e. with only zeros above and below the main diagonal, reflects the independence between the $X_i$’s postulated above. Since the extended product of exponentials is convertible to the exponential of the associated summation, Eq. (18.205) may take the alternative form

$$ G_N\{\boldsymbol{s}\} = \exp\left\{\sum_{i=1}^{N}\left(\mu_is_i + \frac{1}{2}(\sigma_is_i)^2\right)\right\} = \exp\left\{\sum_{i=1}^{N}\mu_is_i + \frac{1}{2}\sum_{i=1}^{N}(\sigma_is_i)^2\right\} = \exp\left\{\sum_{i=1}^{N}s_i\mu_i + \frac{1}{2}\sum_{i=1}^{N}s_i\sigma_i^2s_i\right\} \tag{18.208} $$

– where straightforward algebraic rearrangement meanwhile took place; based on the algorithm of multiplication of matrices as per Eq. (4.47), one may reformulate Eq. (18.208) to

$$ G_N\{\boldsymbol{s}\} = \exp\left\{\boldsymbol{s}^T\boldsymbol{\mu} + \frac{1}{2}\boldsymbol{s}^T\boldsymbol{\Sigma}\boldsymbol{s}\right\}, \tag{18.209} $$

with s, μ, and Σ defined by Eqs. (18.198), (18.206), and (18.207), respectively.

In attempts to produce the (n₁ + n₂ + ⋯ + n_N)th cross-order moments of the aforementioned joint probability density function, one should resort to (n₁ + n₂ + ⋯ + n_N)th order differentiation of Eq. (18.197), simultaneously $n_i$ times with regard to each $s_i$; one should also realize that the relationship labeled as Eq. (18.161) – entailing exchangeability of operators E and $d^i/ds^i$ (for s = 0) – may be extended to any $\partial^{n_i}/\partial s_i^{n_i}$. Under these circumstances, Eq. (18.197) supports

$$ \left.\frac{\partial^{n_1+n_2+\cdots+n_N}G_{\boldsymbol{X}}\{\boldsymbol{s}\}}{\partial s_1^{n_1}\partial s_2^{n_2}\cdots\partial s_N^{n_N}}\right|_{\boldsymbol{s}=\boldsymbol{0}_{N\times1}} \equiv \left.\frac{\partial^{n_1+n_2+\cdots+n_N}E\{\exp\{\boldsymbol{s}^T\boldsymbol{X}\}\}}{\partial s_1^{n_1}\partial s_2^{n_2}\cdots\partial s_N^{n_N}}\right|_{\boldsymbol{s}=\boldsymbol{0}_{N\times1}} = E\left\{\left.\frac{\partial^{n_1+n_2+\cdots+n_N}\exp\left\{\sum_{i=1}^{N}s_iX_i\right\}}{\partial s_1^{n_1}\partial s_2^{n_2}\cdots\partial s_N^{n_N}}\right|_{\boldsymbol{s}=\boldsymbol{0}_{N\times1}}\right\}; \tag{18.210} $$

the rule of differentiation of the exponential of a sum leads, in turn, to

$$ \left.\frac{\partial^{n_1+n_2+\cdots+n_N}G_{\boldsymbol{X}}\{\boldsymbol{s}\}}{\partial s_1^{n_1}\partial s_2^{n_2}\cdots\partial s_N^{n_N}}\right|_{\boldsymbol{s}=\boldsymbol{0}_{N\times1}} = E\left\{\left.\exp\left\{\sum_{i=1}^{N}s_iX_i\right\}X_1^{n_1}X_2^{n_2}\cdots X_N^{n_N}\right|_{\boldsymbol{s}=\boldsymbol{0}_{N\times1}}\right\} \tag{18.211} $$


– where the differential operator, ∂/∂s_i, was sequentially applied as

$$ \frac{\partial\exp\left\{\sum_{j=1}^{N}s_jX_j\right\}}{\partial s_i} = \exp\left\{\sum_{j=1}^{N}s_jX_j\right\}\frac{\partial}{\partial s_i}\sum_{j=1}^{N}s_jX_j = \exp\left\{\sum_{j=1}^{N}s_jX_j\right\}\frac{\partial}{\partial s_i}\left(s_iX_i + \sum_{\substack{j=1\\ j\neq i}}^{N}s_jX_j\right) = \exp\left\{\sum_{j=1}^{N}s_jX_j\right\}(X_i + 0) = \exp\left\{\sum_{j=1}^{N}s_jX_j\right\}X_i; \tag{18.212} $$

the powers in Eq. (18.211) may then be collapsed to generate

$$ \frac{\partial^{n_1+n_2+\cdots+n_N}G_{\boldsymbol{X}}\{\boldsymbol{s}\}}{\partial s_1^{n_1}\partial s_2^{n_2}\cdots\partial s_N^{n_N}} = E\left\{\exp\left\{\sum_{i=1}^{N}s_iX_i\right\}\prod_{i=1}^{N}X_i^{n_i}\right\}. \tag{18.213} $$

After setting s₁ = s₂ = ⋯ = s_N = 0, Eq. (18.213) degenerates to

$$ \left.\frac{\partial^{n_1+n_2+\cdots+n_N}G_{\boldsymbol{X}}\{\boldsymbol{s}\}}{\partial s_1^{n_1}\partial s_2^{n_2}\cdots\partial s_N^{n_N}}\right|_{\boldsymbol{s}=\boldsymbol{0}_{N\times1}} = E\left\{\exp\left\{\sum_{i=1}^{N}0\,X_i\right\}\prod_{i=1}^{N}X_i^{n_i}\right\} = E\left\{e^0\prod_{i=1}^{N}X_i^{n_i}\right\} = E\left\{\prod_{i=1}^{N}X_i^{n_i}\right\}. \tag{18.214} $$

Sequential application of Eq. (18.202) finally supports transformation of Eq. (18.214) to

$$ \left.\frac{\partial^{n_1+n_2+\cdots+n_N}G_{\boldsymbol{X}}\{\boldsymbol{s}\}}{\partial s_1^{n_1}\partial s_2^{n_2}\cdots\partial s_N^{n_N}}\right|_{\boldsymbol{s}=\boldsymbol{0}_{N\times1}} = \prod_{i=1}^{N}E\{X_i^{n_i}\}; \tag{18.215} $$

hence, the joint moment-generating function leads (via partial differentiation) to the cross moments of the overall distribution – which are, in turn, accessible via the product of the corresponding moments of the individual distributions.

Consider now a linear transformation of the original vector of random variables, X, to another vector of random variables, Y, following

$$ \boldsymbol{Y} \equiv \boldsymbol{A} + \boldsymbol{B}\boldsymbol{X} \tag{18.216} $$

– where A denotes an (N × 1) vector and B denotes an (N × N) matrix; upon insertion of Eq. (18.216) in Eq. (18.197), one obtains

$$ G_{\boldsymbol{Y}}\{\boldsymbol{s}\} \equiv \left.G_{\boldsymbol{X}}\{\boldsymbol{s}\}\right|_{\boldsymbol{X}\to\boldsymbol{A}+\boldsymbol{B}\boldsymbol{X}} = \left.E\{\exp\{\boldsymbol{s}^T\boldsymbol{X}\}\}\right|_{\boldsymbol{X}\to\boldsymbol{A}+\boldsymbol{B}\boldsymbol{X}} = E\{\exp\{\boldsymbol{s}^T(\boldsymbol{A}+\boldsymbol{B}\boldsymbol{X})\}\} = E\{\exp\{\boldsymbol{s}^T\boldsymbol{A}+\boldsymbol{s}^T\boldsymbol{B}\boldsymbol{X}\}\} = E\{\exp\{\boldsymbol{s}^T\boldsymbol{A}\}\exp\{\boldsymbol{s}^T\boldsymbol{B}\boldsymbol{X}\}\}, \tag{18.217} $$

with the aid of the operational features of an exponential function. The vector analogue of Eq. (18.182), i.e.

$$ E\{\boldsymbol{k}^T\boldsymbol{X}\} = \boldsymbol{k}^TE\{\boldsymbol{X}\}, \tag{18.218} $$


provided that

$$ \boldsymbol{k} \equiv \begin{bmatrix} k_1 \\ k_2 \\ \vdots \\ k_N \end{bmatrix}, \tag{18.219} $$

may now be invoked to transform Eq. (18.217) to

$$ G_{\boldsymbol{Y}}\{\boldsymbol{s}\} = \exp\{\boldsymbol{s}^T\boldsymbol{A}\}E\{\exp\{(\boldsymbol{s}^T\boldsymbol{B})\boldsymbol{X}\}\} = \exp\{\boldsymbol{s}^T\boldsymbol{A}\}E\{\exp\{(\boldsymbol{B}^T\boldsymbol{s})^T\boldsymbol{X}\}\} = \exp\{\boldsymbol{s}^T\boldsymbol{A}\}\left.G_{\boldsymbol{X}}\{\boldsymbol{s}\}\right|_{\boldsymbol{s}\to\boldsymbol{B}^T\boldsymbol{s}}, \tag{18.220} $$

after recalling Eq. (18.197) – while taking advantage of the constancy of exp{sᵀA}, and using Eq. (4.120) pertaining to the transpose of a product of matrices; Eq. (18.220) may finally be coined as

$$ G_{\boldsymbol{A}+\boldsymbol{B}\boldsymbol{X}}\{\boldsymbol{s}\} = \exp\{\boldsymbol{s}^T\boldsymbol{A}\}G_{\boldsymbol{X}}\{\boldsymbol{B}^T\boldsymbol{s}\}, \tag{18.221} $$

which resembles Eq. (18.188) in matrix form. In the particular case of normal distributions, Eq. (18.221) degenerates to

$$ G_{\boldsymbol{A}+\boldsymbol{B}N}\{\boldsymbol{s}\} = \exp\{\boldsymbol{s}^T\boldsymbol{A}\}G_N\{\boldsymbol{B}^T\boldsymbol{s}\} = \exp\{\boldsymbol{s}^T\boldsymbol{A}\}\left.\exp\left\{\boldsymbol{s}^T\boldsymbol{\mu}+\frac{1}{2}\boldsymbol{s}^T\boldsymbol{\Sigma}\boldsymbol{s}\right\}\right|_{\boldsymbol{s}\to\boldsymbol{B}^T\boldsymbol{s}} = \exp\{\boldsymbol{s}^T\boldsymbol{A}\}\exp\left\{(\boldsymbol{B}^T\boldsymbol{s})^T\boldsymbol{\mu}+\frac{1}{2}(\boldsymbol{B}^T\boldsymbol{s})^T\boldsymbol{\Sigma}(\boldsymbol{B}^T\boldsymbol{s})\right\} \tag{18.222} $$

with the aid of Eq. (18.209) – where the known properties of the transpose of a product of matrices, the transpose of a transpose, the distributive property, and the product of exponentials permit transformation to

$$ G_{\boldsymbol{A}+\boldsymbol{B}N}\{\boldsymbol{s}\} = \exp\{\boldsymbol{s}^T\boldsymbol{A}\}\exp\left\{\boldsymbol{s}^T\boldsymbol{B}\boldsymbol{\mu}+\frac{1}{2}\boldsymbol{s}^T\boldsymbol{B}\boldsymbol{\Sigma}\boldsymbol{B}^T\boldsymbol{s}\right\} = \exp\left\{\boldsymbol{s}^T\boldsymbol{A}+\boldsymbol{s}^T\boldsymbol{B}\boldsymbol{\mu}+\frac{1}{2}\boldsymbol{s}^T\boldsymbol{B}\boldsymbol{\Sigma}\boldsymbol{B}^T\boldsymbol{s}\right\} = \exp\left\{\boldsymbol{s}^T(\boldsymbol{A}+\boldsymbol{B}\boldsymbol{\mu})+\frac{1}{2}\boldsymbol{s}^T\boldsymbol{B}\boldsymbol{\Sigma}\boldsymbol{B}^T\boldsymbol{s}\right\}; \tag{18.223} $$

consequently, one finds

$$ G_{\boldsymbol{A}+\boldsymbol{B}N}\{\boldsymbol{s}\} = \left.\exp\left\{\boldsymbol{s}^T\boldsymbol{\mu}+\frac{1}{2}\boldsymbol{s}^T\boldsymbol{\Sigma}\boldsymbol{s}\right\}\right|_{\substack{\boldsymbol{\mu}\to\boldsymbol{A}+\boldsymbol{B}\boldsymbol{\mu}\\ \boldsymbol{\Sigma}\to\boldsymbol{B}\boldsymbol{\Sigma}\boldsymbol{B}^T}} \tag{18.224} $$


in view of Eq. (18.209). Note, in this particular, that

$$ \mu_{1,i} \equiv E\{X_i\} = \left.\frac{\partial G_N\{\boldsymbol{s}\}}{\partial s_i}\right|_{\boldsymbol{s}=\boldsymbol{0}_{N\times1}} \tag{18.225} $$

after Eq. (18.215) with $n_i = 1$ and $n_{j\neq i} = 0$, and thus

$$ \left.E\{\boldsymbol{X}\}\right|_{N\{\boldsymbol{X};\boldsymbol{\mu},\boldsymbol{\Sigma}\}} = \boldsymbol{\mu} \tag{18.226} $$

– consistent with Eq. (18.206) and mimicking Eq. (18.176) in functional form. By the same token,

$$ \mu_{2,i,j} \equiv E\{X_iX_j\} = \left.\frac{\partial^2G_N\{\boldsymbol{s}\}}{\partial s_i\partial s_j}\right|_{\boldsymbol{s}=\boldsymbol{0}_{N\times1}} \tag{18.227} $$

following Eq. (18.215) again, but with $n_i = n_j = 1$ for $i \neq j$, and $n_i = 2$ for $i = j$ and $n_{k\neq i,j} = 0$; however, independence of $X_i$ from $X_j$ (when $i \neq j$) implies E{X_iX_j} = 0, thus causing Eq. (18.227) to reduce to

$$ \mu_{2,i} \equiv \mu_{2,i,i} \equiv E\{X_i^2\} = \left.\frac{\partial^2G_N\{\boldsymbol{s}\}}{\partial s_i^2}\right|_{\boldsymbol{s}=\boldsymbol{0}_{N\times1}}, \tag{18.228} $$

complemented by

$$ \mu_{2,i,j\neq i} = 0. \tag{18.229} $$

One may therefore retrieve the derivation of $\mu_{2,N}$ as per Eqs. (18.177) and (18.178), with results condensed in

$$ \mu_{2,i} = \sigma_i^2 + \mu_i^2; \tag{18.230} $$

in view of Eq. (18.28), one gets

$$ Var\{X_i\} = \sigma_i^2 \tag{18.231} $$

from Eq. (18.230), and in parallel to Eq. (18.180) – coupled with

$$ Cov\{X_i,X_j\} = 0, \tag{18.232} $$

so one ends up with

$$ \left.Var\{\boldsymbol{X}\}\right|_{N\{\boldsymbol{X};\boldsymbol{\mu},\boldsymbol{\Sigma}\}} = \boldsymbol{\Sigma}, \tag{18.233} $$

in agreement with Eq. (18.207). Exchange between μ and A + Bμ, as indicated in Eq. (18.224), consequently supports transformation of Eq. (18.226) to

$$ \left.E\{\boldsymbol{A}+\boldsymbol{B}N\}\right|_{N\{\boldsymbol{X};\boldsymbol{\mu},\boldsymbol{\Sigma}\}} = \boldsymbol{A}+\boldsymbol{B}\boldsymbol{\mu} = \boldsymbol{A}+\boldsymbol{B}E\{N\}, \tag{18.234} $$

which resembles Eq. (18.192); one may similarly replace Σ by BΣBᵀ in Eq. (18.233), as also suggested by Eq. (18.224), thus giving rise to

$$ \left.Var\{\boldsymbol{A}+\boldsymbol{B}N\}\right|_{N\{\boldsymbol{X};\boldsymbol{\mu},\boldsymbol{\Sigma}\}} = \boldsymbol{B}\boldsymbol{\Sigma}\boldsymbol{B}^T = \boldsymbol{B}\,Var\{N\}\,\boldsymbol{B}^T \tag{18.235} $$

– essentially analogous to Eq. (18.193).
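Eqs. (18.234) and (18.235) also hold exactly at the level of sample moments: for any finite data set, the sample mean and covariance of A + BX equal A + Bm and BSBᵀ computed from the sample statistics of X. A minimal pure-Python check (not from the book), with hypothetical 2 × 2 matrices:

```python
import random

random.seed(1)

# hypothetical A (2x1) and B (2x2), chosen for illustration only
A = [1.0, -1.0]
B = [[2.0, 0.5],
     [0.0, 3.0]]

# draw a small cloud of independent normal pairs
n = 1000
X = [[random.gauss(0.0, 1.0), random.gauss(0.0, 2.0)] for _ in range(n)]
Y = [[A[i] + sum(B[i][k] * x[k] for k in range(2)) for i in range(2)] for x in X]

def mean(data):
    return [sum(row[i] for row in data) / len(data) for i in range(2)]

def cov(data):
    m = mean(data)
    return [[sum((row[i] - m[i]) * (row[j] - m[j]) for row in data) / len(data)
             for j in range(2)] for i in range(2)]

mX, mY = mean(X), mean(Y)
SX, SY = cov(X), cov(Y)

# expected transforms: mY = A + B mX  and  SY = B SX B^T (exact on the sample)
mY_th = [A[i] + sum(B[i][k] * mX[k] for k in range(2)) for i in range(2)]
BS = [[sum(B[i][k] * SX[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
SY_th = [[sum(BS[i][k] * B[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
```

The agreement is to machine precision, since the transformation rules are algebraic identities rather than asymptotic statements.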


Although only independent distributions were considered so far when dealing with linear combinations of the corresponding random variables – as happens in most cases with practical interest – this does not in fact correspond to the most general situation. Consider, in this regard, two random variables X and Y, and a linear combination thereof, say,

$$ W \equiv e + fX + gY \tag{18.236} $$

– where e, f, and g denote arbitrary constants; the expected value of (random variable) W should, in general, read

$$ E\{W\} \equiv \int_w wD_W\{w\}dw = \int_y\int_x w\{x,y\}D_{X,Y}\{x,y\}\,dxdy = \int_y\int_x (e+fx+gy)D_{X,Y}\{x,y\}\,dxdy, \tag{18.237} $$

as per Eq. (18.7), where $D_{X,Y}$ denotes the joint probability density function of X and Y, and Eq. (18.236) was used to advantage. Decomposition of the last double integral in Eq. (18.237) leads to

$$ E\{W\} = \int_y\int_x eD_{X,Y}\{x,y\}dxdy + \int_y\int_x fxD_{X,Y}\{x,y\}dxdy + \int_y\int_x gyD_{X,Y}\{x,y\}dxdy = e\int_y\int_x D_{X,Y}\{x,y\}dxdy + f\int_y\int_x xD_{X,Y}\{x,y\}dxdy + g\int_y\int_x yD_{X,Y}\{x,y\}dxdy; \tag{18.238} $$

as happens with any other continuous probability density function, definition of $D_{X,Y}\{X,Y\}$ implies closure as

$$ \int_y\int_x D_{X,Y}\{x,y\}\,dxdy = 1 \tag{18.239} $$

based on Eq. (18.4) – so Eq. (18.238) may be recoined as

$$ E\{W\} = e + f\int_x x\left(\int_y D_{X,Y}\{x,y\}dy\right)dx + g\int_y y\left(\int_x D_{X,Y}\{x,y\}dx\right)dy, \tag{18.240} $$

with the aid also of Fubini’s theorem. Since

$$ D_X\{x\} \equiv \int_y D_{X,Y}\{x,y\}dy, \tag{18.241} $$

and likewise

$$ D_Y\{y\} \equiv \int_x D_{X,Y}\{x,y\}dx, \tag{18.242} $$


as per the definition of marginal probability density functions, one may reformulate Eq. (18.240) to

$$ E\{W\} = e + f\int_x xD_X\{x\}dx + g\int_y yD_Y\{y\}dy = e + fE\{X\} + gE\{Y\}, \tag{18.243} $$

where Eq. (18.7) was again invoked. Therefore, one concludes that

$$ E\{e+fX+gY\} = e + fE\{X\} + gE\{Y\}, \tag{18.244} $$

obtained with the aid of Eq. (18.236), or else

$$ \mu_W \equiv E\{e+fX+gY\} = e + f\mu_X + g\mu_Y \tag{18.245} $$

– consistent with

$$ E\{X\} \equiv \mu_X \equiv \int_x xD_X\{x\}dx \tag{18.246} $$

and

$$ E\{Y\} \equiv \mu_Y \equiv \int_y yD_Y\{y\}dy, \tag{18.247} $$

both formulated in parallel to Eq. (18.7). By the same token,

$$ Var\{W\} \equiv E\{(w-\mu_W)^2\} = E\left\{\left(e+fx+gy-(e+f\mu_X+g\mu_Y)\right)^2\right\} = E\left\{\left(f(x-\mu_X)+g(y-\mu_Y)\right)^2\right\}, \tag{18.248} $$

stemming from Eqs. (18.15), (18.236), and (18.245) – where e was canceled with its negative, while f and g were factored out; Newton’s binomial formula may then be recalled to obtain

$$ Var\{W\} = E\left\{f^2(x-\mu_X)^2 + 2f(x-\mu_X)g(y-\mu_Y) + g^2(y-\mu_Y)^2\right\}. \tag{18.249} $$

The linearity of the E operator, as highlighted in Eq. (18.244), allows transformation of Eq. (18.249) to

$$ Var\{W\} = E\{f^2(x-\mu_X)^2\} + E\{2fg(x-\mu_X)(y-\mu_Y)\} + E\{g^2(y-\mu_Y)^2\} = f^2E\{(x-\mu_X)^2\} + 2fgE\{(x-\mu_X)(y-\mu_Y)\} + g^2E\{(y-\mu_Y)^2\}; \tag{18.250} $$

this is equivalent to stating that

$$ Var\{e+fX+gY\} = f^2Var\{X\} + 2fg\,Cov\{X,Y\} + g^2Var\{Y\}, \tag{18.251} $$

consistent with Eq. (18.15) applied to variables X and Y, and Eq. (18.236) for the definition of W – to be complemented with

$$ Cov\{X,Y\} \equiv E\{(x-\mu_X)(y-\mu_Y)\} \tag{18.252} $$

that serves as definition of covariance of X and Y. When X and Y are independent, Cov{X,Y} is obviously nil – in which case Eq. (18.251) would support Var{e + fX + gY} = f²Var{X} + g²Var{Y}; this result agrees with Eq. (18.193), pertaining to normally distributed variables X and Y.
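Eq. (18.251) is likewise an exact identity when Var and Cov are evaluated as sample moments, whether or not X and Y are correlated; a short sketch (not from the book) with arbitrary constants e, f, g and deliberately correlated data:

```python
import random

random.seed(2)

e, f, g = 1.0, 2.0, -0.5  # arbitrary constants of Eq. (18.236)

n = 5000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
# y deliberately correlated with x, so Cov{X,Y} is far from nil
ys = [0.8 * x + random.gauss(0.0, 0.5) for x in xs]
ws = [e + f * x + g * y for x, y in zip(xs, ys)]

def var(a):
    m = sum(a) / len(a)
    return sum((v - m) ** 2 for v in a) / len(a)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)

lhs = var(ws)
rhs = f ** 2 * var(xs) + 2 * f * g * cov(xs, ys) + g ** 2 * var(ys)
print(lhs, rhs)  # identical up to rounding, as per Eq. (18.251)
```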


18.2.5 Standard Probability Density Function

Although useful, Eq. (18.122) defines as many Gaussian distributions as there are combinations of parameters μ and σ; in attempts to reduce overparametrization, variable Z may be defined to advantage as

$$ z \equiv \frac{x-\mu}{\sigma} \tag{18.253} $$

– in much the same way Y was defined via Eq. (18.97), which implies

$$ dz = \frac{dx}{\sigma}. \tag{18.254} $$

Equations (18.253) and (18.254) allow reformulation of Eq. (18.123) to

$$ dN\{Z\} = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left\{-\frac{1}{2}z^2\right\}\sigma\,dz; \tag{18.255} $$

after cancelling σ between numerator and denominator of the right-hand side, Eq. (18.255) becomes

$$ dN\{Z\} = \frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{1}{2}z^2\right\}dz, \tag{18.256} $$

which is μ- and σ-independent. Note that Eq. (18.256) is formally equivalent to

$$ dN\{Z\} = \left.\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left\{-\frac{1}{2}\left(\frac{z-\mu}{\sigma}\right)^2\right\}\right|_{\substack{\mu=0\\ \sigma=1}}dz \equiv dN\{Z; 0, 1\}, \tag{18.257} $$

which justifies the alternate (and simpler) labeling proposed; the final form of this standard normal distribution is more appropriately represented by

$$ N\{Z; 0, 1\} \equiv \frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{z^2}{2}\right\}, \tag{18.258} $$

to be used hereafter en lieu of Eq. (18.122). Therefore, all possible normal distributions collapse into a single curve – with mean zero and unit variance, provided that Z is defined via Eq. (18.253); a graphical representation of N{Z; 0, 1} is provided in Fig. 18.2a. The resulting bell-shaped curve is to be outlined, symmetrical relative to the vertical axis; in fact,

$$ N\{-Z; 0, 1\} \equiv \frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{(-z)^2}{2}\right\} = \frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{z^2}{2}\right\}, \tag{18.259} $$

obtained from Eq. (18.258) as template, coincides with Eq. (18.258) itself, i.e.

$$ N\{-Z; 0, 1\} = N\{Z; 0, 1\} \tag{18.260} $$

– and this symmetry around the vertical axis necessarily entails a nil median, viz.

$$ Med\{Z\} = 0, \tag{18.261} $$

in agreement with Eq. (18.39). Furthermore, this distribution exhibits (quickly) decreasing tails – as realized from inspection of

$$ \lim_{Z\to\pm\infty}N\{Z; 0, 1\} = \frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{(\pm\infty)^2}{2}\right\} = \frac{1}{\sqrt{2\pi}}e^{-\infty} = 0, \tag{18.262} $$


[Figure 18.2 appears here in the original, with panels (a)–(c) plotting N{Z; 0,1} against Z.]

Figure 18.2 Graphical representation of standard normal probability density function, N{Z; 0,1}, as a function of random variable Z, with indication of (a) inflection points, and (b) unilateral and (c) bilateral areas under the curve tail(s), corresponding to (combined) probability α of occurrence (b) above zcrt,2, or (c) below zcrt,1 or above zcrt,2.

based again on Eqs. (18.258) and (18.260), and also apparent in Fig. 18.2a. The monotony characterizing N{Z; 0, 1} abides to

$$ \frac{dN\{Z; 0, 1\}}{dz} = \frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{z^2}{2}\right\}\left(-\frac{2z}{2}\right) = -\frac{z}{\sqrt{2\pi}}\exp\left\{-\frac{z^2}{2}\right\}, \tag{18.263} $$

as per Eq. (18.258); since the exponential function takes only positive values, Eq. (18.263) means that said function increases for z < 0 and decreases otherwise – as can be grasped in Fig. 18.2a. In fact, a critical point, described by

$$ \frac{dN\{Z; 0, 1\}}{dz} = 0, \tag{18.264} $$

will arise, according to

$$ -\frac{z}{\sqrt{2\pi}}\exp\left\{-\frac{z^2}{2}\right\} = 0 \tag{18.265} $$

stemming from Eq. (18.263); the trivial solution of Eq. (18.265) looks like

$$ \left.z\right|_{\frac{dN\{Z;0,1\}}{dz}=0} = 0, \tag{18.266} $$

so one concludes that

$$ Mod\{Z\} = 0 \tag{18.267} $$

in view of the definition of mode as conveyed by Eq. (18.37). A second step of differentiation of Eq. (18.258) produces

$$ \frac{d^2N\{Z; 0, 1\}}{dz^2} \equiv \frac{d}{dz}\frac{dN\{Z; 0, 1\}}{dz} = \frac{d}{dz}\left(-\frac{z}{\sqrt{2\pi}}\exp\left\{-\frac{z^2}{2}\right\}\right) = -\frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{z^2}{2}\right\} - \frac{z}{\sqrt{2\pi}}\exp\left\{-\frac{z^2}{2}\right\}\left(-\frac{2z}{2}\right) = \frac{1}{\sqrt{2\pi}}\left(z^2-1\right)\exp\left\{-\frac{z^2}{2}\right\}, \tag{18.268} $$

based on Eq. (18.263), and upon factoring out exponential function and (constant) $\frac{1}{\sqrt{2\pi}}$; for Z = 0 as set by Eq. (18.266), one obtains

$$ \left.\frac{d^2N\{Z; 0, 1\}}{dz^2}\right|_{z=0} = \frac{1}{\sqrt{2\pi}}\left(0^2-1\right)e^{-0} = -\frac{1}{\sqrt{2\pi}} < 0. $$
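The standardization behind Eqs. (18.253)–(18.258) states that N{x; μ, σ}dx = N{z; 0, 1}dz with z = (x − μ)/σ; the sketch below (not from the book; μ and σ chosen arbitrarily) verifies this change of variable, the symmetry of Eq. (18.260), and the maximum at z = 0.

```python
import math

def normal_pdf(x, mu, sigma):
    """N{x; mu, sigma} as per Eq. (18.122)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)

def std_pdf(z):
    """N{z; 0, 1} as per Eq. (18.258)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

mu, sigma = 3.0, 1.7   # arbitrary example parameters
x = 4.2
z = (x - mu) / sigma   # Eq. (18.253)

a = normal_pdf(x, mu, sigma)
b = std_pdf(z) / sigma  # dz = dx / sigma, Eq. (18.254)
print(a, b)             # equal up to rounding

print(std_pdf(0.7), std_pdf(-0.7))  # symmetry, Eq. (18.260)
```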

18.3.2.5 Mode and Median

One may calculate the median of a χ²-distribution, x = Med, based on Eqs. (18.46) and (18.453), according to

$$ \Gamma_i\left\{\frac{\nu}{2},\frac{Med}{2}\right\} = \frac{1}{2}. \tag{18.456} $$

Although such a value becomes accessible only numerically, following calculation via Eq. (18.454), a good approximation is provided by

$$ Med\{X\} \approx \nu\left(1-\frac{2}{9\nu}\right)^3 \tag{18.457} $$

– where 1 − 2/9ν < 1, and thus (1 − 2/9ν)³ < 1, implies that the median is located to the left of the mean – see Eq. (18.433). Moreover, 1 − 2/9ν > 0 (for a meaningful result) implies ν > 2/9; since the minimum value of ν is 1, Eq. (18.457) is essentially unconstrained. The mode will, on the other hand, become accessible via

$$ \left.\frac{d\chi^2\{X;\nu\}}{dx}\right|_{x=Mod\{X\}} = 0 \tag{18.458} $$

as necessary condition – see Eq. (18.37); insertion of Eq. (18.420) permits transformation to

$$ \frac{d}{dx}\left(\frac{x^{\frac{\nu}{2}-1}e^{-\frac{x}{2}}}{2^{\frac{\nu}{2}}\Gamma\left\{\frac{\nu}{2}\right\}}\right) = 0, \tag{18.459} $$

which degenerates to

$$ \frac{1}{2^{\frac{\nu}{2}}\Gamma\left\{\frac{\nu}{2}\right\}}\left(\left(\frac{\nu}{2}-1\right)x^{\frac{\nu}{2}-2}e^{-\frac{x}{2}} + x^{\frac{\nu}{2}-1}e^{-\frac{x}{2}}\left(-\frac{1}{2}\right)\right) = \frac{x^{\frac{\nu}{2}-2}e^{-\frac{x}{2}}}{2^{\frac{\nu}{2}}\Gamma\left\{\frac{\nu}{2}\right\}}\left(\frac{\nu}{2}-1-\frac{x}{2}\right) = 0 \tag{18.460} $$

upon classical differentiation, followed by factoring out of $x^{\nu/2-2}e^{-x/2}$. Equation (18.460) enforces

$$ \frac{\nu}{2}-1-\frac{x}{2} = 0 \tag{18.461} $$

since only x > 0 is of interest (irrespective of ν), or else

$$ x = \nu-2 \tag{18.462} $$

after solving for x. Hence, a local maximum exists for ν ≥ 2, as indicated by Eq. (18.462) for x > 0; otherwise the global maximum would be dictated by the constraint x ≥ 0, i.e.

$$ Mod\{X\} = \max\{\nu-2,\,0\}, \tag{18.463} $$

as apparent from inspection of Fig. 18.5a. Note that Mod{X} = ν − 2 < ν = E{X}, in view of Eqs. (18.433) and (18.463), means that the mode is located to the left of the mean, in the case of a χ²-distribution.
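The accuracy of the median approximation in Eq. (18.457) can be gauged by integrating the χ² density of Eq. (18.420) numerically; the sketch below (not from the book; standard library only, ν = 5 chosen arbitrarily) uses a composite Simpson rule plus bisection, and compares the numerical median with the approximation and with the mode of Eq. (18.463).

```python
import math

def chi2_pdf(x, nu):
    """chi-square probability density, Eq. (18.420)."""
    return (x ** (nu / 2 - 1) * math.exp(-x / 2)
            / (2 ** (nu / 2) * math.gamma(nu / 2)))

def chi2_cdf(x, nu, n=2000):
    """cumulative probability via composite Simpson rule on [0, x]."""
    h = x / n
    s = chi2_pdf(1e-12, nu) + chi2_pdf(x, nu)  # tiny offset guards x = 0
    for i in range(1, n):
        s += (4 if i % 2 else 2) * chi2_pdf(i * h, nu)
    return s * h / 3

nu = 5
lo, hi = 0.0, 50.0
for _ in range(50):            # bisection for CDF = 1/2, i.e. Eq. (18.456)
    mid = 0.5 * (lo + hi)
    if chi2_cdf(mid, nu) < 0.5:
        lo = mid
    else:
        hi = mid
median = 0.5 * (lo + hi)

approx = nu * (1 - 2 / (9 * nu)) ** 3   # Eq. (18.457)
mode = max(nu - 2, 0)                   # Eq. (18.463)
print(median, approx, mode)
```

For ν = 5 the approximation is within roughly 0.3% of the numerically obtained median, and mode < median < mean, as derived in the text.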

18.3.2.6 Other Features

Consider now a random variable X, defined as

$$ X \equiv \left(\frac{X-\mu}{\sigma}\right)^2 \equiv Z^2, \tag{18.464} $$

where Z denotes a standard normal random variable with original mean μ and standard error σ – see Eq. (18.253); the probability cumulative function of X abides to

$$ P\{Z^2<x\} = P\{-\sqrt{x}<Z<\sqrt{x}\} \tag{18.465} $$

… exists only when ν > k – in much the same way Eq. (18.523) makes sense only when ν > 2. Since existence of a moment-generating function requires that the kth-order moment exists and is finite for every k, no such function can be defined for Student’s t-distribution.

18.3.3.3 Asymptotic Behavior

Unlike Gauss’ distribution that applies to a population proper, a t-distribution typically describes sample(s) drawn from the said population – for which σ² is not known in advance, but is to be estimated via s² of such a sample (as emphasized previously); therefore, different values for such a statistic are found for different sample sizes – but the larger the sample (i.e. the higher ν), the more it resembles a normal distribution, as apparent from inspection of Figs. 18.2a and 18.6a. This point can be mathematically supported via


$$ \lim_{\nu\to\infty}t\{T;\nu\} = \lim_{\nu\to\infty}\frac{\Gamma\left\{\dfrac{\nu+1}{2}\right\}}{\sqrt{\pi\nu}\,\Gamma\left\{\dfrac{\nu}{2}\right\}}\;\lim_{\nu\to\infty}\left(1+\frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}} \tag{18.526} $$

stemming from Eq. (18.494) – where, for convenience, the two consecutive limits will be calculated separately; note that when ν is odd, ν → ∞ implies that (even) ν + 1 → ∞ – so the limit under scrutiny may be calculated just for ν even. Under such circumstances, Eq. (12.412) may to advantage be recalled to write

$$ \left.\Gamma\left\{\frac{\nu+1}{2}\right\}\right|_{\nu=2n} = \Gamma\left\{\frac{2n+1}{2}\right\} = \Gamma\left\{\frac{1}{2}+n\right\} = \Gamma\left\{\left(\frac{1}{2}+n-1\right)+1\right\} = \left(n-\frac{1}{2}\right)\Gamma\left\{n-\frac{1}{2}\right\}; \tag{18.527} $$

by the same token,

$$ \left.\Gamma\left\{\frac{\nu}{2}\right\}\right|_{\nu=2n} = \Gamma\left\{\frac{2n}{2}\right\} = \Gamma\{n\} = (n-1)!, \tag{18.528} $$

as per Eq. (12.400). The recursive relationship labeled as Eq. (18.527) may then be sequentially applied as

$$ \Gamma\left\{\frac{2n+1}{2}\right\} = \left(n-\frac{1}{2}\right)\Gamma\left\{n-\frac{1}{2}\right\} = \left(n-\frac{1}{2}\right)\left(n-\frac{3}{2}\right)\Gamma\left\{n-\frac{3}{2}\right\} = \left(n-\frac{1}{2}\right)\left(n-\frac{3}{2}\right)\left(n-\frac{5}{2}\right)\Gamma\left\{n-\frac{5}{2}\right\} = \cdots = \left(n-\frac{1}{2}\right)\left(n-\frac{3}{2}\right)\cdots\frac{3}{2}\cdot\frac{1}{2}\,\Gamma\left\{\frac{1}{2}\right\} \tag{18.529} $$

– where insertion of Eq. (12.423) and elimination of parentheses permit transformation to

$$ \Gamma\left\{\frac{2n+1}{2}\right\} = \frac{2n-1}{2}\cdot\frac{2n-3}{2}\cdot\frac{2n-5}{2}\cdots\frac{3}{2}\cdot\frac{1}{2}\sqrt{\pi} = \frac{(2n-1)(2n-3)(2n-5)\cdots3\cdot1}{2^n}\sqrt{\pi}. \tag{18.530} $$

When ν grows unbounded, $\dfrac{\Gamma\left\{\frac{\nu+1}{2}\right\}}{\sqrt{\pi\nu}\,\Gamma\left\{\frac{\nu}{2}\right\}}$ is driven by

$$ \lim_{\substack{\nu\to\infty\\ \nu=2n}}\frac{\Gamma\left\{\dfrac{\nu+1}{2}\right\}}{\sqrt{\pi\nu}\,\Gamma\left\{\dfrac{\nu}{2}\right\}} = \lim_{n\to\infty}\frac{\dfrac{(2n-1)(2n-3)\cdots3\cdot1}{2^n}\sqrt{\pi}}{\sqrt{\pi\,2n}\,(n-1)!} = \lim_{n\to\infty}\frac{(2n-1)(2n-2)(2n-3)(2n-4)\cdots4\cdot3\cdot2\cdot1}{\sqrt{2n}\,2^n(n-1)!\,(2n-2)(2n-4)\cdots4\cdot2}, \tag{18.531} $$


using Eqs. (18.528) and (18.530) – where both numerator and denominator were meanwhile multiplied by 2n − 2, 2n − 4, …, 4, and 2, while getting rid of √π. Upon factoring 2 out of every factor in denominator and recalling the concept of factorial, Eq. (18.531) can be redone to

$$ \lim_{\substack{\nu\to\infty\\ \nu=2n}}\frac{\Gamma\left\{\dfrac{\nu+1}{2}\right\}}{\sqrt{\pi\nu}\,\Gamma\left\{\dfrac{\nu}{2}\right\}} = \lim_{n\to\infty}\frac{(2n-1)(2n-2)(2n-3)(2n-4)\cdots4\cdot3\cdot2\cdot1}{\sqrt{2n}\,2^n(n-1)!\,2^{n-1}(n-1)(n-2)\cdots2\cdot1} = \lim_{n\to\infty}\frac{(2n-1)!}{\sqrt{2n}\,2^{2n-1}\left((n-1)!\right)^2}. \tag{18.532} $$

Stirling’s formula, as conveyed by Eq. (12.501), supports transformation of Eq. (18.532) to

$$ \lim_{\substack{\nu\to\infty\\ \nu=2n}}\frac{\Gamma\left\{\dfrac{\nu+1}{2}\right\}}{\sqrt{\pi\nu}\,\Gamma\left\{\dfrac{\nu}{2}\right\}} = \lim_{n\to\infty}\frac{\sqrt{2\pi}\,\dfrac{(2n-1)^{2n-1+\frac{1}{2}}}{e^{2n-1}}}{\sqrt{2n}\,2^{2n-1}\left(\sqrt{2\pi}\,\dfrac{(n-1)^{n-1+\frac{1}{2}}}{e^{n-1}}\right)^2}, \tag{18.533} $$

where condensation of factors alike gives rise to

$$ \lim_{\substack{\nu\to\infty\\ \nu=2n}}\frac{\Gamma\left\{\dfrac{\nu+1}{2}\right\}}{\sqrt{\pi\nu}\,\Gamma\left\{\dfrac{\nu}{2}\right\}} = \lim_{n\to\infty}\frac{(2n-1)^{2n-\frac{1}{2}}e^{2n-2}}{\sqrt{2\pi}\,\sqrt{2n}\,2^{2n-1}(n-1)^{2n-1}e^{2n-1}} = \lim_{n\to\infty}\frac{(2n-1)^{2n-\frac{1}{2}}}{\sqrt{2\pi}\,e\,\sqrt{2n}\,(2n-2)^{2n-1}} = \frac{1}{\sqrt{2\pi}\,e}\lim_{n\to\infty}\sqrt{\frac{2n-1}{2n}}\left(\frac{2n-1}{2n-2}\right)^{2n-1}. \tag{18.534} $$

Further algebraic manipulation of Eq. (18.534) unfolds

$$ \lim_{\nu\to\infty}\frac{\Gamma\left\{\dfrac{\nu+1}{2}\right\}}{\sqrt{\pi\nu}\,\Gamma\left\{\dfrac{\nu}{2}\right\}} = \frac{1}{\sqrt{2\pi}\,e}\lim_{n\to\infty}\sqrt{\frac{2n-1}{2n}}\;\lim_{n\to\infty}\left(1+\frac{1}{2n-2}\right)^{2n-2}\lim_{n\to\infty}\left(1+\frac{1}{2n-2}\right) = \frac{1}{\sqrt{2\pi}\,e}\cdot1\cdot e\cdot1 = \frac{1}{\sqrt{2\pi}}, \tag{18.535} $$


since 2n − 2 → ∞, 2n − 1 ≈ 2n, and 2n − 2 ≈ 2n when n → ∞ – where the definition of Neper’s number was retrieved to advantage, see Eq. (12.160). The limit pending in Eq. (18.526) may, in turn, be worked out as

$$ \lim_{\nu\to\infty}\left(1+\frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}} = \lim_{\nu\to\infty}\left(\left(1+\frac{t^2}{\nu}\right)^{\nu}\right)^{-\frac{1}{2}}\left(1+\frac{t^2}{\nu}\right)^{-\frac{1}{2}} = \lim_{\nu\to\infty}\left(\left(1+\frac{t^2}{\nu}\right)^{\nu}\right)^{-\frac{1}{2}}\lim_{\nu\to\infty}\left(1+\frac{t^2}{\nu}\right)^{-\frac{1}{2}} = \left(\lim_{\nu\to\infty}\left(1+\frac{t^2}{\nu}\right)^{\nu}\right)^{-\frac{1}{2}}\cdot1 \tag{18.536} $$

via application of the classical theorems on limits, in general agreement with Eq. (9.108); the limit pending in Eq. (18.536) is accessible via Eq. (12.173), viz.

$$ \lim_{\nu\to\infty}\left(1+\frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}} = \left(e^{t^2}\right)^{-\frac{1}{2}} = e^{-\frac{t^2}{2}}. \tag{18.537} $$

Insertion of Eqs. (18.535) and (18.537) transforms Eq. (18.526) to

$$ \lim_{\nu\to\infty}t\{T;\nu\} = \frac{1}{\sqrt{2\pi}}\,e^{-\frac{t^2}{2}} = N\{T; 0, 1\}, \tag{18.538} $$

where Eq. (18.258) was retrieved; hence, Eq. (18.538) confirms that the standard normal density function represents the asymptotic curve labeled as N = ∞ in Fig. 18.6a – and also corroborates the validity of the central limit theorem, see Eq. (18.283), in the particular case of a t-distribution.

18.3.3.4 Probability Cumulative Function

The probability cumulative function associated with a t-distribution is obtainable from Eqs. (18.3) and (18.494), according to

$$ P_t\{T<t\} \equiv \int_{-\infty}^{t}t\{T;\nu\}dt = \int_{-\infty}^{0}t\{T;\nu\}dt + \int_{0}^{t}\frac{\Gamma\left\{\dfrac{\nu+1}{2}\right\}}{\sqrt{\pi\nu}\,\Gamma\left\{\dfrac{\nu}{2}\right\}}\left(1+\frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}}dt = \frac{1}{2} + \frac{1}{\nu^{\frac{1}{2}}\,B\left\{\dfrac{1}{2},\dfrac{\nu}{2}\right\}}\int_{0}^{t}\left(1+\frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}}dt, \tag{18.539} $$

with convenient splitting of the integral, insertion of Eqs. (18.494) and (18.505), and the aid of Eq. (12.423); as will be fully derived in due course, Eq. (18.539) eventually gives rise to

$$ P_t\{T<t\} = \frac{1}{2} + \frac{t}{\nu^{\frac{1}{2}}\,B\left\{\dfrac{1}{2},\dfrac{\nu}{2}\right\}}\;{}_2H_1\left\{\frac{1+\nu}{2},\frac{1}{2},\frac{3}{2};-\frac{t^2}{\nu}\right\};\quad t>0, \tag{18.540} $$

Continuous Probability Functions

2 H1

ξ, ψ, ζ; x

∞ i=0

ξ

ψ i xi , ζ i i i

18 541

where (a)i denotes Pochhammer’s symbol – thus representing a(a + 1)(a + 2)…, in agreement with Eq. (2.277); the form of w = 2H1{ξ,ψ,ζ; x} conveyed by Eq. (18.541) results from application of Frobenius’ method to solve Euler’s hypergeometric equation, viz. x 1−x

d2 w dw 2 + ζ − ξ + ψ + 1 x dx −ξψw = 0 dx

18 542

The underlying symmetry around t = 0, conveyed by Eq. (18.497), accounts for term ½ in 0 1 t T ; ν dt = ; the probability associated with t < 0 will become Eq. (18.540), i.e. 2 −∞ readily accessible via Pt T < − t = 1 − P 0 < T < t ,

18 543

also at the expense of said symmetry. The continuous form of Pt{T < t} is made available in Fig. 18.6b, for selected numbers of degrees of freedom; the odd nature of Pt{T < t} − ½ is readily perceived – being consistent with the explicit linear dependence on t in 1 + ν 1 3 t2 , , ;− as only t2 appears Eq. (18.540), coupled with the even nature of 2 H1 2 2 2 ν as variable in its argument. The quantiles for this distribution are listed in Table 18.3, for selected significance levels, α. For a given α in the left column, the quantiles are Table 18.3 Critical, unilateral and bilateral, quantiles of t-statistic, tcrt, for two levels of significance. Significance level (α) Number of degrees of freedom (ν)

Number of                               Significance level (α)
degrees of        Unilateral                                Bilateral
freedom (ν)    α = 0.05            α = 0.01            α = 0.05            α = 0.01

1           −6.314    6.314   −31.821   31.821   −12.706   12.706   −63.657   63.657
2           −2.920    2.920    −6.965    6.965    −4.303    4.303    −9.925    9.925
3           −2.353    2.353    −4.541    4.541    −3.182    3.182    −5.841    5.841
4           −2.132    2.132    −3.747    3.747    −2.776    2.776    −4.604    4.604
5           −2.015    2.015    −3.365    3.365    −2.571    2.571    −4.032    4.032
10          −1.812    1.812    −2.764    2.764    −2.228    2.228    −3.169    3.169
15          −1.753    1.753    −2.602    2.602    −2.131    2.131    −2.947    2.947
20          −1.725    1.725    −2.528    2.528    −2.086    2.086    −2.845    2.845
30          −1.697    1.697    −2.457    2.457    −2.042    2.042    −2.750    2.750
40          −1.684    1.684    −2.423    2.423    −2.021    2.021    −2.704    2.704
60          −1.671    1.671    −2.390    2.390    −2.000    2.000    −2.660    2.660
∞           −1.645    1.645    −2.325    2.325    −1.960    1.960    −2.575    2.575


the negatives of those in the adjacent right column, due to the even nature of the t-distribution; once again, the value in the left column of the unilateral case, say, tcrt,1, entails the upper boundary of the abscissa for the area under the curve corresponding to a probability α of t < tcrt,1 – and likewise the value in the right column, say, tcrt,2, entails the lower boundary of the abscissa to calculate the area under the curve corresponding to a probability α that t > tcrt,2. In the bilateral case, the area under the probability density function located before tcrt,1 or after tcrt,2 corresponds to an overall probability α. As a consequence of Eq. (18.538), the last line in Table 18.3 coincides with the single entry in Table 18.1.
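The entries of Table 18.3 can be reproduced by integrating the density of Eq. (18.494) numerically, in lieu of evaluating the hypergeometric series of Eq. (18.540); the sketch below (not from the book; standard library only, with lgamma used for robustness) recovers the unilateral and bilateral ν = 5, α = 0.05 quantiles.

```python
import math

def t_pdf(t, nu):
    """Student's t probability density, Eq. (18.494)."""
    c = math.exp(math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2))
    return c / math.sqrt(math.pi * nu) * (1 + t * t / nu) ** (-(nu + 1) / 2)

def t_cdf(t, nu, n=2000):
    """P_t{T < t} for t >= 0: 1/2 plus a composite Simpson integral on [0, t],
    mirroring the splitting in Eq. (18.539)."""
    if t == 0:
        return 0.5
    h = t / n
    s = t_pdf(0.0, nu) + t_pdf(t, nu)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * t_pdf(i * h, nu)
    return 0.5 + s * h / 3

def t_quantile(p, nu):
    """bisection for the t such that P_t{T < t} = p (with p > 1/2)."""
    lo, hi = 0.0, 100.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if t_cdf(mid, nu) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(t_quantile(0.95, 5))    # ~ 2.015 (unilateral alpha = 0.05 in Table 18.3)
print(t_quantile(0.975, 5))   # ~ 2.571 (bilateral alpha = 0.05 in Table 18.3)
```

The bilateral α quantile coincides with the unilateral α/2 quantile, as both tails contribute to the combined probability.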

18.3.3.5 Mode and Median

Since the median defines the value of t for which the cumulative probability, from −∞ up to t, equals ½, one readily concludes that

$$\mathrm{Med}\{T\} = 0, \quad (18.544)$$

in view of Eq. (18.497) – as t = 0 is the (constant) abscissa of the symmetry axis for the t-distribution; in other words, the median is given by the intercept of the vertical axis with the horizontal one in Fig. 18.6a – so the vertical intercept of the probability cumulative distribution curve in Fig. 18.6b is given by ½. With regard to the mode, the necessary condition as set forth by Eq. (18.37) hereby reads

$$\left.\frac{d\,t\{T;\nu\}}{dx}\right|_{x=\mathrm{Mod}\{T\}} = 0 \quad (18.545)$$

Insertion of Eq. (18.494) transforms Eq. (18.545) to

$$\frac{d}{dx}\left[\frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\sqrt{\pi\nu}\,\Gamma\left(\frac{\nu}{2}\right)}\left(1+\frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}}\right] = 0, \quad (18.546)$$

where constants may be taken out and the power function differentiated to get

$$\frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\sqrt{\pi\nu}\,\Gamma\left(\frac{\nu}{2}\right)}\left(-\frac{\nu+1}{2}\right)\left(1+\frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}-1}\frac{2t}{\nu} = 0 \quad (18.547)$$

– or, equivalently,

$$\left(1+\frac{t^2}{\nu}\right)^{-\frac{\nu+3}{2}}t = 0, \quad (18.548)$$

after getting rid of constant (non-nil) factors between left- and right-hand sides. The only real value of t compatible with Eq. (18.548) is zero, so one concludes that

$$\mathrm{Mod}\{T\} = 0 \quad (18.549)$$


– and such a (common) maximum is apparent from inspection of all curves in Fig. 18.6a; hence, the mode as per Eq. (18.549) coincides with the median as per Eq. (18.544), which in turn coincides with the mean as per Eq. (18.500).
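The coincidence of mode, median, and mean at t = 0 can also be checked numerically; the snippet below is purely illustrative (the grid, integration range, and ν = 5 are assumptions) – it verifies that the area under the density of Eq. (18.494) over ]−∞, 0] is ½ and that a grid search over the density peaks at t = 0.

```python
import math

def t_pdf(t, nu):
    # Student's t density, Eq. (18.494)
    c = math.gamma((nu + 1) / 2) / (math.sqrt(math.pi * nu) * math.gamma(nu / 2))
    return c * (1 + t * t / nu) ** (-(nu + 1) / 2)

def simpson(f, a, b, n=20000):
    # composite Simpson's rule on [a, b]
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

nu = 5
# median check: cumulative probability up to t = 0 (tail below -40 is negligible)
half = simpson(lambda t: t_pdf(t, nu), -40.0, 0.0)
# mode check: argmax of the density over a fine grid on [-5, 5]
grid = [-5.0 + 0.001 * i for i in range(10001)]
mode = max(grid, key=lambda t: t_pdf(t, nu))
```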

18.3.4 Fisher’s F-distribution

18.3.4.1 Probability Density Function

Consider U and V as independent random variables, both exhibiting χ²-distributions, with ν1 and ν2 degrees of freedom, respectively, i.e.

$$U \sim \chi^2\{U;\nu_1\} \quad (18.550)$$

and

$$V \sim \chi^2\{V;\nu_2\}; \quad (18.551)$$

due to their independence from each other, the associated joint probability density function, D{U,V}, will appear as the product of the associated probability density functions, according to

$$D\{U,V\} = D\{U\}\,D\{V\} = \frac{u^{\frac{\nu_1}{2}-1}e^{-\frac{u}{2}}}{2^{\frac{\nu_1}{2}}\Gamma\left(\frac{\nu_1}{2}\right)}\;\frac{v^{\frac{\nu_2}{2}-1}e^{-\frac{v}{2}}}{2^{\frac{\nu_2}{2}}\Gamma\left(\frac{\nu_2}{2}\right)} = \frac{u^{\frac{\nu_1}{2}-1}v^{\frac{\nu_2}{2}-1}e^{-\frac{u+v}{2}}}{2^{\frac{\nu_1+\nu_2}{2}}\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)} \quad (18.552)$$

after Eq. (18.420) has been applied twice – with u and v denoting the associated independent variables. Definition of a new random variable W as

$$W \equiv \frac{U/\nu_1}{V/\nu_2} \quad (18.553)$$

is now in order, and D{U,V} may eventually become accessible in the form D{W,Z} following use of auxiliary variables w and z, defined as

$$w \equiv \frac{u/\nu_1}{v/\nu_2} = \frac{\nu_2}{\nu_1}\,\frac{u}{v} \quad (18.554)$$

– which matches the functional form of Eq. (18.553) in terms of random variables, complemented with

$$z \equiv v \quad (18.555)$$

as trivial definition accompanying Z ≡ V. Equations (18.554) and (18.555) underlie a one-to-one transformation from (u, v) to (w, z) – with 0 < u < ∞ and 0 < v < ∞ due to their χ²-distribution, mapping to 0 < w < ∞ and 0 < z < ∞ – after

$$u = \frac{\nu_1}{\nu_2}\,z\,w \quad (18.556)$$


and

$$v = z \quad (18.557)$$

have been obtained directly from Eqs. (18.554) and (18.555) as inverse transformation. The Jacobian determinant of the transformation consequently looks like

$$J \equiv \begin{vmatrix} \dfrac{\partial u}{\partial w} & \dfrac{\partial u}{\partial z} \\[2pt] \dfrac{\partial v}{\partial w} & \dfrac{\partial v}{\partial z} \end{vmatrix} = \begin{vmatrix} \dfrac{\nu_1}{\nu_2}z & \dfrac{\nu_1}{\nu_2}w \\[2pt] 0 & 1 \end{vmatrix} = \frac{\nu_1}{\nu_2}z\cdot 1 - 0\cdot\frac{\nu_1}{\nu_2}w = \frac{\nu_1}{\nu_2}z, \quad (18.558)$$

containing the partial derivatives of Eqs. (18.556) and (18.557), and taking advantage of the definition of a second-order determinant. The joint probability density function D{W,Z} may then be produced from the bivariate function of u and v, according to

$$D\{W,Z\} = D\{U,V\}\,|J| \quad (18.559)$$

– as long as D{U,V} is expressed in terms of the new variables w and z, as conveyed by Eqs. (18.556) and (18.557); combination with Eq. (18.552), complemented by Eq. (18.558), supports transformation of Eq. (18.559) to

$$D\{W,Z\} = \left.\frac{u^{\frac{\nu_1}{2}-1}v^{\frac{\nu_2}{2}-1}e^{-\frac{u+v}{2}}}{2^{\frac{\nu_1+\nu_2}{2}}\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\right|_{u=\frac{\nu_1}{\nu_2}zw,\;v=z}\frac{\nu_1}{\nu_2}z = \frac{\left(\frac{\nu_1}{\nu_2}zw\right)^{\frac{\nu_1}{2}-1}z^{\frac{\nu_2}{2}-1}\exp\left\{-\frac{\frac{\nu_1}{\nu_2}zw+z}{2}\right\}}{2^{\frac{\nu_1+\nu_2}{2}}\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\,\frac{\nu_1}{\nu_2}z, \quad (18.560)$$

with condensation of factors alike unfolding

$$D\{W,Z\} = \frac{\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}w^{\frac{\nu_1}{2}-1}z^{\frac{\nu_1+\nu_2}{2}-1}}{2^{\frac{\nu_1+\nu_2}{2}}\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\exp\left\{-\frac{z}{2}\left(1+\frac{\nu_1}{\nu_2}w\right)\right\} \quad (18.561)$$

The marginal probability density function may be obtained via its definition as conveyed by Eq. (18.241), according to

$$D\{W\} \equiv \int_0^{\infty}D\{W,Z\}\,dz = \int_0^{\infty}\frac{\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}w^{\frac{\nu_1}{2}-1}z^{\frac{\nu_1+\nu_2}{2}-1}}{2^{\frac{\nu_1+\nu_2}{2}}\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\exp\left\{-\frac{z}{2}\left(1+\frac{\nu_1}{\nu_2}w\right)\right\}dz \quad (18.562)$$


Integration of Eq. (18.562) is facilitated by first defining an auxiliary variable, y, as

$$y \equiv \frac{z}{2}\left(1+\frac{\nu_1}{\nu_2}w\right), \quad (18.563)$$

as suggested by the argument of the exponential function in the kernel; Eq. (18.563) obviously implies

$$dz = \frac{2}{1+\frac{\nu_1}{\nu_2}w}\,dy \quad (18.564)$$

in differential form, after retrieving z ≡ z{y} – as well as

$$y|_{z=0} = 0 \quad (18.565)$$

and

$$y|_{z\to\infty} \to \infty \quad (18.566)$$

– which may be employed together to rewrite Eq. (18.562) as

$$D\{W\} = \int_0^{\infty}\frac{\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}w^{\frac{\nu_1}{2}-1}\left(\dfrac{2y}{1+\frac{\nu_1}{\nu_2}w}\right)^{\frac{\nu_1+\nu_2}{2}-1}}{2^{\frac{\nu_1+\nu_2}{2}}\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\,e^{-y}\,\frac{2}{1+\frac{\nu_1}{\nu_2}w}\,dy, \quad (18.567)$$

where lumping of factors alike and removal of y-independent functions from the kernel unfold

$$D\{W\} = \frac{\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}w^{\frac{\nu_1}{2}-1}\,2^{\frac{\nu_1+\nu_2}{2}}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}}}{2^{\frac{\nu_1+\nu_2}{2}}\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\int_0^{\infty}y^{\frac{\nu_1+\nu_2}{2}-1}e^{-y}\,dy = \frac{\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}w^{\frac{\nu_1}{2}-1}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}}}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\int_0^{\infty}y^{\frac{\nu_1+\nu_2}{2}-1}e^{-y}\,dy \quad (18.568)$$


The outstanding integral in Eq. (18.568) is but the gamma function of (ν1 + ν2)/2, see Eq. (12.401) – which allows reformulation to the more condensed, yet more informative form

$$F\{W;\nu_1,\nu_2\} = \left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}\frac{\Gamma\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\,w^{\frac{\nu_1}{2}-1}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}} \quad (18.569)$$

Equation (18.569) represents Fisher’s F-distribution, with ν1 degrees of freedom in numerator and ν2 degrees of freedom in denominator, referring to the random variable described by Eq. (18.553) and used in lieu of D{W}; it often appears as

$$F\{W;\nu_1,\nu_2\} = \frac{\Gamma\left(\frac{\nu_1}{2}+\frac{\nu_2}{2}\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}w^{\frac{\nu_1}{2}-1}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}} = \frac{\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}}{\mathrm{B}\left(\frac{\nu_1}{2},\frac{\nu_2}{2}\right)}\,w^{\frac{\nu_1}{2}-1}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}}, \quad (18.570)$$

after recalling the definition of the beta function as per Eq. (18.505). One therefore concludes that

$$\left.\begin{array}{l} U \sim \chi^2\{U;\nu_1\} \\ V \sim \chi^2\{V;\nu_2\} \end{array}\right\}\;\Rightarrow\; \frac{U/\nu_1}{V/\nu_2} \sim F\left\{\frac{U/\nu_1}{V/\nu_2};\nu_1,\nu_2\right\} \quad (18.571)$$

based on Eqs. (18.550), (18.551), (18.553), and (18.569) – so the ratio of a χ²-distributed random variable to its number of degrees of freedom ν1, divided by the ratio of another χ²-distributed random variable to its number of degrees of freedom ν2, produces a new random variable that follows Fisher’s F-distribution with ν1 and ν2 degrees of freedom; the pattern of variation of F with w, after Eq. (18.569), is depicted in Fig. 18.7a. The overall shape of the F-distribution probability density function resembles that of a χ²-distribution, see Fig. 18.5; an increase in either ν1 or ν2 causes a right- and upward displacement of said curve – although such an effect appears stronger for ν1 than for ν2. This statistical distribution was originally proposed by George W. Snedecor, an American mathematician and statistician of the twentieth century – who named it in honor of Sir Ronald A. Fisher, an English statistician and biologist.
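Equation (18.569) can be coded directly from gamma functions; the sketch below is illustrative only (the parameter choice ν1 = 3, ν2 = 10 and the truncation of the domain at w = 400 are assumptions) – it confirms that the density integrates to unity over [0, ∞[, as any proper probability density must.

```python
import math

def f_pdf(w, nu1, nu2):
    # Fisher's F density, Eq. (18.569)
    c = ((nu1 / nu2) ** (nu1 / 2) * math.gamma((nu1 + nu2) / 2)
         / (math.gamma(nu1 / 2) * math.gamma(nu2 / 2)))
    return c * w ** (nu1 / 2 - 1) * (1 + nu1 * w / nu2) ** (-(nu1 + nu2) / 2)

def simpson(f, a, b, n=40000):
    # composite Simpson's rule on [a, b]
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# the tail beyond w = 400 decays like w^(-(nu2/2 + 1)) and is negligible here
total = simpson(lambda w: f_pdf(w, 3, 10), 0.0, 400.0)
```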

18.3.4.2 Mean and Variance

The expected value of a random variable following an F-distribution becomes accessible from integration of Eq. (18.570) after Eq. (18.7), using w as weight factor, i.e.

Figure 18.7 Graphical representation of Fisher’s F (a, b) probability density function, F{W; ν1, ν2}, and (c, d) probability cumulative function, PF{W < w}, for (a, c) ν1 = 3 and (b, d) ν1 = 10, and selected values of ν2 (= 3, 5, 10, 20).

$$E\{W\} \equiv \int_{-\infty}^{\infty}w\,F\{W;\nu_1,\nu_2\}\,dw = \frac{\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}}{\mathrm{B}\left(\frac{\nu_1}{2},\frac{\nu_2}{2}\right)}\int_0^{\infty}w\,w^{\frac{\nu_1}{2}-1}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}}dw = \frac{\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}}{\mathrm{B}\left(\frac{\nu_1}{2},\frac{\nu_2}{2}\right)}\int_0^{\infty}w^{\frac{\nu_1}{2}}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}}dw, \quad (18.572)$$

where the defining interval [0,∞[ only, rather than ]−∞,∞[, was taken into account, constants were moved off the kernel, and factors alike were lumped; after defining an auxiliary variable x as

$$x \equiv \frac{\nu_1}{\nu_2}w \quad (18.573)$$

and, consequently,

$$dw = \frac{\nu_2}{\nu_1}\,dx, \quad (18.574)$$

$$x|_{w=0} = 0, \quad (18.575)$$

and

$$x|_{w\to\infty} \to \infty \quad (18.576)$$

as differential and asymptotic patterns, respectively, one may redo Eq. (18.572) to

$$E\{W\} = \frac{\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}}{\mathrm{B}\left(\frac{\nu_1}{2},\frac{\nu_2}{2}\right)}\int_0^{\infty}\left(\frac{\nu_2}{\nu_1}x\right)^{\frac{\nu_1}{2}}\left(1+x\right)^{-\frac{\nu_1+\nu_2}{2}}\frac{\nu_2}{\nu_1}\,dx = \frac{\frac{\nu_2}{\nu_1}}{\mathrm{B}\left(\frac{\nu_1}{2},\frac{\nu_2}{2}\right)}\int_0^{\infty}x^{\left(\frac{\nu_1}{2}+1\right)-1}\left(1+x\right)^{-\left[\left(\frac{\nu_1}{2}+1\right)+\left(\frac{\nu_2}{2}-1\right)\right]}dx, \quad (18.577)$$

with cancellation of $\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}$ with $\left(\frac{\nu_2}{\nu_1}\right)^{\frac{\nu_1}{2}}$, and algebraic rearrangement of the kernel. Equation (18.519) may now be taken advantage of to rewrite Eq. (18.577) simply as

$$E\{W\} = \frac{\nu_2}{\nu_1}\,\frac{\mathrm{B}\left(\frac{\nu_1}{2}+1,\frac{\nu_2}{2}-1\right)}{\mathrm{B}\left(\frac{\nu_1}{2},\frac{\nu_2}{2}\right)} \quad (18.578)$$

Recalling Eq. (18.505), one may reformulate Eq. (18.578) as

$$E\{W\} = \frac{\nu_2}{\nu_1}\,\frac{\Gamma\left(\frac{\nu_1}{2}+\frac{\nu_2}{2}\right)\Gamma\left(\frac{\nu_1}{2}+1\right)\Gamma\left(\frac{\nu_2}{2}-1\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)\Gamma\left(\frac{\nu_1}{2}+1+\frac{\nu_2}{2}-1\right)} = \frac{\nu_2}{\nu_1}\,\frac{\Gamma\left(\frac{\nu_1}{2}+1\right)\Gamma\left(\frac{\nu_2}{2}-1\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}, \quad (18.579)$$

where common factors were meanwhile dropped from numerator and denominator. Expansion of the gamma function as permitted by Eq. (12.412), followed by cancellation of common factors between numerator and denominator, supports further transformation of Eq. (18.579) to

$$E\{W\} = \frac{\nu_2}{\nu_1}\,\frac{\frac{\nu_1}{2}\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}-1\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\left(\frac{\nu_2}{2}-1\right)\Gamma\left(\frac{\nu_2}{2}-1\right)} = \frac{\nu_2}{\nu_1}\,\frac{\frac{\nu_1}{2}}{\frac{\nu_2}{2}-1} = \frac{\frac{\nu_2}{2}}{\frac{\nu_2}{2}-1}, \quad (18.580)$$

to eventually attain

$$\mu = E\{W\} = \frac{\nu_2}{\nu_2-2};\quad \nu_2 > 2 \quad (18.581)$$

based on Eq. (18.7), and after multiplying both numerator and denominator by 2. Note that E{W} cannot take a negative value, as long as it results from integration of F{W; ν1, ν2} > 0 as per Eq. (18.569), using 0 < w < ∞ as multiplying factor in the kernel; in fact, the mean is infinite when ν2 = 1 or ν2 = 2. By the same token, one may proceed to calculation of the second-order absolute moment after its definition via Eq. (18.18) with i = 2, viz.

$$E\{W^2\} \equiv \int_{-\infty}^{\infty}w^2F\{w;\nu_1,\nu_2\}\,dw = \frac{\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}}{\mathrm{B}\left(\frac{\nu_1}{2},\frac{\nu_2}{2}\right)}\int_0^{\infty}w^2\,w^{\frac{\nu_1}{2}-1}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}}dw = \frac{\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}}{\mathrm{B}\left(\frac{\nu_1}{2},\frac{\nu_2}{2}\right)}\int_0^{\infty}w^{\frac{\nu_1}{2}+1}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}}dw \quad (18.582)$$

after inserting Eq. (18.570) in the kernel and moving constants out of it; Eqs. (18.573)–(18.576) may be invoked once more to transform Eq. (18.582) to

$$E\{W^2\} = \frac{\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}}{\mathrm{B}\left(\frac{\nu_1}{2},\frac{\nu_2}{2}\right)}\int_0^{\infty}\left(\frac{\nu_2}{\nu_1}x\right)^{\frac{\nu_1}{2}+1}\left(1+x\right)^{-\frac{\nu_1+\nu_2}{2}}\frac{\nu_2}{\nu_1}\,dx = \frac{\left(\frac{\nu_2}{\nu_1}\right)^2}{\mathrm{B}\left(\frac{\nu_1}{2},\frac{\nu_2}{2}\right)}\int_0^{\infty}x^{\left(\frac{\nu_1}{2}+2\right)-1}\left(1+x\right)^{-\left[\left(\frac{\nu_1}{2}+2\right)+\left(\frac{\nu_2}{2}-2\right)\right]}dx, \quad (18.583)$$

where factors alike were meanwhile lumped, while $\frac{\nu_1}{2}+1$ and $\frac{\nu_1+\nu_2}{2}$ were rewritten as $\left(\frac{\nu_1}{2}+2\right)-1$ and $\left(\frac{\nu_1}{2}+2\right)+\left(\frac{\nu_2}{2}-2\right)$, respectively. Inspection of Eq. (18.583) vis-à-vis Eq. (18.519) permits reformulation to


$$E\{W^2\} = \left(\frac{\nu_2}{\nu_1}\right)^2\frac{\mathrm{B}\left(\frac{\nu_1}{2}+2,\frac{\nu_2}{2}-2\right)}{\mathrm{B}\left(\frac{\nu_1}{2},\frac{\nu_2}{2}\right)}, \quad (18.584)$$

whereas Eq. (18.505) allows, in turn, transformation to

$$E\{W^2\} = \left(\frac{\nu_2}{\nu_1}\right)^2\frac{\Gamma\left(\frac{\nu_1}{2}+\frac{\nu_2}{2}\right)\Gamma\left(\frac{\nu_1}{2}+2\right)\Gamma\left(\frac{\nu_2}{2}-2\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)\Gamma\left(\frac{\nu_1}{2}+2+\frac{\nu_2}{2}-2\right)} = \left(\frac{\nu_2}{\nu_1}\right)^2\frac{\Gamma\left(\frac{\nu_1}{2}+2\right)\Gamma\left(\frac{\nu_2}{2}-2\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}, \quad (18.585)$$

together with cancellation of $\Gamma\left(\frac{\nu_1}{2}+\frac{\nu_2}{2}\right)$ between numerator and denominator; in view of the fundamental property of the gamma function conveyed by Eq. (12.412), a final transformation of Eq. (18.585) is possible as

$$E\{W^2\} = \left(\frac{\nu_2}{\nu_1}\right)^2\frac{\left(\frac{\nu_1}{2}+1\right)\frac{\nu_1}{2}\,\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}-2\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\left(\frac{\nu_2}{2}-1\right)\left(\frac{\nu_2}{2}-2\right)\Gamma\left(\frac{\nu_2}{2}-2\right)} = \left(\frac{\nu_2}{\nu_1}\right)^2\frac{\frac{\nu_1}{2}\left(\frac{\nu_1}{2}+1\right)}{\left(\frac{\nu_2}{2}-1\right)\left(\frac{\nu_2}{2}-2\right)} = \frac{\nu_2^2\left(\nu_1+2\right)}{\nu_1\left(\nu_2-2\right)\left(\nu_2-4\right)} \quad (18.586)$$

– after double application, on a sequential and independent basis, to $\Gamma\left(\frac{\nu_1}{2}+2\right)$ and $\Gamma\left(\frac{\nu_2}{2}\right)$, followed by crossing out of factors alike and multiplication of both numerator and denominator twice by 2. Recalling the definition of variance as given by Eq. (18.15), and of moments as per Eq. (18.18) – coupled with the relationship labeled as Eq. (18.28) – one may write

$$\sigma^2 = E\{W^2\} - E^2\{W\}; \quad (18.587)$$

insertion of Eqs. (18.581) and (18.586), complemented by a number of elementary algebraic operations of fraction lumping, parameter elimination, and factorization generate, as a whole,


$$\sigma^2 = \frac{\nu_2^2\left(\nu_1+2\right)}{\nu_1\left(\nu_2-2\right)\left(\nu_2-4\right)} - \left(\frac{\nu_2}{\nu_2-2}\right)^2 = \frac{\nu_2^2\left(\nu_1+2\right)\left(\nu_2-2\right)-\nu_1\nu_2^2\left(\nu_2-4\right)}{\nu_1\left(\nu_2-2\right)^2\left(\nu_2-4\right)} = \frac{\nu_2^2\left(\nu_1\nu_2-2\nu_1+2\nu_2-4-\nu_1\nu_2+4\nu_1\right)}{\nu_1\left(\nu_2-2\right)^2\left(\nu_2-4\right)} = \frac{\nu_2^2\left(2\nu_1+2\nu_2-4\right)}{\nu_1\left(\nu_2-2\right)^2\left(\nu_2-4\right)} \quad (18.588)$$

– valid obviously when ν2 > 4; Eq. (18.588) becomes

$$\mathrm{Var}\{W\} \equiv \sigma^2 = \frac{2\nu_2^2\left(\nu_1+\nu_2-2\right)}{\nu_1\left(\nu_2-2\right)^2\left(\nu_2-4\right)};\quad \nu_2 > 4 \quad (18.589)$$

after factoring 2 out – because σ² > 0 by definition, which actually implies σ² = ∞ for ν2 = 1, ν2 = 2, ν2 = 3, or ν2 = 4. Although Eq. (18.18) may be invoked to produce the corresponding moment, it is not universally valid – i.e. an analytical expression is found for the kth-order moment that is finite only when ν2 > 2k, thus obviously excluding ν2 = 1, 2 for k = 1 as per Eq. (18.581), and ν2 = 1, 2, 3, 4 for k = 2 as per Eq. (18.589); hence, it can be stated that no moment-generating function exists for the F-distribution. Consequently, a methodology resorting to direct integration, as followed above to obtain E{X} ≡ μ1 and E{X²} ≡ μ2, must be followed to generate higher-order moments – bearing in mind the stated constraint of finiteness.
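The closed forms of Eqs. (18.581) and (18.589) can be checked against direct numerical integration; the sketch below is illustrative only (the choice ν1 = 6, ν2 = 10 and the truncated integration range are assumptions) – for those parameters, μ = 10/8 = 1.25 and σ² = 2·100·14/(6·64·6).

```python
import math

def f_pdf(w, nu1, nu2):
    # Fisher's F density, Eq. (18.569)
    c = ((nu1 / nu2) ** (nu1 / 2) * math.gamma((nu1 + nu2) / 2)
         / (math.gamma(nu1 / 2) * math.gamma(nu2 / 2)))
    return c * w ** (nu1 / 2 - 1) * (1 + nu1 * w / nu2) ** (-(nu1 + nu2) / 2)

def simpson(f, a, b, n=60000):
    # composite Simpson's rule on [a, b]
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

nu1, nu2 = 6, 10
mean = simpson(lambda w: w * f_pdf(w, nu1, nu2), 0.0, 800.0)
second = simpson(lambda w: w * w * f_pdf(w, nu1, nu2), 0.0, 800.0)
var = second - mean ** 2                               # Eq. (18.587)

mean_formula = nu2 / (nu2 - 2)                         # Eq. (18.581)
var_formula = (2 * nu2 ** 2 * (nu1 + nu2 - 2)
               / (nu1 * (nu2 - 2) ** 2 * (nu2 - 4)))   # Eq. (18.589)
```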

18.3.4.3 Asymptotic Behavior

The asymptotic situation of the numbers of degrees of freedom being very large may, to advantage, consider just the case of even integer values for ν1 and ν2, i.e.

$$\nu_1 \equiv 2n \quad (18.590)$$

and

$$\nu_2 \equiv 2m \quad (18.591)$$

– where n and m denote generic positive integers; in fact, for every odd value of ν1 or ν2, it is always possible to find an n and an m such that 2n > ν1 and 2m > ν2, so 2n → ∞ when ν1 → ∞ and 2m → ∞ when ν2 → ∞ – which facilitates search for the said asymptotic pattern. Under these circumstances, Eq. (18.569) will look like

$$F\{W;\nu_1,\nu_2\}\Big|_{\substack{\nu_1=2n\\ \nu_2=2m}} = \left(\frac{2n}{2m}\right)^{\frac{2n}{2}}\frac{\Gamma\left(\frac{2n}{2}+\frac{2m}{2}\right)}{\Gamma\left(\frac{2n}{2}\right)\Gamma\left(\frac{2m}{2}\right)}\,w^{\frac{2n}{2}-1}\left(1+\frac{2n}{2m}w\right)^{-\frac{2n+2m}{2}} = \frac{\Gamma\{n+m\}}{\Gamma\{n\}\Gamma\{m\}}\left(\frac{n}{m}\right)^n\frac{w^{n-1}}{\left(1+\frac{n}{m}w\right)^{n+m}} \quad (18.592)$$

upon combination with Eqs. (18.590) and (18.591). In view of Eq. (12.400), one may algebraically redo Eq. (18.592) to

$$F\{W;\nu_1,\nu_2\}\Big|_{\substack{\nu_1=2n\\ \nu_2=2m}} = \frac{(n+m-1)!}{(n-1)!\,(m-1)!}\left(\frac{n}{m}\right)^n\frac{w^{n-1}}{\left(1+\frac{nw}{m}\right)^{n+m}} = \frac{(n+m-1)!}{(2m)^n(m-1)!}\;\frac{2n\,(2nw)^{n-1}}{(n-1)!\left(1+\frac{nw}{m}\right)^{m}\left(1+\frac{nw}{m}\right)^{n}} \quad (18.593)$$

– where splitting or condensation of powers meanwhile took place as appropriate. A change of variable to

$$y \equiv 2nw \quad (18.594)$$

permits calculation of the probability density function of Y from that of W via

$$F\{W;\nu_1,\nu_2\} = F\{Y;\nu_1,\nu_2\}\,|J| \quad (18.595)$$

– where the absolute value of the Jacobian determinant reduces, in this case, to

$$|J| \equiv \left|\frac{dy}{dw}\right| = |2n| = 2n, \quad (18.596)$$

where Eq. (18.594) was taken into account. Equation (18.595) may thus be rewritten as

$$F\{Y;\nu_1,\nu_2\}\Big|_{\substack{\nu_1=2n\\ \nu_2=2m}} = \frac{F\{W;\nu_1,\nu_2\}\Big|_{\substack{\nu_1=2n\\ \nu_2=2m}}}{|J|} = \frac{1}{2n}\,\frac{(n+m-1)!}{(2m)^n(m-1)!}\;\frac{2n\,(2nw)^{n-1}}{(n-1)!\left(1+\frac{nw}{m}\right)^{m}\left(1+\frac{nw}{m}\right)^{n}} = \frac{(n+m-1)!}{(2m)^n(m-1)!}\;\frac{y^{n-1}}{(n-1)!\left(1+\frac{y/2}{m}\right)^{m}\left(1+\frac{y/2}{m}\right)^{n}} \quad (18.597)$$

following combination with Eqs. (18.593), (18.594), and (18.596), along with eventual dropping of 2n from both numerator and denominator. When m grows without limit, Eq. (18.597) becomes

$$\lim_{m\to\infty}F\{Y;\nu_1,\nu_2\}\Big|_{\substack{\nu_1=2n\\ \nu_2=2m}} = \left(\lim_{m\to\infty}\frac{(n+m-1)!}{(2m)^n(m-1)!}\right)\frac{y^{n-1}}{(n-1)!}\,\lim_{m\to\infty}\frac{1}{\left(1+\frac{y/2}{m}\right)^{m}}\,\lim_{m\to\infty}\frac{1}{\left(1+\frac{y/2}{m}\right)^{n}} = \frac{y^{n-1}e^{-\frac{y}{2}}}{(n-1)!}\,\lim_{m\to\infty}\frac{(n+m-1)!}{(2m)^n(m-1)!}, \quad (18.598)$$


at the expense of definition of Neper’s number as per Eq. (12.173); the asymptotic form of the first factor in Eq. (18.598) may be found via

$$\lim_{m\to\infty}\frac{(n+m-1)!}{(2m)^n(m-1)!} = \lim_{m\to\infty}\frac{(m+n-1)(m+n-2)\cdots(m+1)\,m\,(m-1)!}{(m-1)!\,\underbrace{2m\cdot 2m\cdots 2m}_{n\ \mathrm{factors}}} = \lim_{m\to\infty}\prod_{j=0}^{n-1}\frac{m+j}{2m} = \prod_{j=0}^{n-1}\lim_{m\to\infty}\frac{1}{2}\,\frac{m+j}{m} = \frac{1}{2^n}\prod_{j=0}^{n-1}\lim_{m\to\infty}\frac{m}{m} = \frac{1}{2^n}\,1^n = \frac{1}{2^n} \quad (18.599)$$

upon dropping (m − 1)! from both numerator and denominator, recalling Eq. (9.108), and realizing that m + j ≈ m for finite j (below n) and infinite m. Equation (18.598) gives rise to

$$\lim_{m\to\infty}F\{Y;\nu_1,\nu_2\}\Big|_{\substack{\nu_1=2n\\ \nu_2=2m}} = \frac{y^{n-1}e^{-\frac{y}{2}}}{2^n(n-1)!} = \frac{y^{\frac{\nu_1}{2}-1}e^{-\frac{y}{2}}}{2^{\frac{\nu_1}{2}}\left(\frac{\nu_1}{2}-1\right)!} = \frac{y^{\frac{\nu_1}{2}-1}e^{-\frac{y}{2}}}{2^{\frac{\nu_1}{2}}\,\Gamma\left(\frac{\nu_1}{2}\right)} \quad (18.600)$$

upon insertion of Eq. (18.599), followed by elementary algebraic rearrangement – as well as combination with Eqs. (12.400) and (18.590). Comparative inspection of Eqs. (18.420) and (18.600) indicates that

$$\lim_{m\to\infty}F\{Y;\nu_1,\nu_2\}\Big|_{\nu_2=2m} = \chi^2\{Y;\nu_1\}, \quad (18.601)$$

with the aid also of Eq. (18.591) – which finally leads to

$$\lim_{\nu_2\to\infty}F\{\nu_1W;\nu_1,\nu_2\} = \chi^2\{\nu_1W;\nu_1\}, \quad (18.602)$$

since

$$Y \equiv 2nW \equiv \nu_1W \quad (18.603)$$

stemming from Eqs. (18.590) and (18.594). In other words, the asymptotic version of the F-distribution of W with ν1 and ν2 degrees of freedom, when ν2 grows unbounded, is but the χ²-distribution of ν1W with ν1 degrees of freedom. Once in possession of Eq. (18.602), one may resort to

$$\lim_{\substack{\nu_1\to\infty\\ \nu_2\to\infty}}F\{\nu_1W;\nu_1,\nu_2\} = \lim_{\nu_1\to\infty}\chi^2\{\nu_1W;\nu_1\} \quad (18.604)$$

in attempts to ascertain the asymptotic behavior of F when both ν1 and ν2 grow unbounded; combination with Eq. (18.451) then yields

$$\lim_{\substack{\nu_1\to\infty\\ \nu_2\to\infty}}F\{\nu_1W;\nu_1,\nu_2\} = N\left\{\nu_1W;\nu_1,\sqrt{2\nu_1}\right\} \quad (18.605)$$

with the aid of Eqs. (18.433) and (18.438), after setting ν = ν1, and thus σ = √σ² = √(2ν) = √(2ν1). Hence, the effect of ν2 eventually disappears, while that of ν1 is retained – with the central limit theorem being once again found valid, as both numbers of degrees of freedom of the original F-distribution grow unbounded.
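The limiting statement of Eq. (18.602) can be probed numerically; the sketch below is an illustration only (the choices ν1 = 4, ν2 = 10⁶, and the evaluation point y = 3 are assumptions) – it compares the density of Y = ν1W, with W following F{W; ν1, ν2} at very large ν2, against the χ²-density with ν1 degrees of freedom, working with log-gamma to avoid overflow at large arguments.

```python
import math

def log_f_pdf(w, nu1, nu2):
    # logarithm of Fisher's F density, Eq. (18.569), via lgamma/log1p
    # so that very large nu2 does not overflow math.gamma
    return ((nu1 / 2) * math.log(nu1 / nu2)
            + math.lgamma((nu1 + nu2) / 2)
            - math.lgamma(nu1 / 2) - math.lgamma(nu2 / 2)
            + (nu1 / 2 - 1) * math.log(w)
            - ((nu1 + nu2) / 2) * math.log1p(nu1 * w / nu2))

def chi2_pdf(y, nu):
    # chi-square density, Eq. (18.420)
    return y ** (nu / 2 - 1) * math.exp(-y / 2) / (2 ** (nu / 2) * math.gamma(nu / 2))

nu1, nu2 = 4, 10 ** 6
y = 3.0
# density of Y = nu1*W at y: the F-density at w = y/nu1, divided by nu1 (Jacobian)
fy = math.exp(log_f_pdf(y / nu1, nu1, nu2)) / nu1
cy = chi2_pdf(y, nu1)
```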


18.3.4.4 Probability Cumulative Function

With regard to the probability cumulative function associated with an F-distribution, one has it that

$$P_F\{W<w\} \equiv \int_0^{w}F\{w;\nu_1,\nu_2\}\,dw = \frac{\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}}{\mathrm{B}\left(\frac{\nu_1}{2},\frac{\nu_2}{2}\right)}\int_0^{w}w^{\frac{\nu_1}{2}-1}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}}dw, \quad (18.606)$$

since [0,∞[ serves as domain in Eq. (18.3), and upon retrieval of Eq. (18.570); numerical handling will be required from now on – which will eventually lead to

$$P_F\{W<w\} = \frac{2\nu_1^{\frac{\nu_1}{2}-1}\left(\dfrac{w}{\nu_2}\right)^{\frac{\nu_1}{2}}{}_2H_1\left\{\dfrac{\nu_1+\nu_2}{2},\dfrac{\nu_1}{2},1+\dfrac{\nu_1}{2};-\dfrac{\nu_1}{\nu_2}w\right\}}{\mathrm{B}\left(\dfrac{\nu_1}{2},\dfrac{\nu_2}{2}\right)} \quad (18.607)$$

– where the hypergeometric function, defined via Eq. (18.541), was again taken advantage of (with specific derivation to be provided at a later stage). The evolution of PF{W < w} with w is plotted in Fig. 18.7c and d; a horizontal asymptote with unit vertical intercept drives all curves when w grows unbounded (as expected) – with sigmoidicity being enhanced by higher ν1 or ν2. The quantiles of this distribution are listed in Table 18.4, for selected significance levels, α. Once again, the lack of symmetry of the F-distribution leads to disparate upper, wcrt,1 (see left columns), and lower, wcrt,2 (see right columns), boundaries for the w-interval with area under the curve equal to α; the left column, containing wcrt,1, should be used for unilateral tests of the type w < wcrt,1 – while the right column, with wcrt,2, is meant for tests that read w > wcrt,2. In the bilateral case, tests look like w < wcrt,1 or w > wcrt,2 – in which case both the left and right columns are germane, via wcrt,1 (that accounts for an area equal to α/2, placed below the curve and to the left of wcrt,1) and wcrt,2 (that is associated with an area also equal to α/2, located between the horizontal axis, the curve itself, and to the right of wcrt,2). Table 18.4(v) coincides with Table 18.2, after setting ν = ν1 and dividing the entries of the latter by ν1 itself – in full agreement with Eq. (18.602), pertaining to ν1W as random variable.
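Entries of Table 18.4 can be recovered by numerically inverting Eq. (18.606); the sketch below is an illustration, not the book's own algorithm (the Simpson/bisection scheme and the case ν1 = 5, ν2 = 10 are assumptions) – the upper unilateral 5% quantile obtained should sit near the 3.33 tabulated in sub-table (iii).

```python
import math

def f_pdf(w, nu1, nu2):
    # Fisher's F density, Eq. (18.569); finite at w = 0 for nu1 >= 2
    c = ((nu1 / nu2) ** (nu1 / 2) * math.gamma((nu1 + nu2) / 2)
         / (math.gamma(nu1 / 2) * math.gamma(nu2 / 2)))
    return c * w ** (nu1 / 2 - 1) * (1 + nu1 * w / nu2) ** (-(nu1 + nu2) / 2)

def f_cdf(w, nu1, nu2, n=4000):
    # P_F{W < w}, Eq. (18.606), by composite Simpson's rule on [0, w]
    h = w / n
    s = f_pdf(0.0, nu1, nu2) + f_pdf(w, nu1, nu2)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f_pdf(i * h, nu1, nu2)
    return s * h / 3

def f_quantile(p, nu1, nu2):
    # invert the cumulative probability by bisection
    lo, hi = 0.0, 1000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f_cdf(mid, nu1, nu2) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

wcrt = f_quantile(0.95, 5, 10)   # upper unilateral quantile, alpha = 0.05
```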

18.3.4.5 Mode and Median

One may tackle the mode of the F-distribution via differentiating Eq. (18.569) with regard to w, and setting the result equal to zero, viz.

$$\left.\frac{d}{dw}\left[\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}\frac{\Gamma\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\,w^{\frac{\nu_1}{2}-1}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}}\right]\right|_{w=\mathrm{Mod}\{W\}} = 0, \quad (18.608)$$

following the procedure outlined by Eq. (18.37); transformation of Eq. (18.608) becomes possible via the classical rules of differentiation of a product and a power, viz.

Table 18.4 Critical, unilateral and bilateral, quantiles of F-statistic, Fcrt, for two levels of significance and (i) ν2 = 3, (ii) ν2 = 5, (iii) ν2 = 10, (iv) ν2 = 20, and (v) ν2 = ∞. Each pair of columns lists the lower (left) and upper (right) critical quantiles.

(i) ν2 = 3

ν1     Unilateral α=0.05   Unilateral α=0.01   Bilateral α=0.05    Bilateral α=0.01
1      0.005    10.1       0.000    34.1       0.001    17.4       0.000    55.6
2      0.052    9.55       0.010    30.8       0.026    16.0       0.005    49.8
3      0.108    9.28       0.034    29.5       0.065    15.4       0.021    47.5
4      0.152    9.12       0.060    28.7       0.100    15.1       0.041    46.2
5      0.185    9.01       0.083    28.2       0.129    14.9       0.061    45.4
10     0.270    8.79       0.153    27.2       0.207    14.4       0.124    43.7
15     0.304    8.70       0.185    26.9       0.241    14.3       0.154    43.1
20     0.323    8.66       0.202    26.7       0.259    14.2       0.172    42.8
30     0.342    8.62       0.222    26.5       0.279    14.1       0.191    42.5
40     0.352    8.59       0.232    26.4       0.289    14.0       0.200    42.3
100    0.370    8.55       0.251    26.2       0.308    14.0       0.222    42.2
200    0.377    8.54       0.258    26.2       0.314    13.9       0.234    42.0

(ii) ν2 = 5

ν1     Unilateral α=0.05   Unilateral α=0.01   Bilateral α=0.05    Bilateral α=0.01
1      0.004    6.61       0.000    16.3       0.001    10.0       0.000    22.8
2      0.052    5.79       0.010    13.3       0.025    8.43       0.005    18.3
3      0.111    5.41       0.035    12.1       0.067    7.76       0.022    16.5
4      0.160    5.19       0.065    11.4       0.107    7.39       0.044    15.6
5      0.198    5.05       0.091    11.0       0.140    7.15       0.067    14.9
10     0.303    4.74       0.177    10.1       0.236    6.62       0.146    13.6
15     0.345    4.62       0.219    9.73       0.279    6.43       0.186    13.1
20     0.369    4.56       0.244    9.55       0.304    6.33       0.210    12.9
30     0.395    4.50       0.270    9.38       0.330    6.23       0.236    12.7
40     0.408    4.46       0.285    9.29       0.345    6.18       0.256    12.5
100    0.433    4.41       0.312    9.13       0.370    6.08       0.267    12.4
200    0.442    4.39       0.323    9.08       0.380    6.05       0.286    12.2

(iii) ν2 = 10

ν1     Unilateral α=0.05   Unilateral α=0.01   Bilateral α=0.05    Bilateral α=0.01
1      0.004    4.96       0.000    10.0       0.001    6.94       0.000    12.8
2      0.052    4.10       0.011    7.56       0.025    5.46       0.005    9.43
3      0.114    3.71       0.037    6.55       0.069    4.83       0.023    8.08
4      0.168    3.48       0.069    5.99       0.113    4.47       0.048    7.34
5      0.211    3.33       0.099    5.64       0.151    4.24       0.074    6.87
10     0.336    2.98       0.206    4.85       0.269    3.72       0.171    5.85
15     0.394    2.85       0.263    4.55       0.327    3.53       0.226    5.47
20     0.426    2.77       0.297    4.41       0.361    3.42       0.260    5.27
30     0.463    2.70       0.336    4.25       0.398    3.31       0.299    5.07
40     0.481    2.66       0.357    4.17       0.418    3.26       0.321    4.97
100    0.518    2.59       0.400    4.01       0.459    3.15       0.369    4.85
200    0.532    2.56       0.415    3.96       0.474    3.12       0.385    4.75

(iv) ν2 = 20

ν1     Unilateral α=0.05   Unilateral α=0.01   Bilateral α=0.05    Bilateral α=0.01
1      0.004    4.35       0.000    8.10       0.001    5.87       0.000    9.94
2      0.052    3.49       0.011    5.85       0.025    4.46       0.005    6.99
3      0.115    3.10       0.037    4.94       0.070    3.86       0.023    5.82
4      0.172    2.87       0.071    4.43       0.117    3.51       0.050    5.17
5      0.219    2.71       0.105    4.10       0.158    3.29       0.078    4.76
10     0.361    2.35       0.227    3.37       0.292    2.77       0.190    3.85
15     0.429    2.20       0.297    3.09       0.362    2.58       0.258    3.50
20     0.472    2.12       0.340    2.94       0.407    2.46       0.301    3.32
30     0.518    2.04       0.392    2.78       0.455    2.35       0.355    3.12
40     0.543    1.99       0.422    2.69       0.483    2.29       0.385    3.02
100    0.595    1.91       0.483    2.54       0.541    2.17       0.457    2.91
200    0.617    1.88       0.508    2.48       0.562    2.13       0.502    2.80

(v) ν2 = ∞

ν1     Unilateral α=0.05   Unilateral α=0.01   Bilateral α=0.05    Bilateral α=0.01
1      0.004    3.84       0.000    6.63       0.001    5.02       0.000    7.88
2      0.052    3.00       0.010    4.61       0.025    3.69       0.005    5.30
3      0.117    2.60       0.038    3.78       0.072    3.12       0.024    4.28
4      0.178    2.37       0.074    3.32       0.121    2.79       0.052    3.72
5      0.229    2.21       0.111    3.02       0.166    2.57       0.082    3.35
10     0.394    1.83       0.256    2.32       0.325    2.05       0.216    2.52
15     0.484    1.67       0.349    2.04       0.417    1.83       0.307    2.19
20     0.543    1.57       0.413    1.88       0.480    1.71       0.372    2.00
30     0.616    1.46       0.498    1.70       0.560    1.57       0.460    1.79
40     0.663    1.39       0.554    1.59       0.611    1.48       0.518    1.67
60     0.720    1.32       0.625    1.47       0.675    1.39       0.592    1.53
100    0.779    1.24       0.701    1.36       0.742    1.30       0.673    1.40


$$\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}\frac{\Gamma\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\left[\left(\frac{\nu_1}{2}-1\right)w^{\frac{\nu_1}{2}-2}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}} + w^{\frac{\nu_1}{2}-1}\left(-\frac{\nu_1+\nu_2}{2}\right)\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}-1}\frac{\nu_1}{\nu_2}\right]_{w=\mathrm{Mod}\{W\}} = 0 \quad (18.609)$$

After dropping constant factors from both sides, as well as $w^{\frac{\nu_1}{2}-2}$ and $\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}-1}$, Eq. (18.609) simplifies to

$$\left(\frac{\nu_1}{2}-1\right)\left(1+\frac{\nu_1}{\nu_2}w\right) - \frac{\nu_1+\nu_2}{2}\,\frac{\nu_1}{\nu_2}w = 0; \quad (18.610)$$

multiplication of both sides by ν2, followed by application of the distributive property to the first term, then unfolds

$$\left(\frac{\nu_1}{2}-1\right)\nu_2 + \left(\frac{\nu_1}{2}-1\right)\nu_1w - \frac{\nu_1+\nu_2}{2}\,\nu_1w = 0 \quad (18.611)$$

Isolation of terms in w in the left-hand side, and factoring out of ν1 afterward, produce

$$\nu_1\left(\frac{\nu_1}{2}+\frac{\nu_2}{2}-\frac{\nu_1}{2}+1\right)w = \nu_2\left(\frac{\nu_1}{2}-1\right) \quad (18.612)$$

from Eq. (18.611), or else

$$w = \frac{\nu_2\left(\frac{\nu_1}{2}-1\right)}{\nu_1\left(\frac{\nu_2}{2}+1\right)} = \frac{\nu_2}{\nu_1}\,\frac{\nu_1-2}{\nu_2+2} \quad (18.613)$$

upon solving for w – along with cancellation of symmetrical terms, complemented by multiplication of both numerator and denominator by 2; one finally gets

$$\mathrm{Mod}\{W\} = \max\left\{0,\frac{\nu_2}{\nu_1}\,\frac{\nu_1-2}{\nu_2+2}\right\}, \quad (18.614)$$

meaning that the mode will be set by the restriction w ≥ 0 when ν1 ≤ 2 – in which case the probability density functions do monotonically decrease with w, while a true local maximum will be at stake otherwise.
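The closed form of Eq. (18.614) can be verified against a direct maximization of the density; the sketch below is illustrative only (the case ν1 = 6, ν2 = 10 and the grid resolution are assumptions) – for those parameters the formula predicts Mod{W} = (10/6)·(4/12) = 5/9.

```python
import math

def f_pdf(w, nu1, nu2):
    # Fisher's F density, Eq. (18.569)
    c = ((nu1 / nu2) ** (nu1 / 2) * math.gamma((nu1 + nu2) / 2)
         / (math.gamma(nu1 / 2) * math.gamma(nu2 / 2)))
    return c * w ** (nu1 / 2 - 1) * (1 + nu1 * w / nu2) ** (-(nu1 + nu2) / 2)

nu1, nu2 = 6, 10
# locate the maximum of the density on a fine grid over ]0, 3]
grid = [i * 1e-4 for i in range(1, 30001)]
mode_numeric = max(grid, key=lambda w: f_pdf(w, nu1, nu2))
mode_formula = (nu2 / nu1) * (nu1 - 2) / (nu2 + 2)   # Eq. (18.614), valid for nu1 > 2
```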


On the other hand, the median will become accessible via

$$\left.\frac{2\nu_1^{\frac{\nu_1}{2}-1}\left(\dfrac{w}{\nu_2}\right)^{\frac{\nu_1}{2}}{}_2H_1\left\{\dfrac{\nu_1+\nu_2}{2},\dfrac{\nu_1}{2},1+\dfrac{\nu_1}{2};-\dfrac{\nu_1}{\nu_2}w\right\}}{\mathrm{B}\left(\dfrac{\nu_1}{2},\dfrac{\nu_2}{2}\right)}\right|_{w=\mathrm{Med}} = \frac{1}{2}, \quad (18.615)$$

following combination of Eqs. (18.4) and (18.39), or Eq. (18.46) with Eq. (18.607) for that matter – which may be rearranged to read

$$\left(\frac{\mathrm{Med}}{\nu_2}\right)^{\frac{\nu_1}{2}}{}_2H_1\left\{\frac{\nu_1+\nu_2}{2},\frac{\nu_1}{2},1+\frac{\nu_1}{2};-\frac{\nu_1}{\nu_2}\mathrm{Med}\right\} = \frac{\mathrm{B}\left(\frac{\nu_1}{2},\frac{\nu_2}{2}\right)}{4\nu_1^{\frac{\nu_1}{2}-1}} \quad (18.616)$$

Unfortunately, no explicit analytical expression, or even approximation, is currently available – so numerical computation is necessary, on a case-by-case basis, to solve the above implicit equation for Med.

18.3.4.6 Other Features

Consider an F-distributed random variable W, with probability density function abiding by Eq. (18.569); if a new random variable Y, defined as

$$Y \equiv \frac{1}{W}, \quad (18.617)$$

is introduced – associated obviously with

$$y \equiv \frac{1}{w} \quad (18.618)$$

in terms of actual value held – then one may reverse said definition to obtain

$$w = \frac{1}{y} \quad (18.619)$$

The (modulus of the) Jacobian determinant associated to Eq. (18.619) will consequently read

$$|J| \equiv \left|\frac{dw}{dy}\right| = \left|-\frac{1}{y^2}\right| = \frac{1}{y^2}, \quad (18.620)$$

so the probability density function of Y will abide by

$$D\{Y\} = D\{W\}\,|J| \quad (18.621)$$

– which transforms to

$$D\{Y\} = \left.\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}\frac{\Gamma\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\,w^{\frac{\nu_1}{2}-1}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}}\right|_{w=\frac{1}{y}}\frac{1}{y^2} = \left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}\frac{\Gamma\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\left(\frac{1}{y}\right)^{\frac{\nu_1}{2}-1}\left(1+\frac{\nu_1}{\nu_2}\,\frac{1}{y}\right)^{-\frac{\nu_1+\nu_2}{2}}\frac{1}{y^2}, \quad (18.622)$$

following insertion of Eqs. (18.569), (18.619), and (18.620); after pooling similar factors together, Eq. (18.622) turns to

$$D\{Y\} = \left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}\frac{\Gamma\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\frac{y^{1-\frac{\nu_1}{2}-2}}{\left(\dfrac{\nu_1+\nu_2y}{\nu_2y}\right)^{\frac{\nu_1+\nu_2}{2}}} = \left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}\nu_2^{\frac{\nu_1+\nu_2}{2}}\frac{\Gamma\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\frac{y^{-\frac{\nu_1}{2}-1}\,y^{\frac{\nu_1+\nu_2}{2}}}{\left(\nu_1+\nu_2y\right)^{\frac{\nu_1+\nu_2}{2}}}, \quad (18.623)$$

where cancellation of common factors between numerator and denominator unfolds

$$D\{Y\} = \nu_1^{\frac{\nu_1}{2}}\nu_2^{\frac{\nu_2}{2}}\frac{\Gamma\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\frac{y^{\frac{\nu_2}{2}-1}}{\left(\nu_1+\nu_2y\right)^{\frac{\nu_1+\nu_2}{2}}} \quad (18.624)$$

Division of both numerator and denominator by $\nu_1^{\frac{\nu_1+\nu_2}{2}}$ converts Eq. (18.624) to

$$D\{Y\} = \left(\frac{\nu_2}{\nu_1}\right)^{\frac{\nu_2}{2}}\frac{\Gamma\left(\frac{\nu_2+\nu_1}{2}\right)}{\Gamma\left(\frac{\nu_2}{2}\right)\Gamma\left(\frac{\nu_1}{2}\right)}\,y^{\frac{\nu_2}{2}-1}\left(1+\frac{\nu_2}{\nu_1}y\right)^{-\frac{\nu_2+\nu_1}{2}} \equiv F\{Y;\nu_2,\nu_1\}, \quad (18.625)$$

where comparison was again effected with the functional form of Eq. (18.569); in view of the definition of Y as per Eq. (18.617), one concludes that

$$F\{\nu_2,\nu_1;1-\alpha\} = \frac{1}{F\{\nu_1,\nu_2;\alpha\}}, \quad (18.626)$$

a result that may be confirmed by inspection of Table 18.4 – for values of ν1 and ν2 made available under a given significance level (say, α) read in the left column, vis-à-vis the reciprocal of the value characterized by the same numbers of degrees of freedom but in reverse order, as read in the right column (and thus corresponding to 1 − α). Since α is, by definition, P{W ≤ F{ν1, ν2; α}} = P{1/W ≥ 1/F{ν1, ν2; α}}, one may recall Eq. (18.4) to write α = 1 − P{1/W ≤ 1/F{ν1, ν2; α}} instead; retrieval of Eq. (18.617) then supports α = 1 − P{Y ≤ 1/F{ν1, ν2; α}}, or else P{Y ≤ 1/F{ν1, ν2; α}} = 1 − α after Eq. (18.626) – which is equivalent to stating F{ν2, ν1; 1 − α} = 1/F{ν1, ν2; α}, thus justifying exchange of α for 1 − α in Eq. (18.626) when ν1 and ν2 were swapped.

Departing again from an F-distributed random variable, W, one may follow a similar rationale to investigate how its square root, Y, will be distributed – by resorting now to an auxiliary variable y such that

$$y \equiv \sqrt{w}; \quad (18.627)$$


therefore, the new distribution D{Y; ν1, ν2} will become accessible as

$$F\{W;\nu_1,\nu_2\} = D\{Y;\nu_1,\nu_2\}\left|\frac{dy}{dw}\right| = D\{Y;\nu_1,\nu_2\}\,\frac{1}{2\sqrt{w}} \quad (18.628)$$

– using Eqs. (18.620) and (18.621) as template. The information conveyed by Eqs. (18.569) and (18.627) may then be taken advantage of to transform Eq. (18.628) to

$$D\{Y;\nu_1,\nu_2\} = 2\sqrt{w}\,F\{W\} = 2\sqrt{w}\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}\frac{\Gamma\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\,w^{\frac{\nu_1}{2}-1}\left(1+\frac{\nu_1}{\nu_2}w\right)^{-\frac{\nu_1+\nu_2}{2}} = 2\left(\frac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}\frac{\Gamma\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\,y^{\nu_1-1}\left(1+\frac{\nu_1}{\nu_2}y^2\right)^{-\frac{\nu_1+\nu_2}{2}}, \quad (18.629)$$

where factors in y were duly collapsed. In the particular case of ν1 = 1, Eq. (18.629) reduces to

$$D\{Y;\nu_1,\nu_2\}\Big|_{\nu_1=1} = 2\left(\frac{1}{\nu_2}\right)^{\frac{1}{2}}\frac{\Gamma\left(\frac{1+\nu_2}{2}\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\,y^{0}\left(1+\frac{y^2}{\nu_2}\right)^{-\frac{1+\nu_2}{2}} = \frac{2\,\Gamma\left(\frac{\nu_2+1}{2}\right)}{\sqrt{\pi}\sqrt{\nu_2}\,\Gamma\left(\frac{\nu_2}{2}\right)}\left(1+\frac{y^2}{\nu_2}\right)^{-\frac{\nu_2+1}{2}} = 2\,\frac{\Gamma\left(\frac{\nu_2+1}{2}\right)}{\sqrt{\pi\nu_2}\,\Gamma\left(\frac{\nu_2}{2}\right)}\left(1+\frac{y^2}{\nu_2}\right)^{-\frac{\nu_2+1}{2}}, \quad (18.630)$$

also with the aid of Eq. (12.423); inspection of the form of Eq. (18.630), comparatively with Eq. (18.494), permits reformulation to

$$D\{\sqrt{W};\nu_1,\nu_2\}\Big|_{\nu_1=1} \equiv D\{Y\} = 2\,t\{Y;\nu_2\} \equiv 2\,t\{\sqrt{W};\nu_2\}, \quad (18.631)$$

when ν is replaced by ν2, and also with the help of Eq. (18.627) pertaining to random variable Y. After rewriting Eq. (18.627) as

$$Y \equiv \sqrt{W}, \quad (18.632)$$

applying to the random variable at large, one may reformulate Eq. (18.631) to

$$F\{Y^2;1,\nu_2;\alpha\} \equiv t^2\left\{Y;\nu_2;\frac{\alpha}{2}\right\}, \quad (18.633)$$

knowing that W ≡ Y² holds an F-distribution in the first place. Note that the range of a random variable exhibiting a (symmetrical) t-distribution extends from −∞ to ∞, whereas an F-distribution holds only [0,∞[ for domain; doubling the probability value of the t-distribution as enforced by Eq. (18.631) is thus equivalent to considering twice its right portion that spans [0,∞[ – so α/2 is to be taken as (unilateral) significance level, when α is taken for significance level of the corresponding F-distribution. As a result of Eq. (18.633), the entries in the first row (i.e. ν1 = 1) and second column of Table 18.4 for each ν2, corresponding to the unilateral case (i.e. α = 0.05 or α = 0.01), coincide with the square of the entries in the row corresponding to ν = ν2 in Table 18.3, pertaining to the bilateral case (i.e. α = 0.025 and α = 0.005, respectively). An alternative derivation of this important result may instead resort to Eq. (18.478), revisited here as

$$ T^2 = \frac{W^2}{\dfrac{V}{\nu}} \tag{18.634} $$

after squaring both sides; remember that W follows, as per Eq. (18.475), a normal distribution, so W² follows necessarily a χ²-distribution with a single degree of freedom, as enforced by Eq. (18.469) – whereas V follows, as per Eq. (18.476), a χ²-distribution with ν degrees of freedom. After redoing Eq. (18.634) to

$$ T^2 = \frac{\dfrac{W^2}{1}}{\dfrac{V}{\nu}}, \tag{18.635} $$

one attains the form conveyed by Eq. (18.553) for the ratio of two χ²-distributed random variables, each one normalized by its own number of degrees of freedom; hence, one concludes that T² follows indeed an F-distribution, with 1 and ν degrees of freedom, i.e.

$$ t^2\left\{\frac{W}{\sqrt{V/\nu}};\nu\right\} = F\left\{\frac{W^2}{V/\nu};1,\nu\right\} \tag{18.636} $$

– based on N{W;0,1} and χ²{V;ν}, with χ²{W²;1} accordingly implied – sequentially consistent with Eqs. (18.571) and (18.495). Combination of Eqs. (18.635) and (18.636) is equivalent to stating that

$$ F\{X^2;1,\nu\} = t^2\{X;\nu\} \tag{18.637} $$

for a generic random variable X – which is similar to Eq. (18.633). The result conveyed by Eq. (18.637), or Eq. (18.633) for that matter, is quite relevant as it sets the basis for multivariate statistical analysis – to be discussed later to further detail. Recall that the central limit theorem, as per Eq. (18.283), guarantees that

$$ \lim_{N\to\infty}\frac{\displaystyle\sum_{i=1}^{N}X_i - N\mu}{\sqrt{N\sigma^2}} \longrightarrow N\{X;0,1\}, \tag{18.638} $$

in the form of a standard normal distribution as per Eqs. (18.253) and (18.258) – arising asymptotically from N identically distributed random variables Xi, the sum of which exhibits mean Nμ and variance Nσ²; this theorem has been validated above for χ²-distributed variables – see Eq. (18.451), resorting to a rationale based on the moment-generating function. Under such asymptotic conditions, Eq. (18.280) may be used to write

$$ \lim_{N\to\infty}\overline{X} = \lim_{N\to\infty}\frac{\displaystyle\sum_{i=1}^{N}X_i}{N} \longrightarrow N\left\{X;\mu,\frac{\sigma^2}{N}\right\}; \tag{18.639} $$

Eq. (18.639) implies, in turn,

$$ \lim_{N\to\infty}\frac{\displaystyle\sum_{i=1}^{N}X_i}{N} = \mu, \tag{18.640} $$

because the actual variance, σ²/N, becomes negligible with unbounded N. On the other hand, one may revisit Eq. (18.553) that conveys the definition of an F-distributed random variable, W, at the expense of two χ²-distributed random variables, U and V, with ν1 and ν2 degrees of freedom, respectively, in the form

$$ \nu_1 W = \frac{U}{\dfrac{V}{\nu_2}} \tag{18.641} $$

– after having multiplied both sides by ν1. In view of Eq. (18.474), one may express V as the sum of ν2 independent and identically distributed χ²-variables Xi, i.e.

$$ V = \nu_2\,\frac{\displaystyle\sum_{i=1}^{\nu_2}X_i}{\nu_2}, \tag{18.642} $$

where each such variable possesses a single degree of freedom, i.e.

$$ X_i \longrightarrow \chi^2\{X_i;1\}, \tag{18.643} $$

or else νi = 1. When N ≡ ν2 is sufficiently large, Eq. (18.638) holds – and so does Eq. (18.640), which may be rephrased as

$$ \lim_{\nu_2\to\infty}\frac{\displaystyle\sum_{i=1}^{\nu_2}X_i}{\nu_2} = \mu, \tag{18.644} $$

or else

$$ \lim_{\nu_2\to\infty}\frac{V}{\nu_2} = \mu \tag{18.645} $$

in view of Eq. (18.642); combination of Eqs. (18.433) and (18.643) then supports

$$ \mu = E\{X_i\} = \nu_i = 1, \tag{18.646} $$


so Eq. (18.645) reduces to merely

$$ \lim_{\nu_2\to\infty}\frac{V}{\nu_2} = 1 \tag{18.647} $$

Insertion of Eq. (18.647) transforms Eq. (18.641) to

$$ \lim_{\nu_2\to\infty}\nu_1 W \longrightarrow \chi^2\{\nu_1 W;\nu_1\}, \tag{18.648} $$

because U/(V/ν2) then reduces to U/1 = U, which is χ²-distributed with ν1 degrees of freedom (by hypothesis); as expected, Eq. (18.648) corroborates the result conveyed by Eqs. (18.553) and (18.571). A final result worth deriving pertains to Euler's (integral) representation of the hypergeometric function; one should start by applying the property of Pochhammer's symbol conveyed by Eq. (2.267), to both numerator and denominator of (ψ)i/(ζ)i, according to

$$ \frac{(\psi)_i}{(\zeta)_i} = \frac{\dfrac{\Gamma\{\psi+i\}}{\Gamma\{\psi\}}}{\dfrac{\Gamma\{\zeta+i\}}{\Gamma\{\zeta\}}} = \frac{\Gamma\{\zeta\}\,\Gamma\{\psi+i\}}{\Gamma\{\psi\}\,\Gamma\{\zeta+i\}} = \frac{\Gamma\{\zeta\}}{\Gamma\{\psi\}\,\Gamma\{\zeta-\psi\}}\,\frac{\Gamma\{\zeta-\psi\}\,\Gamma\{\psi+i\}}{\Gamma\{\zeta+i\}} = \frac{\Gamma\{\psi+(\zeta-\psi)\}}{\Gamma\{\psi\}\,\Gamma\{\zeta-\psi\}}\,\frac{\Gamma\{\psi+i\}\,\Gamma\{\zeta-\psi\}}{\Gamma\{(\psi+i)+(\zeta-\psi)\}}, \tag{18.649} $$

together with multiplication and division by Γ{ζ − ψ}, and addition and subtraction of ψ to/from the arguments of Γ{ζ} and Γ{ζ + i}. One may now take advantage of Eq. (18.505) to further transform both fractions in Eq. (18.649) and thus get

$$ \frac{(\psi)_i}{(\zeta)_i} = \frac{B\{\psi+i,\zeta-\psi\}}{B\{\psi,\zeta-\psi\}} \tag{18.650} $$
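As a quick numerical sanity check, Eq. (18.650) can be confirmed in a few lines of Python; the sketch below uses only the standard library, builds each Beta function from Γ via Eq. (18.505), and takes arbitrary illustrative values of ψ, ζ, and i (any ζ > ψ > 0 would do).

```python
# Spot-check of Eq. (18.650): (psi)_i/(zeta)_i = B{psi+i, zeta-psi}/B{psi, zeta-psi}.
from math import gamma

def pochhammer(a, i):
    """Rising factorial (a)_i = a (a+1) ... (a+i-1)."""
    out = 1.0
    for k in range(i):
        out *= a + k
    return out

def beta(p, q):
    """Beta function via Eq. (18.505), B{p,q} = Gamma{p} Gamma{q} / Gamma{p+q}."""
    return gamma(p) * gamma(q) / gamma(p + q)

psi, zeta, i = 0.7, 2.3, 5          # illustrative parameters (zeta > psi > 0)
lhs = pochhammer(psi, i) / pochhammer(zeta, i)
rhs = beta(psi + i, zeta - psi) / beta(psi, zeta - psi)
print(lhs, rhs)                      # the two sides agree to machine precision
```

The agreement holds for any admissible parameter choice, since both sides collapse to the same ratio of gamma functions.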

At this stage, one should recall that the product of Γ{ξ} by Γ{ζ} may be written as

$$ \Gamma\{\xi\}\,\Gamma\{\zeta\} = \int_0^{\infty} t^{\xi-1}e^{-t}\,dt \int_0^{\infty} s^{\zeta-1}e^{-s}\,ds = \int_0^{\infty}\!\!\int_0^{\infty} e^{-(t+s)}\,t^{\xi-1}s^{\zeta-1}\,dt\,ds, \tag{18.651} $$

after having taken Eq. (18.403) on board – besides applying Fubini's theorem and lumping exponential functions; if

$$ t \equiv xy \tag{18.652} $$

and

$$ s \equiv x(1-y) \tag{18.653} $$

are introduced as auxiliary variables, then the absolute value of the corresponding Jacobian determinant becomes

$$ |J| = \left|\begin{array}{cc} \dfrac{\partial t}{\partial x} & \dfrac{\partial t}{\partial y}\\[2mm] \dfrac{\partial s}{\partial x} & \dfrac{\partial s}{\partial y} \end{array}\right| = \left|\begin{array}{cc} y & x\\ 1-y & -x \end{array}\right| = \left|-yx-(1-y)x\right| = \left|-xy-x+xy\right| = \left|-x\right| = x \tag{18.654} $$


– so one readily obtains

$$ dt\,ds = |J|\,dx\,dy = x\,dx\,dy \tag{18.655} $$

as nuclear tool for double change in integration variables. On the other hand, ordered addition of Eqs. (18.652) and (18.653) unfolds

$$ t+s = xy + x(1-y) = xy + x - xy = x; \tag{18.656} $$

whereas ordered division thereof gives rise to

$$ \frac{t}{s} = \frac{xy}{x(1-y)} = \frac{y}{1-y}, \tag{18.657} $$

which eventually yields

$$ y = \frac{\dfrac{t}{s}}{1+\dfrac{t}{s}} = \frac{t}{s+t} \tag{18.658} $$

upon isolation of y, and multiplication of numerator and denominator by s. Therefore, 0 < t < ∞ and 0 < s < ∞ as per Eq. (18.651) imply 0 < y < 1 and 0 < x < ∞ – since x → 0 when s → 0 and t → 0, and x → ∞ when s → ∞ or t → ∞ as per Eq. (18.656), complemented by y → 0 when t → 0 or s → ∞, and y → 1 when t → ∞ as per Eq. (18.658). Combination with Eqs. (18.652), (18.653), (18.655), and (18.656) supports transformation of Eq. (18.651) to

$$ \begin{aligned} \Gamma\{\xi\}\,\Gamma\{\zeta\} &= \int_0^{1}\!\!\int_0^{\infty} e^{-x}\,(xy)^{\xi-1}\big(x(1-y)\big)^{\zeta-1}\,x\,dx\,dy\\ &= \int_0^{1}\!\!\int_0^{\infty} e^{-x}\,x^{\xi-1}y^{\xi-1}\,x^{\zeta-1}(1-y)^{\zeta-1}\,x\,dx\,dy\\ &= \int_0^{1}\!\!\int_0^{\infty} e^{-x}\,x^{\xi+\zeta-1}\,y^{\xi-1}(1-y)^{\zeta-1}\,dx\,dy\\ &= \int_0^{\infty} e^{-x}\,x^{\xi+\zeta-1}\,dx \int_0^{1} y^{\xi-1}(1-y)^{\zeta-1}\,dy, \end{aligned} \tag{18.659} $$

– where factors alike were meanwhile collapsed, and Fubini's theorem was applied again; in view of Eq. (18.403), one realizes that the first integral in Eq. (18.659) is but Γ{ξ + ζ}, i.e.

$$ \Gamma\{\xi\}\,\Gamma\{\zeta\} = \Gamma\{\xi+\zeta\}\int_0^1 y^{\xi-1}(1-y)^{\zeta-1}\,dy, \tag{18.660} $$

while division of both sides by Γ{ξ + ζ} gives, in turn, rise to

$$ \frac{\Gamma\{\xi\}\,\Gamma\{\zeta\}}{\Gamma\{\xi+\zeta\}} = B\{\xi,\zeta\} = \int_0^1 y^{\xi-1}(1-y)^{\zeta-1}\,dy \tag{18.661} $$


at the expense of Eq. (18.505). The result conveyed by Eq. (18.661) may then be used to reformulate Eq. (18.650) to

$$ \frac{(\psi)_i}{(\zeta)_i} = \frac{1}{B\{\psi,\zeta-\psi\}}\int_0^1 y^{\psi+i-1}(1-y)^{\zeta-\psi-1}\,dy \tag{18.662} $$
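The Beta-function integral of Eq. (18.661), which underlies Eq. (18.662), can likewise be spot-checked numerically; the sketch below compares the Γ-based value with a composite Simpson quadrature of the integrand, for arbitrary illustrative ξ and ζ (chosen ≥ 1 so the integrand stays finite at the endpoints).

```python
# Numerical confirmation of Eq. (18.661):
# B{xi, zeta} = Gamma{xi} Gamma{zeta} / Gamma{xi+zeta} = int_0^1 y^(xi-1) (1-y)^(zeta-1) dy.
from math import gamma

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

xi, zeta = 3.0, 4.0                  # illustrative parameters
via_gamma = gamma(xi) * gamma(zeta) / gamma(xi + zeta)
via_integral = simpson(lambda y: y**(xi - 1) * (1 - y)**(zeta - 1), 0.0, 1.0)
print(via_gamma, via_integral)       # both equal B{3,4} = 1/60
```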

The hypergeometric function, as per Eq. (18.541), may then look like

$$ \begin{aligned} {}_2H_1\{\xi,\psi,\zeta;x\} &= \sum_{i=0}^{\infty}\frac{(\xi)_i\,x^i}{i!}\,\frac{(\psi)_i}{(\zeta)_i} = \sum_{i=0}^{\infty}\frac{(\xi)_i\,x^i}{i!}\,\frac{1}{B\{\psi,\zeta-\psi\}}\int_0^1 y^{\psi+i-1}(1-y)^{\zeta-\psi-1}\,dy\\ &= \frac{1}{B\{\psi,\zeta-\psi\}}\int_0^1 y^{\psi-1}(1-y)^{\zeta-\psi-1}\sum_{i=0}^{\infty}\frac{(\xi)_i\,(xy)^i}{i!}\,dy \end{aligned} \tag{18.663} $$

with the aid of Eq. (18.662) – where the constant factor was taken off the kernel, integral and summation signs were exchanged for their linearity, and xⁱ and yⁱ collapsed. Equation (2.278) may now be revisited as

$$ (1-xy)^{-\xi} = \sum_{i=0}^{\infty}\frac{(\xi)_i\,(xy)^i}{i!} \tag{18.664} $$

after setting x equal to xy and z equal to ξ, so Eq. (18.663) will end up as

$$ {}_2H_1\{\xi,\psi,\zeta;x\} = \frac{1}{B\{\psi,\zeta-\psi\}}\int_0^1 y^{\psi-1}(1-y)^{\zeta-\psi-1}(1-xy)^{-\xi}\,dy \tag{18.665} $$
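Euler's representation, Eq. (18.665), lends itself to a direct numerical test: the Gauss series of Eq. (18.541) and the integral form should coincide wherever both converge. The sketch below (stdlib only) does so for illustrative parameters with ζ > ψ > 0 and |x| < 1, picked so that the integrand stays finite at both endpoints.

```python
# Comparison of the 2H1 series, Eq. (18.541), with Euler's integral, Eq. (18.665).
from math import gamma

def pochhammer(a, i):
    out = 1.0
    for k in range(i):
        out *= a + k
    return out

def hyp2f1_series(xi, psi, zeta, x, terms=80):
    """Truncated Gauss series; i! is evaluated as Gamma{i+1}."""
    return sum(pochhammer(xi, i) * pochhammer(psi, i) / pochhammer(zeta, i)
               * x**i / gamma(i + 1) for i in range(terms))

def hyp2f1_euler(xi, psi, zeta, x, n=4000):
    """Euler integral of Eq. (18.665), via composite Simpson quadrature."""
    B = gamma(psi) * gamma(zeta - psi) / gamma(zeta)
    f = lambda y: y**(psi - 1) * (1 - y)**(zeta - psi - 1) * (1 - x * y)**(-xi)
    h = 1.0 / n
    s = f(0.0) + f(1.0)          # both endpoint values vanish for psi > 1, zeta - psi > 1
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3 / B

a = hyp2f1_series(0.5, 2.0, 5.0, -0.3)
b = hyp2f1_euler(0.5, 2.0, 5.0, -0.3)
print(a, b)
```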

Once in possession of Eq. (18.665), one may rewrite Eq. (18.606) as

$$ \begin{aligned} P_F\{W<w\} &= \frac{\left(\dfrac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}}{B\left\{\dfrac{\nu_1}{2},\dfrac{\nu_2}{2}\right\}}\int_0^1\left(\frac{\tilde w}{w}\,w\right)^{\frac{\nu_1}{2}-1}\left(1+\frac{\nu_1}{\nu_2}\,\frac{\tilde w}{w}\,w\right)^{-\frac{\nu_1+\nu_2}{2}}w\;d\frac{\tilde w}{w}\\ &= \frac{\left(\dfrac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}}{B\left\{\dfrac{\nu_1}{2},\dfrac{\nu_2}{2}\right\}}\,w^{\frac{\nu_1}{2}}\int_0^1\left(\frac{\tilde w}{w}\right)^{\frac{\nu_1}{2}-1}\left(1-\left(-\frac{\nu_1}{\nu_2}\,w\right)\frac{\tilde w}{w}\right)^{-\frac{\nu_1+\nu_2}{2}}d\frac{\tilde w}{w} \end{aligned} \tag{18.666} $$

– with $\tilde w$ denoting the (dummy) variable of integration – following multiplication and division by $w^{\frac{\nu_1}{2}-1}$ or w (as appropriate), which then allows $w^{\frac{\nu_1}{2}}$ be taken off the kernel; multiplication of the kernel by $(1-\tilde w/w)^0 = 1$ converts Eq. (18.666) to

$$ P_F\{W<w\} = \frac{\left(\dfrac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}}}{B\left\{\dfrac{\nu_1}{2},\dfrac{\nu_2}{2}\right\}}\,w^{\frac{\nu_1}{2}}\int_0^1\left(\frac{\tilde w}{w}\right)^{\frac{\nu_1}{2}-1}\left(1-\frac{\tilde w}{w}\right)^{0}\left(1-\left(-\frac{\nu_1}{\nu_2}\,w\right)\frac{\tilde w}{w}\right)^{-\frac{\nu_1+\nu_2}{2}}d\frac{\tilde w}{w} \tag{18.667} $$


To simplify notation in Eq. (18.667), one may hereafter enforce

$$ y \equiv \frac{\tilde w}{w}, \tag{18.668} $$

$$ \psi \equiv \frac{\nu_1}{2}, \tag{18.669} $$

$$ \zeta - \psi - 1 = 0, \tag{18.670} $$

$$ x \equiv -\frac{\nu_1}{\nu_2}\,w, \tag{18.671} $$

and

$$ \xi \equiv \frac{\nu_1+\nu_2}{2}; \tag{18.672} $$

combination of Eqs. (18.669) and (18.670) generates, in turn,

$$ \zeta = 1+\psi = 1+\frac{\nu_1}{2} \tag{18.673} $$

Moreover,

$$ \frac{1}{B\{\psi,\zeta-\psi\}} = \frac{1}{B\left\{\dfrac{\nu_1}{2},1\right\}} = \frac{\Gamma\left\{\dfrac{\nu_1}{2}+1\right\}}{\Gamma\left\{\dfrac{\nu_1}{2}\right\}\Gamma\{1\}} \tag{18.674} $$

can be obtained with the aid of Eqs. (18.505), (18.669), and (18.670); in view of Eqs. (12.400) and (12.412), one obtains

$$ \frac{1}{B\{\psi,\zeta-\psi\}} = \frac{\dfrac{\nu_1}{2}\,\Gamma\left\{\dfrac{\nu_1}{2}\right\}}{0!\,\Gamma\left\{\dfrac{\nu_1}{2}\right\}} = \frac{\nu_1}{2} \tag{18.675} $$

from Eq. (18.674), via cancellation of Γ{ν1/2} between numerator and denominator. Therefore, Eq. (18.667) may be redone as

$$ \begin{aligned} P_F\{W<w\} &= \frac{\left(\dfrac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}} w^{\frac{\nu_1}{2}}}{B\left\{\dfrac{\nu_1}{2},\dfrac{\nu_2}{2}\right\}}\,B\{\psi,\zeta-\psi\}\cdot\frac{1}{B\{\psi,\zeta-\psi\}}\int_0^1 y^{\psi-1}(1-y)^{\zeta-\psi-1}(1-xy)^{-\xi}\,dy\\ &= \frac{\left(\dfrac{\nu_1}{\nu_2}\,w\right)^{\frac{\nu_1}{2}}}{\dfrac{\nu_1}{2}\,B\left\{\dfrac{\nu_1}{2},\dfrac{\nu_2}{2}\right\}}\;{}_2H_1\left\{\frac{\nu_1+\nu_2}{2},\frac{\nu_1}{2},1+\frac{\nu_1}{2};-\frac{\nu_1}{\nu_2}\,w\right\} \end{aligned} \tag{18.676} $$

upon multiplication and division by B{ψ, ζ − ψ}, and at the expense of Eqs. (18.665), (18.668)–(18.672), and (18.675) – where division of both numerator and denominator by ν1/2 finally retrieves Eq. (18.607), i.e. the relationship of the cumulative probability function of the F-distribution to the hypergeometric function.
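The closed form of Eq. (18.676) can be verified against direct quadrature of the F density; the stdlib-only sketch below does so for illustrative values ν1 = 4, ν2 = 10, and w = 1.5 (so the hypergeometric argument, −ν1w/ν2 = −0.6, lies inside the circle of convergence of the Gauss series).

```python
# Cross-check of Eq. (18.676): P_F{W < w} by quadrature of the F density
# versus the 2H1-based closed form.
from math import gamma

def beta(p, q):
    return gamma(p) * gamma(q) / gamma(p + q)

def hyp2f1(xi, psi, zeta, x, terms=400):
    """Gauss series for 2H1{xi, psi, zeta; x}, convergent for |x| < 1."""
    term, total = 1.0, 0.0
    for i in range(terms):
        total += term
        term *= (xi + i) * (psi + i) / ((zeta + i) * (i + 1)) * x
    return total

def F_cdf_quadrature(w, n1, n2, n=4000):
    """P_F{W < w} by composite Simpson integration of the F density."""
    c = (n1 / n2)**(n1 / 2) / beta(n1 / 2, n2 / 2)
    f = lambda u: c * u**(n1 / 2 - 1) * (1 + n1 * u / n2)**(-(n1 + n2) / 2)
    h = w / n
    s = f(0.0) + f(w)            # n1 >= 2 keeps the integrand finite at the origin
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3

def F_cdf_hypergeometric(w, n1, n2):
    """P_F{W < w} as per Eq. (18.676)."""
    pref = (n1 * w / n2)**(n1 / 2) / ((n1 / 2) * beta(n1 / 2, n2 / 2))
    return pref * hyp2f1((n1 + n2) / 2, n1 / 2, 1 + n1 / 2, -n1 * w / n2)

q = F_cdf_quadrature(1.5, 4, 10)
hy = F_cdf_hypergeometric(1.5, 4, 10)
print(q, hy)
```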


By the same token, one may redo Eq. (18.539) to

$$ \begin{aligned} P_t\{T<t\} &= \frac12 + \frac{1}{\nu^{\frac12}\,B\left\{\dfrac12,\dfrac{\nu}{2}\right\}}\int_0^{t}\left(1+\frac{\tilde t^{\,2}}{\nu}\right)^{-\frac{\nu+1}{2}}\frac{\nu}{2\tilde t}\;d\frac{\tilde t^{\,2}}{\nu}\\ &= \frac12 + \frac{1}{2\,B\left\{\dfrac12,\dfrac{\nu}{2}\right\}}\int_0^{\frac{t^2}{\nu}}\left(1+\frac{\tilde t^{\,2}}{\nu}\right)^{-\frac{\nu+1}{2}}\left(\frac{\tilde t^{\,2}}{\nu}\right)^{-\frac12}d\frac{\tilde t^{\,2}}{\nu} \end{aligned} \tag{18.677} $$

– with $\tilde t$ denoting the (dummy) variable of integration – via multiplication and division by $2\tilde t/\nu$, and splitting of $\tilde t$ as $\nu^{\frac12}\left(\tilde t^{\,2}/\nu\right)^{\frac12}$, followed by cancellation of $\nu^{\frac12}$ between numerator and denominator, coupled with moving of $\left(\tilde t^{\,2}/\nu\right)^{-\frac12}$ to the kernel. Definition of the ratio of $\tilde t^{\,2}/\nu$ to $t^2/\nu$ as new (dummy) variable of integration allows further transformation of Eq. (18.677) to

$$ P_t\{T<t\} = \frac12 + \frac{1}{2\,B\left\{\dfrac12,\dfrac{\nu}{2}\right\}}\,\frac{t^2}{\nu}\int_0^{1}\left(1+\frac{t^2}{\nu}\,\frac{\tilde t^{\,2}/\nu}{t^2/\nu}\right)^{-\frac{\nu+1}{2}}\left(\frac{t^2}{\nu}\,\frac{\tilde t^{\,2}/\nu}{t^2/\nu}\right)^{-\frac12}d\frac{\tilde t^{\,2}/\nu}{t^2/\nu} \tag{18.678} $$

along with multiplication and division of $\tilde t^{\,2}/\nu$ by $t^2/\nu$ (since $t^2/\nu$ plays the role of constant with regard to $\tilde t$); the upper limit of integration was accordingly replaced by unity. To simplify notation, one may define (auxiliary) variable η as

$$ \eta \equiv \frac{\tilde t^{\,2}/\nu}{t^2/\nu}, \tag{18.679} $$

with Eq. (18.678) consequently turning to

$$ \begin{aligned} P_t\{T<t\} &= \frac12 + \frac{1}{2\,B\left\{\dfrac12,\dfrac{\nu}{2}\right\}}\,\frac{t^2}{\nu}\left(\frac{t^2}{\nu}\right)^{-\frac12}\int_0^{1}\eta^{-\frac12}\left(1-\left(-\frac{t^2}{\nu}\right)\eta\right)^{-\frac{\nu+1}{2}}d\eta\\ &= \frac12 + \frac{\left(t^2\right)^{\frac12}}{2\,\nu^{\frac12}\,B\left\{\dfrac12,\dfrac{\nu}{2}\right\}}\int_0^{1}\eta^{-\frac12}\left(1-\left(-\frac{t^2}{\nu}\right)\eta\right)^{-\frac{\nu+1}{2}}d\eta \end{aligned} \tag{18.680} $$

via cancellation of $\left(t^2/\nu\right)^{\frac12}$ between numerator and denominator, while t and $\nu^{\frac12}$ were taken out of the kernel; elementary algebraic reformulation converts Eq. (18.680) to

$$ P_t\{T<t\} = \frac12 + \frac{t}{2\,\nu^{\frac12}\,B\left\{\dfrac12,\dfrac{\nu}{2}\right\}}\int_0^1 \eta^{\frac12-1}\,(1-\eta)^{0}\left(1-\left(-\frac{t^2}{\nu}\right)\eta\right)^{-\frac{1+\nu}{2}}d\eta, \tag{18.681} $$

namely via splitting of exponent −1/2 as 1/2 − 1, and multiplication of the kernel by (1 − η)⁰ = 1. Comparative inspection of the integral in Eq. (18.681) with the one in Eq. (18.665) indicates that

$$ y \equiv \eta, \tag{18.682} $$

$$ \psi \equiv \frac12, \tag{18.683} $$

$$ \zeta - \psi - 1 = 0, \tag{18.684} $$

$$ x \equiv -\frac{t^2}{\nu}, \tag{18.685} $$

and

$$ \xi \equiv \frac{1+\nu}{2}; \tag{18.686} $$

whereas insertion of Eq. (18.683) permits isolation of ζ as

$$ \zeta = 1+\psi = 1+\frac12 = \frac32 \tag{18.687} $$

from Eq. (18.684). Furthermore, the reciprocal of B{ψ, ζ − ψ} will read

$$ \frac{1}{B\{\psi,\zeta-\psi\}} = \frac{\Gamma\left\{\dfrac12+\left(\dfrac32-\dfrac12\right)\right\}}{\Gamma\left\{\dfrac12\right\}\Gamma\left\{\dfrac32-\dfrac12\right\}} = \frac{\Gamma\left\{\dfrac32\right\}}{\Gamma\left\{\dfrac12\right\}\Gamma\{1\}}, \tag{18.688} $$

at the expense of Eqs. (18.505), (18.683), and (18.687); the fundamental property of the gamma function conveyed by Eq. (12.412), complemented by its relationship to the factorial function as per Eq. (12.400), permits further transformation to

$$ \frac{1}{B\{\psi,\zeta-\psi\}} = \frac{\dfrac12\,\Gamma\left\{\dfrac12\right\}}{0!\,\Gamma\left\{\dfrac12\right\}} = \frac12 \tag{18.689} $$

– where Γ{1/2} was meanwhile dropped from both numerator and denominator. Therefore, Eq. (18.681) may be reformulated as

$$ \begin{aligned} P_t\{T<t\} &= \frac12 + \frac{t}{2\,\nu^{\frac12}\,B\left\{\dfrac12,\dfrac{\nu}{2}\right\}}\,B\{\psi,\zeta-\psi\}\cdot\frac{1}{B\{\psi,\zeta-\psi\}}\int_0^1\eta^{\psi-1}(1-\eta)^{\zeta-\psi-1}(1-x\eta)^{-\xi}\,d\eta\\ &= \frac12 + \frac{t}{2\,\nu^{\frac12}\,B\left\{\dfrac12,\dfrac{\nu}{2}\right\}}\cdot 2\;{}_2H_1\left\{\frac{1+\nu}{2},\frac12,\frac32;-\frac{t^2}{\nu}\right\}\\ &= \frac12 + \frac{t}{\nu^{\frac12}\,B\left\{\dfrac12,\dfrac{\nu}{2}\right\}}\;{}_2H_1\left\{\frac{1+\nu}{2},\frac12,\frac32;-\frac{t^2}{\nu}\right\} \end{aligned} \tag{18.690} $$

following multiplication and division of the last term by 1/B{ψ, ζ − ψ} as per Eq. (18.689), and the aid of Eqs. (18.682), (18.683), and (18.685)–(18.687); hence, Eq. (18.540) – entailing the relationship of the cumulative probability function of the t-distribution to the hypergeometric function – is retrieved, again at the expense of Eq. (18.665).
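As with the F-distribution, the hypergeometric form of the t cumulative probability can be checked numerically; the stdlib-only sketch below compares Eq. (18.690) against Simpson quadrature of the t density, for illustrative t = 0.8 and ν = 5 (so the series argument −t²/ν = −0.128 converges rapidly).

```python
# Cross-check of Eq. (18.690): P_t{T < t} via the 2H1 closed form versus
# direct quadrature of the t density.
from math import gamma, sqrt

def beta(p, q):
    return gamma(p) * gamma(q) / gamma(p + q)

def hyp2f1(xi, psi, zeta, x, terms=400):
    """Gauss series for 2H1{xi, psi, zeta; x}, convergent for |x| < 1."""
    term, total = 1.0, 0.0
    for i in range(terms):
        total += term
        term *= (xi + i) * (psi + i) / ((zeta + i) * (i + 1)) * x
    return total

def t_cdf_hypergeometric(t, nu):
    """P_t{T < t} as per Eq. (18.690)."""
    return 0.5 + t / (sqrt(nu) * beta(0.5, nu / 2)) \
               * hyp2f1((1 + nu) / 2, 0.5, 1.5, -t * t / nu)

def t_cdf_quadrature(t, nu, n=4000):
    """P_t{T < t} by composite Simpson integration of the t density."""
    c = 1 / (sqrt(nu) * beta(0.5, nu / 2))
    f = lambda u: c * (1 + u * u / nu)**(-(nu + 1) / 2)
    h = t / n
    s = f(0.0) + f(t)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return 0.5 + s * h / 3

a = t_cdf_hypergeometric(0.8, 5)
b = t_cdf_quadrature(0.8, 5)
print(a, b)
```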


19 Statistical Hypothesis Testing

A statistical hypothesis is a hypothesis that is testable, on the basis of observing a process modeled by a set of random variables that follow a probability distribution known a priori. Hypothesis testing (or confirmatory data analysis) is a method of statistical inference that resorts to tests of significance to determine the probability that a statement is true, and at what likelihood such a statement may be accepted as true. Usually, two datasets are compared, or data obtained from sampling is compared against synthetic data produced via an idealized model; a working hypothesis is then put forward for the statistical relationship between the data, versus a complementary hypothesis. Therefore, the basic process of hypothesis testing consists of four sequential steps: (i) formulate the null hypothesis, H0 (commonly stated as if the observations are the result of pure chance), and the alternative hypothesis, H1 (commonly stated as if the observations unfold an actual effect, along with an unavoidable component of variation by chance); (ii) identify an appropriate test statistic that can be used to assess the truth of the null hypothesis (dependent on the nature of the data and of the test); (iii) compute the P-value, or associated probability that a test statistic at least as significant as the one determined from the sample data would be obtained, assuming that the null hypothesis held true (the smaller said probability, the stronger the evidence against the null hypothesis); and (iv) compare the P-value with an acceptable significance level, α (or threshold of probability, often 1% or 5%) – if P ≤ α, then the observed effect is statistically significant, so the null hypothesis is ruled out, and the alternative hypothesis is concomitantly accepted as valid. The probability of rejecting the null hypothesis (H0, expressed as =) is a typical function of five factors – whether the test is two-tailed (i.e. H1 is expressed as ≠) or one-tailed (i.e. H1 is expressed as either < or >), the value of α, the intrinsic variance of the data, the amount of deviation from H0, and the size of the sample. This rationale is illustrated in Fig. 19.1. Decision on whether or not to accept the null hypothesis is based on the actual value taken by the test statistic – the distribution of which is specified on the assumption that H0 holds true, and typically follows one of the continuous distributions discussed so far; if said value is very unlikely, then H0 should be rejected and H1 concomitantly accepted. In order to take this decision on a quantitative basis, a probability value α must be chosen a priori – below which H0 becomes sufficiently unlikely to be (safely) rejected; the portion(s) of the area below the probability density function curve accounting for α will be continuous whenever a unilateral test is at stake, or else split (equally) in half in the case of a bilateral test – with which is which being determined by the nature of H1 (i.e. < or >, versus ≠, respectively).

Mathematics for Enzyme Reaction Kinetics and Reactor Performance, First Edition. F. Xavier Malcata. © 2019 John Wiley & Sons Ltd. Published 2019 by John Wiley & Sons Ltd.
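The four steps above can be sketched in a few lines of Python; the example below is a two-tailed test of H0: μ = μ0 on a hypothetical sample, using the standard normal reference of the large-sample case for brevity (even though N is modest here), with the normal cumulative probability obtained from math.erf.

```python
# Minimal sketch of steps (i)-(iv) for a two-tailed location test; the sample
# data, mu0, and alpha are hypothetical illustrative choices.
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative probability, via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

sample = [5.2, 4.9, 5.6, 5.1, 4.7, 5.3, 5.0, 5.4, 4.8, 5.5]
mu0, alpha = 5.0, 0.05                             # step (i): H0 mu = mu0, H1 mu != mu0
n = len(sample)
xbar = sum(sample) / n
s2 = sum((x - xbar)**2 for x in sample) / (n - 1)  # variance estimate, Eq. (19.12)
z = (xbar - mu0) / sqrt(s2 / n)                    # step (ii): test statistic
p_value = 2 * (1 - normal_cdf(abs(z)))             # step (iii): two-tailed P-value
decision = "reject H0" if p_value <= alpha else "accept H0"   # step (iv)
print("z =", z, " P =", p_value, " ->", decision)
```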


Figure 19.1 Graphical representation of probability density function of (a–c) symmetrically and (d–f) asymmetrically distributed test statistic, S, describing the original population, and range of actual values for S describing a representative sample taken at random from said population – for which the null hypothesis, H0, stating that (a–c) S = 0 or (d–f) S = 1 would be accepted with a probability not below 1 − α, versus the alternative hypothesis, H1, stating that (a) S ≠ 0, (b) S < 0, (c) S > 0, (d) S ≠ 1, (e) S < 1, or (f) S > 1.

The process of distinguishing between the null hypothesis and the alternative hypothesis is aided by identification of two types of conceptual errors – i.e. type I and type II, and by specifying parametric limits – e.g. on how much type I error is allowable; this is illustrated in Table 19.1. The (maximum) magnitude of a type I error is often represented by α, whereas that associated with a type II error is represented by β. The size of a statistical test is the probability of incorrectly rejecting the null hypothesis – or false positive rate; α represents the upper bound imposed a priori on said size, i.e. the maximum exposure to erroneously rejecting H0 – whereas the probability of the test incorrectly accepting H0 (when H1 holds true) is termed β, with 1 − β referring to power of test. Said types of errors are inversely related to each other; as α increases, β decreases – and vice versa. The type I error rate is usually set in advance;


Table 19.1 Description of types of statistical errors and (given) critical value, and relationship thereof with underlying null hypothesis, H0, and alternative hypothesis, H1, in the case of unilateral testing – with indication of associated probability.

                  Accept H0                            Accept H1
H0 is true        Right decision                       Wrong decision (type I error)
                  (probability 1 − α)                  (probability α)
H1 is true        Wrong decision (type II error)       Right decision
                  (probability β)                      (probability 1 − β)

In every entry, the decision arises from comparison of the actual value of the statistic with the (given) critical value, upon the probability density function(s) of said statistic under H0 and H1.
the type II error rate for a given test is much harder to ascertain, as it requires a priori estimation of the distribution of H1 (which is usually unknown). Decision on whether a given estimate of a model parameter is acceptable from a statistical point of view may be based on either a test of complementary hypotheses or, equivalently, the corresponding inference intervals. Consider, in this regard,

$$ \begin{aligned} H_0&:\ \beta = \beta_0\\ H_1&:\ \beta \ne \beta_0 \end{aligned} \tag{19.1} $$

– as statement of the null hypothesis, H0, that the true value of parameter β in the said model is equal to a given value β0, against the alternative hypothesis, H1, that the true value of β is different from β0; H0, as per Eq. (19.1), will be rejected at significance level α if, and only if, the bilateral (1 − α)-inference interval of β, given by

$$ \beta = b \pm S_{crt}\left\{\frac{\alpha}{2}\right\}\sigma, \tag{19.2} $$

excludes β0 – where σ² denotes true variance (which may be replaced by an estimate thereof, s²), Scrt{α/2} denotes critical value of suitable statistic S taken at half the aforementioned significance level, and b denotes estimated value for the parameter under scrutiny. This may be mathematically expressed as

$$ P\left\{\beta=\beta_0 \,\middle|\, \beta_0 < b - S_{crt}\left\{\frac{\alpha}{2}\right\}\sigma \;\vee\; \beta_0 > b + S_{crt}\left\{\frac{\alpha}{2}\right\}\sigma\right\} \le \alpha \tag{19.3} $$
in the case of a bilateral test using a symmetrically distributed statistic – i.e. there is a probability, not exceeding α, that β = β0 is valid, knowing that β0 < b − Scrt{α/2}σ or β0 > b + Scrt{α/2}σ. This type of reasoning, in the case of a unilateral test, imposes that the inference interval be compatible with the nature of H1, i.e.

$$ P\left\{\beta=\beta_0 \,\middle|\, \beta_0 < b - S_{crt}\{\alpha\}\sigma\right\} \le \alpha, \tag{19.4} $$

corresponding to

$$ \begin{aligned} H_0&:\ \beta=\beta_0\\ H_1&:\ \beta>\beta_0 \end{aligned} \tag{19.5} $$

– i.e. there is a probability, not exceeding α, that β = β0 is valid, knowing that β0 < b − Scrt{α}σ; or else

$$ P\left\{\beta=\beta_0 \,\middle|\, \beta_0 > b + S_{crt}\{\alpha\}\sigma\right\} \le \alpha, \tag{19.6} $$

associated with

$$ \begin{aligned} H_0&:\ \beta=\beta_0\\ H_1&:\ \beta<\beta_0 \end{aligned} \tag{19.7} $$

– meaning that there is a probability, not exceeding α, that β = β0 is valid, knowing that β0 > b + Scrt{α}σ. The above reasoning and notation are justified whenever the test statistic at stake is symmetrically distributed – as happens with a normal or a t-distribution, relevant for estimation of the inference interval of parameter β for which an estimate b is available (to be discussed below); otherwise, an asymmetric inference interval will result in the case of a bilateral test. The latter situation, in particular, may be phrased as

$$ y = \left]\, \hat y - S_{crt,1}\left\{1-\frac{\alpha}{2}\right\}\sigma,\ \hat y + S_{crt,2}\left\{\frac{\alpha}{2}\right\}\sigma \,\right[_{1-\alpha} \tag{19.8} $$

in general, where Scrt,1{1 − α/2} and Scrt,2{α/2} denote the (lower and upper, respectively) critical values for the appropriate statistic; this happens namely with an F-distribution, relevant to establish the inference band associated with prediction ŷ as per a given model encompassing y (as also developed below to further detail). Equation (19.8) implies


$$ P\left\{y=y_0 \,\middle|\, y_0 \in \left]\, \hat y - S_{crt,1}\left\{1-\frac{\alpha}{2}\right\}\sigma,\ \hat y + S_{crt,2}\left\{\frac{\alpha}{2}\right\}\sigma \,\right[\right\} \ge 1-\alpha, \tag{19.9} $$

consistent with

$$ \begin{aligned} H_0&:\ y=y_0\\ H_1&:\ y \ne y_0 \end{aligned} \tag{19.10} $$

as underlying statistical hypotheses – where y0 denotes the given value of y to be tested; in other words, there is a probability, not below 1 − α, that y = y0 is valid when y0 lies between ŷ − Scrt,1{1 − α/2}σ and ŷ + Scrt,2{α/2}σ. Some of the most useful (and currently used) statistical tests are summarized in Table 19.2 – with indication of major features, underlying test statistic, and reference distribution. In the case of location tests – involving typically a test for the mean, μ, being equal to a given value, μ0 – the test statistic looks like Eq. (18.253) when the sample, with variance s², used to estimate the population variance, σ², is large enough; this is so because large N implies that s essentially coincides with σ. Therefore, a (standard) normal distribution of the type conveyed by Eq. (18.258) will be in order. When the sample is small, s² deviates considerably from σ²; if the original sample data, W, are normally distributed, then their square, or a sum thereof used to estimate variance, V ≡ W², will follow a χ²-distribution, in agreement with Eqs. (18.469) or (18.474), respectively. The associated T variable, as per Eq. (18.478), suggests a t-distribution for the test statistic, in agreement with Eq. (18.495). A dispersion test resorts, in essence, to an estimate of variance obtained from the sample – taken from a normally distributed population, and thus abiding to Eq. (18.469); therefore, a χ²-statistic is utilized as reference when σ² = σ₀² consubstantiates H0, with s² estimated as per Eq. (19.12). The null hypothesis σ²A = σ²B, applying when two populations are compared with each other, has classically been replaced by (the equivalent statement) σ²A/σ²B = 1, so two estimates of variance, obtained from as many samples – each one drawn from one population – will be available. Hence, the ratio of the squares of normally distributed variables in the first place – see Eq. (18.550) pertaining to U with νA = NA − 1 degrees of freedom, and Eq. (18.551) encompassing V with νB = NB − 1 degrees of freedom, as well as Eq. (18.553) encompassing W – is to be selected as test statistic, which follows an F-distribution, as per Eq. (18.571). Although the source population(s) are frequently hypothesized to follow a normal distribution – thus confirming the importance of such a statistical behavior – sufficiently large samples permit the said constraint be circumvented; this is a consequence of the central limit theorem, as explored before. On the other hand, the number of degrees of freedom of any reference χ²-, t-, or F-statistic matches the number of experimental points utilized to estimate variance – subtracted of the number of averages used for said calculation (i.e. 1 for each population at stake). The estimator x̄ of the mean (or expected value), μ, of a given population based on a single sample constituted by N elements is given by

$$ \overline{x} \equiv \frac{\displaystyle\sum_{i=1}^{N}x_i}{N} \approx \mu, \tag{19.11} $$

as a result of Eq. (18.277); the associated estimator s2 of variance of the population, σ 2, reads


Table 19.2 Most common statistical tests, and associated features in terms of nature, testing hypotheses, and reference distribution, as well as requirements in terms of population(s) and sample(s).

Dispersion (variance), one population:
  population – normal, with parameter σ₀²; sample – one, of any size N, with estimate s²;
  null hypothesis (H0) – σ² = σ₀²; test statistic – (N − 1)s²/σ₀²; reference distribution – χ²{N − 1}.

Dispersion (variance), two populations:
  populations – normal, with parameters σ²A, σ²B; samples – two, of sizes NA, NB, with estimates s²A, s²B;
  null hypothesis (H0) – σ²A/σ²B = 1; test statistic – s²A/s²B; reference distribution – F{NA − 1, NB − 1}.

Location (mean), one population:
  population – any (large N) or normal (small N), with parameter μ0; sample – one, of size N, with estimates x̄, s²;
  null hypothesis (H0) – μ = μ0; test statistic – (x̄ − μ0)/(s/√N); reference distribution – N{0,1} (large N) or t{N − 1} (small N).

Location (mean), two populations:
  populations – any (large N) or normal (small N), with parameters μA, μB; samples – two, of sizes NA, NB, with estimates x̄A, x̄B, s²A, s²B;
  null hypothesis (H0) – μA = μB; test statistic – (x̄A − x̄B − (μA − μB))/(s√(1/NA + 1/NB)); reference distribution – N{0,1} (large N) or t{NA + NB − 2}ᵃ (small N).

ᵃ Assuming estimated variances are close to each other.
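The two-sample location test in the last entry of Table 19.2 can be sketched directly; the stdlib-only example below uses two hypothetical samples, computes the pooled s of Eq. (19.14), the test statistic under H0: μA = μB, and both the pooled degrees of freedom of Eq. (19.15) and the lumped ones of Eq. (19.16) – rounded down to the next lower integer, as recommended below for noninteger ν.

```python
# Two-sample t statistic with pooled s (Eq. (19.14)) and the degrees of
# freedom of Eqs. (19.15) and (19.16); A and B are hypothetical samples.
from math import sqrt, floor

def mean_var(x):
    """Sample mean (Eq. (19.11)) and variance (Eq. (19.12))."""
    n = len(x)
    m = sum(x) / n
    return m, sum((v - m)**2 for v in x) / (n - 1)

A = [4.8, 5.1, 5.0, 5.4, 4.9, 5.2]
B = [5.5, 5.7, 5.3, 5.9, 5.6]
(xA, s2A), (xB, s2B) = mean_var(A), mean_var(B)
nA, nB = len(A), len(B)

s = sqrt(((nA - 1) * s2A + (nB - 1) * s2B) / (nA + nB - 2))   # pooled s, Eq. (19.14)
t = (xA - xB) / (s * sqrt(1 / nA + 1 / nB))                   # statistic under muA = muB
nu_pooled = nA + nB - 2                                        # Eq. (19.15)

uA, uB = s2A / (nA - 1), s2B / (nB - 1)                        # Eq. (19.16)
inv_nu = (uA / (uA + uB))**2 / (nA - 1) + (uB / (uA + uB))**2 / (nB - 1)
nu_lumped = floor(1 / inv_nu)                                  # conservative integer nu
print(t, nu_pooled, nu_lumped)
```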

$$ s^2 \equiv \frac{\displaystyle\sum_{i=1}^{N}\left(x_i-\overline{x}\right)^2}{N-1} \approx \sigma^2 \tag{19.12} $$

– even though the variance of the population of sample averages looks like σ²/N as per Eq. (18.278); one degree of freedom is taken off in the denominator of Eq. (19.12), on account of using the average x̄ in the calculation, obtained in turn from all sample data as per Eq. (19.11). When two populations, A and B, are at stake, a pooled estimate of sample variance is required, according to

$$ \left(N_A+N_B-2\right)s^2 = \left(N_A-1\right)s_A^2 + \left(N_B-1\right)s_B^2, \tag{19.13} $$

where (NA + NB − 2)s² represents the overall sum of squares of residuals – split as

$$ \left(N_A-1\right)s_A^2 = \sum_{i=1}^{N_A}\left(x_i-\overline{x}_A\right)^2 $$

pertaining to the first sample, and

$$ \left(N_B-1\right)s_B^2 = \sum_{i=1}^{N_B}\left(x_i-\overline{x}_B\right)^2 $$

pertaining to the second sample – both in agreement with Eq. (19.12); after dividing both sides by NA + NB − 2, and taking square roots afterwards, Eq. (19.13) becomes

$$ s = \sqrt{\frac{N_A-1}{N_A+N_B-2}\,s_A^2 + \frac{N_B-1}{N_A+N_B-2}\,s_B^2}, \tag{19.14} $$

where (NA − 1)/(NA + NB − 2) and (NB − 1)/(NA + NB − 2) have been used as weight factors for variances s²A and s²B, respectively. Note that 1/NA + 1/NB under the square root in Table 19.2 places a heavier importance on the smaller sample, as NA < NB (say) implies 1/NA > 1/NB – after using $s=\sqrt{s^2}$ as template to state that $\sqrt{\frac{1}{N_A}s^2+\frac{1}{N_B}s^2}=s\sqrt{\frac{1}{N_A}+\frac{1}{N_B}}$; whereas x̄A − x̄B will be taken for estimate of μA − μB, which reflects Eq. (19.11) – and a standard normal distribution will serve as reference distribution. When sample sizes are small (or at least one of them is small, usually NA, NB < 30), Student's t-distribution is to be used as reference statistic – with a number of degrees of freedom, ν, accordingly given by

$$ \nu = N_A+N_B-2 \tag{19.15} $$

if s²A is not too disparate from s²B (as per the last entry in Table 19.2); the two degrees of freedom explicitly removed arise from calculation of the two means of the corresponding populations. If said variances are quite dissimilar, then

$$ \frac{1}{\nu} = \left(\frac{\dfrac{s_A^2}{N_A-1}}{\dfrac{s_A^2}{N_A-1}+\dfrac{s_B^2}{N_B-1}}\right)^2\frac{1}{N_A-1} + \left(\frac{\dfrac{s_B^2}{N_B-1}}{\dfrac{s_A^2}{N_A-1}+\dfrac{s_B^2}{N_B-1}}\right)^2\frac{1}{N_B-1} \tag{19.16} $$

should instead be employed to estimate the lumped number of degrees of freedom – where the reciprocal of the overall number of degrees of freedom, 1/ν, appears as a weighed average of the reciprocals of the numbers of degrees of freedom of the two samples, i.e. 1/νA = 1/(NA − 1) and 1/νB = 1/(NB − 1), with the squares of $\frac{s_A^2/(N_A-1)}{s_A^2/(N_A-1)+s_B^2/(N_B-1)}$ and $\frac{s_B^2/(N_B-1)}{s_A^2/(N_A-1)+s_B^2/(N_B-1)}$, respectively, playing the role of weight factors; in this case, NA + NB − 2 in the last entry of Table 19.2 is to be replaced by the reciprocal of Eq. (19.16). If ν turns out to be noninteger, then the next lower integer should be taken instead, so as to


produce a conservative estimate of ν. For similar variances and similar numbers of degrees of freedom characterizing the samples, Eq. (19.16) tends asymptotically to

$$ \lim_{\substack{s_B^2\to s_A^2\\ N_B\to N_A}}\frac{1}{\nu} = \left(\frac{\dfrac{s_A^2}{N_A-1}}{\dfrac{s_A^2}{N_A-1}+\dfrac{s_A^2}{N_A-1}}\right)^2\frac{1}{N_A-1} + \left(\frac{\dfrac{s_A^2}{N_A-1}}{\dfrac{s_A^2}{N_A-1}+\dfrac{s_A^2}{N_A-1}}\right)^2\frac{1}{N_A-1} = \frac{1}{4}\,\frac{2}{N_A-1} = \frac{1}{2\left(N_A-1\right)} = \frac{1}{N_A+N_A-2} = \lim_{N_B\to N_A}\frac{1}{N_A+N_B-2}, \tag{19.17} $$

so Eq. (19.15) will be retrieved as expected. If K random variables Xi are at stake – each producing a sample with N elements (xi,1, xi,2, …, xi,N) – then the variance of each one will read

$$ Var\{X_i\} \equiv E\left\{\left(X_i-E\{X_i\}\right)^2\right\};\quad i=1,2,\ldots,K, \tag{19.18} $$

following Eq. (18.15). The covariance of every two such variables, say, Xi and Xj, will then look like

$$ Cov\{X_i,X_j\} \equiv E\left\{\left(X_i-E\{X_i\}\right)\left(X_j-E\{X_j\}\right)\right\};\quad i=1,2,\ldots,K;\ j=1,2,\ldots,K, \tag{19.19} $$

in parallel to Eq. (18.252). When the said random variables are considered all together as a vector X, viz.

$$ X \equiv \begin{bmatrix} X_1\\ X_2\\ \vdots\\ X_K \end{bmatrix}, \tag{19.20} $$

the associated covariance matrix will look like

$$ Var\{X\} \equiv \begin{bmatrix} Var\{X_1\} & Cov\{X_1,X_2\} & \cdots & Cov\{X_1,X_K\}\\ Cov\{X_2,X_1\} & Var\{X_2\} & \cdots & Cov\{X_2,X_K\}\\ \vdots & \vdots & \ddots & \vdots\\ Cov\{X_K,X_1\} & Cov\{X_K,X_2\} & \cdots & Var\{X_K\} \end{bmatrix} \tag{19.21} $$

– with elements given by Eq. (19.18) if located in the main diagonal, or Eq. (19.19) if located off the main diagonal. The covariance matrix is always symmetric, because

$$ Cov\{X_j,X_i\} \equiv E\left\{\left(X_j-E\{X_j\}\right)\left(X_i-E\{X_i\}\right)\right\} = E\left\{\left(X_i-E\{X_i\}\right)\left(X_j-E\{X_j\}\right)\right\} \equiv Cov\{X_i,X_j\}, \tag{19.22} $$

following plain application of the definition conveyed by Eq. (4.107), complemented by Eq. (19.19) and commutativity of the product of two scalars. The matrix in Eq. (19.21) reduces to a diagonal matrix when the Xi's are independently distributed – i.e. Cov{Xi,Xj} = 0 for i ≠ j; and further reduces to a scalar matrix when all Xi's are identically distributed – i.e. Var{Xi} = Var{Xj}, for every i and j.
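A sample analogue of the covariance matrix in Eq. (19.21) is easily assembled in pure Python; the sketch below uses K = 3 hypothetical samples of N = 5 elements each, with the (N − 1)-denominator convention of Eq. (19.12), and the symmetry of Eq. (19.22) is apparent in the result.

```python
# Sample covariance matrix as per Eq. (19.21), built element by element from
# the estimators of Eqs. (19.18)/(19.19); data are hypothetical.
def cov(xi, xj):
    """Sample covariance of two equally sized samples (N - 1 denominator)."""
    n = len(xi)
    mi, mj = sum(xi) / n, sum(xj) / n
    return sum((a - mi) * (b - mj) for a, b in zip(xi, xj)) / (n - 1)

X = [                                   # K = 3 variables, N = 5 observations
    [1.0, 2.1, 2.9, 4.2, 5.0],
    [0.9, 2.0, 3.2, 3.9, 5.1],
    [5.0, 4.1, 3.0, 2.1, 0.8],
]
V = [[cov(a, b) for b in X] for a in X]  # Var{Xi} occupies the main diagonal
for row in V:
    print(["%.3f" % v for v in row])
```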


20 Linear Regression

Consider a linear model that expresses a single dependent variable, say y, as a function of M − 1 independent variables, say, x1, x2, …, xM−1 – according to

$$ y_i = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \cdots + \beta_{M-1} x_{M-1,i} + z_i;\quad i=1,2,\ldots,N; \tag{20.1} $$

here β0 denotes the independent parameter, β1, β2, …, βM−1 denote linear parameters, z denotes deviation of actual y from that estimated via the model (containing parameters β0, β1, …, βM−1), and subscript i refers to the ith experiment. Equation (20.1) is obviously equivalent to writing

$$ \begin{aligned} y_1 &= \beta_0 + \beta_1 x_{1,1} + \beta_2 x_{2,1} + \cdots + \beta_{M-1}x_{M-1,1} + z_1\\ y_2 &= \beta_0 + \beta_1 x_{1,2} + \beta_2 x_{2,2} + \cdots + \beta_{M-1}x_{M-1,2} + z_2\\ &\ \ \vdots\\ y_N &= \beta_0 + \beta_1 x_{1,N} + \beta_2 x_{2,N} + \cdots + \beta_{M-1}x_{M-1,N} + z_N \end{aligned} \tag{20.2} $$

for the whole set of experimental data; algebraic manipulation hereafter will obviously become quite tedious, especially when M and N are large – so a more concise approach is in order, namely, matrix-based manipulation. Toward this goal, one should accordingly depart from

$$ y = X\beta + z \tag{20.3} $$

as matrix analogue of Eq. (20.2) – provided that the (N × M) matrix of independent data, X, abides to

$$ X \equiv \begin{bmatrix} 1 & x_{1,1} & \cdots & x_{M-1,1}\\ 1 & x_{1,2} & \cdots & x_{M-1,2}\\ \vdots & \vdots & \ddots & \vdots\\ 1 & x_{1,N} & \cdots & x_{M-1,N} \end{bmatrix}, \tag{20.4} $$

and the (N × 1) vector of dependent data, y, looks like

$$ y \equiv \begin{bmatrix} y_1\\ y_2\\ \vdots\\ y_N \end{bmatrix}; \tag{20.5} $$

moreover, the (M × 1) vector of parameters, β, satisfies

$$ \beta \equiv \begin{bmatrix} \beta_0\\ \beta_1\\ \vdots\\ \beta_{M-1} \end{bmatrix}, \tag{20.6} $$

and the (N × 1) vector of residuals, z, is given by

$$ z \equiv \begin{bmatrix} z_1\\ z_2\\ \vdots\\ z_N \end{bmatrix} \tag{20.7} $$

– which will hereafter be assumed to follow a normal distribution. Note that each xj,i may be viewed as $x_{j,i}=\left.\frac{\partial y}{\partial \beta_j}\right|_{y=y_i}$ (i = 1, 2, …, N; j = 1, 2, …, M − 1), complemented by x0,i = 1 – so Eq. (20.4) may be rewritten as

$$ X = \frac{\partial y}{\partial \beta} \equiv V; \tag{20.8} $$

the form of Eq. (20.8) serves as bridge to nonlinear regression analysis, by permitting the equations obtained via linear regression encompassing X be used as template – with X to be replaced by (Jacobian matrix) V, for any (nonlinear) model where $X \ne \frac{\partial y}{\partial \beta}$.
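The matrix setup of Eqs. (20.3)–(20.7) can be sketched concretely; the stdlib-only example below assembles the design matrix X of Eq. (20.4) from hypothetical data with M − 1 = 2 independent variables, and then obtains the estimates b by solving the normal equations XᵀXb = Xᵀy via Gaussian elimination – the solution that the minimization of S, developed in the sequel, ultimately leads to.

```python
# Design matrix per Eq. (20.4) and least-squares estimates via the normal
# equations; x1, x2, and y below are hypothetical data.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def solve(A, c):
    """Solve A b = c by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [c[i]] for i, row in enumerate(A)]    # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            M[r] = [mr - f * mk for mr, mk in zip(M[r], M[k])]
    b = [0.0] * n
    for k in range(n - 1, -1, -1):
        b[k] = (M[k][n] - sum(M[k][j] * b[j] for j in range(k + 1, n))) / M[k][k]
    return b

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.0, 1.0, 4.0, 3.0, 6.0]
y  = [[6.1], [6.9], [12.2], [13.1], [18.8]]
X  = [[1.0, a, b] for a, b in zip(x1, x2)]   # Eq. (20.4), with M = 3
XtX = matmul(transpose(X), X)
Xty = [row[0] for row in matmul(transpose(X), y)]
b = solve(XtX, Xty)                           # estimates of beta0, beta1, beta2
print(b)
```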

20.1 Parameter Fitting

Calculation of a set of estimates b, viz.

$$ b \equiv \begin{bmatrix} b_0\\ b_1\\ \vdots\\ b_{M-1} \end{bmatrix}, \tag{20.9} $$

for parameters β as per Eq. (20.6) in the model conveyed by Eq. (20.3), may resort to the (scalar) sum of squares of residuals, S, according to

$$ S \equiv \sum_{i=1}^{N} z_i^{\,2} = \sum_{i=1}^{N}\left(y_i-\sum_{j=0}^{M-1}\beta_j x_{j,i}\right)^2 \tag{20.10} $$

– with the aid of Eq. (20.2); recalling the rules of multiplication of matrices, one realizes that Eq. (20.10) may be reformulated as

$$ S = z^{T}z, \tag{20.11} $$

at the expense of Eq. (20.7). Scalar S may accordingly be viewed as a (1 × 1) matrix – where zᵀ denotes transpose of z, viz.

$$ z^{T} = \begin{bmatrix} z_1 & z_2 & \cdots & z_N \end{bmatrix}, \tag{20.12} $$

consistent with Eqs. (4.47), (4.105), and (20.6). After rewriting Eq. (20.3) as

$$ z = y - X\beta \tag{20.13} $$

upon isolation of z, Eq. (20.11) may be alternatively coined as

$$ S = \left(y - X\beta\right)^{T}\left(y - X\beta\right); \tag{20.14} $$

the algebra of matrices has it that the transpose of a sum of matrices equals the sum of the transposes of its terms as per Eq. (4.114), so Eq. (20.14) turns to

$$ S = \left(y^{T} - \left(X\beta\right)^{T}\right)\left(y - X\beta\right) \tag{20.15} $$

– whereas the transpose of a product of matrices being equal to the product of transposes of the factors, in reverse order, allows further transformation to

$$ S = \left(y^{T} - \beta^{T}X^{T}\right)\left(y - X\beta\right), \tag{20.16} $$

in agreement with Eq. (4.120). The distributive property of the product of matrices then justifies reformulation of Eq. (20.16) to

$$ S = y^{T}\left(y - X\beta\right) - \beta^{T}X^{T}\left(y - X\beta\right) = y^{T}y - y^{T}X\beta - \beta^{T}X^{T}y + \beta^{T}X^{T}X\beta, \tag{20.17} $$

after having been applied twice; the aforementioned rule of transposition also supports y T Xβ

T

= βT X T yT

T

= β T X T y,

20 18

together with realization that the transpose of the transpose of a matrix retrieves the original matrix, see Eq. (4.110). Therefore, Eq. (20.17) becomes S = y T y −y T Xβ − y T Xβ

T

+ β T X T Xβ

20 19

upon combination with Eq. (20.18). Since Xβ is given by y − z as per Eq. (20.3), one may express yTXβ as y T Xβ = y T y −z = y T y− y T z,

20 20

along with elimination of parenthesis; transposal of both sides of Eq. (20.20), using the rules referred to above, unfolds y T Xβ

T

= y T y− y T z

T

= yT y

T

− yT z

T

= yT yT

T

− z T yT

T

= y T y −z T y 20 21


On the other hand, Eqs. (20.5) and (20.12) may be recovered to write

$$z^T y = \begin{bmatrix} z_1 & z_2 & \cdots & z_N \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix} = z_1 y_1 + z_2 y_2 + \cdots + z_N y_N = y_1 z_1 + y_2 z_2 + \cdots + y_N z_N = \begin{bmatrix} y_1 & y_2 & \cdots & y_N \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_N \end{bmatrix} = y^T z\tag{20.22}$$

– at the expense of the algorithm of multiplication of matrices, coupled with the commutative property of multiplication of scalars, and the aid again of Eq. (20.5); insertion of Eq. (20.22) transforms Eq. (20.21) to

$$\left( y^T X\beta \right)^T = y^T y - y^T z,\tag{20.23}$$

so $y^T y - y^T z$ may be eliminated between Eqs. (20.20) and (20.23) to get

$$y^T X\beta = \left( y^T X\beta \right)^T.\tag{20.24}$$

In view of the symmetry of matrix $y^T X\beta$ – made explicit by Eq. (20.24), after recalling Eq. (4.107) – one can simplify Eq. (20.19) to

$$S = y^T y - 2 y^T X\beta + \beta^T X^T X\beta;\tag{20.25}$$

this is indeed a more useful form of S, for subsequent estimation of parameters β. As will be fully justified later, determination of the best estimates b for parameters β should resort to minimization of the sum of squares of residuals as per Eq. (20.10) – described by

$$\left.\frac{\partial S}{\partial \beta}\right|_{\beta=b} = 0_{1 \times M}\tag{20.26}$$

as necessary condition or, equivalently,

$$\left.\frac{\partial S}{\partial \beta^T}\right|_{\beta=b} = \left( 0_{1 \times M} \right)^T = 0_{M \times 1}\tag{20.27}$$

in its transpose version; here $0_{1 \times M}$ denotes the row vector of zeros with M columns, while $0_{M \times 1}$ denotes the corresponding column vector of zeros. In view of Eq. (20.25), one obtains

$$\left.\frac{\partial S}{\partial \beta^T}\right|_{\beta=b} = \left.\frac{\partial \left( y^T y - 2 y^T X\beta + \beta^T X^T X\beta \right)}{\partial \beta^T}\right|_{\beta=b} = \left.\frac{\partial\, y^T y}{\partial \beta^T}\right|_{\beta=b} - \left.\frac{\partial\, 2 y^T X\beta}{\partial \beta^T}\right|_{\beta=b} + \left.\frac{\partial\, \beta^T X^T X\beta}{\partial \beta^T}\right|_{\beta=b}\tag{20.28}$$


as per the rule of differentiation of a sum of matrices (arising from linearity of the differential and addition operators). Equation (20.28) further turns to

$$\left.\frac{\partial S}{\partial \beta^T}\right|_{\beta=b} = 0_{M \times 1} - 2 \left( y^T X \right)^T \Big|_{\beta=b} + \left( X^T X + \left( X^T X \right)^T \right)\beta \Big|_{\beta=b} = \left( -2 X^T y + 2 X^T X\beta \right)\Big|_{\beta=b},\tag{20.29}$$

since $\partial/\partial \beta^T$ coincides with $\left( \partial/\partial \beta \right)^T$; and with the extra help of Eqs. (4.110) and (4.120) pertaining to matrix transposition, and Eq. (10.422) for x = β and A = $X^T X$ – besides the realization that (experimental) $y^T y$ and $y^T X$ are obviously independent of (model) parameters $\beta^T$ and β, respectively. Insertion of Eq. (20.29) transforms Eq. (20.27) to

$$-2 X^T y + 2 X^T X b = 0_{M \times 1}\tag{20.30}$$

– where addition of $2 X^T y$ to both sides, followed by division of the result by 2, produces

$$X^T X b = X^T y.\tag{20.31}$$

Premultiplication of both sides of Eq. (20.31) by $\left( X^T X \right)^{-1}$ then gives rise to

$$\left( X^T X \right)^{-1} \left( X^T X \right) b = \left( X^T X \right)^{-1} X^T y,\tag{20.32}$$

where $\left( X^T X \right)^{-1}$ being the inverse of $X^T X$ supports transformation to

$$I_M b = \left( X^T X \right)^{-1} X^T y\tag{20.33}$$

as per Eq. (4.124) – with $I_M$ denoting the (M × M) identity matrix; Eq. (20.33) readily degenerates to

$$b = \left( X^T X \right)^{-1} X^T y,\tag{20.34}$$

in view of Eq. (4.64). If N = M, then X (and thus $X^T$) are themselves square matrices, so their inverses may exist – and one will be able to redo Eq. (20.34) as

$$b\big|_{N=M} = X^{-1} \left( X^T \right)^{-1} X^T y = X^{-1} \left( \left( X^T \right)^{-1} X^T \right) y = X^{-1} I_M\, y = X^{-1} y,\tag{20.35}$$

at the expense of Eqs. (4.56) and (4.147); note that Eq. (20.35) becomes equivalent to Eq. (7.51) after isolation of b. One would in this case be led to interpolation – i.e. the model at stake would pass exactly through all experimental points, with b coinciding with β, and thus without degrees of freedom left to account for z. If N > M, then Eq. (20.34) must be used, as only $X^T X$ is a square matrix (and thus susceptible of inversion); in this case, N − M degrees of freedom will be available to estimate σ², and eventually permit construction of inference intervals for the parameters – as part of a regular exercise of linear regression analysis. Conversely, M − N parameters will appear as linear combinations of N (arbitrated) parameters when N < M.
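The normal equations (20.31) and their solution (20.34) translate directly into code. A minimal numerical sketch, not from the book, with invented data and names of our choosing; the result is cross-checked against NumPy's own least-squares routine:

```python
import numpy as np

# Made-up data set: N = 5 observations, M = 2 parameters (intercept + slope)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Design matrix X as per Eq. (20.4): x_{0,i} = 1 accounts for the intercept
X = np.column_stack([np.ones_like(x), x])

# Normal equations, Eq. (20.31): (X^T X) b = X^T y, i.e. Eq. (20.34)
b = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against the library least-squares solver
b_ref, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(b, b_ref))   # True
```

Solving the normal equations with `np.linalg.solve` avoids forming the explicit inverse of $X^T X$, which is numerically preferable to a literal transcription of Eq. (20.34).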


20.2 Residual Characterization

Once the vector of estimates b for parameter vector β is available as per Eq. (20.34), one may calculate the predicted dependent variables $\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_N$, i.e.

$$\hat{y} \equiv \begin{bmatrix} \hat{y}_1 \\ \hat{y}_2 \\ \vdots \\ \hat{y}_N \end{bmatrix} \equiv \left( y - z \right)\big|_{\beta \to b},\tag{20.36}$$

which calls for a visit to the model conveyed by Eq. (20.3) as

$$\hat{y} = X b;\tag{20.37}$$

insertion of Eq. (20.34) transforms Eq. (20.37) to

$$\hat{y} = X \left( X^T X \right)^{-1} X^T y.\tag{20.38}$$

Equation (20.38) often appears as

$$\hat{y} = H y,\tag{20.39}$$

with the (N × N) hat matrix, H, obviously defined by

$$H \equiv X \left( X^T X \right)^{-1} X^T;\tag{20.40}$$

such a designation results from its materializing a linear (matrix) transformation of y into $\hat{y}$, see Eq. (20.39), or a simple disguise that accounts for the said overhead hat, ^. The residuals, $\hat{z}$, associated with the b-based (estimated) model – not to be confounded with the (unknown) residuals, z, as per Eq. (20.13), associated with the β-based (exact) model – are defined as

$$\hat{z} \equiv \left( y - \hat{y} \right)\big|_{b};\tag{20.41}$$

Eq. (20.41) may be redone at the expense of Eq. (20.39) to

$$\hat{z} = y - H y = \left( I_N - H \right) y,\tag{20.42}$$

where y was factored out (for algebraic convenience) with the aid of Eq. (4.76). On the other hand,

$$H^T = \left( X \left( X^T X \right)^{-1} X^T \right)^T = \left( X^T \right)^T \left( \left( X^T X \right)^{-1} \right)^T X^T = X \left( \left( X^T X \right)^T \right)^{-1} X^T = X \left( X^T X \right)^{-1} X^T = H\tag{20.43}$$

results directly from Eq. (20.40), combined with the rules of transposal of a product of matrices, and the intrinsic exchangeability of transposal and inversion of matrices, as per Eqs. (4.120) and (4.165), respectively – further to the redundancy of transposition when applied twice sequentially; hence, the hat matrix is symmetric, as it coincides with its transpose, in agreement with Eq. (4.107). Furthermore,

$$\left( I_N - H \right)^T = I_N^T - H^T\tag{20.44}$$

in agreement with the rule of transposition of a sum of matrices labeled as Eq. (4.114) – where combination with Eq. (20.43) allows simplification to

$$\left( I_N - H \right)^T = I_N - H,\tag{20.45}$$


coupled with the realization that the identity matrix is intrinsically symmetric, i.e. $I_N^T = I_N$; Eq. (20.45) indicates that $I_N - H$ is also a symmetric matrix. A third interesting property can be ascertained after squaring the hat matrix as given by Eq. (20.40), i.e.

$$H^2 = X \left( X^T X \right)^{-1} X^T X \left( X^T X \right)^{-1} X^T = X \left( \left( X^T X \right)^{-1} \left( X^T X \right) \right) \left( X^T X \right)^{-1} X^T = X\, I_M \left( X^T X \right)^{-1} X^T = X \left( X^T X \right)^{-1} X^T = H,\tag{20.46}$$

at the expense of the associative property and the existence of a neutral element of multiplication of matrices, complemented with the definition of inverse matrix – see Eqs. (4.56), (4.61), and (4.124), respectively; therefore, H is idempotent – as it coincides with its square, thus also implying $H^n = H$ for n > 2, following iterated application of Eq. (20.46). Finally, it should be emphasized that

$$\left( I_N - H \right)^2 = \left( I_N - H \right)\left( I_N - H \right) = I_N I_N - I_N H - H I_N + H H = I_N - H - H + H^2 = I_N - 2H + H^2\tag{20.47}$$

– in view of the distributive property of multiplication of matrices, and the fact that pre- or post-multiplication of a matrix by the identity matrix leaves the former unchanged – where the idempotency of H conveyed by Eq. (20.46) may be called upon to support further simplification to

$$\left( I_N - H \right)^2 = I_N - 2H + H = I_N - H;\tag{20.48}$$

therefore, the idempotency of H extends to $I_N - H$. Based again on Eq. (20.42), one may apply the variance operator to both sides as

$$\mathrm{Var}\{\hat{z}\} = \mathrm{Var}\{\left( I_N - H \right) y\};\tag{20.49}$$

since y is a column vector and $I_N - H$ preceding it is the matrix coefficient of a linear transformation of y itself, one may resort to Eq. (18.235) to write

$$\mathrm{Var}\{\hat{z}\} = \left( I_N - H \right) \mathrm{Var}\{y\} \left( I_N - H \right)^T\tag{20.50}$$

– on the hypothesis that y is normally distributed. A similar application of operator Var to Eq. (20.3), i.e.

$$\mathrm{Var}\{y\} = \mathrm{Var}\{X\beta + z\},\tag{20.51}$$

may resort to Eqs. (18.235) and (20.51) to produce

$$\mathrm{Var}\{y\} = 1^2\, \mathrm{Var}\{X\beta\} + 2 \cdot 1 \cdot 1\, \mathrm{Cov}\{X\beta, z\} + 1^2\, \mathrm{Var}\{z\} = \mathrm{Var}\{X\beta\} + 2\, \mathrm{Cov}\{X\beta, z\} + \mathrm{Var}\{z\},\tag{20.52}$$

owing to the unit coefficients of both Xβ and z, and using Eq. (18.251) as template. The variance of Xβ is nil, because true values do not vary – whereas the assumed independent and identical normal distributions of the experimental data make z uncorrelated with X (or Xβ, for that matter), i.e. Cov{Xβ, z} = 0; consequently, Eq. (20.52) reduces to

$$\mathrm{Var}\{y\} = \mathrm{Var}\{z\},\tag{20.53}$$


thus confirming that y must be normally distributed, as hypothesized above – knowing that z was assumed normally distributed from the very beginning. The aforementioned homoscedasticity of experimental errors implies similar variances for all of them, and thus a scalar variance matrix, i.e.

$$\mathrm{Var}\{z\} = \begin{bmatrix} \sigma^2 & 0 & \cdots & 0 \\ 0 & \sigma^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma^2 \end{bmatrix} = \sigma^2 I_N\tag{20.54}$$

stemming from Eq. (19.21) – where their mutual independence accounts for the zeros off the diagonal; combination of Eqs. (20.53) and (20.54) consequently yields

$$\mathrm{Var}\{y\} = \sigma^2 I_N.\tag{20.55}$$

Insertion of Eq. (20.55) finally permits transformation of Eq. (20.50) to

$$\mathrm{Var}\{\hat{z}\} = \left( I_N - H \right) \sigma^2 I_N \left( I_N - H \right)^T,\tag{20.56}$$

where straightforward algebraic manipulation unfolds

$$\mathrm{Var}\{\hat{z}\} = \sigma^2 \left( I_N - H \right) I_N \left( I_N - H \right)^T = \sigma^2 \left( I_N - H \right)\left( I_N - H \right)^T,\tag{20.57}$$

as allowed by Eqs. (4.24) and (4.61); furthermore, Eq. (20.45) supports transformation of Eq. (20.57) to

$$\mathrm{Var}\{\hat{z}\} = \sigma^2 \left( I_N - H \right)\left( I_N - H \right) = \sigma^2 \left( I_N - H \right)^2,\tag{20.58}$$

whereas Eq. (20.48) permits extra simplification to

$$\mathrm{Var}\{\hat{z}\} = \sigma^2 \left( I_N - H \right).\tag{20.59}$$
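The collapse of the sandwich form (20.56) into (20.59) rests only on the symmetry and idempotency of $I_N - H$, so it can be verified numerically for any full-rank design matrix. A hedged sketch, with an arbitrary X and σ² of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma2 = 6, 0.25
X = np.column_stack([np.ones(N), rng.normal(size=N)])   # arbitrary full-rank design

H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat matrix, Eq. (20.40)
I = np.eye(N)

sandwich = (I - H) @ (sigma2 * I) @ (I - H).T   # Eq. (20.56)
reduced  = sigma2 * (I - H)                     # Eq. (20.59)

print(np.allclose(H, H.T))              # symmetry, Eq. (20.43): True
print(np.allclose(H @ H, H))            # idempotency, Eq. (20.46): True
print(np.allclose(sandwich, reduced))   # Eq. (20.56) equals Eq. (20.59): True
```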

Validity of Eq. (20.59) permits estimation of the variance of each particular observation, $\mathrm{Var}\{\hat{z}_i\}$ – corresponding to a particular set of values for the independent variables, relative to that predicted by the (linear) model – as

$$\mathrm{Var}\{\hat{z}_i\} = \sigma^2 \left( 1 - h_{i,i} \right);\tag{20.60}$$

here $h_{i,i}$ denotes the ith element (1 ≤ i ≤ N) of the diagonal of (square) matrix H, while 1 accounts for its identity matrix counterpart. Equation (20.60) is useful to decide whether a certain residual is excessively large, given the overall variability of the dataset – so as to gauge outliers that would otherwise likely disrupt goodness of fit, or even compromise significance of the regression at all; standardization would thus be desirable, leading to a nil mean and a unit standard deviation – as done when defining the standardized normal distribution via Eq. (18.253). Since the mean of the residuals is nil and the standard deviation of each one reads $\sigma \sqrt{1 - h_{i,i}}$ as per Eqs. (18.16) and (20.60), one should to advantage define standardized residuals, $\hat{z}_{\sigma,i}$, according to

$$\hat{z}_{\sigma,i} \equiv \frac{\hat{z}_i}{\sigma \sqrt{1 - h_{i,i}}}.\tag{20.61}$$

Oftentimes the true variance of the population, σ², is not known, so one must resort to an estimate thereof using the sample variance, s², as per Eq. (19.12) – in which case Eq. (20.61) should be replaced by

$$\hat{z}_{s,i} \equiv \frac{\hat{z}_i}{s \sqrt{1 - h_{i,i}}};\tag{20.62}$$

the $\hat{z}_{s,i}$'s are known as studentized residuals – because Student's t-distribution, rather than a regular normal distribution of residuals, would be at stake, as per Eqs. (18.478) and (18.495). A quantitative measure of how distant a given result is from its expected value – as predicted via the associated linear model, given the variability of the original population sampled and the set of coordinates consubstantiated by the independent variables – becomes therefore accessible through Eq. (20.62). For increased robustness of the conclusion on whether the ith point is an outlier, the variance should be estimated after taking said potential outlier off the sample; this is particularly important for relatively small sample sizes, or when the suspicious datum already appears excessively off the trend of the accompanying data.

20.3 Parameter Inference

20.3.1 Multivariate Models

According to the proof conveyed by Eq. (20.53), vector y is normally distributed – with mean

$$\mathrm{E}\{y\} \equiv \mu = X\beta,\tag{20.63}$$

consistent with the originally hypothesized model, i.e. Eq. (20.3), as well as variance

$$\mathrm{Var}\{y\} \equiv \Sigma = \sigma^2 I_N\tag{20.64}$$

as per Eq. (20.55). The issue now arises as to the distribution – and corresponding descriptors – of the vector of parameter estimates b. In view of Eq. (20.34), b may be viewed as the result Y of a linear transformation A + BX of vector X, with A and B as independent and linear (matrix) coefficients, respectively, using Eq. (18.216) as template; in the present case, A should be set equal to $0_{M \times 1}$, B should be set equal to $\left( X^T X \right)^{-1} X^T$, and X should be set equal to y. The analogues of Eqs. (18.234) and (18.235) will accordingly read

$$\mu_b \equiv \mathrm{E}\{b\} = \mathrm{E}\left\{ \left( X^T X \right)^{-1} X^T y \right\} = 0_{M \times 1} + \left( X^T X \right)^{-1} X^T \mathrm{E}\{y\} = \left( X^T X \right)^{-1} X^T \mu\tag{20.65}$$

and

$$\Sigma_b \equiv \mathrm{Var}\{b\} = \mathrm{Var}\left\{ \left( X^T X \right)^{-1} X^T y \right\} = \left( X^T X \right)^{-1} X^T\, \mathrm{Var}\{y\} \left( \left( X^T X \right)^{-1} X^T \right)^T = \left( X^T X \right)^{-1} X^T \Sigma \left( \left( X^T X \right)^{-1} X^T \right)^T,\tag{20.66}$$


respectively – with the help also of Eqs. (20.63) and (20.64). Further insertion of Eq. (20.63) permits transformation of Eq. (20.65) to

$$\mu_b = \left( X^T X \right)^{-1} X^T X\beta = \left( \left( X^T X \right)^{-1} \left( X^T X \right) \right)\beta = I_M\, \beta,\tag{20.67}$$

after applying the associative property of multiplication of matrices; whereas Eq. (20.66) turns to

$$\Sigma_b = \left( X^T X \right)^{-1} X^T \sigma^2 I_N \left( \left( X^T X \right)^{-1} X^T \right)^T = \sigma^2 \left( X^T X \right)^{-1} \left( X^T X \right) \left( \left( X^T X \right)^T \right)^{-1} = \sigma^2 I_M \left( \left( X^T X \right)^T \right)^{-1} = \sigma^2 \left( \left( X^T X \right)^T \right)^{-1}\tag{20.68}$$

after extra consideration of Eq. (20.64). To generate Eq. (20.68), the commutativity of multiplication of scalar by matrix, the associativity of multiplication of matrices – with the identity matrix serving as neutral factor thereof – and the definition of inverse matrix, as well as the rules of transposition and inversion of a product of two matrices, coupled with their composition, were all taken on board; Eqs. (20.67) and (20.68) may finally be simplified to

$$\mu_b = \beta\tag{20.69}$$

and

$$\Sigma_b = \sigma^2 \left( X^T X \right)^{-1},\tag{20.70}$$

respectively – because the transpose of an already transposed matrix, and multiplication by the identity matrix, leave the original one unchanged. Remember that a normally distributed variable x, with mean μ and variance σ², may be transformed to z as per Eq. (18.253) – which, in turn, implies

$$z\,z \equiv z^2 = \frac{x-\mu}{\sigma}\,\frac{x-\mu}{\sigma} = \frac{\left( x-\mu \right)^2}{\sigma^2} = \left( x-\mu \right) \frac{1}{\sigma^2} \left( x-\mu \right),\tag{20.71}$$

since the product of scalars is both associative and commutative; z follows a standard normal distribution in agreement with Eq. (18.258), where z² appears as the argument of an exponential function. As seen above, Eq. (20.34) assures that b follows a normal distribution, because it results from y via a linear transformation consubstantiated in $\left( X^T X \right)^{-1} X^T$ as multiplicative operator – provided that y is itself normally distributed, in agreement with Eq. (20.53) and the postulate that z is normally distributed. Such a normal distribution of b is described by mean $\mu_b$ as per Eq. (20.69) and variance $\Sigma_b$ as per Eq. (20.70), so one may proceed similarly and define a normalized variable z such that

$$z \equiv \sqrt{z_b^T z_b} \equiv \sqrt{\left( b - \mu_b \right)^T \Sigma_b^{-1} \left( b - \mu_b \right)} = \sqrt{\left( b - \beta \right)^T \Sigma_b^{-1} \left( b - \beta \right)}\tag{20.72}$$

– using Eq. (20.71) as template after taking square roots of both sides, further to the aid of Eq. (20.69). Transposition was utilized here to assure compatibility between matrices with regard to their multiplication – after recalling that the transpose of a scalar coincides with itself – while inversion was employed as the (matrix) equivalent to the (scalar) reciprocal. Variable z is thus supposed to be (standard) normally distributed; combination of Eq. (20.72) with Eq. (20.70) then produces

$$z = \sqrt{\left( b-\beta \right)^T \left( \sigma^2 \left( X^T X \right)^{-1} \right)^{-1} \left( b-\beta \right)} = \sqrt{\frac{\left( b-\beta \right)^T \left( X^T X \right) \left( b-\beta \right)}{\sigma^2}} \sim N\{Z; 0, 1\},\tag{20.73}$$

in view of the composition of matrix inversion with itself, and again the coincidence of inverse matrix with reciprocal scalar in the case of a trivial (1 × 1) matrix. It should be emphasized that the true σ² characterizing the original population is typically unknown, so it is to be estimated via s², obtained as variance of a sample of said population; under these circumstances, Eq. (20.73) ought to be replaced by

$$\tilde{z}^2 = \frac{\left( b-\beta \right)^T \left( X^T X \right) \left( b-\beta \right)}{s^2},\tag{20.74}$$

where s² is usually accessible via

$$s^2 = \frac{\displaystyle\sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2}{N - M}\tag{20.75}$$

in lieu of Eq. (19.12) – with division of the overall sum of squares of residuals of the experimental data, $y_i$, relative to the model predictions, $\hat{y}_i$, by the actual number of degrees of freedom left, i.e. the difference between the number of points, N, of the data set and the number of parameters, M, in the said model. Since $y_i - \hat{y}_i$ – or, equivalently, $\hat{z}_i$ as per Eq. (20.41) – follows a normal distribution, for being obtained from normally distributed y (by hypothesis) via linear transformation $I_N - H$ in agreement with Eq. (20.42), its square should follow a χ²-distribution with 1 degree of freedom, as assured by Eq. (18.469). Therefore, the sum extended to all N experimental data produces a χ²-distribution with N degrees of freedom as per Eq. (18.474), corrected to N − M because of the M parameters in the model utilized to calculate $\hat{y}_i$ in the first place; in other words, s² as conveyed by Eq. (20.75) is described by

$$s^2 \sim \chi^2\{N - M\}.\tag{20.76}$$
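That N − M is the right divisor in Eq. (20.75) can be illustrated by simulation: averaging s² over many synthetic data sets generated from a known model recovers σ², whereas dividing by N would underestimate it. A hedged sketch, with true parameters and data layout invented for the exercise:

```python
import numpy as np

rng = np.random.default_rng(42)
N, M, sigma = 10, 2, 2.0
x = np.linspace(0.0, 9.0, N)
X = np.column_stack([np.ones(N), x])
beta = np.array([1.0, 0.5])            # "true" parameters, invented here

s2_vals = []
for _ in range(20000):
    y = X @ beta + rng.normal(scale=sigma, size=N)   # model of Eq. (20.3)
    b = np.linalg.solve(X.T @ X, X.T @ y)            # estimates, Eq. (20.34)
    r = y - X @ b
    s2_vals.append((r @ r) / (N - M))                # Eq. (20.75)

print(np.mean(s2_vals))   # close to sigma**2 = 4.0
```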

In parallel to Eq. (18.495), the ratio of the normal distribution of z to the square root of the χ²-distribution of the sum of squares of residuals with N − M degrees of freedom, divided by the said N − M to unfold s², is due to follow a t-distribution with N − M degrees of freedom, viz.

$$\tilde{z} = \sqrt{\frac{\left( b-\beta \right)^T \left( X^T X \right) \left( b-\beta \right)}{s^2}} \sim t\left\{ N-M; \frac{\alpha}{2} \right\},\tag{20.77}$$

consistent with Eq. (20.74). In view of Eqs. (18.633) and (18.635)–(18.637), one concludes that

$$\frac{\left( b-\beta \right)^T \left( X^T X \right) \left( b-\beta \right)}{s^2} = t^2\left\{ N-M; \frac{\alpha}{2} \right\},\tag{20.78}$$

or else

$$\left( b-\beta \right)^T \left( X^T X \right) \left( b-\beta \right) = s^2 M\, F\{M, N-M; \alpha\}\tag{20.79}$$


following multiplication of both sides by s² – with 1 in Eq. (18.637) replaced by M as the first number of degrees of freedom of an F-distribution. For reasons that will become apparent later, when discussing multivariate analysis as related to joint inference, Eq. (20.79) is the preferred form for the joint inference interval characterizing all model parameters considered simultaneously. Furthermore, values are readily available for Fisher's F-distribution, F{M, N − M; α} – see Table 18.4 – whereas plain values for a Student's t-distributed statistic, t{N − M; α/2}, are normally available – see Table 18.3 – rather than their squares. If only the elements in the main diagonal of the covariance matrix were utilized, then marginal inference intervals for the parameters would be at stake; this is equivalent to setting

$$\left( b-\beta \right)^T \left[ X^T X \right]_{i,i} \left( b-\beta \right) \equiv \begin{bmatrix} b_0-\beta_0 & b_1-\beta_1 & \cdots & b_{M-1}-\beta_{M-1} \end{bmatrix} \begin{bmatrix} \left( X^T X \right)_{1,1} & 0 & \cdots & 0 \\ 0 & \left( X^T X \right)_{2,2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \left( X^T X \right)_{M,M} \end{bmatrix} \begin{bmatrix} b_0-\beta_0 \\ b_1-\beta_1 \\ \vdots \\ b_{M-1}-\beta_{M-1} \end{bmatrix},\tag{20.80}$$

with $X^T X$ reduced to a diagonal matrix containing just the original diagonal elements, $\left( X^T X \right)_{i,i}$, and the remaining terms deliberately replaced by zeros. Performance of the matrix multiplications as indicated in Eq. (20.80) unfolds

$$\left( b-\beta \right)^T \left[ X^T X \right]_{i,i} \left( b-\beta \right) = \left( b_0-\beta_0 \right)\left( X^T X \right)_{1,1}\left( b_0-\beta_0 \right) + \left( b_1-\beta_1 \right)\left( X^T X \right)_{2,2}\left( b_1-\beta_1 \right) + \cdots + \left( b_{M-1}-\beta_{M-1} \right)\left( X^T X \right)_{M,M}\left( b_{M-1}-\beta_{M-1} \right),\tag{20.81}$$

which may be condensed to

$$\left( b-\beta \right)^T \left[ X^T X \right]_{i,i} \left( b-\beta \right) = \left( b_0-\beta_0 \right)^2 \left( X^T X \right)_{1,1} + \left( b_1-\beta_1 \right)^2 \left( X^T X \right)_{2,2} + \cdots + \left( b_{M-1}-\beta_{M-1} \right)^2 \left( X^T X \right)_{M,M} = \sum_{i=1}^{M} \left( b_{i-1}-\beta_{i-1} \right)^2 \left( X^T X \right)_{i,i};\tag{20.82}$$

when each term of the above summation is considered separately, one gets

$$\frac{\left( b_{i-1}-\beta_{i-1} \right)^2 \left( X^T X \right)_{i,i}}{s^2} = t^2\left\{ N-M; \frac{\alpha}{2} \right\}\tag{20.83}$$

as suggested by Eq. (20.78) – where multiplying both sides by $\dfrac{s^2}{\left( X^T X \right)_{i,i}}$, and taking square roots afterward, lead to

$$b_{i-1}-\beta_{i-1} = s\sqrt{\frac{1}{\left( X^T X \right)_{i,i}}}\; t\left\{ N-M; \frac{\alpha}{2} \right\}.\tag{20.84}$$

Isolation of $b_{i-1}$ in Eq. (20.84) finally yields

$$b_{i-1} = \beta_{i-1} \pm s\sqrt{\left( X^T X \right)_{i,i}^{-1}}\; t\left\{ N-M; \frac{\alpha}{2} \right\}; \quad i = 1, 2, \ldots, M,\tag{20.85}$$

with $\left( X^T X \right)_{i,i}^{-1}$ denoting the reciprocal of the ith diagonal element of (matrix) $X^T X$ – which describes the marginal inference interval for each parameter. Student's t-distribution appears explicitly in this case, rather than its square as per Eq. (20.78) – as a square root could be taken of a single scalar, while it would be impossible to do so if a matrix were at stake; this became possible because the analysis focused on each parameter considered independently of the others.

20.3.2 Univariate Models

If a single independent variable is under scrutiny (as often happens), the associated linear model encompasses just two parameters, i.e. β₀ and β₁ – so Eqs. (20.79) and (20.85), as descriptors of joint and marginal inference intervals for the parameters, respectively, are to be revisited with M = 2. The functional simplicity and widespread utilization of this linear model prompt, however, supplementary discussion – namely, covering the situation where replicates are available for (at least) one of the experimental data points, say, $\left( x_i, y_{i,1} \right), \left( x_i, y_{i,2} \right), \ldots, \left( x_i, y_{i,N} \right)$. Remember that the $y_{i,j}$'s follow some form of statistical distribution; hence, their average, $\bar{y}_i$, will hold the true mean, $\mu_i$, as expected value, in agreement with Eq. (18.277) – applicable to a normally distributed population, as postulated here. One may accordingly define the average sum of squares of deviations, s², of the $y_{i,j}$'s (with j = 1, 2, …, N) relative to said $\bar{y}_i$, according to

$$s^2 \equiv \frac{\displaystyle\sum_{j=1}^{N} \left( y_{i,j} - \bar{y}_i \right)^2}{N-1},\tag{20.86}$$

which is analogous to Eq. (19.12); s² may indeed be termed sample variance. As a consequence of random variable Y being normally distributed, one concludes that random variable W, defined as

$$W \equiv \sum_{j=1}^{N} \left( y_{i,j} - \mu_i \right)^2 \sim \chi^2\{W; N\},\tag{20.87}$$


must follow a chi-square distribution with N degrees of freedom; this conclusion stems directly from Eq. (18.474), after remembering that $\mu_i$ is a true constant instead of an (error-bearing) experimental measurement. On the other hand, the square in Eq. (20.87) may be redone as

$$\sum_{j=1}^{N} \left( y_{i,j} - \mu_i \right)^2 = \sum_{j=1}^{N} \left( \left( y_{i,j} - \bar{y}_i \right) + \left( \bar{y}_i - \mu_i \right) \right)^2 = \sum_{j=1}^{N} \left( \left( y_{i,j} - \bar{y}_i \right)^2 + 2\left( y_{i,j} - \bar{y}_i \right)\left( \bar{y}_i - \mu_i \right) + \left( \bar{y}_i - \mu_i \right)^2 \right),\tag{20.88}$$

following addition and subtraction of $\bar{y}_i$, complemented by expansion as per Newton's binomial theorem; once the summation is split three ways, Eq. (20.88) becomes

$$\begin{aligned}
\sum_{j=1}^{N} \left( y_{i,j} - \mu_i \right)^2 &= \sum_{j=1}^{N} \left( y_{i,j} - \bar{y}_i \right)^2 + \sum_{j=1}^{N} 2\left( y_{i,j} - \bar{y}_i \right)\left( \bar{y}_i - \mu_i \right) + \sum_{j=1}^{N} \left( \bar{y}_i - \mu_i \right)^2 \\
&= \sum_{j=1}^{N} \left( y_{i,j} - \bar{y}_i \right)^2 + 2\left( \bar{y}_i - \mu_i \right) \sum_{j=1}^{N} \left( y_{i,j} - \bar{y}_i \right) + \left( \bar{y}_i - \mu_i \right)^2 \sum_{j=1}^{N} 1 \\
&= \sum_{j=1}^{N} \left( y_{i,j} - \bar{y}_i \right)^2 + 2 N \left( \bar{y}_i - \mu_i \right)\left( \frac{\sum_{j=1}^{N} y_{i,j}}{N} - \bar{y}_i \right) + N \left( \bar{y}_i - \mu_i \right)^2 \\
&= \sum_{j=1}^{N} \left( y_{i,j} - \bar{y}_i \right)^2 + N \left( \bar{y}_i - \mu_i \right)^2
\end{aligned}\tag{20.89}$$

– since $\bar{y}_i$, as well as $\bar{y}_i - \mu_i$ and its square, are constants relative to summation variable j, while $\dfrac{\sum_{j=1}^{N} y_{i,j}}{N} - \bar{y}_i$ is nil in view of the definition of average. Equation (20.89) may be rewritten as

$$\sum_{j=1}^{N} \left( y_{i,j} - \mu_i \right)^2 = \left( N-1 \right) s^2 + N \left( \bar{y}_i - \mu_i \right)^2 \equiv U + V\tag{20.90}$$

with the aid of Eq. (20.86), or simply

$$W = U + V\tag{20.91}$$

for consistency with Eq. (20.87) – provided that

$$U\{s^2\} \equiv \left( N-1 \right) s^2,\tag{20.92}$$

coupled with

$$V\{\bar{y}_i\} \equiv N \left( \bar{y}_i - \mu_i \right)^2 \sim \chi^2\{V; 1\};\tag{20.93}$$

in fact, $\bar{Y}_i$ itself (or $\bar{Y}_i - \mu_i$, for that matter) is normally distributed as per Eq. (18.280), so its square (multiplied by N, for that matter) is χ²-distributed as per Eq. (18.469). Note that U{s²} and V{$\bar{y}_i$} are independent from each other – as a result of the independence of s² relative to $\bar{y}_i$ – so the joint moment-generating function, $G_W\{p\}$, reduces to

$$G_W\{p\} = G_U\{p\}\, G_V\{p\}\tag{20.94}$$

in agreement with Eq. (18.204); this is equivalent to stating

$$G_U\{p\} = \frac{G_W\{p\}}{G_V\{p\}},\tag{20.95}$$

upon isolation of $G_U\{p\}$ – where p represents a dummy variable. In view of Eqs. (20.87) and (20.93), one may retrieve the individual moment-generating functions using Eq. (18.429) as template, i.e.

$$G_U\{p\} = \frac{\left( 1-2p \right)^{-\frac{N}{2}}}{\left( 1-2p \right)^{-\frac{1}{2}}} = \left( 1-2p \right)^{-\frac{N-1}{2}};\tag{20.96}$$

the right-hand side looks like the moment-generating function of a χ²-distributed random variable with N − 1 degrees of freedom – so one can safely conclude that

$$U\{s^2\} \sim \chi^2\left\{ \left( N-1 \right) s^2; N-1 \right\},\tag{20.97}$$

with the aid of Eq. (20.92). This conclusion supports the N − 1 degrees of freedom used for the χ²-statistic in the first row of Table 19.2. Consider finally a fourth random variable Z, given by

$$Z \equiv \sum_{i=1}^{N} \left( \hat{y}_i - \bar{y} \right)^2 \sim \chi^2\{Z; 1\},\tag{20.98}$$

which denotes the total sum of squares of deviations of the (estimated) $\hat{y}_i$'s relative to the grand average of Y, i.e. the average of $y_1, y_2, \ldots, y_i, \ldots, y_N$ obtained at $x_1, x_2, \ldots, x_i, \ldots, x_N$, respectively – which, in turn, prompts definition of $\bar{x}$ as average of the latter; Z holds obviously a single degree of freedom – arising from $\bar{y}$, and in parallel with Eq. (20.93) after replacing N, $\bar{y}_i$, and $\mu_i$ by 1, $\hat{y}_i$, and $\bar{y}$, respectively. Recalling Eq. (20.1) – or Eqs. (20.36) and (20.37), for that matter – and denoting as b₀ and b₁ the estimates of β₀ and β₁, respectively, one may write

$$Z = \sum_{i=1}^{N} \left( b_0 + b_1 x_i - \bar{y} \right)^2;\tag{20.99}$$

the average of both x and y belonging to the line that best fits the data (i.e. $\bar{y} = b_0 + b_1 \bar{x}$, to be proven in due course) supports transformation of Eq. (20.99) to

$$Z = \sum_{i=1}^{N} \left( \bar{y} - b_1 \bar{x} + b_1 x_i - \bar{y} \right)^2 = \sum_{i=1}^{N} \left( b_1 x_i - b_1 \bar{x} \right)^2 = b_1^2 \sum_{i=1}^{N} \left( x_i - \bar{x} \right)^2\tag{20.100}$$

– along with cancellation of symmetrical terms, and factoring out of constant factor $b_1^2$ afterward. On the other hand, one may resort to

$$s^2\{b_1\} = \frac{s^2}{\displaystyle\sum_{i=1}^{N} \left( x_i - \bar{x} \right)^2}\tag{20.101}$$

as analogue to a basic statistical expression (to be derived later) – applicable to the common situation where the (population) variance σ² is unknown, and must thus be estimated via the (sample) variance s²; Eq. (20.101) allows reformulation of Eq. (20.100) to

$$Z = b_1^2\, \frac{s^2}{s^2\{b_1\}} = \frac{b_1^2}{s^2\{b_1\}}\, s^2.\tag{20.102}$$


Once in possession of random variable U described by Eq. (20.97) and random variable Z satisfying Eq. (20.98), one may define a new random variable Ω as

$$\Omega \equiv \frac{\dfrac{Z}{1}}{\dfrac{U}{N-1}} = \frac{Z}{\dfrac{U}{N-1}} \sim F\{\Omega; 1, N-1\}\tag{20.103}$$

– where division of each such variable by the corresponding number of degrees of freedom took place, so as to permit straightforward application of Eq. (18.571). Insertion of Eqs. (20.92) and (20.102) supports transformation of Ω, as given by Eq. (20.103), to

$$\Omega = \frac{\dfrac{b_1^2}{s^2\{b_1\}}\, s^2}{s^2} = \frac{b_1^2}{s^2\{b_1\}} = \left( \frac{b_1}{s\{b_1\}} \right)^2 \sim t^2\left\{ \sqrt{\Omega}; N-1 \right\}.\tag{20.104}$$

Remember that B₁ is a normally distributed random variable as per Eq. (20.34) – since Y is normally distributed itself, and $\left( X^T X \right)^{-1} X^T$ materializes a linear operator; hence, B₁² follows a χ²-distribution with one degree of freedom, in view of Eq. (18.469) – while s{b₁} should be given by $s\sqrt{\left( X^T X \right)_{2,2}^{-1}}$ as per Eq. (20.85), with σ replaced by s obtained from Eq. (20.86), if marginal inference is at stake. Elimination of Ω between Eqs. (20.103) and (20.104) finally yields

$$F\{\Omega; 1, N-1\} \sim t^2\left\{ \sqrt{\Omega}; N-1 \right\},\tag{20.105}$$

thus confirming the basic equivalence between the statistical distribution of the square of a t-distributed random variable and an F-distribution, pertaining to a single parameter of the linear regression; besides agreeing with Eq. (18.633), one may resort to Eq. (20.105) to obtain Eq. (20.78) from Eq. (20.79), after replacing 1 by M in both 1·F{1, N − 1} and t²{N − 1} – which applies to general (rather than marginal) parameter inference.
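The t²–F equivalence of Eq. (20.105) can be seen constructively: a t{ν}-variable is $Z/\sqrt{W/\nu}$, with Z standard normal and W ~ χ²{ν}, so its square is $(Z^2/1)/(W/\nu)$ – exactly the ratio of scaled χ²-variables that defines F{1, ν}. A sketch, not from the book, building both from the same random draws so the identity holds sample by sample (all names ours):

```python
import numpy as np

rng = np.random.default_rng(7)
nu = 9                                 # degrees of freedom, playing the role of N - 1

Z = rng.normal(size=100000)            # standard normal draws
W = rng.chisquare(nu, size=100000)     # chi-square draws with nu dof

t = Z / np.sqrt(W / nu)                # t{nu}-distributed sample
F = (Z**2 / 1.0) / (W / nu)            # F{1, nu}-distributed sample, same draws

print(np.allclose(t**2, F))            # identical by construction: True
```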

20.4 Unbiased Estimation

20.4.1 Multivariate Models

The estimation technique derived above for the parameters in linear models was based on minimization of the sum of squares of residuals of the (experimental) data relative to the (linear) model, see Eqs. (20.10) and (20.26) – eventually leading to

$$b = W y,\tag{20.106}$$

with W defined as

$$W = \left[ w_{i,j} \right] \equiv \left( X^T X \right)^{-1} X^T; \quad i = 1, 2, \ldots, M; \; j = 1, 2, \ldots, N,\tag{20.107}$$

and thus fully consistent with Eq. (20.34); inspection of Eqs. (20.106) and (20.107) accordingly unfolds

$$b_i = \sum_{j=1}^{N} w_{i,j}\, y_j; \quad i = 1, 2, \ldots, M\tag{20.108}$$


as per the algorithm of multiplication of matrices – with $w_{i,j}$ independent of (non-observable) $\beta_j$, since X ≡ X{x₁, x₂, …, x_N} in Eq. (20.107). Every $b_i$ is accordingly expressed as a linear combination of all $y_i$'s in view of Eq. (20.108) – thus justifying the linear designation for the model regression at stake. According to German mathematician Johann C. F. Gauss (nineteenth century) and Russian mathematician Andrei A. Markov (nineteenth–twentieth centuries), b as given by Eqs. (20.106) and (20.107) is the best linear unbiased estimator of β – in that it conveys the lowest variance for the parameter estimates, when compared with other unbiased, linear putative estimators; therefore, it justifies use of minimization of squares of residuals in Eq. (20.26) in the first place. To prove the aforementioned Gauss and Markov's theorem, one should retrieve the basic assumptions underlying linear regression, i.e.

$$\mathrm{E}\{z\} = 0_{N \times 1},\tag{20.109}$$

or nil expectation for the errors – which reflects the appropriateness of the model as per Eq. (20.3);

$$\mathrm{Var}\{z\} = \sigma^2 I_N\tag{20.110}$$

as per Eq. (20.54), or homoscedastic distribution with distinct error terms uncorrelated with each other, described by finite variance, Var{z_i} = σ² < ∞ (with i = 1, 2, …, N), and nil covariance, Cov{z_i, z_j} = 0 (with i, j = 1, 2, …, N and i ≠ j); and

$$\mathrm{E}\{\tilde{\beta}\} = \beta,\tag{20.111}$$

or unbiasedness of estimator $\tilde{\beta}$ of β – regardless of the values taken by X. The best linear unbiased estimator (or BLUE, for short) of β is, by definition, the unbiased one with smallest mean squared error. To find $\tilde{\beta}$, one may hypothesize

$$\tilde{\beta} = C y\tag{20.112}$$

at large, although resorting to Eq. (20.106) as template; the matrix consubstantiating such a linear transformation may, in general, be coined as

$$C \equiv W + D = \left( X^T X \right)^{-1} X^T + D,\tag{20.113}$$

in agreement with Eq. (20.107) – where D would denote a nonzero matrix. The expected value of $\tilde{\beta}$ will consequently read

$$\begin{aligned}
\mathrm{E}\{\tilde{\beta}\} &= \mathrm{E}\{C y\} = \mathrm{E}\{\left( W+D \right) y\} = \mathrm{E}\left\{ \left( \left( X^T X \right)^{-1} X^T + D \right)\left( X\beta + z \right) \right\} \\
&= \mathrm{E}\left\{ \left( \left( X^T X \right)^{-1} X^T + D \right) X\beta + \left( \left( X^T X \right)^{-1} X^T + D \right) z \right\} \\
&= \left( \left( X^T X \right)^{-1} X^T + D \right) X\beta + \left( \left( X^T X \right)^{-1} X^T + D \right) \mathrm{E}\{z\},
\end{aligned}\tag{20.114}$$

stemming from Eqs. (20.3), (20.112), and (20.113), and at the expense of the distributive property of multiplication of matrices, coupled with Eq. (18.234) pertaining to the expected value of a linear transformation of a normal distribution z – with A ≡ $\left( \left( X^T X \right)^{-1} X^T + D \right) X\beta$ and B ≡ $\left( X^T X \right)^{-1} X^T + D$; Eq. (20.109) prompts simplification of Eq. (20.114) to

$$\mathrm{E}\{\tilde{\beta}\} = \left( \left( X^T X \right)^{-1} X^T + D \right) X\beta = \left( \left( X^T X \right)^{-1} X^T X + DX \right)\beta = \left( I_M + DX \right)\beta,\tag{20.115}$$


after factoring X in, and lumping $\left( X^T X \right)^{-1}$ with its inverse $X^T X$ to produce $I_M$. Therefore, $\tilde{\beta}$ will be unbiased if (and only if)

$$DX = 0_{M \times M},\tag{20.116}$$

since this implies

$$\mathrm{E}\{\tilde{\beta}\}\big|_{DX=0} = I_M\, \beta + 0_{M \times M}\, \beta = \beta + 0_{M \times 1} = \beta\tag{20.117}$$

that is consistent with Eq. (20.111). With regard to variance, one notices that

$$\mathrm{Var}\{\tilde{\beta}\} = \mathrm{Var}\{C y\} = C\, \mathrm{Var}\{y\}\, C^T = C \sigma^2 I_N C^T = \sigma^2 C I_N C^T = \sigma^2 C C^T\tag{20.118}$$

departing from Eq. (18.235) with A ≡ 0 and B ≡ C, besides Eqs. (20.55) and (20.112) – complemented with the properties of the identity matrix as neutral element of multiplication of matrices, and the commutativity of multiplication of scalar by matrix; insertion of Eq. (20.113) gives

$$\mathrm{Var}\{\tilde{\beta}\} = \sigma^2 \left( \left( X^T X \right)^{-1} X^T + D \right)\left( \left( X^T X \right)^{-1} X^T + D \right)^T = \sigma^2 \left( \left( X^T X \right)^{-1} X^T + D \right)\left( X \left( \left( X^T X \right)^T \right)^{-1} + D^T \right) = \sigma^2 \left( \left( X^T X \right)^{-1} X^T + D \right)\left( X \left( X^T X \right)^{-1} + D^T \right),\tag{20.119}$$

with the aid of the rules of calculation of the transpose of a sum and of a product of matrices, coupled with the exchangeability of the transpose and inverse operators. Application of the distributive and associative properties of multiplication of matrices, the rule of transposition of a product of matrices, the definition of inverse matrix, and the property of multiplication of a matrix by the identity matrix allows transformation of Eq. (20.119) to

$$\begin{aligned}
\mathrm{Var}\{\tilde{\beta}\} &= \sigma^2 \left( \left( X^T X \right)^{-1} X^T X \left( X^T X \right)^{-1} + DX \left( X^T X \right)^{-1} + \left( X^T X \right)^{-1} \left( DX \right)^T + DD^T \right) \\
&= \sigma^2 \left( I_M \left( X^T X \right)^{-1} + DX \left( X^T X \right)^{-1} + \left( X^T X \right)^{-1} X^T D^T + DD^T \right) \\
&= \sigma^2 \left( \left( X^T X \right)^{-1} + DX \left( X^T X \right)^{-1} + \left( X^T X \right)^{-1} \left( DX \right)^T + DD^T \right).
\end{aligned}\tag{20.120}$$

+ DD T

20 120 Insertion of Eq. (20.116) brings about dramatic simplification of Eq. (20.120) to Var β = σ 2

XTX

−1

+ 0M × M X T X

−1

= σ2

XTX

−1

+ DD T = σ 2 X T X

+ XTX −1

−1

0M × M T + DD T

+ σ 2 DD T

,

20 121

Linear Regression

along with the distributive property of multiplication of scalar by matrix, and realization that multiplication of a matrix by a nil matrix turns the former to another nil matrix; in view of Eqs. (20.66) and (20.70), one finds that Var β = Var b + σ 2 DD T

20 122
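The decomposition just obtained lends itself to a direct numeric probe. The sketch below (pure Python; the design matrix, the particular $D$, and the helper routines are all illustrative assumptions) builds a 3-point, 2-parameter design, chooses $D$ with rows orthogonal to the columns of $X$ so that $DX = 0$ as per Eq. (20.116), and confirms that the variance matrix of the competing estimator exceeds that of $b$ by exactly $\sigma^2DD^T$:

```python
# Numeric check of Eq. (20.122): for C = (X^T X)^{-1} X^T + D with D X = 0,
# Var{beta~}/sigma^2 = C C^T equals Var{b}/sigma^2 + D D^T, so the excess
# variance of any competing linear unbiased estimator is D D^T (PSD).
# Hypothetical 3-point, 2-parameter design; pure Python helpers.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]           # design matrix (N = 3, M = 2)
XtX_inv = inv2(matmul(transpose(X), X))
C0 = matmul(XtX_inv, transpose(X))                  # OLS coefficient matrix

# rows of D orthogonal to both columns of X, so that D X = 0_{2x2}
u = [1.0, -2.0, 1.0]
D = [[0.5 * ui for ui in u], [-1.0 * ui for ui in u]]
assert all(abs(s) < 1e-12 for row in matmul(D, X) for s in row)  # Eq. (20.116)

C = [[c + d for c, d in zip(rc, rd)] for rc, rd in zip(C0, D)]
var_b = matmul(C0, transpose(C0))                   # Var{b}/sigma^2
var_bt = matmul(C, transpose(C))                    # Var{beta~}/sigma^2
DDt = matmul(D, transpose(D))

for i in range(2):
    for j in range(2):
        # Eq. (20.122), element by element
        assert abs(var_bt[i][j] - (var_b[i][j] + DDt[i][j])) < 1e-12
    assert DDt[i][i] >= 0.0                         # Eq. (20.124): nonnegative excess
```

Since the diagonal elements of $DD^T$ are nonnegative, this is a finite instance of the Gauss–Markov argument developed in the text.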

Recall the result conveyed by Eqs. (4.212) and (4.214), which indicates that $DD^T$ is a symmetric, positive semidefinite matrix – after setting $P$ equal to $D^T$; hence, Eq. (20.122) implies that $\operatorname{Var}\{\tilde{\beta}\}$ exceeds $\operatorname{Var}\{b\}$ – so $b$ is indeed the estimator of $\beta$ characterized by the minimum variance. The variance of each parameter may now be retrieved from Eq. (20.122) via

$$\operatorname{Var}\{\tilde{\beta}_i\} = \operatorname{Var}\{b_i\} + \sigma^2\left(DD^T\right)_{i,i} = \operatorname{Var}\{b_i\} + \sigma^2\left[0\;\cdots\;1\;\cdots\;0\right]DD^T\left[0\;\cdots\;1\;\cdots\;0\right]^T \tag{20.123}$$

– where all elements of the row vector (or the corresponding column vector) are nil, except the one at the $i$th position; since $DD^T$ is positive semidefinite, the (scalar) $\left[0\;\cdots\;1\;\cdots\;0\right]DD^T\left[0\;\cdots\;1\;\cdots\;0\right]^T$ must necessarily be nonnegative, namely after setting $a = \left[0\;\cdots\;1\;\cdots\;0\right]^T$ and $V = DD^T$ in Eq. (4.196). A similar realization holds when such a scalar is multiplied by (inherently) positive $\sigma^2$, thus guaranteeing that

$$\operatorname{Var}\{\tilde{\beta}_i\} \geq \operatorname{Var}\{b_i\};\quad i = 0, 1, \ldots, M-1, \tag{20.124}$$

as per Eq. (20.123), does indeed apply; in other words, $b_i$ is the estimator of $\beta_i$ characterized by the minimum (parameter) variance.

20.4.2 Univariate Models

If a single independent variable, rather than several such variables, is of interest, then simple linear regression will arise; one may thus state

$$b_0 = \bar{y} - b_1\bar{x} \tag{20.125}$$

regarding estimate $b_0$ of parameter $\beta_0$, complemented by

$$b_1 = \frac{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} \tag{20.126}$$

pertaining to estimate $b_1$ of parameter $\beta_1$ (both to be proven below); here $\bar{x}$ reads

$$\bar{x} \equiv \frac{\sum_{i=1}^{N}x_i}{N}, \tag{20.127}$$

and thus represents the value of $x$ such that the sum of its distances to all $x_i$'s is nil – and $\bar{y}$ is likewise given by

$$\bar{y} \equiv \frac{\sum_{i=1}^{N}y_i}{N}. \tag{20.128}$$
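Eqs. (20.125)–(20.128) translate directly into code; a minimal sketch follows, with a small invented dataset (an assumption for illustration only):

```python
# Simple linear regression estimates per Eqs. (20.125)-(20.128).

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]                     # invented data, roughly y = 2x
N = len(xs)

x_bar = sum(xs) / N                                  # Eq. (20.127)
y_bar = sum(ys) / N                                  # Eq. (20.128)
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
     / sum((x - x_bar) ** 2 for x in xs)             # Eq. (20.126)
b0 = y_bar - b1 * x_bar                              # Eq. (20.125)

# deviations of x from its mean indeed add up to nil, as claimed after Eq. (20.127)
assert abs(sum(x - x_bar for x in xs)) < 1e-12
```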

After realizing that

$$\sum_{i=1}^{N}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right) = \sum_{i=1}^{N}y_i\left(x_i-\bar{x}\right) - \sum_{i=1}^{N}\bar{y}\left(x_i-\bar{x}\right) = \sum_{i=1}^{N}y_i\left(x_i-\bar{x}\right) - \bar{y}\sum_{i=1}^{N}\left(x_i-\bar{x}\right) = \sum_{i=1}^{N}y_i\left(x_i-\bar{x}\right) \tag{20.129}$$

– due to the distributive property of multiplication of scalars, the constancy of $\bar{y}$ with regard to counting variable $i$, and the fact that Eq. (20.127) implies $\sum_{i=1}^{N}\left(x_i-\bar{x}\right) = 0$ (as stressed before) – one concludes that $b_1$ as per Eq. (20.126) may be rewritten as

$$b_1 = \frac{\sum_{i=1}^{N}y_i\left(x_i-\bar{x}\right)}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} = \sum_{i=1}^{N}\frac{x_i-\bar{x}}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2}\,y_i = \sum_{i=1}^{N}k_i\{x\}\,y_i = \left[k_1\;k_2\;\cdots\;k_N\right]\begin{bmatrix}y_1\\y_2\\\vdots\\y_N\end{bmatrix}; \tag{20.130}$$

the underlying definition of $k_i$ accordingly reads

$$k_i\{x\} \equiv \frac{x_i-\bar{x}}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2};\quad i = 1, 2, \ldots, N, \tag{20.131}$$

where $x$ stands as shorthand symbol for all $x_1$, $x_2$, …, $x_N$. It is interesting that Eq. (20.131) supports

$$\sum_{i=1}^{N}k_i = \sum_{i=1}^{N}\frac{x_i-\bar{x}}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} = \frac{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} = 0, \tag{20.132}$$

obtained after straightforward algebraic manipulation – and again at the expense of the definition of $\bar{x}$ as per Eq. (20.127). One also realizes that

$$\sum_{i=1}^{N}k_ix_i = \frac{\sum_{i=1}^{N}x_i\left(x_i-\bar{x}\right)}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} = \frac{\sum_{i=1}^{N}\left(\left(x_i-\bar{x}\right)\left(x_i-\bar{x}\right) + \bar{x}\left(x_i-\bar{x}\right)\right)}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} = \frac{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} + \bar{x}\frac{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} = 1 + 0 = 1 \tag{20.133}$$

stemming from Eq. (20.131), upon addition and subtraction of $\bar{x}\left(x_i-\bar{x}\right)$ in the numerator – followed by splitting of the summation therein, factoring out of $\bar{x}$, and retrieving again the nil value of $\sum_{i=1}^{N}\left(x_i-\bar{x}\right)$; as well as

$$\sum_{i=1}^{N}k_i^2 = \sum_{i=1}^{N}\left(\frac{x_i-\bar{x}}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2}\right)^2 = \frac{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2}{\left(\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2\right)^2} = \frac{1}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} \tag{20.134}$$

departing once more from Eq. (20.131) – where $\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2$ dropped off both numerator and denominator, while $i$ and $j$ denote counting variables independent of each other. Since $y_1$, $y_2$, …, $y_N$ have been hypothesized from the very beginning as independent, normally distributed (random) variables with similar variance – with mean $E\{y_i\} = \mu_i$ as per Eq. (20.63) and variance $\operatorname{Var}\{y_i\} = \sigma^2$ as per Eq. (20.64) – then $b_1$ abiding by Eq. (20.130) will also be normally distributed, for being a linear combination of the $y_i$'s – with mean

$$E\{b_1\} = E\left\{0 + \left[k_1\;k_2\;\cdots\;k_N\right]\begin{bmatrix}y_1\\y_2\\\vdots\\y_N\end{bmatrix}\right\} = 0 + \left[k_1\;k_2\;\cdots\;k_N\right]E\left\{\begin{bmatrix}y_1\\y_2\\\vdots\\y_N\end{bmatrix}\right\} = \left[k_1\;k_2\;\cdots\;k_N\right]\begin{bmatrix}\mu_1\\\mu_2\\\vdots\\\mu_N\end{bmatrix} = \sum_{i=1}^{N}k_iE\{y_i\} = \sum_{i=1}^{N}k_i\mu_i \tag{20.135}$$

and variance

$$\operatorname{Var}\{b_1\} = \operatorname{Var}\left\{0 + \left[k_1\;k_2\;\cdots\;k_N\right]\begin{bmatrix}y_1\\y_2\\\vdots\\y_N\end{bmatrix}\right\} = \left[k_1\;k_2\;\cdots\;k_N\right]\operatorname{Var}\left\{\begin{bmatrix}y_1\\y_2\\\vdots\\y_N\end{bmatrix}\right\}\begin{bmatrix}k_1\\k_2\\\vdots\\k_N\end{bmatrix} = \left[k_1\;k_2\;\cdots\;k_N\right]\begin{bmatrix}\sigma^2 & 0 & \cdots & 0\\0 & \sigma^2 & \cdots & 0\\\vdots & \vdots & \ddots & \vdots\\0 & 0 & \cdots & \sigma^2\end{bmatrix}\begin{bmatrix}k_1\\k_2\\\vdots\\k_N\end{bmatrix} = \left[k_1\sigma^2\;k_2\sigma^2\;\cdots\;k_N\sigma^2\right]\begin{bmatrix}k_1\\k_2\\\vdots\\k_N\end{bmatrix} = \sum_{i=1}^{N}k_i^2\sigma^2 = \sum_{i=1}^{N}k_i^2\operatorname{Var}\{y_i\} \tag{20.136}$$

in general agreement with Eqs. (18.234) and (18.235), respectively, with $A = 0_{1\times1}$, $B = \left[k_1\;k_2\;\cdots\;k_N\right]$, and $X = \left[y_1\;y_2\;\cdots\;y_N\right]^T$. Equation (20.135) may be further worked out as

$$E\{b_1\} = \sum_{i=1}^{N}k_i\left(\beta_0 + \beta_1x_i\right) = \sum_{i=1}^{N}k_i\beta_0 + \sum_{i=1}^{N}k_i\beta_1x_i = \beta_0\sum_{i=1}^{N}k_i + \beta_1\sum_{i=1}^{N}k_ix_i, \tag{20.137}$$

as per Eqs. (20.4), (20.6), and (20.63), coupled with splitting of summation and factoring out of constants – which simplifies to

$$E\{b_1\} = \beta_0\cdot0 + \beta_1\cdot1 = \beta_1, \tag{20.138}$$

in view of Eqs. (20.132) and (20.133); therefore, $b_1$ is an unbiased estimator of $\beta_1$ – since it coincides with that parameter, irrespective of the value held by the other parameter, $\beta_0$, or $\left(x_i,y_i\right)$ for that matter. By the same token, $\sigma^2$ may be factored out in Eq. (20.136) to get

$$\operatorname{Var}\{b_1\} = \sigma^2\sum_{i=1}^{N}k_i^2, \tag{20.139}$$

where combination with Eq. (20.134) unfolds

$$\sigma_{b_1}^2 \equiv \operatorname{Var}\{b_1\} = \frac{\sigma^2}{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2} \tag{20.140}$$

– thus providing an unequivocal proof of Eq. (20.101); since $\sigma^2$ is not frequently known, one must indeed resort to (estimate) $s^2$, obtained from a sample of the population. Consider now that all unbiased linear estimators $\tilde{\beta}_1$ of $\beta_1$ share the form

$$\tilde{\beta}_1 = \sum_{i=1}^{N}c_iy_i \tag{20.141}$$

that resembles Eq. (20.112), where $c_i$ may take any real value; since Eq. (20.141) mimics Eq. (20.130) in functional form, one may jump to an analogue of Eq. (20.135), and then to an analogue of Eq. (20.137), viz.

$$E\{\tilde{\beta}_1\} = \sum_{i=1}^{N}c_iE\{y_i\} = \beta_0\sum_{i=1}^{N}c_i + \beta_1\sum_{i=1}^{N}c_ix_i, \tag{20.142}$$

where $k_i$ was just replaced by $c_i$, and $b_1$ by $\tilde{\beta}_1$. By hypothesis, $\tilde{\beta}_1$ is to be an unbiased estimator of $\beta_1$, i.e.

$$E\{\tilde{\beta}_1\} = \beta_1, \tag{20.143}$$

as per Eq. (20.111); hence, Eq. (20.143) enforces

$$\sum_{i=1}^{N}c_i = 0 \tag{20.144}$$

and

$$\sum_{i=1}^{N}c_ix_i = 1, \tag{20.145}$$

so as to ensure compatibility with Eq. (20.142). With regard to the variance of $\tilde{\beta}_1$, one should similarly get

$$\operatorname{Var}\{\tilde{\beta}_1\} = \sum_{i=1}^{N}c_i^2\operatorname{Var}\{y_i\} = \sigma^2\sum_{i=1}^{N}c_i^2 \tag{20.146}$$

using Eq. (20.136), and then Eq. (20.139) as template, after replacement of $k_i$ by $c_i$. If a deviation $e_i$ of $c_i$ from $k_i$ (taken as reference) is considered, i.e.

$$e_i \equiv c_i - k_i, \tag{20.147}$$

then Eq. (20.146) may be redone as

$$\operatorname{Var}\{\tilde{\beta}_1\} = \sigma^2\sum_{i=1}^{N}\left(k_i+e_i\right)^2 \tag{20.148}$$

– where Newton's expansion of the square of the binomial gives, in turn, rise to

$$\operatorname{Var}\{\tilde{\beta}_1\} = \sigma^2\sum_{i=1}^{N}\left(k_i^2 + 2k_ie_i + e_i^2\right) = \sigma^2\sum_{i=1}^{N}k_i^2 + 2\sigma^2\sum_{i=1}^{N}k_ie_i + \sigma^2\sum_{i=1}^{N}e_i^2, \tag{20.149}$$

followed by splitting of the summation; combination with Eqs. (20.134) and (20.140) simplifies Eq. (20.149) to

$$\operatorname{Var}\{\tilde{\beta}_1\} = \frac{\sigma^2}{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2} + 2\sigma^2\sum_{i=1}^{N}k_ie_i + \sigma^2\sum_{i=1}^{N}e_i^2 = \operatorname{Var}\{b_1\} + 2\sigma^2\sum_{i=1}^{N}k_ie_i + \sigma^2\sum_{i=1}^{N}e_i^2. \tag{20.150}$$

The first summation left in Eq. (20.150) may be rewritten as

$$\sum_{i=1}^{N}k_ie_i = \sum_{i=1}^{N}k_i\left(c_i-k_i\right) = \sum_{i=1}^{N}\left(k_ic_i - k_i^2\right) = \sum_{i=1}^{N}k_ic_i - \sum_{i=1}^{N}k_i^2, \tag{20.151}$$

with the aid of Eq. (20.147) and straightforward algebraic rearrangement; due to Eqs. (20.131) and (20.134), one may reformulate Eq. (20.151) to

$$\sum_{i=1}^{N}k_ie_i = \sum_{i=1}^{N}c_i\frac{x_i-\bar{x}}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} - \frac{1}{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2}, \tag{20.152}$$

where elementary algebraic manipulation gives rise to

$$\sum_{i=1}^{N}k_ie_i = \frac{\sum_{i=1}^{N}c_ix_i - \sum_{i=1}^{N}c_i\bar{x}}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} - \frac{1}{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2} = \frac{\sum_{i=1}^{N}c_ix_i - \bar{x}\sum_{i=1}^{N}c_i}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} - \frac{1}{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2}. \tag{20.153}$$

Insertion of Eqs. (20.144) and (20.145) then allows transformation of Eq. (20.153) to

$$\sum_{i=1}^{N}k_ie_i = \frac{1 - \bar{x}\cdot0}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} - \frac{1}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} = 0 \tag{20.154}$$

and, consequently,

$$\operatorname{Var}\{\tilde{\beta}_1\} = \operatorname{Var}\{b_1\} + \sigma^2\sum_{i=1}^{N}e_i^2 \tag{20.155}$$

upon combination of Eqs. (20.150) and (20.154); differentiation of Eq. (20.155) with regard to every $e_i$, and then setting of the outcome equal to zero, support

$$\frac{\partial\operatorname{Var}\{\tilde{\beta}_1\}}{\partial e_i} = \sigma^2\frac{\partial}{\partial e_i}\sum_{j=1}^{N}e_j^2 = \sigma^2\frac{de_i^2}{de_i} = 2\sigma^2e_i = 0;\quad i = 1, 2, \ldots, N \tag{20.156}$$

as necessary condition for an estimator with minimum variance, which implies

$$e_i = 0. \tag{20.157}$$
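The variance penalty conveyed by Eq. (20.155) can be illustrated numerically: any perturbation $e_i$ of the $k_i$'s that preserves unbiasedness, i.e. complies with Eqs. (20.144) and (20.145), adds exactly $\sigma^2\sum e_i^2$ to the variance. The dataset and perturbation below are assumed for illustration:

```python
# Probe of Eqs. (20.144)-(20.155): competing coefficients c_i = k_i + e_i
# that keep the estimator unbiased (sum of e_i nil, sum of e_i*x_i nil)
# can only add sum(e_i^2) to the variance of b_1 (per unit sigma^2).

xs = [0.0, 1.0, 2.0, 3.0]
N = len(xs)
x_bar = sum(xs) / N
Sxx = sum((x - x_bar) ** 2 for x in xs)
k = [(x - x_bar) / Sxx for x in xs]                  # Eq. (20.131)

e = [1.0, -1.0, -1.0, 1.0]                           # orthogonal to both 1 and x
assert abs(sum(e)) < 1e-12                           # keeps Eq. (20.144)
assert abs(sum(ei * x for ei, x in zip(e, xs))) < 1e-12   # keeps Eq. (20.145)

c = [ki + ei for ki, ei in zip(k, e)]
var_k = sum(ki ** 2 for ki in k)                     # Var{b_1}/sigma^2, Eq. (20.139)
var_c = sum(ci ** 2 for ci in c)                     # Var{beta~_1}/sigma^2, Eq. (20.146)

# Eq. (20.155): the excess variance is exactly sum(e_i^2), hence positive
assert abs(var_c - (var_k + sum(ei ** 2 for ei in e))) < 1e-12
assert var_c > var_k
```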

Further differentiation of Eq. (20.156) with regard to each $e_i$ leads to

$$\frac{\partial^2\operatorname{Var}\{\tilde{\beta}_1\}}{\partial e_i^2} = \frac{\partial}{\partial e_i}\left(\frac{\partial\operatorname{Var}\{\tilde{\beta}_1\}}{\partial e_i}\right) = \frac{\partial}{\partial e_i}\left(2\sigma^2e_i\right) = 2\sigma^2 > 0, \tag{20.158}$$

since a square cannot take negative values; Eq. (20.158) guarantees that Eq. (20.157) represents, in fact, a minimum. Hence, $b_1$ given by Eq. (20.126), or Eqs. (20.130) and (20.131) for that matter, describes not only an unbiased estimator of $\beta_1$ as per Eq. (20.138), but also the one leading to the lowest variance of said estimate, in view of Eqs. (20.155), (20.157), and (20.158). On the other hand, combination of Eq. (20.125) with Eq. (20.130) has it that

$$b_0 = \bar{y} - b_1\bar{x} = \frac{\sum_{i=1}^{N}y_i}{N} - \bar{x}\sum_{i=1}^{N}k_iy_i = \sum_{i=1}^{N}\frac{y_i}{N} - \sum_{i=1}^{N}k_i\bar{x}y_i = \sum_{i=1}^{N}\left(\frac{1}{N} - k_i\bar{x}\right)y_i = \sum_{i=1}^{N}\omega_i\{x\}\,y_i \tag{20.159}$$

– written also with the aid of Eq. (20.128), along with factoring $\bar{x}$ in, collapsing summations, and factoring $y_i$ out; consequently, $b_0$ ends up as a linear combination of the $y_i$'s via $y_i$-independent parameters $\omega_i$, defined as

$$\omega_i\{x\} \equiv \frac{1}{N} - k_i\{x\}\,\bar{x};\quad i = 1, 2, \ldots, N, \tag{20.160}$$

where Eq. (20.131) was duly recalled. One may then obtain

$$\sum_{i=1}^{N}\omega_i = \sum_{i=1}^{N}\left(\frac{1}{N} - k_i\bar{x}\right) = \sum_{i=1}^{N}\frac{1}{N} - \sum_{i=1}^{N}k_i\bar{x} = \frac{N}{N} - \bar{x}\sum_{i=1}^{N}k_i = 1 - \bar{x}\sum_{i=1}^{N}k_i \tag{20.161}$$

departing from Eq. (20.160), where insertion of Eq. (20.132) unfolds merely

$$\sum_{i=1}^{N}\omega_i = 1 - \bar{x}\cdot0 = 1. \tag{20.162}$$

With regard to the summation of $\omega_ix_i$, one gets

$$\sum_{i=1}^{N}\omega_ix_i = \sum_{i=1}^{N}\left(\frac{1}{N} - k_i\bar{x}\right)x_i = \sum_{i=1}^{N}\frac{x_i}{N} - \sum_{i=1}^{N}k_i\bar{x}x_i = \frac{\sum_{i=1}^{N}x_i}{N} - \bar{x}\sum_{i=1}^{N}k_ix_i = \bar{x} - \bar{x}\sum_{i=1}^{N}k_ix_i \tag{20.163}$$

with the aid of Eqs. (20.127) and (20.160) – coupled with the distributive property of multiplication of scalars, as well as factoring out of $1/N$ and $\bar{x}$; insertion of Eq. (20.133) reduces Eq. (20.163) to just

$$\sum_{i=1}^{N}\omega_ix_i = \bar{x} - \bar{x}\cdot1 = 0. \tag{20.164}$$

Finally, one obtains

$$\sum_{i=1}^{N}\omega_i^2 = \sum_{i=1}^{N}\left(\frac{1}{N} - k_i\bar{x}\right)^2 = \sum_{i=1}^{N}\left(\frac{1}{N^2} - \frac{2}{N}k_i\bar{x} + k_i^2\bar{x}^2\right) = \frac{N}{N^2} - \frac{2\bar{x}}{N}\sum_{i=1}^{N}k_i + \bar{x}^2\sum_{i=1}^{N}k_i^2 \tag{20.165}$$

departing again from Eq. (20.160), at the expense of Newton's binomial theorem – where the initial summation was meanwhile split, and $\frac{1}{N}$, $\frac{2\bar{x}}{N}$, and $\bar{x}^2$ were factored out afterward; Eqs. (20.132) and (20.134) permit simplification of Eq. (20.165) to

$$\sum_{i=1}^{N}\omega_i^2 = \frac{1}{N} - \frac{2\bar{x}}{N}\cdot0 + \frac{\bar{x}^2}{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2} = \frac{1}{N} + \frac{\bar{x}^2}{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2}. \tag{20.166}$$
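The coefficient identities derived so far – Eqs. (20.132)–(20.134) for $k_i$, and Eqs. (20.162), (20.164), and (20.166) for $\omega_i$ – can be verified numerically on an arbitrary (assumed) set of abscissae:

```python
# Numeric verification of the k_i and omega_i identities.

xs = [1.0, 2.0, 4.0, 7.0]                            # assumed abscissae
N = len(xs)
x_bar = sum(xs) / N
Sxx = sum((x - x_bar) ** 2 for x in xs)

k = [(x - x_bar) / Sxx for x in xs]                  # Eq. (20.131)
w = [1.0 / N - ki * x_bar for ki in k]               # Eq. (20.160)

assert abs(sum(k)) < 1e-12                           # Eq. (20.132)
assert abs(sum(ki * x for ki, x in zip(k, xs)) - 1.0) < 1e-12        # Eq. (20.133)
assert abs(sum(ki ** 2 for ki in k) - 1.0 / Sxx) < 1e-12             # Eq. (20.134)

assert abs(sum(w) - 1.0) < 1e-12                     # Eq. (20.162)
assert abs(sum(wi * x for wi, x in zip(w, xs))) < 1e-12              # Eq. (20.164)
assert abs(sum(wi ** 2 for wi in w) - (1.0 / N + x_bar ** 2 / Sxx)) < 1e-12  # Eq. (20.166)
```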

Analogy between Eqs. (20.130) and (20.159) permits Eq. (20.137) be revisited as

$$E\{b_0\} = \beta_0\sum_{i=1}^{N}\omega_i + \beta_1\sum_{i=1}^{N}\omega_ix_i = \beta_0\cdot1 + \beta_1\cdot0 = \beta_0, \tag{20.167}$$

upon exchange of $k_i$ for $\omega_i$, and of $b_1$ for $b_0$ – where Eqs. (20.162) and (20.164) were meanwhile used to advantage; $b_0$ as per Eq. (20.125) is thus an unbiased estimator of $\beta_0$. By the same token, setting

$$\tilde{\beta}_0 = \sum_{i=1}^{N}d_iy_i, \tag{20.168}$$

in parallel to Eq. (20.141), leads to

$$E\{\tilde{\beta}_0\} = \beta_0\sum_{i=1}^{N}d_i + \beta_1\sum_{i=1}^{N}d_ix_i \tag{20.169}$$

that resorted to Eq. (20.142) as template, following exchange of $c_i$ and $d_i$; hence, $\tilde{\beta}_0$ will not play the role of unbiased estimator of $\beta_0$ unless

$$E\{\tilde{\beta}_0\} = \beta_0, \tag{20.170}$$

similarly to Eq. (20.143) pertaining to $\tilde{\beta}_1$ versus $\beta_1$ – thus enforcing

$$\sum_{i=1}^{N}d_i = 1 \tag{20.171}$$

and

$$\sum_{i=1}^{N}d_ix_i = 0, \tag{20.172}$$

for compatibility between Eqs. (20.169) and (20.170) – which serve as counterparts of Eqs. (20.144) and (20.145), respectively, when $\tilde{\beta}_0$ is now considered. The variance of $\tilde{\beta}_0$ should read

$$\operatorname{Var}\{\tilde{\beta}_0\} = \sigma^2\sum_{i=1}^{N}d_i^2, \tag{20.173}$$

after replacing $\tilde{\beta}_1$ by $\tilde{\beta}_0$, and $c_i$ by $d_i$, in Eq. (20.146) – so definition of $f_i$ as

$$f_i \equiv d_i - \omega_i, \tag{20.174}$$

in parallel to Eq. (20.147), leads to

$$\operatorname{Var}\{\tilde{\beta}_0\} = \sigma^2\sum_{i=1}^{N}\omega_i^2 + 2\sigma^2\sum_{i=1}^{N}\omega_if_i + \sigma^2\sum_{i=1}^{N}f_i^2, \tag{20.175}$$

similar to Eq. (20.149), as long as $k_i$ and $e_i$ are replaced by $\omega_i$ and $f_i$, respectively; insertion of Eq. (20.166) then gives rise to

$$\operatorname{Var}\{\tilde{\beta}_0\} = \sigma^2\left(\frac{1}{N} + \frac{\bar{x}^2}{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2}\right) + 2\sigma^2\sum_{i=1}^{N}\omega_if_i + \sigma^2\sum_{i=1}^{N}f_i^2. \tag{20.176}$$

After realizing that the analogue of Eq. (20.139) should take the form

$$\operatorname{Var}\{b_0\} = \sigma^2\sum_{i=1}^{N}\omega_i^2 \tag{20.177}$$

once $k_i$ and $\omega_i$ are swapped, one immediately attains

$$\operatorname{Var}\{b_0\} = \sigma^2\left(\frac{1}{N} + \frac{\bar{x}^2}{\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2}\right) \tag{20.178}$$

in view of Eq. (20.166); hence, Eq. (20.176) may be reformulated to

$$\operatorname{Var}\{\tilde{\beta}_0\} = \operatorname{Var}\{b_0\} + 2\sigma^2\sum_{i=1}^{N}\omega_if_i + \sigma^2\sum_{i=1}^{N}f_i^2, \tag{20.179}$$

after taking Eq. (20.178) on board. Note, at this stage, that Eq. (20.151) will look like

$$\sum_{i=1}^{N}\omega_if_i = \sum_{i=1}^{N}\omega_id_i - \sum_{i=1}^{N}\omega_i^2 \tag{20.180}$$

as soon as $k_i$, $e_i$, and $c_i$ are replaced by $\omega_i$, $f_i$, and $d_i$, respectively; the first summation in the right-hand side may be rewritten as

$$\sum_{i=1}^{N}\omega_id_i = \sum_{i=1}^{N}\left(\frac{1}{N} - k_i\bar{x}\right)d_i = \sum_{i=1}^{N}\frac{d_i}{N} - \sum_{i=1}^{N}k_i\bar{x}d_i = \frac{1}{N}\sum_{i=1}^{N}d_i - \bar{x}\sum_{i=1}^{N}k_id_i \tag{20.181}$$

as per Eq. (20.160), complemented by splitting of summation and factoring out of constants afterward. Owing to Eqs. (20.131) and (20.171), one may rewrite Eq. (20.181) as

$$\sum_{i=1}^{N}\omega_id_i = \frac{1}{N}\cdot1 - \bar{x}\sum_{i=1}^{N}\frac{x_i-\bar{x}}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2}d_i = \frac{1}{N} - \bar{x}\frac{\sum_{i=1}^{N}d_ix_i}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} + \bar{x}^2\frac{\sum_{i=1}^{N}d_i}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} \tag{20.182}$$

– where the distributive property was applied, the outer summation split, and $\bar{x}$ factored in; recalling Eqs. (20.171) and (20.172), one can simplify Eq. (20.182) to

$$\sum_{i=1}^{N}\omega_id_i = \frac{1}{N} - \bar{x}\cdot0 + \bar{x}^2\frac{1}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2} = \frac{1}{N} + \frac{\bar{x}^2}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2}. \tag{20.183}$$

Insertion of Eqs. (20.166) and (20.183) transforms Eq. (20.180) to

$$\sum_{i=1}^{N}\omega_if_i = \left(\frac{1}{N} + \frac{\bar{x}^2}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2}\right) - \left(\frac{1}{N} + \frac{\bar{x}^2}{\sum_{j=1}^{N}\left(x_j-\bar{x}\right)^2}\right) = 0, \tag{20.184}$$

which is a result similar to that labeled as Eq. (20.154); therefore, Eq. (20.179) degenerates to

$$\operatorname{Var}\{\tilde{\beta}_0\} = \operatorname{Var}\{b_0\} + \sigma^2\sum_{i=1}^{N}f_i^2 \tag{20.185}$$

that is identical, in functional form, to Eq. (20.155) – meaning that

$$f_i = 0, \tag{20.186}$$

en lieu of Eq. (20.157), will lead to a critical value of $\operatorname{Var}\{\tilde{\beta}_0\}$, and Eq. (20.186) will appear as the underlying necessary condition for a minimum. The sufficient condition associated to such a critical value will then ensue from

$$\left.\frac{\partial^2\operatorname{Var}\{\tilde{\beta}_0\}}{\partial f_i^2}\right|_{f_i=0} = \left.\frac{\partial}{\partial f_i}\left(\frac{\partial}{\partial f_i}\left(\operatorname{Var}\{b_0\} + \sigma^2\sum_{j=1}^{N}f_j^2\right)\right)\right|_{f_i=0} = \left.\frac{\partial}{\partial f_i}\left(2\sigma^2f_i\right)\right|_{f_i=0} = 2\sigma^2 > 0, \tag{20.187}$$

based on Eq. (20.185) and following the steps sketched in Eq. (20.158); the positive value of the second derivative just found unfolds a convex curve, and thus confirms the suspected minimum – so $b_0$ is indeed a BLUE for $\beta_0$. One will finally address the issue of actual calculation of the best estimates for the parameters in a linear model; as proven before, the necessary and sufficient condition for their BLUE's is conveyed by Eqs. (20.2), (20.10), and (20.26), which reduces to

$$\frac{\partial}{\partial\beta_j}\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^2 = 0;\quad j = 0, 1 \tag{20.188}$$

when a single independent variable, $x_i$, and only two parameters, $\beta_0$ and $\beta_1$, are under scrutiny – with $\hat{y}_i$ denoting the estimate of the dependent variable obtained via said model for the given $x_i$. Application of the classical rules of differentiation to a sum, a power, and a composite function transforms Eq. (20.188) to

$$\sum_{i=1}^{N}2\left(y_i-\hat{y}_i\right)\left(-1\right)\frac{\partial\hat{y}_i}{\partial\beta_j} = 0 \tag{20.189}$$

or, after dividing both sides by $-2$ and recalling the form of the underlying linear model,

$$\sum_{i=1}^{N}\left(y_i-\left(\beta_0+\beta_1x_i\right)\right)\frac{\partial}{\partial\beta_j}\left(\beta_0+\beta_1x_i\right) = 0; \tag{20.190}$$

the linearity of the differential operator supports conversion of Eq. (20.190) to

$$\sum_{i=1}^{N}\left(y_i-\beta_0-\beta_1x_i\right) = 0 \tag{20.191}$$

in the case of $\partial/\partial\beta_0$ (i.e. for $j = 0$), and similarly to

$$\sum_{i=1}^{N}x_i\left(y_i-\beta_0-\beta_1x_i\right) = 0 \tag{20.192}$$

in the case of $\partial/\partial\beta_1$ (i.e. for $j = 1$). Splitting of the summation in Eq. (20.191) produces

$$\sum_{i=1}^{N}y_i - \sum_{i=1}^{N}\beta_0 - \sum_{i=1}^{N}\beta_1x_i = 0, \tag{20.193}$$

where constancy of $\beta_0$ and $\beta_1$ turns Eq. (20.193) to

$$\sum_{i=1}^{N}y_i = \beta_0\sum_{i=1}^{N}1 + \beta_1\sum_{i=1}^{N}x_i \tag{20.194}$$

– along with isolation of $\sum_{i=1}^{N}y_i$; Eq. (20.194) may then look like

$$\beta_0N + \beta_1\sum_{i=1}^{N}x_i = \sum_{i=1}^{N}y_i. \tag{20.195}$$

By the same token, Eq. (20.192) may be reformulated to

$$\sum_{i=1}^{N}x_iy_i - \sum_{i=1}^{N}\beta_0x_i - \sum_{i=1}^{N}\beta_1x_i^2 = 0 \tag{20.196}$$

in view of the distributive property of multiplication of scalars, or else

$$\sum_{i=1}^{N}x_iy_i = \sum_{i=1}^{N}\beta_0x_i + \sum_{i=1}^{N}\beta_1x_i^2 \tag{20.197}$$

upon isolation of the summation containing $y_i$ in its terms; Eq. (20.197) becomes

$$\beta_0\sum_{i=1}^{N}x_i + \beta_1\sum_{i=1}^{N}x_i^2 = \sum_{i=1}^{N}x_iy_i \tag{20.198}$$

after factoring $\beta_0$ and $\beta_1$ out – for being independent of $i$ as counting variable in the corresponding summations. Equations (20.195) and (20.198) may, to advantage, be rewritten in matrix form as

$$\begin{bmatrix}N & \displaystyle\sum_{i=1}^{N}x_i\\[10pt] \displaystyle\sum_{i=1}^{N}x_i & \displaystyle\sum_{i=1}^{N}x_i^2\end{bmatrix}\begin{bmatrix}\beta_0\\[4pt] \beta_1\end{bmatrix} = \begin{bmatrix}\displaystyle\sum_{i=1}^{N}y_i\\[10pt] \displaystyle\sum_{i=1}^{N}x_iy_i\end{bmatrix}, \tag{20.199}$$

with $\beta_0$ and $\beta_1$ serving as unknowns – concomitantly with $N$, $\sum_{i=1}^{N}x_i$, and $\sum_{i=1}^{N}x_i^2$ serving as coefficients thereof, and $\sum_{i=1}^{N}y_i$ and $\sum_{i=1}^{N}x_iy_i$ serving as independent terms; application of Cramer's rule is now in order, according to

$$b_0 \equiv \beta_0\big|_{\partial S/\partial\beta=0} = \frac{\begin{vmatrix}\displaystyle\sum_{i=1}^{N}y_i & \displaystyle\sum_{i=1}^{N}x_i\\[10pt] \displaystyle\sum_{i=1}^{N}x_iy_i & \displaystyle\sum_{i=1}^{N}x_i^2\end{vmatrix}}{\begin{vmatrix}N & \displaystyle\sum_{i=1}^{N}x_i\\[10pt] \displaystyle\sum_{i=1}^{N}x_i & \displaystyle\sum_{i=1}^{N}x_i^2\end{vmatrix}} \tag{20.200}$$

that unfolds the BLUE $b_0$ of parameter $\beta_0$, while

$$b_1 \equiv \beta_1\big|_{\partial S/\partial\beta=0} = \frac{\begin{vmatrix}N & \displaystyle\sum_{i=1}^{N}y_i\\[10pt] \displaystyle\sum_{i=1}^{N}x_i & \displaystyle\sum_{i=1}^{N}x_iy_i\end{vmatrix}}{\begin{vmatrix}N & \displaystyle\sum_{i=1}^{N}x_i\\[10pt] \displaystyle\sum_{i=1}^{N}x_i & \displaystyle\sum_{i=1}^{N}x_i^2\end{vmatrix}} \tag{20.201}$$

gives rise to the BLUE $b_1$ of parameter $\beta_1$ – both in agreement with Gauss and Markov's theorem (as discussed previously), and consistent with Eq. (20.26). In view of the definition of second-order determinants, Eqs. (20.200) and (20.201) may appear as

$$b_0 = \frac{S_{xxy}}{S_{xx}} \tag{20.202}$$

and

$$b_1 = \frac{S_{xy}}{S_{xx}}, \tag{20.203}$$

respectively – provided that auxiliary variables are defined as

$$S_{xxy} \equiv \sum_{i=1}^{N}x_i^2\sum_{i=1}^{N}y_i - \sum_{i=1}^{N}x_i\sum_{i=1}^{N}x_iy_i, \tag{20.204}$$

as well as

$$S_{xy} \equiv N\sum_{i=1}^{N}x_iy_i - \sum_{i=1}^{N}x_i\sum_{i=1}^{N}y_i \tag{20.205}$$

and

$$S_{xx} \equiv N\sum_{i=1}^{N}x_i^2 - \left(\sum_{i=1}^{N}x_i\right)^2. \tag{20.206}$$
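Cramer's rule as applied in Eqs. (20.200) and (20.201), together with the auxiliary variables of Eqs. (20.204)–(20.206), maps onto a few lines of code; the dataset below is invented for illustration:

```python
# Normal equations, Eq. (20.199), solved by Cramer's rule; the numerators and
# denominator are exactly S_xxy, S_xy, and S_xx of Eqs. (20.204)-(20.206).

xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.2, 1.9, 3.2, 3.8]                            # assumed data
N = len(xs)

sx = sum(xs)
sy = sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

det = N * sxx - sx * sx                              # S_xx, Eq. (20.206)
b0 = (sy * sxx - sx * sxy) / det                     # Eq. (20.200); numerator = S_xxy
b1 = (N * sxy - sx * sy) / det                       # Eq. (20.201); numerator = S_xy

# the solution must satisfy both normal equations, Eqs. (20.195) and (20.198)
assert abs(b0 * N + b1 * sx - sy) < 1e-9
assert abs(b0 * sx + b1 * sxx - sxy) < 1e-9
```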

After factoring out $N^2$, Eq. (20.206) becomes

$$S_{xx} = N^2\left(\frac{\sum_{i=1}^{N}x_i^2}{N} - \left(\frac{\sum_{i=1}^{N}x_i}{N}\right)^2\right), \tag{20.207}$$

which – in view of Eq. (20.127) – may instead appear as

$$S_{xx} = N^2\left(\frac{\sum_{i=1}^{N}x_i^2}{N} - \bar{x}^2\right); \tag{20.208}$$

one may now subtract $\bar{x}\bar{x} \equiv \bar{x}^2$ and add $\bar{x}^2$ inside the parenthesis to get

$$S_{xx} = N^2\left(\frac{\sum_{i=1}^{N}x_i^2}{N} - \bar{x}^2 - \bar{x}\bar{x} + \bar{x}^2\right) \tag{20.209}$$

or, equivalently,

$$S_{xx} = N^2\left(\frac{\sum_{i=1}^{N}x_i^2}{N} - 2\bar{x}\bar{x} + \bar{x}^2\right). \tag{20.210}$$

Upon insertion of Eq. (20.127), once more, one obtains

$$S_{xx} = N^2\left(\frac{\sum_{i=1}^{N}x_i^2}{N} - 2\bar{x}\frac{\sum_{i=1}^{N}x_i}{N} + \bar{x}^2\frac{\sum_{i=1}^{N}1}{N}\right) \tag{20.211}$$

from Eq. (20.210), where advantage was also taken of

$$\sum_{i=1}^{N}1 = N; \tag{20.212}$$

the summation sign may now be taken out as

$$S_{xx} = N^2\sum_{i=1}^{N}\left(\frac{x_i^2}{N} - 2\bar{x}\frac{x_i}{N} + \bar{x}^2\frac{1}{N}\right), \tag{20.213}$$

thus supporting a more condensed notation. Once $N$ is factored in, Eq. (20.213) becomes

$$S_{xx} = N\sum_{i=1}^{N}\left(x_i^2 - 2\bar{x}x_i + \bar{x}^2\right) \tag{20.214}$$

– where application of Newton's binomial theorem justifies further condensation to

$$S_{xx} = N\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2. \tag{20.215}$$

On the other hand, Eq. (20.205) may be redone as

$$S_{xy} = N\sum_{i=1}^{N}x_iy_i - \sum_{i=1}^{N}y_i\sum_{i=1}^{N}x_i - \sum_{i=1}^{N}x_i\sum_{i=1}^{N}y_i + \sum_{i=1}^{N}x_i\sum_{i=1}^{N}y_i \tag{20.216}$$

upon addition and subtraction of $\sum_{i=1}^{N}x_i\sum_{i=1}^{N}y_i$, and swapping of factors $\sum_{i=1}^{N}x_i$ and $\sum_{i=1}^{N}y_i$ – where $N$ may be factored out, and the last term further multiplied and divided by $N$, to get

$$S_{xy} = N\left(\sum_{i=1}^{N}x_iy_i - \frac{\sum_{i=1}^{N}y_i}{N}\sum_{i=1}^{N}x_i - \frac{\sum_{i=1}^{N}x_i}{N}\sum_{i=1}^{N}y_i + \frac{\sum_{i=1}^{N}x_i}{N}\frac{\sum_{i=1}^{N}y_i}{N}\sum_{i=1}^{N}1\right). \tag{20.217}$$

Recalling Eqs. (20.127), (20.128), and (20.212), one may redo Eq. (20.217) to

$$S_{xy} = N\left(\sum_{i=1}^{N}x_iy_i - \bar{y}\sum_{i=1}^{N}x_i - \bar{x}\sum_{i=1}^{N}y_i + \bar{x}\bar{y}\sum_{i=1}^{N}1\right) \tag{20.218}$$

or, upon application of the distributive property of multiplication of scalars reversewise,

$$S_{xy} = N\sum_{i=1}^{N}\left(x_iy_i - \bar{y}x_i - \bar{x}y_i + \bar{x}\bar{y}\right) \tag{20.219}$$

since $\bar{x}$ and $\bar{y}$ are $i$-independent – where the content of the parenthesis may be condensed to

$$S_{xy} = N\sum_{i=1}^{N}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right). \tag{20.220}$$
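The equivalence between the raw-sum definitions, Eqs. (20.205) and (20.206), and the centered forms just derived, Eqs. (20.215) and (20.220), can be confirmed numerically (data are assumed for illustration):

```python
# Raw-sum vs. centered forms of S_xx and S_xy.

xs = [0.5, 1.5, 2.0, 4.0]
ys = [1.0, 2.2, 2.1, 5.3]                            # assumed data
N = len(xs)
x_bar = sum(xs) / N
y_bar = sum(ys) / N

Sxx_raw = N * sum(x * x for x in xs) - sum(xs) ** 2              # Eq. (20.206)
Sxy_raw = N * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)  # Eq. (20.205)

Sxx_centered = N * sum((x - x_bar) ** 2 for x in xs)             # Eq. (20.215)
Sxy_centered = N * sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))  # Eq. (20.220)

assert abs(Sxx_raw - Sxx_centered) < 1e-9
assert abs(Sxy_raw - Sxy_centered) < 1e-9
```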

One may finally revisit Eq. (20.204) as

$$S_{xxy} = N\left(\frac{\sum_{i=1}^{N}y_i}{N}\sum_{i=1}^{N}x_i^2 - \frac{\sum_{i=1}^{N}x_i}{N}\sum_{i=1}^{N}x_iy_i\right), \tag{20.221}$$

after multiplication and division of both terms by $N$ – where definition of $\bar{x}$ and $\bar{y}$ as per Eqs. (20.127) and (20.128), respectively, permits notation be simplified to

$$S_{xxy} = N\bar{y}\sum_{i=1}^{N}x_i^2 - N\bar{x}\sum_{i=1}^{N}x_iy_i; \tag{20.222}$$

$N$ may once again be factored out for convenience, according to

$$S_{xxy} = N\left(\bar{y}\sum_{i=1}^{N}x_i^2 - \bar{x}\sum_{i=1}^{N}x_iy_i\right). \tag{20.223}$$

Addition and subtraction of $2N\bar{x}^2\bar{y}$ and $N\bar{x}^2\bar{y}$ transforms Eq. (20.223) to

$$S_{xxy} = N\left(\bar{y}\sum_{i=1}^{N}x_i^2 - 2N\bar{x}^2\bar{y} + N\bar{x}^2\bar{y} - \bar{x}\sum_{i=1}^{N}x_iy_i + 2N\bar{x}^2\bar{y} - N\bar{x}^2\bar{y}\right), \tag{20.224}$$

whereas splitting of $2N\bar{x}^2\bar{y}$ as $N\bar{x}^2\bar{y} + N\bar{x}\,\bar{x}\bar{y}$, and expressing $2N\bar{x}^2\bar{y}$ as $2N\bar{x}\,\bar{x}\bar{y}$, unfold

$$S_{xxy} = N\left(\bar{y}\sum_{i=1}^{N}x_i^2 - 2N\bar{x}\,\bar{x}\bar{y} + N\bar{x}^2\bar{y} - \bar{x}\sum_{i=1}^{N}x_iy_i + N\bar{x}\,\bar{x}\bar{y} + N\bar{x}^2\bar{y} - N\bar{x}^2\bar{y}\right); \tag{20.225}$$

in view again of Eqs. (20.127) and (20.128), one gets

$$S_{xxy} = N\left(\bar{y}\sum_{i=1}^{N}x_i^2 - 2N\bar{x}\bar{y}\frac{\sum_{i=1}^{N}x_i}{N} + N\bar{x}^2\bar{y} - \bar{x}\sum_{i=1}^{N}x_iy_i + N\bar{x}\bar{y}\frac{\sum_{i=1}^{N}x_i}{N} + N\bar{x}^2\frac{\sum_{i=1}^{N}y_i}{N} - N\bar{x}^2\bar{y}\right), \tag{20.226}$$

where cancellation of $N$ between numerator and denominator (wherever appropriate) gives rise to

$$S_{xxy} = N\left(\bar{y}\sum_{i=1}^{N}x_i^2 - 2\bar{x}\bar{y}\sum_{i=1}^{N}x_i + N\bar{x}^2\bar{y} - \bar{x}\sum_{i=1}^{N}x_iy_i + \bar{x}\bar{y}\sum_{i=1}^{N}x_i + \bar{x}^2\sum_{i=1}^{N}y_i - N\bar{x}^2\bar{y}\right). \tag{20.227}$$

One may now factor $\bar{y}$ out in the first three terms, and $-\bar{x}$ out in the remainder – and then split the parenthesis into two simpler algebraic combinations, according to

$$S_{xxy} = N\left(\bar{y}\left(\sum_{i=1}^{N}x_i^2 - 2\bar{x}\sum_{i=1}^{N}x_i + N\bar{x}^2\right) - \bar{x}\left(\sum_{i=1}^{N}x_iy_i - \bar{y}\sum_{i=1}^{N}x_i - \bar{x}\sum_{i=1}^{N}y_i + N\bar{x}\bar{y}\right)\right), \tag{20.228}$$

while Eq. (20.212) may be recalled to write

$$S_{xxy} = N\left(\bar{y}\left(\sum_{i=1}^{N}x_i^2 - 2\bar{x}\sum_{i=1}^{N}x_i + \bar{x}^2\sum_{i=1}^{N}1\right) - \bar{x}\left(\sum_{i=1}^{N}x_iy_i - \bar{y}\sum_{i=1}^{N}x_i - \bar{x}\sum_{i=1}^{N}y_i + \bar{x}\bar{y}\sum_{i=1}^{N}1\right)\right); \tag{20.229}$$

once the summation sign is moved out of both parentheses, Eq. (20.229) produces

$$S_{xxy} = N\left(\bar{y}\sum_{i=1}^{N}\left(x_i^2 - 2\bar{x}x_i + \bar{x}^2\right) - \bar{x}\sum_{i=1}^{N}\left(x_iy_i - \bar{y}x_i - \bar{x}y_i + \bar{x}\bar{y}\right)\right), \tag{20.230}$$

again because $\bar{x}$ and $\bar{y}$ play the role of constants with regard to $i$ in the summations – where Newton's binomial expansion may be taken in reverse to simplify the first summation, and the expansion of $\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)$ may be applied, also reversewise, to condense the second summation, as

$$S_{xxy} = \bar{y}N\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2 - \bar{x}N\sum_{i=1}^{N}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right). \tag{20.231}$$

Recalling Eqs. (20.215) and (20.220), one will be able to reformulate Eq. (20.231) as

$$S_{xxy} = \bar{y}S_{xx} - \bar{x}S_{xy}, \tag{20.232}$$

so division of both sides by $S_{xx}$, followed by isolation of $\bar{y}$, unfolds

$$\bar{y} = \frac{S_{xxy}}{S_{xx}} + \frac{S_{xy}}{S_{xx}}\bar{x}; \tag{20.233}$$

combination with Eqs. (20.202) and (20.203) allows final simplification of Eq. (20.233) to

$$\bar{y} = b_0 + b_1\bar{x}, \tag{20.234}$$

which degenerates to Eq. (20.125) once solved for $b_0$. Therefore, the best fit line contains the point with coordinates $\left(\bar{x},\bar{y}\right)$ – thus justifying conversion of Eq. (20.99) to Eq. (20.100); in other words, the best fit line passes through the center of gravity of the $x_i$'s and $y_i$'s, which consubstantiate the experimental data. Once in possession of Eqs. (20.215) and (20.220), one may also transform Eq. (20.203) to

$$b_1 = \frac{N\sum_{i=1}^{N}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)}{N\sum_{i=1}^{N}\left(x_i-\bar{x}\right)^2} \tag{20.235}$$

– where $N$, dropping from both numerator and denominator, will retrieve Eq. (20.126).
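The center-of-gravity property conveyed by Eq. (20.234) is immediate to confirm in code – evaluating the fitted line at $\bar{x}$ returns $\bar{y}$ (dataset assumed for illustration):

```python
# The best-fit line passes through (x_bar, y_bar), Eq. (20.234).

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.9, 2.2, 2.8, 4.1, 5.0]                       # assumed data
N = len(xs)
x_bar = sum(xs) / N
y_bar = sum(ys) / N

b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
     / sum((x - x_bar) ** 2 for x in xs)             # Eq. (20.126)
b0 = y_bar - b1 * x_bar                              # Eq. (20.125)

# evaluating the fitted line at x_bar returns y_bar, per Eq. (20.234)
assert abs((b0 + b1 * x_bar) - y_bar) < 1e-12
```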

20.5 Prediction Inference

To obtain statistically supported estimates of the fitted response described by Eq. (20.38), or Eq. (20.39) for that matter, one may follow a strategy similar to those yielding (marginal) inference intervals for model parameters. In fact, one should again depart from Eqs. (20.63) and (20.64) as mean and variance, respectively, of the original, normally distributed vector, $y$, of dependent variables; however, $\hat{y}$ is now to be used en lieu of $y$, as the result $Y$ of a linear transformation of vector $X$ via $A$ and $B$ as independent and linear coefficients, following Eq. (18.216) – with $A \equiv 0_{N\times1}$, $B \equiv H$, and $X \equiv y$. Under these circumstances, Eqs. (18.234) and (18.235) prompt

$$\mu_{\hat{y}} \equiv E\{\hat{y}\} = E\{Hy\} = 0_{N\times1} + HE\{y\} = H\mu \tag{20.236}$$

and

$$\Sigma_{\hat{y}} \equiv \operatorname{Var}\{\hat{y}\} = \operatorname{Var}\{Hy\} = H\operatorname{Var}\{y\}H^T = H\Sigma H^T, \tag{20.237}$$

respectively – with the aid again of Eqs. (20.63) and (20.64); insertion of Eqs. (20.40) and (20.63) supports further transformation of Eq. (20.236) to

$$\mu_{\hat{y}} = X\left(X^TX\right)^{-1}X^TX\beta = X\left(\left(X^TX\right)^{-1}X^TX\right)\beta = XI_M\beta, \tag{20.238}$$

also at the expense of the associative property of multiplication of matrices and the definition of inverse matrix – which finally becomes

$$\mu_{\hat{y}} = X\beta, \tag{20.239}$$

as per the intrinsic property of the identity matrix relative to matrix multiplication. By the same token, one may combine Eqs. (20.40), (20.64), and (20.237) to generate

$$\Sigma_{\hat{y}} = X\left(X^TX\right)^{-1}X^T\left(\sigma^2I_N\right)\left(X\left(X^TX\right)^{-1}X^T\right)^T = \sigma^2X\left(X^TX\right)^{-1}X^TI_N\left(X^T\right)^T\left(\left(X^TX\right)^{-1}\right)^TX^T = \sigma^2X\left(X^TX\right)^{-1}X^TX\left(\left(X^TX\right)^T\right)^{-1}X^T = \sigma^2XI_M\left(X^TX\right)^{-1}X^T \tag{20.240}$$

with the aid of the rules of transposition of a product of matrices and of a product of a matrix by its inverse, as well as exchangeability of transpose and inverse signs, commutativity of multiplication of scalar by matrix, and associativity of multiplication of matrices; Eq. (20.240) eventually yields

$$\Sigma_{\hat{y}} = \sigma^2X\left(X^TX\right)^{-1}X^T. \tag{20.241}$$
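Eq. (20.241) hinges on $H = X\left(X^TX\right)^{-1}X^T$ being symmetric and idempotent, so that $H\left(\sigma^2I_N\right)H^T = \sigma^2H$; both properties can be checked numerically with an assumed $3\times2$ design matrix:

```python
# Symmetry and idempotency of the hat matrix H = X (X^T X)^{-1} X^T,
# verified on a small assumed design; pure Python helpers.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

X = [[1.0, 1.0], [1.0, 2.0], [1.0, 4.0]]             # assumed design matrix
H = matmul(matmul(X, inv2(matmul(transpose(X), X))), transpose(X))

HH = matmul(H, H)
for i in range(3):
    for j in range(3):
        assert abs(H[i][j] - H[j][i]) < 1e-12        # symmetry: H^T = H
        assert abs(HH[i][j] - H[i][j]) < 1e-12       # idempotency: HH = H
```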

Furthermore, $b$ serving as BLUE for $\beta$ permits Eq. (20.239) be rewritten as

$$\mu_{\hat{y}} = Xb, \tag{20.242}$$

in view of Eq. (20.111). For each experimental point – with (independent) coordinates given by the ($1\times M$) vector $x_i$ – the expected value reads

$$\mu_{\hat{y}_i} = x_ib \tag{20.243}$$

stemming from Eq. (20.242), once in possession of

$$X \equiv \begin{bmatrix}x_1\\x_2\\\vdots\\x_N\end{bmatrix} \tag{20.244}$$

together with

$$x_i \equiv \left[1\;x_{1,i}\;\cdots\;x_{M-1,i}\right];\quad i = 1, 2, \ldots, N \tag{20.245}$$

– obviously consistent with Eq. (20.4); by the same token, its variance should reflect Eq. (20.241) as

$$\sigma_{\hat{y}_i}^2 = \sigma^2x_i\left(X^TX\right)^{-1}x_i^T. \tag{20.246}$$

The associated residual variable $\hat{z}_i$, defined as

$$\hat{z}_i \equiv \frac{\hat{y}_i - x_ib}{\sigma\sqrt{x_i\left(X^TX\right)^{-1}x_i^T}} \sim N\left\{\hat{Z}_i;0,1\right\} \tag{20.247}$$

and stemming from Eqs. (20.243) and (20.246), should accordingly follow a standard normal distribution, see Eqs. (18.253) and (18.258); its more frequent form has the true population variance, $\sigma^2$, replaced by its sample counterpart, $s^2$ – referring to model predictions and abiding by Eq. (20.75), so one should resort to

$$\hat{z}_i \equiv \frac{\hat{y}_i - x_ib}{s\sqrt{x_i\left(X^TX\right)^{-1}x_i^T}} \sim t\left\{\hat{Z}_i;N-M;\frac{\alpha}{2}\right\} \tag{20.248}$$

that fully agrees with Eq. (18.495). Following isolation of $\hat{y}_i$, Eq. (20.248) will look like

$$\hat{y}_i = x_ib \pm s\sqrt{x_i\left(X^TX\right)^{-1}x_i^T}\;t\left\{N-M;\frac{\alpha}{2}\right\}, \tag{20.249}$$

where $t\{N-M;\alpha/2\}$ denotes the upper $\alpha/2$ quantile of Student's $t$-statistic characterized by $N-M$ degrees of freedom (see Table 18.3); Eq. (20.249) represents the statistical confidence associated with the predicted response for each individual observation of the original dataset, so Student's $t$-statistic appears fully appropriate. If a single new observation, $y_{N+1}$, were instead under scrutiny – which was not included in the original dataset used to estimate parameters – then the associated variance would be given by $\tilde{\sigma}^2 \equiv \operatorname{Var}\{y_{N+1} - \hat{y}_{N+1}\} = \operatorname{Var}\{y_{N+1}\} + \operatorname{Var}\{\hat{y}_{N+1}\}$, or else $\tilde{\sigma}^2 = \sigma^2 + \operatorname{Var}\{\hat{y}_{N+1}\}$; consequently, one should resort to

$$\tilde{s}^2 = s^2 + s_{\hat{y}_{N+1}}^2 > s^2 \tag{20.250}$$

in attempts to estimate the corresponding population variance – meaning that $\tilde{s}^2$ would then be used instead of $s^2$ in Eq. (20.249), thus extending its validity beyond the departing dataset. A similar problem arises when attempting to estimate the joint inference interval associated to any (model) prediction $\hat{y}$, calculated via the linear model conveyed by Eq. (20.3) and taking the whole dataset into account for parameter estimation – as long as it is described by a generic abscissa lying within the range of the said experimental dataset; with estimates, entailed by the ($M\times1$) column vector $b$ as per Eq. (20.9), of the true parameters materialized in the ($M\times1$) column vector $\beta$ after Eq. (20.6), as a function of (independent) coordinates 1, $x_1$, …, $x_{M-1}$ (or vector $x$, for that matter). In this case, $x_i$ in Eq. (20.245) should be replaced by a generic $x$, defined as

$$x \equiv \left[1\;x_1\;\cdots\;x_{M-1}\right], \tag{20.251}$$

thus giving rise to

$$\mu_{\hat{y}} = xb \tag{20.252}$$

using Eq. (20.243) as template – and Eq. (20.246) should likewise be replaced by

$$\sigma_{\hat{y}}^2 = \sigma^2x\left(X^TX\right)^{-1}x^T. \tag{20.253}$$

In the case of simple linear regression analysis, the analogue of Eq. (20.249) will accordingly read

$$\hat{y} = xb \pm s\sqrt{x\left(X^TX\right)^{-1}x^T}\,\sqrt{M\,F\{M,N-M;\alpha\}}, \tag{20.254}$$

which supports a continuous inference band – in much the same way a (continuous) joint inference interval for all parameter estimates considered together was previously produced, leading to Eq. (20.79); discrete marginal inference intervals for said parameters followed instead Eq. (20.85), which resembles Eq. (20.249) in functional form. The inference band is typically wider than the inference interval for any single (experimental) point – because a joint inference interval for all points of the dataset is indeed brought into play.
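For simple linear regression, the leverage term $x\left(X^TX\right)^{-1}x^T$ appearing in Eqs. (20.249) and (20.254) collapses to $1/N + \left(x-\bar{x}\right)^2/\sum_i\left(x_i-\bar{x}\right)^2$, so the band is narrowest at $\bar{x}$ and widens toward the edges of the data. The sketch below computes the corresponding standard error; the dataset is assumed, and the $t$ or $F$ quantile would still have to be looked up to finish the band:

```python
# Standard error of the fitted response in simple linear regression.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 1.9, 3.2, 3.9, 5.1]                       # assumed data
N, M = len(xs), 2
x_bar = sum(xs) / N
y_bar = sum(ys) / N
Sxx = sum((x - x_bar) ** 2 for x in xs)
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / Sxx
b0 = y_bar - b1 * x_bar

# sample variance of the residuals, with N - M degrees of freedom
s2 = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys)) / (N - M)

def pred_se(x):
    """Standard error of the fitted response at abscissa x."""
    return (s2 * (1.0 / N + (x - x_bar) ** 2 / Sxx)) ** 0.5

# the half-width of the inference band is proportional to pred_se,
# hence minimal at the center of gravity of the data
assert pred_se(x_bar) < pred_se(min(xs)) and pred_se(x_bar) < pred_se(max(xs))
```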

20.6 Multivariate Correction

The fact that the probability of occurrence of a set of events considered independently differs from the probability of all those events taking place simultaneously has been referred to above – namely, in attempts to calculate a joint inference interval for parameter estimates, see Eq. (20.79), or an inference band for model predictions, see Eq. (20.254); a more detailed justification of the multivariate procedure is now in order.


In this regard, denote as $A_1$ the event that the inference interval of parameter $\beta_1$ as per Eq. (20.79) does not cover $b_1$ at the $\alpha$ level of significance, i.e. $P\{A_1\} = \alpha$; and likewise denote as $A_2$ the event that the inference interval of parameter $\beta_2$, also abiding by Eq. (20.79), does not cover $b_2$, also at the $\alpha$ level of significance, i.e. $P\{A_2\} = \alpha$. The probability that both estimates fall simultaneously within the corresponding inference intervals will then be given by $P\{\bar{A}_1\bar{A}_2\}$, according to

$$P\{\bar{A}_1\bar{A}_2\} = 1 - \left(P\{A_1\} + P\{A_2\}\right) + P\{A_1A_2\} \tag{20.255}$$

– where the overhead bar refers to the complementary event, i.e. $P\{\bar{A}_1\} = 1 - P\{A_1\}$ and $P\{\bar{A}_2\} = 1 - P\{A_2\}$. Given that $P\{A_1A_2\} \geq 0$ accounts for some putative overlap, one readily concludes that

$$P\{\bar{A}_1\bar{A}_2\} \geq 1 - P\{A_1\} - P\{A_2\} = 1 - 2\alpha, \tag{20.256}$$

based on Eq. (20.255) and the hypothesized $P\{A_1\} = P\{A_2\} = \alpha$; this reasoning may be extended to the whole set of $M$ parameters, thus giving rise to

$$P\{\bar{A}_1\bar{A}_2\cdots\bar{A}_M\} \geq 1 - P\{A_1\} - P\{A_2\} - \cdots - P\{A_M\} = 1 - M\alpha. \tag{20.257}$$

Since prediction via a (linear) model is at stake here, the decrease in amplitude of the significance level associated with $N$ experimental observations, fitted by a model containing $M$ parameters, should be taken into account – meaning that Eq. (20.249) might actually be replaced by

$$\hat{y}_i = x_ib \pm s\sqrt{x_i\left(X^TX\right)^{-1}x_i^T}\;t\left\{N-M;\frac{\alpha}{2M}\right\}, \tag{20.258}$$

in attempts to obtain the $1-\alpha$ inference band encompassing $M$ parameters considered as an ensemble. Such a widening of inference intervals was originally proposed by Carlo E. Bonferroni – an Italian mathematician of the middle twentieth century – and Eq. (20.257) is indeed known as Bonferroni's theorem. Since theoretical prediction(s) rather than new experimental information is under scrutiny, the number of degrees of freedom of the accompanying $t$-statistic is to be kept (i.e. equal to $N-M$). Equation (20.258) provides a conservative prediction of combined inference intervals, associated with the inequality sign in Eq. (20.257); however, their width becomes rapidly larger as $M$ grows (i.e. the significance level declines rapidly with $M$) – so said interval will hardly be useful in most situations of practical interest. To circumvent this limitation – found in attempts to estimate inference intervals for the model prediction at a given experimental point, as well as inference intervals for the parameters themselves and inference bands for any interpolated estimate by the model – one may resort to the approach devised in 1929 by Holbrook Working (an American economics statistician) and Harold Hotelling (an American mathematical statistician); it departs from an $F$-statistic, and provides a more accurate justification for the form of Eq. (20.254), as well as of Eq. (20.79). This method of simultaneous estimation in linear models considers all possible values of the independent variables, and thus resembles the method developed by American statistician Henry Scheffé pertaining to all possible contrasts; it outperforms Bonferroni's correction at a large number of degrees of freedom, yet both methods are quite general, and suitable for both multivariate estimation of parameters of, and predictions by, (linear) models.
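Bonferroni's bound, Eq. (20.257), can be illustrated by simulation: if each of $M$ marginal intervals is run at level $\alpha/M$, the joint non-coverage probability stays at or below $\alpha$. The Monte Carlo sketch below assumes independent events with invented probabilities:

```python
# Monte Carlo illustration of Bonferroni's bound, Eq. (20.257).
import random

random.seed(0)
M, alpha, trials = 5, 0.10, 20000
miss_any = 0
for _ in range(trials):
    # each interval misses independently with probability alpha/M
    if any(random.random() < alpha / M for _ in range(M)):
        miss_any += 1

joint_miss = miss_any / trials
# P{at least one miss} <= M * (alpha/M) = alpha, per Eq. (20.257)
assert joint_miss <= alpha + 0.02
```

The bound is conservative: with independent events the exact joint miss probability is $1-\left(1-\alpha/M\right)^M$, slightly below $\alpha$.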

Mathematics for Enzyme Reaction Kinetics and Reactor Performance

To elucidate Working's and Hotelling's method, one should start by defining the beta distribution as

$$ \mathrm{B}\{X;\alpha,\beta\} \equiv \frac{x^{\alpha-1}\left(1-x\right)^{\beta-1}}{B\{\alpha,\beta\}}, \tag{20.259} $$

with denominator given by Eq. (18.505) – pertaining to random variable X, and characterized by parameters α and β. If X and Y denote χ²-distributed random variables, with ν1 and ν2 degrees of freedom, respectively, one finds that X/(X + Y) is Β-distributed with ν1/2 and ν2/2 degrees of freedom, i.e.

$$ \left.\begin{matrix} \chi^2\{X;\nu_1\} \\ \chi^2\{Y;\nu_2\} \end{matrix}\right\} \Rightarrow \mathrm{B}\left\{\frac{X}{X+Y};\frac{\nu_1}{2},\frac{\nu_2}{2}\right\} \tag{20.260} $$
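The identity conveyed by Eq. (20.260) may be spot-checked by simulation – a standalone sketch, not part of the original text, using the fact that a χ²-variate with ν degrees of freedom is a sum of ν squared standard normal deviates, and that a Β{ν1/2, ν2/2}-distributed variable has mean ν1/(ν1 + ν2):

```python
import random

# Simulation check of Eq. (20.260): if X ~ chi2(nu1) and Y ~ chi2(nu2),
# independent, then X/(X+Y) follows a Beta(nu1/2, nu2/2) distribution,
# whose mean is nu1/(nu1 + nu2).  Hypothetical choices nu1 = 4, nu2 = 6.
random.seed(1)
NU1, NU2, TRIALS = 4, 6, 20000

def chi2_draw(nu):
    # a chi-square variate is a sum of nu squared standard normals
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(nu))

ratios = []
for _ in range(TRIALS):
    x, y = chi2_draw(NU1), chi2_draw(NU2)
    ratios.append(x / (x + y))

mean_ratio = sum(ratios) / TRIALS
# Beta mean nu1/(nu1 + nu2) = 0.4 for these degrees of freedom
assert abs(mean_ratio - NU1 / (NU1 + NU2)) < 0.01
```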

– thus serving as basis for the definition conveyed by Eq. (20.259). In fact, the joint probability density function at stake reads

$$ D\{X,Y\} = \frac{x^{\frac{\nu_1}{2}-1}e^{-\frac{x}{2}}}{2^{\frac{\nu_1}{2}}\Gamma\left\{\frac{\nu_1}{2}\right\}} \cdot \frac{y^{\frac{\nu_2}{2}-1}e^{-\frac{y}{2}}}{2^{\frac{\nu_2}{2}}\Gamma\left\{\frac{\nu_2}{2}\right\}}, \tag{20.261} $$

stemming directly from Eq. (18.420) on the hypothesis that X and Y are independent of each other. To attain the distribution of X/(X + Y) as per Eq. (20.260), it is convenient to formally define a new random variable, Z, as

$$ z \equiv \frac{x}{x+y}, \tag{20.262} $$

and a second random variable, W, according to

$$ w \equiv x+y. \tag{20.263} $$

Insertion of Eq. (20.263) allows transformation of Eq. (20.262) to

$$ z = \frac{x}{w} \tag{20.264} $$

or, equivalently,

$$ x = zw; \tag{20.265} $$

while combination of Eqs. (20.263) and (20.265) gives rise to

$$ w = zw+y, \tag{20.266} $$

with isolation of y unfolding

$$ y = w\left(1-z\right) \tag{20.267} $$

along with factoring out of w. The absolute value of the associated Jacobian determinant reads

$$ |J| \equiv \left|\begin{matrix} \dfrac{\partial x}{\partial z} & \dfrac{\partial x}{\partial w} \\[1ex] \dfrac{\partial y}{\partial z} & \dfrac{\partial y}{\partial w} \end{matrix}\right| = \left|\begin{matrix} w & z \\ -w & 1-z \end{matrix}\right| = \left|w\left(1-z\right)-\left(-w\right)z\right| = \left|w-wz+wz\right| = |w| = w, \tag{20.268} $$

Linear Regression

arising from Eqs. (20.265) and (20.267), together with the definition of a second-order determinant; the joint probability density function D{Z,W}, simultaneously dependent on z and w, may then be calculated via

$$ D\{Z,W\} = D\{X,Y\}\,|J|, \tag{20.269} $$
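The Jacobian just computed can be verified numerically by finite differences – a standalone sketch with an arbitrarily chosen point (z, w) = (0.3, 2.5):

```python
# Finite-difference check of Eq. (20.268): for the transformation
# x = z*w, y = w*(1 - z), the Jacobian determinant
# (dx/dz)(dy/dw) - (dx/dw)(dy/dz) equals w.
def xy(z, w):
    return z * w, w * (1.0 - z)

def jacobian_det(z, w, h=1e-6):
    # central differences for each partial derivative
    x_z = (xy(z + h, w)[0] - xy(z - h, w)[0]) / (2 * h)
    x_w = (xy(z, w + h)[0] - xy(z, w - h)[0]) / (2 * h)
    y_z = (xy(z + h, w)[1] - xy(z - h, w)[1]) / (2 * h)
    y_w = (xy(z, w + h)[1] - xy(z, w - h)[1]) / (2 * h)
    return x_z * y_w - x_w * y_z

z0, w0 = 0.3, 2.5
assert abs(jacobian_det(z0, w0) - w0) < 1e-6
```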

with combination with Eqs. (20.261), (20.265), (20.267), and (20.268) producing

$$ D\{Z,W\} = \left.\frac{x^{\frac{\nu_1}{2}-1}e^{-\frac{x}{2}}}{2^{\frac{\nu_1}{2}}\Gamma\left\{\frac{\nu_1}{2}\right\}} \cdot \frac{y^{\frac{\nu_2}{2}-1}e^{-\frac{y}{2}}}{2^{\frac{\nu_2}{2}}\Gamma\left\{\frac{\nu_2}{2}\right\}}\right|_{x,y\,=\,zw,\,w(1-z)} w = \frac{\left(zw\right)^{\frac{\nu_1}{2}-1}e^{-\frac{zw}{2}}}{2^{\frac{\nu_1}{2}}\Gamma\left\{\frac{\nu_1}{2}\right\}} \cdot \frac{\left(w\left(1-z\right)\right)^{\frac{\nu_2}{2}-1}e^{-\frac{w\left(1-z\right)}{2}}}{2^{\frac{\nu_2}{2}}\Gamma\left\{\frac{\nu_2}{2}\right\}}\,w. \tag{20.270} $$

After splitting powers and lumping factors alike, Eq. (20.270) transforms to

$$ \begin{aligned} D\{Z,W\} &= \frac{\left(wz\right)^{\frac{\nu_1}{2}-1}e^{-\frac{zw}{2}}}{2^{\frac{\nu_1}{2}}\Gamma\left\{\frac{\nu_1}{2}\right\}} \cdot \frac{\left(w-wz\right)^{\frac{\nu_2}{2}-1}e^{-\frac{w-wz}{2}}}{2^{\frac{\nu_2}{2}}\Gamma\left\{\frac{\nu_2}{2}\right\}}\,w \\ &= \frac{z^{\frac{\nu_1}{2}-1}\left(1-z\right)^{\frac{\nu_2}{2}-1} w^{\frac{\nu_1}{2}+\frac{\nu_2}{2}-1} e^{-\frac{w-wz+wz}{2}}}{2^{\frac{\nu_1}{2}+\frac{\nu_2}{2}}\,\Gamma\left\{\frac{\nu_1}{2}\right\}\Gamma\left\{\frac{\nu_2}{2}\right\}} = \frac{z^{\frac{\nu_1}{2}-1}\left(1-z\right)^{\frac{\nu_2}{2}-1} w^{\frac{\nu_1}{2}+\frac{\nu_2}{2}-1} e^{-\frac{w}{2}}}{2^{\frac{\nu_1}{2}+\frac{\nu_2}{2}}\,\Gamma\left\{\frac{\nu_1}{2}\right\}\Gamma\left\{\frac{\nu_2}{2}\right\}}, \end{aligned} \tag{20.271} $$

where a final multiplication and division of the right-hand side by Γ{ν1/2 + ν2/2} yields

$$ D\{Z,W\} = \frac{\Gamma\left\{\frac{\nu_1}{2}+\frac{\nu_2}{2}\right\} z^{\frac{\nu_1}{2}-1}\left(1-z\right)^{\frac{\nu_2}{2}-1}}{\Gamma\left\{\frac{\nu_1}{2}\right\}\Gamma\left\{\frac{\nu_2}{2}\right\}} \cdot \frac{w^{\frac{\nu_1}{2}+\frac{\nu_2}{2}-1}e^{-\frac{w}{2}}}{2^{\frac{\nu_1}{2}+\frac{\nu_2}{2}}\,\Gamma\left\{\frac{\nu_1}{2}+\frac{\nu_2}{2}\right\}}; \tag{20.272} $$

in view of Eq. (20.259) with α = ν1/2 and β = ν2/2, one may reformulate Eq. (20.272) to

$$ D\{Z,W\} = \frac{z^{\frac{\nu_1}{2}-1}\left(1-z\right)^{\frac{\nu_2}{2}-1}}{B\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}} \cdot \frac{w^{\frac{\nu_1+\nu_2}{2}-1}e^{-\frac{w}{2}}}{2^{\frac{\nu_1+\nu_2}{2}}\,\Gamma\left\{\frac{\nu_1+\nu_2}{2}\right\}} = \mathrm{B}\left\{Z;\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}\,\chi^2\{W;\nu_1+\nu_2\}, \tag{20.273} $$


where advantage was taken of Eq. (18.420) with ν = ν1 + ν2, and of Eq. (18.505). Inspection of Eq. (20.273) indicates that the joint distribution of Z and W appears as the product of a Β-distribution of Z, i.e.

$$ Z \equiv \frac{X}{X+Y}, \tag{20.274} $$

consistent with Eq. (20.262), by the χ²-distribution of W, defined as

$$ W \equiv X+Y, \tag{20.275} $$

in agreement with Eq. (20.263). While the latter distribution is anticipated in view of Eqs. (18.474) and (20.275) – since X and Y are both χ²-distributed as per Eq. (20.260), the Β-distribution of Z as per Eq. (20.274) confirms the validity of Eq. (20.260). The beta distribution possesses a number of useful features; for instance,

$$ \mathrm{B}\left\{U;\frac{\nu_1}{2},\frac{\nu_2}{2}\right\} \Rightarrow F\left\{\frac{\nu_2}{\nu_1}\frac{U}{1-U};\nu_1,\nu_2\right\}. \tag{20.276} $$
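Before turning to the analytical proof, Eq. (20.276) can be sanity-checked by simulation – a standalone sketch with hypothetical choices ν1 = 6 and ν2 = 10, generating the Β-variate via Eq. (20.260) and comparing the sample mean of the transformed variable with the known F-distribution mean ν2/(ν2 − 2):

```python
import random

# Simulation check of Eq. (20.276): if U ~ Beta(nu1/2, nu2/2), then
# V = nu2*U/(nu1*(1-U)) ~ F(nu1, nu2), whose mean is nu2/(nu2 - 2).
random.seed(2)
NU1, NU2, TRIALS = 6, 10, 40000

def chi2_draw(nu):
    # chi-square variate as a sum of nu squared standard normals
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(nu))

total = 0.0
for _ in range(TRIALS):
    x, y = chi2_draw(NU1), chi2_draw(NU2)
    u = x / (x + y)                        # Beta(nu1/2, nu2/2), Eq. (20.260)
    total += NU2 * u / (NU1 * (1.0 - u))   # transformed variable

mean_v = total / TRIALS
# F(6, 10) mean is 10/8 = 1.25
assert abs(mean_v - NU2 / (NU2 - 2)) < 0.05
```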

Equation (20.276) may be proven via definition of an auxiliary variable v via

$$ u \equiv \frac{\nu_1 v}{\nu_1 v+\nu_2}, \tag{20.277} $$

which becomes

$$ \nu_1 v = u\left(\nu_1 v+\nu_2\right) = \nu_1 uv+\nu_2 u \tag{20.278} $$

upon elimination of denominators. After factoring ν1 and v out, Eq. (20.278) turns to

$$ \nu_1 v-\nu_1 uv = \nu_1\left(1-u\right)v = \nu_2 u, \tag{20.279} $$

which ends up as

$$ v = \frac{\nu_2 u}{\nu_1\left(1-u\right)} \tag{20.280} $$

following isolation of v. The differential of u may then be obtained from Eq. (20.277) as

$$ du = \frac{\nu_1\left(\nu_1 v+\nu_2\right)-\nu_1 v\,\nu_1}{\left(\nu_1 v+\nu_2\right)^2}\,dv = \frac{\nu_1^2 v+\nu_1\nu_2-\nu_1^2 v}{\left(\nu_1 v+\nu_2\right)^2}\,dv = \frac{\nu_1\nu_2}{\left(\nu_1 v+\nu_2\right)^2}\,dv, \tag{20.281} $$
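The derivative in Eq. (20.281) can be confirmed by a finite-difference check – a standalone sketch with hypothetical values ν1 = 3, ν2 = 7 and v = 1.4:

```python
# Finite-difference check of Eq. (20.281): with u = nu1*v/(nu1*v + nu2),
# du/dv = nu1*nu2/(nu1*v + nu2)**2.
NU1, NU2, V = 3.0, 7.0, 1.4

def u(v):
    return NU1 * v / (NU1 * v + NU2)

h = 1e-6
numeric = (u(V + h) - u(V - h)) / (2 * h)      # central difference
analytic = NU1 * NU2 / (NU1 * V + NU2) ** 2    # quotient rule result
assert abs(numeric - analytic) < 1e-8
```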

via the rule of differentiation of a quotient, coupled with removal of parentheses and cancellation of symmetrical terms afterward; the probability of U lying within [u, u + du] may equivalently be expressed as the probability of V being comprised between v and v + dv, according to

$$ D\{U\}\,du = D\{V\}\,dv, \tag{20.282} $$

where insertion of Eq. (20.281) yields

$$ D\{U\}\frac{\nu_1\nu_2}{\left(\nu_1 v+\nu_2\right)^2}\,dv = D\{V\}\,dv \tag{20.283} $$

– or merely

$$ D\{V\} = \frac{\nu_1\nu_2}{\left(\nu_1 v+\nu_2\right)^2}\,D\{U\}, \tag{20.284} $$

Linear Regression

after dropping dv from both sides. Insertion of Eq. (20.259) – since D{U} is Β-distributed with ν1/2 and ν2/2 degrees of freedom as per Eq. (20.276) – and then of Eq. (20.277) convert Eq. (20.284) to

$$ D\{V\} = \frac{\nu_1\nu_2}{\left(\nu_1 v+\nu_2\right)^2}\,\frac{u^{\frac{\nu_1}{2}-1}\left(1-u\right)^{\frac{\nu_2}{2}-1}}{B\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}} = \frac{\nu_1\nu_2}{\left(\nu_1 v+\nu_2\right)^2}\,\frac{\left(\dfrac{\nu_1 v}{\nu_1 v+\nu_2}\right)^{\frac{\nu_1}{2}-1}\left(1-\dfrac{\nu_1 v}{\nu_1 v+\nu_2}\right)^{\frac{\nu_2}{2}-1}}{B\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}}, \tag{20.285} $$

where conversion of the second factor in the numerator to a single fraction and dropping of symmetrical terms afterward, followed by splitting of powers, unfold

$$ D\{V\} = \frac{\dfrac{\nu_1\nu_2}{\left(\nu_1 v+\nu_2\right)^2}\left(\dfrac{\nu_1 v}{\nu_1 v+\nu_2}\right)^{\frac{\nu_1}{2}-1}\left(\dfrac{\nu_1 v+\nu_2-\nu_1 v}{\nu_1 v+\nu_2}\right)^{\frac{\nu_2}{2}-1}}{B\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}} = \frac{1}{B\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}}\,\frac{\nu_1\nu_2}{\left(\nu_1 v+\nu_2\right)^2}\,\frac{\nu_1^{\frac{\nu_1}{2}-1} v^{\frac{\nu_1}{2}-1}\,\nu_2^{\frac{\nu_2}{2}-1}}{\left(\nu_1 v+\nu_2\right)^{\frac{\nu_1}{2}-1}\left(\nu_1 v+\nu_2\right)^{\frac{\nu_2}{2}-1}}; \tag{20.286} $$

condensation of factors alike eventually generates

$$ D\{V\} = \frac{\nu_1^{\frac{\nu_1}{2}}\nu_2^{\frac{\nu_2}{2}}\,v^{\frac{\nu_1}{2}-1}}{B\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}\left(\nu_1 v+\nu_2\right)^{\frac{\nu_1}{2}+\frac{\nu_2}{2}}} = \frac{\left(\dfrac{\nu_1}{\nu_2}\right)^{\frac{\nu_1}{2}} v^{\frac{\nu_1}{2}-1}}{B\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}\left(1+\dfrac{\nu_1}{\nu_2}v\right)^{\frac{\nu_1+\nu_2}{2}}}, \tag{20.287} $$

where division of both numerator and denominator by $\nu_2^{\frac{\nu_1}{2}}$ and $\nu_2^{\frac{\nu_2}{2}}$ meanwhile took place. Inspection of Eq. (20.287) vis-à-vis Eq. (18.570) indicates that

$$ D\{V\} = F\{V;\nu_1,\nu_2\}, \tag{20.288} $$

thus corroborating Eq. (20.276) – since

$$ V \equiv \frac{\nu_2 U}{\nu_1\left(1-U\right)} \tag{20.289} $$

accompanies Eq. (20.280). Note that Eq. (20.276) may be rewritten as

$$ F\{\nu_1,\nu_2\} = \frac{\nu_2}{\nu_1}\,\frac{\mathrm{B}\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}}{1-\mathrm{B}\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}} \tag{20.290} $$

with the aid of Eq. (20.289), which becomes

$$ \frac{1}{F\{\nu_1,\nu_2\}} = \frac{\nu_1}{\nu_2}\,\frac{1-\mathrm{B}\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}}{\mathrm{B}\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}} \tag{20.291} $$

after taking reciprocals of both sides – or else

$$ \frac{\nu_2}{\nu_1}\,\frac{1}{F\{\nu_1,\nu_2\}} = \frac{1-\mathrm{B}\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}}{\mathrm{B}\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}} \tag{20.292} $$

upon multiplication of both sides by ν2/ν1; furthermore, Eq. (18.626) may be taken advantage of to transform Eq. (20.292) to

$$ \frac{1-\mathrm{B}\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}}{\mathrm{B}\left\{\frac{\nu_1}{2},\frac{\nu_2}{2}\right\}} = \frac{\nu_2}{\nu_1}\,F\{\nu_2,\nu_1\} \tag{20.293} $$

– since the reciprocal of an F-distribution reverses the order of its characteristic degrees of freedom, as proven previously. Consider now Wishart's distribution – named after John Wishart, a Scottish mathematician and agricultural statistician, who formulated it in 1928. If X denotes an (N × M) matrix – each row of which is independently drawn from an M-variate normal distribution with mean 0 and covariance Σ, then Wishart's distribution represents the probability of occurrence of the (M × M) random matrix XᵀX, known as scatter matrix, such that

$$ Y \equiv \boldsymbol{X}^T\boldsymbol{X} \Rightarrow W\{Y;M,N,\Sigma\}; \tag{20.294} $$

here N denotes the (total) number of degrees of freedom, obviously not below M. The accompanying probability density function resembles a likelihood function, and reads

$$ W\{Y;M,N,\Sigma\} = \frac{\left|Y\right|^{\frac{N-M-1}{2}}\exp\left\{-\dfrac{\mathrm{tr}\left\{\Sigma^{-1}Y\right\}}{2}\right\}}{2^{\frac{MN}{2}}\left|\Sigma\right|^{\frac{N}{2}}\,\Gamma_M\left\{\frac{N}{2}\right\}}, \tag{20.295} $$

where Γ_M denotes the multivariate gamma function – defined as

$$ \Gamma_M\left\{\frac{N}{2}\right\} \equiv \pi^{\frac{M\left(M-1\right)}{4}}\prod_{i=1}^{M}\Gamma\left\{\frac{N}{2}+\frac{1-i}{2}\right\}. \tag{20.296} $$
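A quick simulation illustrates the scatter-matrix construction behind Eq. (20.294) – a standalone sketch with M = 2, N = 8, and a hypothetical covariance matrix Σ with unit variances and correlation 0.6; the expected value of XᵀX is NΣ:

```python
import random

# Simulation check of the Wishart scatter matrix, Eq. (20.294): for X an
# (N x M) matrix whose rows are drawn from an M-variate normal with mean 0
# and covariance Sigma, E{X^T X} = N*Sigma.  Here M = 2 and
# Sigma = [[1, RHO], [RHO, 1]].
random.seed(3)
N, RHO, TRIALS = 8, 0.6, 4000

def draw_row():
    # correlated bivariate normal via a Cholesky-style construction
    g1, g2 = random.gauss(0, 1), random.gauss(0, 1)
    return (g1, RHO * g1 + (1 - RHO ** 2) ** 0.5 * g2)

acc = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(TRIALS):
    rows = [draw_row() for _ in range(N)]
    for i in range(2):
        for j in range(2):
            acc[i][j] += sum(r[i] * r[j] for r in rows)

mean_scatter = [[acc[i][j] / TRIALS for j in range(2)] for i in range(2)]
# expected value: N*Sigma = [[8, 4.8], [4.8, 8]]
assert abs(mean_scatter[0][0] - N) < 0.3
assert abs(mean_scatter[0][1] - N * RHO) < 0.3
```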

When M = 1, and both Y and Σ reduce to (1 × 1) matrices (or plain scalars) – the latter with unit element, then Eq. (20.295) degenerates to

$$ \left.W\{Y;M,N,\Sigma\}\right|_{\substack{M=1 \\ \boldsymbol{Y}=Y \\ \Sigma=1}} = \frac{Y^{\frac{N-1-1}{2}}\exp\left\{-\dfrac{\mathrm{tr}\{1\cdot Y\}}{2}\right\}}{2^{\frac{N}{2}}\,1^{\frac{N}{2}}\,\pi^{\frac{1\left(1-1\right)}{4}}\prod_{i=1}^{1}\Gamma\left\{\frac{N}{2}+\frac{1-i}{2}\right\}} = \frac{Y^{\frac{N-2}{2}}e^{-\frac{Y}{2}}}{2^{\frac{N}{2}}\,\Gamma\left\{\frac{N}{2}\right\}} = \frac{Y^{\frac{N}{2}-1}e^{-\frac{Y}{2}}}{2^{\frac{N}{2}}\,\Gamma\left\{\frac{N}{2}\right\}} \tag{20.297} $$

with the aid of Eq. (20.296), together with the realization that the determinant and the trace of a (1 × 1) matrix coincide with its (single) element; hence, one concludes that

$$ \left.W\{Y;M,N,\Sigma\}\right|_{\substack{M=1 \\ \boldsymbol{Y}=Y \\ \Sigma=1}} = \chi^2\{Y;N\}, \tag{20.298} $$

in view of Eq. (18.420). Therefore, Wishart's distribution is but the generalization of the χ²-distribution to multiple dimensions. In view of the result conveyed by Eq. (20.298), one may now proceed to the multivariate analogue of Eq. (20.260), coined in terms of independent square matrix variables W1 and W2 – both following Wishart's distribution for M ≥ P, with P denoting number of dimensions, M denoting error degrees of freedom, and N denoting hypothesis degrees of freedom in the context of likelihood-ratio tests, but sharing the same scaling matrix Σ – and asserting that

$$ \left.\begin{matrix} W_1 \sim W\{P,M,\Sigma\} \\ W_2 \sim W\{P,N,\Sigma\} \end{matrix}\right\} \Rightarrow \frac{\left|W_1\right|}{\left|W_1+W_2\right|} \sim \Lambda\{P,M,N\}. \tag{20.299} $$

Equation (20.299) serves as definition of Wilks' distribution, Λ, in much the same way Eq. (20.260) supported definition of the Β-distribution; the former was proposed by American statistician Samuel S. Wilks. For practical calculation purposes, M. S. Bartlett suggested

$$ \log\Lambda\{P,M,N\} \approx \frac{\chi^2\{NP\}}{\dfrac{P-N+1}{2}-M}, \tag{20.300} $$

which becomes a better and better approximation as M increases; in exact terms, however, one has it that

$$ \Lambda\{P,M,N\} = \prod_{i=1}^{P}\mathrm{B}\left\{\frac{M+1-i}{2},\frac{N}{2}\right\}. \tag{20.301} $$

Setting P = 1 leads, in particular, to

$$ \Lambda\{1,M,N\} = \prod_{i=1}^{1}\mathrm{B}\left\{\frac{M+1-i}{2},\frac{N}{2}\right\} \tag{20.302} $$

from Eq. (20.301), or else

$$ \Lambda\{1,M,N\} = \mathrm{B}\left\{\frac{M}{2},\frac{N}{2}\right\} \tag{20.303} $$

– thus suggesting that Wilks' distribution is a generalization of the Β-distribution, as somehow expected. Inspection of the parameter functionality of Wilks' distribution as per Eq. (20.299) reveals the absence of Σ as such; to prove that said distribution is indeed independent of Σ (even though the same Σ is required as covariance matrix for both W1 and W2), consider a (non-nil) matrix A – and define linear transformants W̃1 and W̃2 as

$$ \tilde W_1 \equiv A W_1 \tag{20.304} $$


and

$$ \tilde W_2 \equiv A W_2. \tag{20.305} $$

When the underlying number of degrees of freedom is large, Eq. (18.451) applies – knowing that the source matrices had rows independently drawn from a multivariate normal distribution, as per Eq. (20.294); therefore, the validity conditions for Eq. (18.235) are met. Under these circumstances, one concludes from Eqs. (20.299), (20.304), and (20.305) that the covariance matrix associated with either W̃1 or W̃2 is AΣAᵀ, so one may state that

$$ \tilde W_1 \sim W\{P,M,A\Sigma A^T\}, \tag{20.306} $$

and likewise

$$ \tilde W_2 \sim W\{P,N,A\Sigma A^T\} \tag{20.307} $$

– where W1 being independent of W2 implies independence between W̃1 and W̃2 as well, despite covariance matrix AΣAᵀ being shared by both; one promptly realizes that

$$ \frac{\left|\tilde W_1\right|}{\left|\tilde W_1+\tilde W_2\right|} \sim \tilde\Lambda\{P,M,N\} \tag{20.308} $$

from Eq. (20.299), where Λ̃ denotes Wilks' distribution associated specifically with matrices W̃1 and W̃2. On the other hand, Eq. (20.308) may be algebraically manipulated, with the aid of Eqs. (20.304) and (20.305), so as to read

$$ \tilde\Lambda = \frac{\left|\tilde W_1\right|}{\left|\tilde W_1+\tilde W_2\right|} = \frac{\left|A W_1\right|}{\left|A W_1+A W_2\right|} = \frac{\left|A W_1\right|}{\left|A\left(W_1+W_2\right)\right|} = \frac{\left|A\right|\left|W_1\right|}{\left|A\right|\left|W_1+W_2\right|}, \tag{20.309} $$

since the determinant of a product of matrices equals the product of the determinants of said matrices, coupled with the distributive property of multiplication of matrices. After cancelling |A| ≠ 0 between numerator and denominator, Eq. (20.309) reduces to

$$ \tilde\Lambda = \frac{\left|\tilde W_1\right|}{\left|\tilde W_1+\tilde W_2\right|} = \frac{\left|W_1\right|}{\left|W_1+W_2\right|} = \Lambda, \tag{20.310} $$
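The invariance argument in Eqs. (20.309) and (20.310) reduces to the multiplicativity of determinants, which can be checked exactly for (2 × 2) matrices – a standalone sketch with hardcoded, hypothetical W1, W2, and A:

```python
# Exact 2x2 check of Eqs. (20.309)-(20.310): premultiplying W1 and W2 by
# any nonsingular A leaves |W1|/|W1 + W2| unchanged, since |A W| = |A||W|.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd2(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

W1 = [[4.0, 1.0], [1.0, 3.0]]
W2 = [[2.0, 0.5], [0.5, 5.0]]
A = [[1.0, 2.0], [0.0, 3.0]]   # |A| = 3, nonsingular

lam = det2(W1) / det2(matadd2(W1, W2))
lam_tilde = det2(matmul2(A, W1)) / det2(matmul2(A, matadd2(W1, W2)))
assert abs(lam - lam_tilde) < 1e-12
```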

thus indicating that identical Wilks' distributions are at stake (i.e. Λ̃ = Λ), at least when a large number of degrees of freedom is involved. Since the covariance matrix Σ associated with W1 and W2 differs, in general, from the covariance matrix AΣAᵀ associated with W̃1 and W̃2, Eq. (20.310) confirms that Wilks' distribution cannot depend on the actual covariance matrices – as initially hypothesized. Consider now an (f × f) matrix W, abiding to

$$ W \sim W\{f,f,\Sigma\}, \tag{20.311} $$

and partitioned into (r × r) matrix W1,1, (r × s) matrix W1,2, (s × r) matrix W2,1, and (s × s) matrix W2,2 as blocks, according to

$$ W \equiv \begin{bmatrix} W_{1,1} & W_{1,2} \\ W_{2,1} & W_{2,2} \end{bmatrix}, \tag{20.312} $$


with r + s = f; and the associated covariance matrix Σ similarly partitioned into blocks, viz.

$$ \Sigma \equiv \begin{bmatrix} \Sigma_{1,1} & \Sigma_{1,2} \\ \Sigma_{2,1} & \Sigma_{2,2} \end{bmatrix} \tag{20.313} $$

– with off-diagonal submatrices abiding to

$$ \Sigma_{1,2} \equiv \boldsymbol{0}_{r\times s} \tag{20.314} $$

and

$$ \Sigma_{2,1} \equiv \boldsymbol{0}_{s\times r}, \tag{20.315} $$

so as to guarantee independence between W1,1 and W2,2, while taking |W1,1| ≠ 0 and |W2,2| ≠ 0. Based on the result conveyed by Eq. (6.154), one finds that

$$ \frac{\left|W\right|}{\left|W_{1,1}\right|\left|W_{2,2}\right|} = \frac{\left|W_{1,1}-W_{1,2}W_{2,2}^{-1}W_{2,1}\right|\left|W_{2,2}\right|}{\left|W_{1,1}\right|\left|W_{2,2}\right|} = \frac{\left|W_{1,1}-W_{1,2}W_{2,2}^{-1}W_{2,1}\right|}{\left|W_{1,1}\right|}, \tag{20.316} $$

where (non-nil) determinant |W2,2| was meanwhile cancelled between numerator and denominator; Eq. (20.316) may be further manipulated to read

$$ \frac{\left|W\right|}{\left|W_{1,1}\right|\left|W_{2,2}\right|} = \frac{\left|X_1\right|}{\left|W_{1,1}\right|} = \frac{\left|X_1\right|}{\left|X_1+X_2\right|}, \tag{20.317} $$

provided that

$$ X_1 \equiv W_{1,1}-W_{1,2}W_{2,2}^{-1}W_{2,1} \tag{20.318} $$

and

$$ X_2 \equiv W_{1,2}W_{2,2}^{-1}W_{2,1} \tag{20.319} $$

– where W1,1 may indeed be obtained via ordered addition of Eqs. (20.318) and (20.319). By the same token, Eq. (6.159) may be retrieved to write

$$ \frac{\left|W\right|}{\left|W_{1,1}\right|\left|W_{2,2}\right|} = \frac{\left|W_{2,2}-W_{2,1}W_{1,1}^{-1}W_{1,2}\right|\left|W_{1,1}\right|}{\left|W_{1,1}\right|\left|W_{2,2}\right|} = \frac{\left|W_{2,2}-W_{2,1}W_{1,1}^{-1}W_{1,2}\right|}{\left|W_{2,2}\right|}, \tag{20.320} $$

where (non-nil) determinant |W1,1| was as well dropped from both numerator and denominator; following definition of auxiliary matrices Y1 and Y2 as

$$ Y_1 \equiv W_{2,2}-W_{2,1}W_{1,1}^{-1}W_{1,2} \tag{20.321} $$

and

$$ Y_2 \equiv W_{2,1}W_{1,1}^{-1}W_{1,2}, \tag{20.322} $$

one may redo Eq. (20.320) to

$$ \frac{\left|W\right|}{\left|W_{1,1}\right|\left|W_{2,2}\right|} = \frac{\left|Y_1\right|}{\left|W_{2,2}\right|} = \frac{\left|Y_1\right|}{\left|Y_1+Y_2\right|}, \tag{20.323} $$

at the expense of ordered addition of Eqs. (20.321) and (20.322) generating W2,2 from Y1 and Y2. Elimination of |W|/(|W1,1||W2,2|) between Eqs. (20.317) and (20.323) gives then rise to

$$ \frac{\left|X_1\right|}{\left|X_1+X_2\right|} = \frac{\left|Y_1\right|}{\left|Y_1+Y_2\right|}. \tag{20.324} $$
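For the simplest partition (r = s = 1, f = 2), the chain from Eq. (20.316) to Eq. (20.324) can be checked with plain scalars – a standalone sketch with hypothetical entries:

```python
# Scalar-block check (r = s = 1) of Eqs. (20.316)-(20.324): with W
# partitioned into scalars w11, w12, w21, w22, the ratio
# |W|/(|W11||W22|) computed via either Schur complement coincides.
w11, w12, w21, w22 = 5.0, 1.5, 1.5, 4.0
det_w = w11 * w22 - w12 * w21

x1 = w11 - w12 * w21 / w22          # Eq. (20.318)
x2 = w12 * w21 / w22                # Eq. (20.319); x1 + x2 = w11
y1 = w22 - w21 * w12 / w11          # Eq. (20.321)
y2 = w21 * w12 / w11                # Eq. (20.322); y1 + y2 = w22

lhs = x1 / (x1 + x2)                # Eq. (20.317)
rhs = y1 / (y1 + y2)                # Eq. (20.323)
assert abs(lhs - det_w / (w11 * w22)) < 1e-12
assert abs(lhs - rhs) < 1e-12       # Eq. (20.324)
```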

In view of the analogy in form of W and Σ as per Eqs. (20.312) and (20.313), the covariance matrix of X1 will read Σ1,1 − Σ1,2Σ2,2⁻¹Σ2,1 = Σ1,1, using Eq. (20.318) as template – where either Eq. (20.314) or Eq. (20.315) was used to advantage; on the other hand, X1 + X2 ≡ W1,1 as per Eqs. (20.318) and (20.319), meaning that the covariance matrix of X1 + X2 should be equal to Σ1,1 – and thus the covariance matrix of X2 should also coincide with Σ1,1. By the same token, the covariance matrix of Y1 should look like Σ2,2 − Σ2,1Σ1,1⁻¹Σ1,2 = Σ2,2, which mimics Eq. (20.321) in form – with simplification made possible by either Eq. (20.314) or Eq. (20.315); while the covariance matrix of Y1 + Y2 ≡ W2,2, based on Eqs. (20.321) and (20.322), should also read Σ2,2 – and consequently Y2 should exhibit Σ2,2 for covariance matrix as well. In view of the above, one realizes that

$$ X_1 \sim W\{r,f-s,\Sigma_{1,1}\} \tag{20.325} $$

and likewise

$$ X_2 \sim W\{r,s,\Sigma_{1,1}\}, \tag{20.326} $$

after recalling that f = r + s, with (r × r) matrix W1,1 used as departure point; due to their mutual independence arising from Eqs. (20.318) and (20.319), coupled with their common covariance matrix Σ1,1, one realizes that Eq. (20.299) applies to X1 and X2, and so

$$ \frac{\left|X_1\right|}{\left|X_1+X_2\right|} \sim \Lambda\{r,f-s,s\}. \tag{20.327} $$

By the same token,

$$ Y_1 \sim W\{s,f-r,\Sigma_{2,2}\} \tag{20.328} $$

as well as

$$ Y_2 \sim W\{s,r,\Sigma_{2,2}\}, \tag{20.329} $$

since f = r + s as seen before, with (s × s) matrix W2,2 serving as reference in this case; combination of Eqs. (20.328) and (20.329) with Eq. (20.299) then supports

$$ \frac{\left|Y_1\right|}{\left|Y_1+Y_2\right|} \sim \Lambda\{s,f-r,r\}, \tag{20.330} $$

since Y1 and Y2 are independent from each other as per Eqs. (20.321) and (20.322) – besides sharing their covariance matrix, Σ2,2. Insertion of Eqs. (20.327) and (20.330) finally converts Eq. (20.324) to

$$ \Lambda\{r,f-s,s\} = \Lambda\{s,f-r,r\} \tag{20.331} $$

– which entails a unique symmetry rule for Wilks' distribution. Consider now a normally distributed variable Y, with mean μ and variance σ² following Eq. (18.122); definition of lumped variable (Y − μ)/σ, as conveyed by Eq. (18.253), has led to the standard normal distribution satisfying Eq. (18.258). When the population variance is not known (as often happens), σ² can be approximated by sample variance s², given by Eq. (19.12) as seen previously – which is χ²-distributed, since Y is normally distributed itself, in agreement with Eqs. (18.469) and (18.474); this can be coined as

$$ \frac{Y-\mu}{\sqrt{\dfrac{s^2}{N}}} \sim t\{N-1\}, \tag{20.332} $$

in agreement with Eq. (18.495) and Table 19.2. After squaring both sides and performing elementary algebraic rearrangement, Eq. (20.332) becomes

$$ \frac{\left(Y-\mu\right)^2}{\dfrac{s^2}{N}} = \left(Y-\mu\right)\left(\frac{s^2}{N}\right)^{-1}\left(Y-\mu\right) = N\left(Y-\mu\right)\frac{1}{s^2}\left(Y-\mu\right) \sim t^2\{N-1\} \tag{20.333} $$

in agreement with Eq. (20.332) – which is but an F-distributed statistic, given by Eq. (18.637), i.e. F{1, N − 1}. This reasoning may be directly extrapolated so as to encompass the multivariate situation characterized by mean μ, i.e.

$$ T^2 \equiv N\left(\boldsymbol{Y}-\boldsymbol{\mu}\right)^T S^{-1}\left(\boldsymbol{Y}-\boldsymbol{\mu}\right), \tag{20.334} $$

where T² denotes Hotelling's t²-statistic, S denotes the (N × N) sample covariance matrix as estimator of the true covariance matrix Σ, and Y − μ denotes the (N × 1) vector containing the values of the dependent variable in the N experiments abstracted of the corresponding means; without loss of generality, consider that μ = 0 – in which case Eq. (20.334) formally simplifies to

$$ T^2 \equiv N\,\boldsymbol{Y}^T S^{-1}\boldsymbol{Y}. \tag{20.335} $$

The case of μ ≠ 0 may be easily accommodated by defining Z as Y − μ – again a random variable, which exhibits a statistical distribution identical to that of Y, for given (and thus constant) μ. Consider now an auxiliary ((N + 1) × (N + 1)) block matrix Q, defined as

$$ Q \equiv \begin{bmatrix} S & -\boldsymbol{Y} \\ \boldsymbol{Y}^T & 1 \end{bmatrix}; \tag{20.336} $$

recalling Eq. (6.154) with A1,1 = S, A1,2 = −Y, A2,1 = Yᵀ, and A2,2 = 1, one may calculate the determinant of Q as

$$ \left|Q\right| = \left|S-\left(-\boldsymbol{Y}\right)1^{-1}\boldsymbol{Y}^T\right|\cdot 1 = \left|S+\boldsymbol{Y}\boldsymbol{Y}^T\right|, \tag{20.337} $$

since the inverse of a matrix reduced to a single scalar coincides with the (arithmetic) reciprocal of said scalar. After expanding |Q| as per Eq. (6.159), one would instead get

$$ \left|Q\right| = \left|1-\boldsymbol{Y}^T S^{-1}\left(-\boldsymbol{Y}\right)\right|\left|S\right| = \left|1+\boldsymbol{Y}^T S^{-1}\boldsymbol{Y}\right|\left|S\right| = \left(1+\boldsymbol{Y}^T S^{-1}\boldsymbol{Y}\right)\left|S\right| \tag{20.338} $$

based again on Eq. (20.336) – where the fact that YᵀS⁻¹Y is a scalar (and so is its sum with unity) was taken advantage of to replace the (vertical) bar notation of the determinant by a plain parenthesis; insertion of Eq. (20.335) permits transformation of Eq. (20.338) to

$$ \left|Q\right| = \left(1+\frac{T^2}{N}\right)\left|S\right|. \tag{20.339} $$
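Equations (20.337)–(20.339) hinge on the identity |S + YYᵀ| = (1 + YᵀS⁻¹Y)|S| (the matrix determinant lemma), which a (2 × 2) example checks exactly – a standalone sketch with hypothetical S and Y:

```python
# Check of Eq. (20.338): |S + Y Y^T| = (1 + Y^T S^{-1} Y)|S| for a 2x2
# symmetric, nonsingular S and a 2-vector Y.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

S = [[2.0, 0.5], [0.5, 1.0]]
Y = [1.0, 2.0]

# S + Y Y^T, entrywise
SYY = [[S[i][j] + Y[i] * Y[j] for j in range(2)] for i in range(2)]

# Y^T S^{-1} Y via the explicit 2x2 inverse
d = det2(S)
Sinv = [[S[1][1] / d, -S[0][1] / d], [-S[1][0] / d, S[0][0] / d]]
quad = sum(Y[i] * Sinv[i][j] * Y[j] for i in range(2) for j in range(2))

assert abs(det2(SYY) - (1 + quad) * det2(S)) < 1e-9
```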


Elimination of |Q| between Eqs. (20.337) and (20.339) gives then rise to

$$ \left|S+\boldsymbol{Y}\boldsymbol{Y}^T\right| = \left(1+\frac{T^2}{N}\right)\left|S\right|, \tag{20.340} $$

whereas isolation of the two determinants in the right-hand side unfolds

$$ \frac{\left|S\right|}{\left|S+\boldsymbol{Y}\boldsymbol{Y}^T\right|} = \frac{1}{1+\dfrac{T^2}{N}}. \tag{20.341} $$

On the other hand, Y consists, by hypothesis, of a column vector of normally distributed (experimental) data, [y1 y2 … yN]ᵀ, with (true) covariance matrix Σ; hence, YYᵀ consists of a symmetric (N × N) matrix, containing the product of every yi by every other yj – and each such product will necessarily follow a χ²-distribution, consistent with Eq. (18.469). Since Wishart's distribution is the multivariate version of the univariate χ², see Eq. (20.298), one may safely infer that

$$ \boldsymbol{Y} \sim N\{0,\Sigma\} \Rightarrow \left\{\begin{matrix} \boldsymbol{Y}^T\boldsymbol{Y} \sim W\{1,M,\Sigma\} \\ \boldsymbol{Y}\boldsymbol{Y}^T \sim W\{M,1,\Sigma\} \end{matrix}\right. \tag{20.342} $$

in agreement with Eq. (20.294), and where M denotes the number of parameters in the underlying model required to calculate μ. By the same token, S itself follows Wishart's distribution owing to the definition of sample covariance matrix, i.e.

$$ S \sim W\{M,N,\Sigma\} \tag{20.343} $$

with N total degrees of freedom justified by the total number of experimental data points (and N > M obviously). In view of Eqs. (20.299), (20.342), and (20.343), one may transform Eq. (20.341) to

$$ \frac{1}{1+\dfrac{T^2}{N}} \sim \Lambda\{M,N,1\}; \tag{20.344} $$

Eq. (20.344) may, in turn, be redone as

$$ \frac{1}{1+\dfrac{T^2}{N}} \sim \Lambda\{1,N-M+1,M\} \tag{20.345} $$

in view of the property conveyed by Eq. (20.331) – after setting r = M and s = 1, as well as f − s = N that unfolds f = N + s = N + 1. The identity labeled as Eq. (20.303) prompts, in turn, reformulation of Eq. (20.345) to

$$ \frac{1}{1+\dfrac{T^2}{N}} \sim \mathrm{B}\left\{\frac{N-M+1}{2},\frac{M}{2}\right\}; \tag{20.346} $$

after taking reciprocals of both sides and then adding −1 thereto, Eq. (20.346) becomes

$$ \frac{T^2}{N} = \frac{1}{\mathrm{B}\left\{\frac{N-M+1}{2},\frac{M}{2}\right\}}-1 = \frac{1-\mathrm{B}\left\{\frac{N-M+1}{2},\frac{M}{2}\right\}}{\mathrm{B}\left\{\frac{N-M+1}{2},\frac{M}{2}\right\}}. \tag{20.347} $$


Comparison of the right-hand side of Eq. (20.347) with the left-hand side of Eq. (20.293) indicates that

$$ \frac{T^2}{N} = \frac{M}{N-M+1}\,F\{M,N-M+1\}, \tag{20.348} $$

after setting ν1 ≡ N − M + 1 and ν2 ≡ M; isolation of T² finally yields

$$ T^2 = \frac{NM}{N-M+1}\,F\{M,N-M+1\}. \tag{20.349} $$

If the number of experimental points, N, is much larger than the number of parameters, M, then the ratio of N to N − M + 1 tends to unity, besides N − M + 1 ≈ N − M as second parameter of Fisher's distribution; under these circumstances, Eq. (20.349) is driven by

$$ \lim_{N\to\infty} T^2 = M\,F\{M,N-M\}. \tag{20.350} $$
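The passage from Eq. (20.349) to Eq. (20.350) only requires the coefficient NM/(N − M + 1) to approach M for N ≫ M, as a short arithmetic sketch (with hypothetical M = 3) illustrates:

```python
# Numerical illustration of the limit behind Eq. (20.350): the factor
# N*M/(N - M + 1) multiplying F{M, N-M+1} in Eq. (20.349) tends to M
# as the number of data points N grows at fixed M.
M = 3
coeffs = {N: N * M / (N - M + 1) for N in (10, 100, 1000, 100000)}
for N, c in coeffs.items():
    print(N, round(c, 4))

assert abs(coeffs[100000] - M) < 1e-3
```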

This is indeed the situation with highest practical interest, for which the F-statistic as per Eq. (20.79) is recommended rather than the t-statistic in Eq. (20.78) if Bonferroni's correction were followed; after taking square roots of both sides, Eq. (20.350) becomes

$$ \left.t\{N-M\}\right|_{N\gg M} = \sqrt{M\,F\{M,N-M\}}, \tag{20.351} $$

thus justifying the functional form of Eq. (20.254) vis-à-vis that used in Eq. (20.249). Note that t{N – 1} in Eq. (20.333) was meanwhile replaced by t{N − M} in Eq. (20.351) – and this change accompanies replacement of s2 as per Eq. (20.86) by s2 as per Eq. (20.75).


Further Reading

Abramowitz, M. and Stegun, I. (eds.) (1964). Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables. New York, NY, USA: Dover. Aitchison, J. and Brown, J. A. C. (1957). The Lognormal Distribution. Cambridge, UK: Cambridge University Press. Akopyan, A. V. and Zaslavsky, A. A. (2007). Geometry of Conics. Washington, DC, USA: American Mathematical Society. Andrews, G. E., Askey, R., and Roy, R. (1999). Special Functions. Cambridge, UK: Cambridge University Press. Apostol, T. M. (1967). Calculus. Waltham, MA, USA: Blaisdell. Apostol, T. M. (1974). Mathematical Analysis. Reading, MA, USA: Addison-Wesley. Ayres, F. and Mendelson, E. (2008). Calculus. New York, NY, USA: McGraw-Hill. Ballantine, C. and Roberts, J. (2002). A simple proof of Rolle’s theorem for finite fields. American Mathematical Monthly 109: 72–74. Bartlett, M. S. (1954). A note on multiplying factors for various χ² approximations. Journal of Royal Statistical Society Series B 16: 296–298. Bayin, S. S. (2006). Mathematical Methods in Science and Engineering. New York, NY, USA: Wiley. Bender, C. M. and Orszag, S. A. (1999). Advanced Mathematical Methods for Scientists and Engineers: Asymptotic Methods and Perturbation Theory. New York, NY, USA: Springer. Beveridge, G. S. G. and Schechter, R. S. (1970). Optimization: Theory and Practice. New York, NY, USA: McGraw-Hill. Beyer, W. H. (1987). CRC Standard Mathematical Tables. Boca Raton, FL, USA: CRC Press. Bliss, C. I. (1934). The method of probits. Science 79: 38–39. Boltyanskii, V. G., Gamkrelidze, R. V., and Pontryagin, L. S. (1956). On the theory of optimal processes. Doklady Akademii Nauk SSSR 110: 7–10. Bolz, R.E. and Turve, G.L. (eds.) (1973). CRC Handbook of Tables for Applied Engineering Science. Boca Raton, FL, USA: CRC Press. van den Bos, A. (2007). Parameter Estimation for Scientists and Engineers. Hoboken, NJ, USA: Wiley. Bowman, F. (1958). Introduction to Bessel Functions. New York, NY, USA: Dover.
Boyer, C. B. (1989). A History of Mathematics. New York, NY, USA: Wiley. Boyer, C. B. (2004). History of Analytical Geometry. New York, NY, USA: Dover. Bromwich, T. J. l’A. and MacRobert, T. M. (1991). An Introduction to the Theory of Infinite Series. New York, NY, USA: Chelsea.


Burmann, H. H. (1799). Essai de calcul fonctionnaire aux constantes ad libitum. Memoires de l’Institut Nationale des Sciences et des Arts – Science, Mathematique, Physique 2: 13–17. Burnham, K. P. and Anderson, D. R. (2002). Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach. New York, NY, USA: Springer-Verlag. Carlitz, L. (1963). The inverse of the error function. Pacific Journal of Mathematics 13: 459–470. Carlslaw, H. S. and Jaeger, J. C. (1963). Operational Methods in Applied Mathematics. Dover, NH, USA: Dover. Cartwright, J. H. E. and Piro, O. (1992). The dynamics of Runge-Kutta methods. International Journal of Bifurcations Chaos 2: 427–449. Casey, J. (1996). Exploring Curvature. Wiesbaden, Germany: Vieweg. Comenetz, M. (2002). Calculus: The Elements. Singapore: World Scientific. DasGupta, A. (2010). Fundamentals of Probability: A First Course. New York, NY, USA: Springer-Verlag. Davis, P. J. (1959). Leonhard Euler’s integral: a historical profile of the gamma function. American Mathematics Monthly 66: 849–869. Dhrymes, P. J. (1978). Mathematics for Econometrics. New York, NY, USA: Springer-Verlag. Eberlein, W. F. (1977). On Euler’s infinite product for the sine. Journal of Mathematical Analysis and Applications 58: 147–151. Euler, L. (1738). De progressionibus transcendentibus seu quarum termini generales algebraice dari nequeunt. Commentarii Academiae Scientiarum Petropolitanae 5: 36–57. Eves, H. (1963). A Survey of Geometry. Boston, MA, USA: Allyn & Bacon. Fanchi, J. R. (2006). Math Refresher for Scientists and Engineers. New York, NY, USA: Wiley. Feller, W. (2008). An Introduction to Probability Theory and Its Applications. Hoboken, NJ, USA: Wiley. Fine, N. J. (1988). Basic Hypergeometric Series and Applications. Providence, RI, USA: American Mathematical Society. Fisher, R. A. (1925). Applications of Student’s t-distribution. Metron 5: 3–17. Fisher, R. A. (1925). Expansion of Student’s integral powers of n-1. 
Metron 5: 22–32. Fisher, R. A. (1948). Statistical Methods for Research Workers. Edinburgh, UK: Oliver & Boyd. Forsythe, G. E., Malcolm, M. A., and Moler, C. B. (1977). Computer Methods for Mathematical Computations. Englewood Cliffs, NJ, USA: Prentice-Hall. Friedmann, T. and Hagen, C. R. (2015). Quantum mechanical derivation of the Wallis formula for pi. Journal of Mathematical Physics 56: 112101. Fuller, A. T. (1963). Bibliography of Pontryagin’s maximum principle. Journal of Electronics and Control 15: 513–517. Gasper, G. and Rahman, M. (1990). Basic Hypergeometric Series. Cambridge, UK: Cambridge University Press. Geering, H. P. (2007). Optimal Control with Engineering Applications. New York, NY, USA: Springer. Golub, G. H. and van Loan, C. F. (1983). Matrix Computations. Baltimore, MA, USA: John Hopkins University Press. Gradshteyn, I. S. and Ryzhik, I. M. (1980). Tables of Integrals, Series, and Products. San Diego, CA, USA: Academic Press. Graybill, F. A. (1983). Matrices with Applications in Statistics. Belmont, CA, USA: Wadsworth International. Graybill, F. A. and Bowden, D. C. (1967). Linear segment confidence bands for simple linear models. Journal of American Statistical Association 62: 403–408.


Grossman, S. I. (1986). Multivariate Calculus, Linear Algebra and Differential Equations. Cambridge, MA, USA: Academic Press. Guimarães, R. C. and Cabral, J. A. S. (1997). Estatística. Lisboa, Portugal: McGraw-Hill. Hairer, E. and Wanner, G. (1996). Solving Ordinary Differential Equations. II: Stiff and Differential-Algebraic Problems. Berlin, Germany: Springer-Verlag. Hairer, E., Norsett, S. P., and Wanner, G. (1993). Solving Ordinary Differential Equations. I: Nonstiff Problems. Berlin, Germany: Springer-Verlag. Hamming, R. W. (1991). The Art of Probability for Engineers and Scientists. Boston, MA, USA: Addison-Wesley. Harris, J. W. and Stocker, H. (1998). Handbook of Mathematics and Computational Science. New York, NY, USA: Springer. Helmert, F. R. (1875). Ueber die Bestimmung des wahrscheinlichen Fehlers aus einer endlichen Anzahl wahrer Beobachtungsfheler. Zeitschrift fur Mathematik und Physik 20: 300–303. Helmert, F. R. (1876). Ueber die Wahrscheinlichkeit der Potenzsummen der Beobachtungsfehler und uber einige damit im Zusammenhange stehende Fragen. Zeitschrift fur Mathematik und Physik 21: 102–219. Hildebrand, F. B. (1956). Introduction to Numerical Analysis. New York, NY, USA: McGraw-Hill. Hotelling, H. (1931). The generalization of Student’s ratio. Annals of Mathematical Statistics 2: 360–378. Irving, J. and Mullineux, N. (1959). Mathematics in Physics and Engineering. New York, NY, USA: Academic Press. Jagdish, K. P. and Campbell, B. R. (1996). Handbook of the Normal Distribution. New York, NY, USA: Marcel Dekker. James, R. C. (1966). Advanced Calculus. Belmont, CA, USA: Wadsworth. Jaynes, E. T. (1957). Information theory and statistical mechanics. Physical Review 106: 620–630. Jaynes, E. T. (1975). Information theory and statistical mechanics, II. Physical Review 108: 171–190. Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge, UK: Cambridge University Press. Jeffreys, H. and Jeffreys, B. S. (1988). 
Methods of Mathematical Physics. Cambridge, UK: Cambridge University Press. Johnson, R. and Bhattacharya, G. (1996). Statistics: Principles and Methods. Hoboken, NJ, USA: Wiley. Johnson, N. L., Kotz, S., and Balakrishnan, N. (1994). Continuous Univariate Distributions. Hoboken, NJ, USA: Wiley. Kaplan, W. (1992). Advanced Calculus. Reading, MA, USA: Addison-Wesley. Katz, V. J. (1998). A History of Mathematics: An Introduction. Reading, MA, USA: Addison-Wesley. Kirk, D. E. (1970). Optimal Control Theory: An Introduction. New York, NY, USA: Prentice Hall. Kline, M. (1998). Calculus: An Intuitive and Physical Approach. New York, NY, USA: Dover. Koepf, W. (2014). Hypergeometric Summation: An Algorithmic Approach to Summation of Special Function Identities. Braunschweig, Germany: Vieweg Verlag. Krantz, S. G. (2004). A Handbook of Real Variables: With Applications to Differential Equations. Boston, MA, USA: Birkhauser Boston.


Kreyszig, E. (1979). Advanced Engineering Mathematics. New York, NY, USA: Wiley. Lagrange, J. L. (1770). Nouvele méthode pour résoudre les équations littérales par le moyen des séries. Mémoires de l’Académie Royale des Sciences et Belles-Lettres de Berlin 24: 251–326. Lehmann, E. L. and Romano, J. P. (2005). Testing Statistical Hypotheses. New York, NY, USA: Springer-Verlag. Leipnik, R. B. (1991). On lognormal random variables: I – the characteristic function. Journal of Australian Mathematical Society Series B 32: 327–347. Lettenmeyer, F. (1936). Uber die sogenannte Hospitalsche Regel. Journal fur die Reine und Angewandte Mathematik 174: 246–247. Lyche, R. T. (1962). Matematisk Analyse II. Oslo, Norway: Gylendal Norsk Forlag. MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge, UK: Cambridge University Press. Magnus, J. R. and Neudecker, H. (1999). Matrix Differential Calculus with Applications in Statistics and Econometrics. New York, NY, USA: Wiley. Mardia, K., Kent, J. T., and Bibby, J. (1979). Multivariate Analysis. Cambridge, MA, USA: Academic Press. Mason, R., Lind, D., and Marchal, W. (1998). Statistics, an Introduction. Three Lakes, WI, USA: Cole Publishing. Melnikov, Y. A. (2011). Green’s Functions and Infinite Products. New York, NY, USA: Springer. Merserve, B. E. (1983). Fundamental Concepts of Geometry. New York, NY, USA: Dover. Miller, I. and Miller, M. (2004). John E. Freund’s Mathematical Statistics with Applications. Upper Saddle River, NJ, USA: Prentice Hall. Miller, R. G. (1966). Simultaneous Statistical Inference. New York, NY, USA: Springer-Verlag. Montgomery, D. (2009). Design and Analysis of Experiments. Hoboken, NJ, USA: Wiley. Mood, A., Graybill, F. A., and Boes, D. C. (1974). Introduction to the Theory of Statistics. New York, NY, USA: McGraw-Hill. Moore, D. (2003). Introduction to the Practice of Statistics. New York, NY, USA: W. H. Freeman. Neyman, J. and Pearson, E. S. (1933). 
On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of Royal Society A 231: 694–706. Olver, F.W.J., Lozier, D.W., Boisvert, R.F., and Clark, C.W. (eds.) (2010). NIST Handbook of Mathematical Functions. Cambridge, UK: Cambridge University Press. Ossendrijver, M. (2016). Ancient Babyonian astronomers calculated Jupiter’s position from the area under a time-velocity graph. Science 351: 482–484. Park, S. Y. and Bera, A. K. (2009). Maximum entropy autoregressive conditional heteroskedasticity model. Journal of Econometrics 150: 219–230. Pfeiffer, P. E. (1978). Concepts of Probability Theory. Mineola, NY, USA: Dover. Piskounov, N. (2000). Differential and Integral Calculus. Moscow, Russia: Hayka. Pontryagin, L. S., Boltyanskii, V. G., Gamkrelidze, R. V., and Mischenko, E. F. (1962). The Mathematical Theory of Optimal Processes. New York, NY, USA: Interscience. Poularikas, A.D. (ed.) (2000). The Transforms and Applications Handbook. Boca Raton, FL, USA: CRC Press. Press, W. H., Flannery, B. P., Tukolsky, S. A., and Vetterling, W. T. (1992). Numerical Recipes in FORTRAN: The Art of Scientific Computing. Cambridge, UK: Cambridge University Press.

Further Reading

Schervish, M. (1996). Theory of Statistics. New York, NY, USA: Springer-Verlag.
Shaffer, J. P. (1995). Multiple hypothesis testing. Annual Review of Psychology 46: 561–584.
Shannon, C. E. and Weaver, W. (1949). The Mathematical Theory of Communication. Urbana, IL, USA: University of Illinois Press.
Sharma, A. K. (2005). Application of Integral Calculus. New Delhi, India: Discovery Publishing House.
Simon, M. K. (2002). Probability Distributions Involving Gaussian Random Variables. New York, NY, USA: Springer-Verlag.
Singh, R. R. (1993). Engineering Mathematics. New York, NY, USA: McGraw-Hill.
Spain, B. and Smith, M. G. (1970). Functions of Mathematical Physics. London, UK: Van Nostrand Reinhold.
Spivak, M. (1994). Calculus. Houston, TX, USA: Publish or Perish.
Steel, R. G. D. and Torrie, J. H. (1960). Principles and Procedures of Statistics with Special Reference to the Biological Sciences. New York, NY, USA: McGraw-Hill.
Steinbrecher, G. and Shaw, W. T. (2008). Quantile mechanics. European Journal of Applied Mathematics 19: 87–112.
Stephanopoulos, G. (1984). Chemical Process Control: An Introduction to Theory and Practice. Englewood Cliffs, NJ, USA: Prentice-Hall.
Stephenson, G. (1973). Mathematical Methods for Science Students. London, UK: Longman.
Stoer, J. and Bulirsch, R. (1980). Introduction to Numerical Analysis. New York, NY, USA: Springer-Verlag.
Stuart, A., Ord, K., and Arnold, S. (1999). Kendall’s Advanced Theory of Statistics: Volume 2 – Classical Inference and the Linear Model. Hoboken, NJ, USA: Wiley.
Student (1908). The probable error of a mean. Biometrika 6: 1–25.
Taylor, A. E. (1952). L’Hospital’s rule. American Mathematical Monthly 59: 20–24.
Temme, N. M. (1996). Special Functions: An Introduction to the Classical Functions of Mathematical Physics. New York, NY, USA: Wiley.
Trahan, D. H. (1969). The mixed partial derivatives and the double derivative. American Mathematical Monthly 76: 76–77.
Triola, M. (2001). Elementary Statistics. Boston, MA, USA: Addison-Wesley.
Varadarajan, V. S. (2007). Euler and his work on infinite series. Bulletin of the American Mathematical Society 44: 515–539.
Wästlund, J. (2007). An elementary proof of the Wallis product formula for pi. American Mathematical Monthly 114: 914–917.
Watson, G. N. (1966). A Treatise on the Theory of Bessel Functions. Cambridge, UK: Cambridge University Press.
Whittaker, E. T. and Watson, G. N. (1990). A Course of Modern Analysis. Cambridge, UK: Cambridge University Press.
Wiener, J. (2000). Integrals of cos²ⁿx and sin²ⁿx. College Mathematics Journal 31: 60–61.
Wilde, D. J. and Beightler, C. S. (1967). Foundations of Optimization. Englewood Cliffs, NJ, USA: Prentice Hall.
Wilk, M. B. and Gnanadesikan, R. (1968). Probability plotting methods for the analysis of data. Biometrika 55: 1–17.
Wilson, W. A. and Tracey, J. I. (1925). Analytic Geometry. Lexington, MA, USA: D. C. Heath.
Wishart, J. (1928). The generalised product moment distribution in samples from a normal multivariate population. Biometrika 20: 32–52.
Wolsson, K. (1989). A condition equivalent to linear dependence for functions with vanishing Wronskian. Linear Algebra and its Applications 116: 1–8.


Working, H. and Hotelling, H. (1929). Applications of the theory of error to the interpretation of trends. Journal of the American Statistical Association 24: 73–85.
Young, W. H. (1908). On the conditions for the reversibility of the order of partial differentiation. Proceedings of the Royal Society of Edinburgh 29: 136–164.
Zia, R. K. P., Redish, E. F., and McKay, S. R. (2007). Making sense of the Legendre transform. American Journal of Physics 77: 614–655.


Index

a

alternating operator  Eq. 3.147
altitude  78
amplitude  264, 278
analytical geometry  505, 667
angle
  right  74, 111
area  73, 104, 105, 111, 113, 117

b

base  78
bisector line  8, 79, 250
bissectrix of quadrants  see bisector line
boundary condition  228

c

calculus  263, 331, 505
centroid  551, 556, 557
circle  511, 533
  area  533, Eq. 13.208, 536
  radius  536
circular sector  535
  area  Eq. 13.214
circumcircle  76–78
circumference  511, 514, 520, 521, 525, 527, 529, 569
  descriptor  Eq. 13.37, 512
  parametric equation  535
  perimeter  520, Eq. 13.112
  radius  512
circumscribed circle  see circumcircle
condition
  universal  10
cone  510, 511, 538, 540
  axis of symmetry  511

  base area  Eq. 13.249
  generating circle  511
  halves  511
  nappes  see halves
  radius  540
  side area  Eq. 13.245
  volume  553, Eq. 13.331
conic  see conical
conical  516
  curve  505, 510, 511
  section  510, 515
coordinate geometry  see analytical geometry
coordinates  505
  Cartesian  see rectangular
  cylindrical  552, 695, 817, 828
    angular position  828
    definition  695
    radial position  828
    representation  695
    symmetry  695
  origin  511, 514
  polar  see cylindrical
  rectangular  552, 695, 817
  spherical  705
    definition  705
    representation  705
    symmetry  705
cube  45, 556
  area  Eq. 13.318
  volume  Eq. 13.352
curvature  243, 525, 729, 732
  arc differential  732
  circumference  Eq. 13.160
  definition  Eq. 16.381
  Euler’s theorem  Eq. 16.434



curvature (cont’d)
  line in explicit form  Eq. 13.156
  line in implicit form  Eq. 13.165
  orthogonal section  732, 734, 735
  principal section  see orthogonal section
  radius of curvature  735, Eq. 16.429
  vector  729
    three-dimensional surface  729
      parametric equation  729
curve
  arc length  518, 527, 536
    explicit calculation  Eq. 13.90
    parametric calculation  Eq. 13.91
  normal  736
  parametric equation  518, 736
  plane  517, 736
  space  731
  tangent  517, 525
cylinder  537, 555
  base area  Eq. 13.233
  side area  Eq. 13.230
  volume  Eq. 13.348

d

degrees of freedom  305
derivative  41, 261, 291, 294, 295, 322, 331–338, 358, 359, 362, 374, 375, 414, 420, 474, 614, 835
  composite function  303, 321, Eq. 10.205, 324, 347, 851
  concavity  299
  cosecant  316, Eq. 10.150
  cosine  Eq. 10.48, 315
  cotangent  315, Eq. 10.147
  cross  see partial, second-order
  curvature of concavity  see concavity
  definition  294, Eq. 10.21
  determinant  356
    Jacobi’s formula for first-order derivative  Eq. 10.454, 357
    Jacobi’s formula for second-order derivative  Eq. 10.461
  directional  307, 308, 368
    calculation  Eq. 10.101
    definition  Eq. 10.97
  expected value  833
  exponential
    general base  Eq. 10.227

    natural  Eq. 10.169, 324
    of function  Eq. 10.221, 869
  fifth-order  468, 473
  first-order  see derivative
  fourth-order  467, 472
  higher-order  43, 261, 300, 302, 304, 327, 657
    definition  Eq. 10.56
  hyperbolic
    cosine  Eq. 10.111
    sine  Eq. 10.109
  implicit function  Eq. 10.235, 528
    second-order  Eq. 10.244
  integral  414, Eq. 11.295, 777, 810, 876
  inverse  656
    cosine  319, Eq. 10.185
    cotangent  320, Eq. 10.199
    function  316, Eq. 10.164
    sine  318, Eq. 10.177
    tangent  Eq. 10.192
  Leibnitz’s formulation  291, 295, 327
  Leibnitz’s notation  see Leibnitz’s formulation
  Leibnitz’s rule  see integral
  logarithm  294, 322, Eq. 10.212
    general base  Eq. 10.129
    natural  Eq. 10.40, 492
  logarithmic  see logarithm
  matrix  349
    Ax with regard to x  Eq. 10.394
    Ax{z} with regard to z  Eq. 10.397
    A⁻¹{ω} with regard to ω  Eq. 10.438
    A{ω}B{ω} with regard to ω  Eq. 10.432
    xᵀAx with regard to x  Eq. 10.420
    yᵀAx with regard to x  Eq. 10.410
    yᵀAx with regard to y  Eq. 10.412
    yᵀAx{z} with regard to z  Eq. 10.401
    yᵀ{z}Ax{z} with regard to z  Eq. 10.426
    yᵀ{z}x{z} with regard to z  Eq. 10.401
  nth-order  see higher-order
  partial  291, 292, 300, 301, 303, 305, 308, 357, 360, 367, 368, 412, 532, 700, 701, 718, 725, 879
    change of fixed variable  Eq. 10.83
    cross  see second-order
    definition  Eqs. 10.57, 10.58
    exchange of fixed, independent and dependent variables  Eq. 10.81
    first-order  see partial


    Schwarz’s theorem  Eq. 10.65, 326, 369, 402, 406, 408, 413, 609, 685, 726
    second-order  Eq. 10.59, 303, 415, 671
    vector  673
    Young’s theorem  see Schwarz’s theorem
  power function  Eq. 10.29
  power of function
    constant as exponent  Eq. 10.126
    function as exponent  Eq. 10.232
  product of constant by function  Eq. 10.120
  product of functions  309, Eq. 10.119, 324, 356, 378, 668, 674, 681, 833, 851
  quotient of functions  313, Eq. 10.138, 964
  ratio of functions  see quotient of functions
  reciprocal of function  Eq. 10.139
  secant  316, Eq. 10.153
  second-order  299, 307, 325, 326, 329, 357, 369, 404, 406, 413, 421, 467, 472, 527, 528, 614, 615, 640, 643, 656, 657, 686, 725, 731, 799, 830
    definition  Eqs. 10.51, 10.61, 10.63, 10.64
  seventh-order  469, 473
  sine  Eq. 10.44, 315
  sixth-order  468, 473
  sum of functions  Eq. 10.106, 798
  sum of matrices  931
  tangent  Eq. 10.143, 316, 320, 333
    slope  295
  theorems  331, 595
    Cauchy’s theorem  334, Eq. 10.287, 335–337
    Lagrange’s theorem  229, 240, 303, 332, Eq. 10.274, 334, 337, 375, 396, 443, 590, 810
    l’Hôpital’s rule  24, 337, 445, 446, 544, 548, 560, 821, 823
      for 0/0 at finite point  Eq. 10.309
      for 0/0 at infinity  Eq. 10.315
      for 0·∞ at finite point  Eqs. 10.349, 10.353
      for 0⁰ at finite point  Eqs. 10.376, 10.377
      for 1^∞ at finite point  Eqs. 10.368, 10.372
      for ∞−∞ at finite point  Eq. 10.361
      for ∞/∞ at finite point  Eq. 10.341

      for ∞⁰ at finite point  Eqs. 10.381, 10.382
    Rolle’s theorem  236, 331, Eq. 10.269, 332, 334, 336, 439, 442, 761, 788
  third-order  467, 472, 657
  total  295, 304
determinant  3, 6, 116, 119, 138, 151, 152, 153, 158–161, 173, 174, 176, 179–183, 187, 193, 214, 361, 758
  base sequence  152
  block matrix  179, Eqs. 6.154, 6.159
  calculation  157
    product of eigenvalues  Eq. 6.196
  cofactor expansion  see Laplace’s expansion theorem
  continuant of tridiagonal matrix  Eq. 6.136
  definition  Eq. 6.3
  equal rows  Eq. 6.81
  exchanges  152, 153
  higher-order  6
  identity matrix  Eq. 6.109, 931
  inverse matrix  Eq. 6.134
  inversion  152
  Jacobi’s operation  169, 197
  Laplace’s expansion theorem  159, 161–163, 165, 166, 170, 175, 177, 179, 181–183, 186, 187, 189, 197, 198, 356, 685, 690, 692, 709, 757–759
    by column  Eq. 6.42
    by row  Eq. 6.41
    generalized  Eq. 6.166
  lower triangular
    block matrix  179, 181
    matrix  171, 180
  main diagonal  179
  multiple of matrix  Eq. 6.72
  nil row  Eq. 6.63
  nil square matrix  181
  parity  152, 153
  permutations  152
  product of matrices  Eq. 6.130, 180
  proportional rows  Eq. 6.95
  row added to multiple of other row  Eq. 6.99
  row multiple of other row  Eq. 6.70
  Sarrus’ rule  151


determinant (cont’d)
  second-order  5, Eq. 1.10, 6, 151, 152, 157, 359, 506, 616, 621, 671, 689, 692, 697, 757, 879, 892, 967
  sum of rows  Eq. 6.78
  swapped rows  Eq. 6.91
  terms  151, 152
  third-order  Eq. 6.5, 157, 671, 689, 690, 692, 709
  transpose matrix  Eq. 6.51
  tridiagonal matrix  177, 178
  upper triangular matrix  Eq. 6.106, 180
differentiable  see derivative
differential  291, 327, 365, 376, 380, 381, 412, 534, 535, 882
  approximate function  291, 293
  composite function  Eqs. 10.18, 10.20
  error estimation  293
  higher-order  293
    definition  Eq. 10.11
  integral  Eq. 11.21
  multivariate  292
    definition  Eq. 10.7
  operator  692, 700, 724
  tangent  292
  total  292, 731
  univariate
    definition  Eq. 10.1
differential equation  185, 228, 261, 559, 597
  arbitrary function  597
  boundary condition  597
  degree  597
  homogeneous  597
  initial condition  597
  integration  597, 893
    by separation of variables  426, 427, 599–605, 616, 620, 622, 640, 655, 665, 785
    constants  603
  linear  597
  first-order  597, 598, 600, 614, 640
    Bernoulli’s equation
      canonical form  Eq. 15.13
      general solution  Eq. 15.12
    bivariate  598, 602
    general form  Eq. 15.1

    homogeneous equation  601
    homogeneous function of same degree
      canonical form  Eq. 15.7
      general solution  Eq. 15.2
    integrating factor  600
    linear  600, 622, 650
      canonical form  Eq. 15.22
      general solution  Eq. 15.34
    nonlinear  598, 604
    product of two univariate functions
      canonical form  Eq. 15.3
      general solution  Eq. 15.4
    univariate  598, 600
  higher-order  650
    constant coefficients  650
      canonical form  Eq. 15.395
      characteristic equation  Eq. 15.400
      Euler’s equation
        canonical form  Eq. 15.428
        characteristic equation  Eqs. 15.450, 15.460
        differential operator form  Eq. 15.447
        general solution  Eq. 15.455
      general solution
        coincident eigenvalues  Eq. 15.427
          Taylor’s expansion coefficients  Eq. 15.414
        distinct eigenvalues  Eq. 15.401
    linear  650, Eq. 15.394
    nth-order  see higher-order
  second-order  597, 602, 632, 648, 704
    constant coefficients
      canonical form  Eq. 15.138
      characteristic equation  Eq. 15.146, 618
      dependent (partial) solutions  Eq. 15.183
      general solution  Eq. 15.148
      independent (partial) solutions  Eq. 15.158
    general form  Eq. 15.47
    linear  613, 650
      canonical form  Eq. 15.107
      dependent variable-free
        canonical form  Eq. 15.186
        general solution  Eq. 15.192


      general solution  Eq. 15.108
    nonlinear  603, 606
      dependent variable-free
        canonical form  Eq. 15.48
        general solution  Eq. 15.56
      independent variable-free  604
        homogeneous  972
          canonical form  Eq. 15.70
          general solution  Eq. 15.76
        nonhomogeneous
          canonical form  Eq. 15.57
          general solution  Eq. 15.69
    particular integral  Eqs. 15.110, 15.129, 15.131
    polynomial coefficients  622
      canonical form  Eq. 15.194
      Frobenius’ method  622, 643, 889
        homogeneous equation  623, 650
          Bessel’s equation  632, 638, 648
            canonical form  Eq. 15.247
            coefficient calculation
              even-degree  Eq. 15.286
              fourth-degree  Eq. 15.280
              odd-degree  Eq. 15.281
              second-degree  Eq. 15.266
              third-degree  Eq. 15.273
            general solution  Eq. 15.294
            indicial equation  Eq. 15.253
            second independent solution  Eq. 15.374
          coefficient calculation  Eq. 15.210
            first-degree arbitrary constant  Eqs. 15.218, 15.227, 15.229
            second-degree arbitrary constant  Eqs. 15.221, 15.222, 15.242, 15.246
            third-degree arbitrary constant  Eqs. 15.235, 15.238
          general solution  Eq. 15.196
          indicial equation  Eqs. 15.213, 15.216, 643
      MacLaurin’s method  643, Eq. 15.349, 647
        coefficient calculation
          fourth-degree  Eq. 15.371
          second-degree  Eq. 15.352

          third-degree  Eq. 15.359
        single solution  Eq. 15.341, 647
      second independent solution  647, Eqs. 15.375, 15.391
    variation of constant  613
    zero-th term-free  621
  set  see system
  system  606
    Hartman and Grobman’s theorem  606, 610
    linear  609
      critical point  610, 612
        center point  611, 613
          elliptical  613
        improper node  611, 613
        proper node  611, 612
        saddle point  611
        spiral point  611, 613
        stable  610, 612, 613
        star point  611
        unstable  610, 612, 613
      deviation variables  609
        canonical form  Eq. 15.95
      differential coefficients  609
      eigenvalues  610, Eq. 15.106, 612, 613
      eigenvectors  612
      phase portrait  611, 613
    nonlinear  606, 608
      critical point  606, 607, 608
        center point  608
        improper node  608
        proper node  608
        saddle point  608
        spiral point  608
        star point  608
      deviation variables  607
        canonical form  Eqs. 15.91, 15.92
      equilibrium solution  see critical point
      original variables  606
      phase portrait  607, 608
      singularity  607
      stationary point  see critical point
  nonhomogeneous  597
  nonlinear  597
  ordinary  597
  partial  597, 660, 664, 695
differential equation