Vectors and Tensors of Physical Fields
ISBN: 9781737462415, 1737462419
English, 361 pages


Vectors and Tensors of Physical Fields

Robert P. Massé

Seventh Edition

Copyright © 2021 Robert P. Massé. All rights reserved.

Seventh Edition
ISBN: 978-1-7374624-1-5

Also by Robert P. Massé:
Physics: Nature of Physical Fields and Forces
Physics: Where It Went Wrong
Prime Numbers and Congruence Theory
Complex Variables
Laplace Transforms

Massé, Robert P. Vectors and Tensors of Physical Fields

Dedication

This book is dedicated to my beloved wife Kay, whose loving and insightful support made it possible.


Preface

Mathematics is an essential tool in the continuing struggle to understand the physical world.


Much of what is now considered applied mathematics was developed in an effort to model the physical fields and forces of our Universe. A prime motivation for the development of vector and tensor concepts has certainly been the desire to better represent gravitational and electromagnetic fields.

Physical fields are not merely useful concepts in physics. Physical fields exist in reality. In fact, the fundamental forces present in our Universe act through such fields. It is these fields that form the foundation for the laws of physics. To understand the basis of physics, then, it is necessary to represent physical fields mathematically so that actions of these fields can be described and predicted. Vector and tensor analysis techniques are powerful mathematical procedures consisting of entities and operations useful for representing properties of physical fields.

The intent of this book is to provide the necessary information so that the full power of vector and tensor analysis is available for application to the study of physical fields. Analysis techniques for vectors and tensors are provided in this book. The mathematical knowledge assumed as a prerequisite is that of introductory level calculus.

While several approaches are available for the study of vector and tensor analysis, the approach taken in this book is the one thought to offer the best physical insight. By making full use of vector and tensor components and of their transformations, the relation of vectors and tensors to reality is emphasized.

The material in this book is organized and presented in a manner designed to make this a comprehensive reference work on vectors and tensors. While many example problems are provided in this book, rigorous mathematical proofs are avoided since this is an applied mathematics book. Most of the sources for this book are given in the references at the end of the book.

The research that forms the foundation of this book could not have been accomplished without the outstanding interlibrary loan programs of Florida, made available through the good resources of the Lee County Library System and the Tampa Bay Library Consortium. I am very grateful to the personnel at the Bonita Springs and Fort Myers Public Libraries, and to the personnel at the Fort Myers Beach Library for all their diligent efforts in obtaining research material.

Robert P. Massé

First Edition – August 20, 2012
Second Edition – August 28, 2014
Third Edition – January 14, 2016
Fourth Edition – November 21, 2019
Fifth Edition – January 20, 2020
Sixth Edition – June 1, 2021
Seventh Edition – June 19, 2021

Contents

1  Vector Algebraic Operations
   1.1   Physical Fields
   1.2   Definition of a Vector
   1.3   Coordinate Base Vectors
   1.4   Vector Magnitude
   1.5   Direction Cosines
   1.6   Position Vectors
   1.7   Unit Vectors
   1.8   Vector Addition
   1.9   Product of a Scalar and a Vector
   1.10  Vector Subtraction
   1.11  Equal Point Vectors
   1.12  Indicial Notation
   1.13  Transformation Laws for Vector Components and for Scalars
   1.14  Relations among Direction Cosines
   1.15  Covariant and Contravariant Components of Vectors
   1.16  Scalar or Dot Product
   1.17  Vector or Cross Product
   1.18  Scalar Triple Product
   1.19  Vector Triple Product
   1.20  Reciprocal Vectors
   1.21  Pseudovectors and Pseudoscalars

2  Differentiation and Integration of Vector Functions
   2.1   Ordinary Differentiation of Vectors
   2.2   Partial Differentiation of Vectors
   2.3   Position Vector Function and Space Curves
   2.4   Unit Tangent Vector
   2.5   Unit Principal Normal Vector
   2.6   Curvature of a Space Curve
   2.7   Unit Binormal Vector
   2.8   Frenet-Serret Formulas
   2.9   Position Vector Derivatives
   2.10  Arc Length
   2.11  Particle Motion along a Space Curve
   2.12  Gradient
   2.13  Vector Line Integrals
   2.14  Vector Surface Integrals
   2.15  Vector Volume Integrals

3  Vector Field Methods
   3.1   Divergence
   3.2   Curl
   3.3   Curl and Divergence Identities
   3.4   Laplacian
   3.5   Gauss's Theorem
   3.6   Gradient Theorem
   3.7   Stokes's Theorem
   3.8   Green's Theorems
   3.9   Vector Field Classifications
   3.10  Helmholtz's Partition Theorem
   3.11  Other Potentials
   3.12  Conservative Vector Fields
   3.13  Vector Integral Definitions

4  Vectors in Curvilinear Coordinates
   4.1   Linear Relations among Vectors
   4.2   Uniqueness of Vector Components
   4.3   Orthogonalization
   4.4   Coordinate Systems
   4.5   Mappings
   4.6   Base Vectors
   4.7   Reciprocal Base Vectors
   4.8   Transformations between Base Vectors
   4.9   Transformations between Reciprocal Base Vectors
   4.10  Scale Factors
   4.11  Components of Vectors in Curvilinear Coordinate Systems
   4.12  Physical Components of Vectors
   4.13  Jacobians of Transformations
   4.14  Jacobians of Inverse Transformations
   4.15  Derivatives of Jacobians

5  Tensor Algebraic Operations
   5.1   Tensor Transformation Laws
   5.2   Transformation Laws in Matrix Form
   5.3   Cartesian Tensors
   5.4   Kronecker Delta
   5.5   Tensor Addition and Subtraction
   5.6   Symmetric and Antisymmetric Tensors
   5.7   Dyadic Tensor Products
   5.8   Quotient Law

6  Metric Tensors
   6.1   Metric Coefficients and Metric Tensor
   6.2   Reciprocal Metric Tensor
   6.3   Associated Tensors
   6.4   Reciprocal Base Vectors
   6.5   Relation between Covariant and Contravariant Components of a Vector
   6.6   Metric Tensors in Orthogonal Curvilinear Coordinate Systems
   6.7   Transforming Metric Tensors from Orthonormal Coordinate Systems
   6.8   Generalized Scalar Product
   6.9   Scale Factors
   6.10  Physical Components of Tensors
   6.11  Physical Components of Tensors in Orthogonal Coordinate Systems
   6.12  Transformation Law for Physical Components of Vectors
   6.13  Tensor Equations

7  Differentiation of Tensor Functions
   7.1   Derivative of a Vector
   7.2   Christoffel Symbols
   7.3   Relations for Christoffel Symbols
   7.4   Christoffel Symbols in Orthogonal Curvilinear Coordinate Systems
   7.5   Covariant Derivative of a Tensor
   7.6   Ricci's Tensor
   7.7   Absolute Derivative of a Tensor

8  Tensors in Curvilinear Coordinates
   8.1   Derivatives of Base Vectors of Curvilinear Coordinate Systems
   8.2   Derivatives of Unit Base Vectors of Orthogonal Curvilinear Coordinate Systems
   8.3   Permutation Tensor
   8.4   Reciprocal Permutation Tensor
   8.5   Permutation Tensor Relations
   8.6   Vector Product in Curvilinear Coordinate Systems
   8.7   Area and Volume Elements in Curvilinear Coordinate Systems
   8.8   Area and Volume Elements in Orthogonal Curvilinear Coordinate Systems
   8.9   Relative Tensors
   8.10  Covariant Derivative of a Relative Tensor
   8.11  Pseudotensors
   8.12  Antisymmetric Rank Two Tensor as a Pseudovector
   8.13  Gradient in Curvilinear Coordinate Systems

9  Tensor Field Methods
   9.1   Divergence in Curvilinear Coordinate Systems
   9.2   Curl in Curvilinear Coordinate Systems
   9.3   Laplacian in Curvilinear Coordinate Systems
   9.4   Integral Theorems in Curvilinear Coordinate Systems
   9.5   Eigenvectors and Eigenvalues
   9.6   Deviators and Spherical Tensors
   9.7   Riemann Tensors
   9.8   Riemann Tensors in Euclidean Coordinate Space
   9.9   Higher Order Covariant Derivatives for Rank Two Tensors
   9.10  The Ricci Tensor
   9.11  The Einstein Tensor
   9.12  Isotropic Tensors

Appendix A  The Greek Alphabet

Appendix B  Vector Field Operations
   B.1   Source/Sink Points and Field Points
   B.2   Dirac Delta Function
   B.3   Solution to Poisson's Equation
   B.4   Uniqueness of the Vector Field
   B.5   Green's Formula

Appendix C  Operations in Orthogonal Curvilinear Coordinates
   C.1   Differential Operations in Orthogonal Curvilinear Coordinate Systems
   C.2   Differential Operations in Rectangular Coordinate Systems
   C.3   Differential Operations in Cylindrical Coordinate Systems
   C.4   Differential Operations in Spherical Coordinate Systems

Appendix D  Vector Identities

References

Chapter 1 Vector Algebraic Operations

$$\vec{A} \times \vec{B} = \varepsilon_{ijk}\, a_i\, b_j\, \hat{e}_k$$

!

Physical fields exist in reality, and their existence and properties are independent of any mathematical coordinate system that may be used to describe them. The mathematics used to express the physical fields of our Universe must, therefore, also be independent of any coordinate system. Mathematical entities known as tensors are defined so as to be unaltered by any coordinate system transformation. The invariance of tensors to any coordinate system transformation makes them ideal for representing physical fields.

Depending upon the nature of a physical field, it may have no direction associated with it, or it may have one or more directions associated with it. When a tensor is representing a physical field, the rank of the tensor corresponds to the number of directions associated with each field particle. All tensors are classified according to their rank (also called order or valence).

A physical field particle attribute that can be defined completely by its magnitude is called a scalar quantity, and can be represented mathematically by a scalar. A scalar has no direction associated with it, and can be specified by a single number. Scalars are tensors of rank zero, and are invariant to any coordinate system change.

A physical field particle attribute that acts in a single direction must be defined by specifying both its magnitude and the direction in which it is acting. Such an attribute is called a vector quantity, and can be represented mathematically by a vector. Since the properties of physical fields can vary from point to point in the field (see Section 1.1), the vector appropriate for representing physical field vector quantities is the point vector. It always occupies and is defined for only a single point. Point vectors are tensors of rank one, and are invariant to any coordinate system change.

A physical field particle attribute that acts in two or more directions at the same time must be defined by specifying both its magnitude and the directions in which it is acting. Such an attribute is called a tensor quantity, and can be represented mathematically by a higher rank tensor. All higher rank tensors are also invariant to any coordinate system change.

There also exist vectors known as line vectors that have a magnitude and a direction, but are dependent on some given coordinate system, and so are not tensors. Line vectors can be used to specify the following:

1. The relative positions of the coordinate axes of a given reference coordinate system. Such vectors are known as base vectors.

2. The position of any given point in a specified coordinate system. Such a vector is known as a position vector.

3. The components of a point vector relative to a rectangular coordinate system.

Line vectors always occupy more than a single point, and so generally cannot represent physical field quantities. Line vectors that can be slid over many points without changing direction or magnitude are known as sliding vectors or localized vectors. Vectors that are attached to one point are known as bound vectors or tied vectors. Since a point vector is always attached to a single point, all point vectors (rank one tensors) are bound vectors.

Point vectors follow algebraic analysis laws that are different from those followed by scalars. In this chapter the components and algebraic operations of point vectors will be presented and explained. First, however, we will provide a brief discussion of physical fields to motivate our study of point vectors.

1.1 PHYSICAL FIELDS

The concept of a physical field was developed first by Euler in the 1750s during his studies of the kinematics of fluids (see Truesdell, 1954). It was James Clerk Maxwell, however, who more generally introduced the field concept into physics in the 1860s. Maxwell derived his field concept from the magnetic lines of force that Michael Faraday had proposed to explain magnetic force patterns in space (Faraday, 1844, 1847, 1855).

A magnetic line of force is a line having the direction of the magnetic force at each point along the line. The concentration of such lines in a given region indicates the intensity of the force in the region. Faraday surmised that these lines of force exert an influence upon each other by transferring their action from particle to contiguous particle of a physical medium.

Considering these lines of force, Maxwell (1861a) stated "we cannot help thinking that in every place where we find these lines of force, some physical state or action must exist in sufficient energy to produce the actual phenomena." Maxwell proceeded to "examine magnetic phenomena from a mechanical point of view" based upon an interpretation of Faraday's concept of a physical medium of contiguous particles.

Contiguous particles are simply particles packed so closely together that there is essentially no space between them. Faraday did not consider the contiguous particles of his physical medium to be matter. He noted that "matter is not essential to the physical lines of magnetic force any more than to a ray of light or heat." Faraday employed the term magnetic field to denote the distribution of magnetic forces in a region (see Gooding, 1980). Faraday's ideas were used by Maxwell to formulate the concept of a continuous physical field in which

all actions occur only between contiguous particles. We will designate these contiguous particles as field particles.

A disturbance resulting from a change in the motion of a field particle can then propagate to a distance only by a succession of actions between contiguous particles. Action-at-a-distance is thereby reduced to actions between particles that are contiguous. Some time duration is required for all these actions to occur, and so field disturbances can only propagate with a velocity that is finite.

We see then that the existence of a continuous physical field in a region requires the existence of a real physical medium consisting of contiguous particles. We will designate this real physical medium as the field medium. The particles of the field medium must be small enough so that a large number of these field particles are contained within a volume element that is considered infinitesimal relative to the volume of the entire region.

These contiguous field particles can then be used to form a continuous mathematical entity known as a continuum. This continuum is obtained by abstracting the field particles of the field medium to point particles, thereby allowing the field medium to be treated as a mathematical continuum filling a region of space with no gaps between the points. The result is a one-to-one correspondence between point particles of the continuum and geometrical points assigned to the field particles in the region of space occupied by the field medium. The mathematical continuum of abstracted physical particles provides justification for using the limiting process of the calculus on physical problems involving fields, and for using differential equations to describe changes in the fields.

Each point particle of the continuum always retains its individual identity as well as all the physical properties (including the kinematical and dynamical properties) of its corresponding field particle except for volume and extension (a point particle has neither). Nevertheless the same density and pressure are assumed to exist in the continuum as in the field medium.

While a physical field requires contiguous field particles capable of being abstracted to form a mathematical continuum, this is not what defines a physical field. Rather, a physical field is some specific attribute associated with each of the field particles being abstracted to form the mathematical continuum. A physical field is not a physical medium then, but is some attribute of the contiguous field particles that form the real physical medium. A physical field can never have an existence independent of the field particles that constitute the field medium. A physical field is more, therefore, than simply a mathematical device used to solve a physical problem. Physical fields are always real.

1.1.1 TYPES OF PHYSICAL FIELDS

Depending upon properties of the field particle attribute that is the physical field, several different types of physical fields can be identified. When the field particle attribute is a scalar quantity so that a scalar quantity is associated with each point of the mathematical continuum, these quantities together form a scalar field. Similarly, when the field particle attribute is a vector quantity or a higher rank tensor quantity so that a vector quantity or a higher rank tensor quantity is associated with each point of the mathematical continuum, then these quantities form a vector field or a tensor field, respectively. In this book we will generally use the term tensor to refer to tensors of rank two or higher. As noted previously, vectors are tensors of rank one.

The temperature in the Earth's atmosphere is an example of a physical scalar field, and the wind velocity in the atmosphere is an example of a physical vector field. The real medium that is abstracted to a mathematical continuum in both these cases is air, and the field particles are extremely small air parcels.

A field that does not vary with time (is independent of time) is termed a stationary field or a steady-state field. A field that does vary with time is termed non-stationary.

1.1.2 VECTOR FIELD SINKS AND SOURCES

A physical vector field consists of some vector attribute associated with the field particles of a real field medium, as was noted in Section 1.1. If within the region of the vector field there exists an entity that produces a discontinuity in the particular particle attribute that is the vector field, then such an entity is known as a source or a sink for the field. Only vector fields have field sources and sinks.

For the special case when the attribute that constitutes the vector field is a kinematic flow property (flow velocity or flow acceleration) of the field medium, sources and sinks of the vector field can be interpreted as places where this real physical field medium is being continuously created and continuously destroyed, respectively (see Section 3.1.1). For such cases we will designate any source as a field flow source and any sink as a field flow sink. When the rate at which a source continuously creates a field medium (or a sink continuously destroys a field medium) is constant, the source (or sink) is referred to as steady.

If the vector field does not consist of a kinematic flow attribute of the field medium, then the source and sink will not represent places where the field medium is, respectively, created and destroyed. That is, they will not represent a field flow source or a field flow sink. Rather, both source and sink will produce the attribute constituting the vector field. The only constraint is: when a source and a sink of this type and of equal strength come together, they can nullify each other. We will designate this type of source as a field nonflow source and the corresponding sink as a field nonflow sink. Since a field nonflow sink is really just another kind of source, we will also designate a field nonflow source to be a field positive source and a field nonflow sink to be a field negative source (recognizing that nonflow sources and sinks can nullify each other).

We can summarize by noting that there exist two types of vector field sources and, correspondingly, two types of vector field sinks:

1. Field flow sources and field flow sinks.

2. Field nonflow sources and field nonflow sinks (field positive sources and field negative sources).

1.2 DEFINITION OF A VECTOR

A vector is a mathematical entity that is invariant to any change in the coordinate system chosen to represent it. A quantity that has magnitude and direction can be represented by a vector. A point vector is a vector that is bound to and defined for only a single point of a physical field. Although a point vector specifies a magnitude and a direction, it does not have length.

1.2.1 VECTOR COMPONENTS

Point vectors are specified relative to any particular coordinate system by a set of three functions known as vector components. While the vector components are coordinate system dependent, they are such that the vector they define is coordinate system independent.

The components of a point vector are defined to be the projections of the magnitude of the vector along the axes of the coordinate system being used. If the components of a point vector are known for any particular coordinate system, they can be determined for any other coordinate system.

Changing the coordinate system used to specify vector components will generally result in a change in the vector components. By definition of a point vector, this change in

components must always be such so as to leave the magnitude and direction of the vector itself unchanged. To ensure that vectors are invariant to any coordinate system change, the coordinate system transformation law for vector components must have a specific form. A mathematical entity then can be identified as a vector simply from the transformation law its components follow. This provides an algebraic method for verifying that a mathematical entity is a vector. As we will see in later chapters, similar verification methods exist for all tensors.

1.2.2 VECTOR COMPONENTS IN RECTANGULAR COORDINATE SYSTEMS

The magnitude of a point vector $\vec{A}$ can be resolved into rectangular components $a_x$, $a_y$, and $a_z$ along the directions of the rectangular coordinate axes x, y, and z, respectively. These axes are, of course, mutually perpendicular. The rectangular components $a_x$, $a_y$, and $a_z$ are then obtained by forming the products of the magnitude of $\vec{A}$ and the cosines of the angles $\theta_x$, $\theta_y$, and $\theta_z$ between the direction of $\vec{A}$ and the directions of the positive x, y, and z axes, respectively. These cosines are termed direction cosines for the vector $\vec{A}$. The rectangular components are given by:

$$a_x = (\text{magnitude of } \vec{A}) \cos\theta_x$$
$$a_y = (\text{magnitude of } \vec{A}) \cos\theta_y \qquad (1.2\text{-}1)$$
$$a_z = (\text{magnitude of } \vec{A}) \cos\theta_z$$

where $a_x$, $a_y$, and $a_z$ can be seen to be the projections of the magnitude of the vector $\vec{A}$ onto the rectangular coordinate axes x, y, and z, respectively.

Figure 1-1: Representation of the magnitude of a vector $\vec{A}$ and its rectangular components $a_x$, $a_y$, and $a_z$.
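Equation (1.2-1) can be illustrated numerically. The minimal Python sketch below (the function name and the sample angles are illustrative, not from the text) resolves a magnitude into rectangular components:

```python
import math

def rectangular_components(magnitude, theta_x, theta_y, theta_z):
    # Equation (1.2-1): each component is the projection of the vector
    # magnitude onto the corresponding coordinate axis.
    return (magnitude * math.cos(theta_x),
            magnitude * math.cos(theta_y),
            magnitude * math.cos(theta_z))

# A vector of magnitude 2 making angles of 60, 45, and 60 degrees with
# the x, y, and z axes (these angles satisfy cos^2 sum = 1).
ax, ay, az = rectangular_components(2.0, math.pi/3, math.pi/4, math.pi/3)
```

Recomputing the magnitude from the three components recovers the original value of 2, consistent with equation (1.4-1) introduced below.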

A vector $\vec{A}$ can be denoted by listing its components:

$$\vec{A} = \langle a_x, a_y, a_z \rangle \qquad (1.2\text{-}2)$$

To represent graphically the projection of the magnitude of a vector $\vec{A}$ on a set of axes of a rectangular coordinate system, a line vector having the same magnitude and direction as the point vector can be used. The tail of the line vector is placed at the coordinate system origin. The magnitude of the point vector $\vec{A}$ is then taken to be the length of the corresponding line vector (see Figure 1-1).

1.3 COORDINATE BASE VECTORS

Coordinate base vectors are a set of vectors used to define a given coordinate system. They are said to form a basis for the coordinate system. A base vector of a coordinate axis is always oriented in the positive direction (the direction of coordinate increase) along the coordinate axis. Base vectors are obviously dependent on the orientation of the coordinate system. Therefore base vectors are not point vectors since they are not invariant, but change when the reference coordinate system is changed. In 3-dimensional space, a basis will always consist of three base vectors that are linearly independent. The concepts of dependence and independence for a set of vectors will be discussed in Chapter 4. For now we simply note that any vector can be expressed as a linear combination of these base vectors.

We will now define $\hat{i}$, $\hat{j}$, and $\hat{k}$ to be vectors that are oriented in a positive direction along the rectangular coordinate axes x, y, and z, respectively, as shown in Figure 1-2. These vectors are tangent to the rectangular coordinate axes, and so they are linearly independent and are a set of base vectors for the rectangular coordinate system. In component form these base vectors are:

$$\hat{i} = \langle 1, 0, 0 \rangle \qquad \hat{j} = \langle 0, 1, 0 \rangle \qquad \hat{k} = \langle 0, 0, 1 \rangle \qquad (1.3\text{-}1)$$

Figure 1-2: Representation of a vector $\vec{A}$ and the base vectors $\hat{i}$, $\hat{j}$, and $\hat{k}$ (short black arrows).

As we will see later (Section 1.7), the base vectors $\hat{i}$, $\hat{j}$, and $\hat{k}$ are all unit vectors (they have unit magnitude). These base vectors are also referred to as a unit orthogonal triad for the rectangular coordinate system.

Since the base vectors $\hat{i}$, $\hat{j}$, and $\hat{k}$ are all linearly independent, a vector $\vec{A}$ can always be expressed as a linear combination of these base vectors:

$$\vec{A} = a_x\,\hat{i} + a_y\,\hat{j} + a_z\,\hat{k} \qquad (1.3\text{-}2)$$

where the coefficients of the unit base vectors are the components of the vector $\vec{A}$.

In equation (1.3-2) the term $a_x\,\hat{i}$ represents a rectangular coordinate system component vector of $\vec{A}$ directed along the x axis and having a magnitude of $a_x$. The terms $a_y\,\hat{j}$ and $a_z\,\hat{k}$ represent similar component vectors of $\vec{A}$ directed along the y and z axes, respectively. Equation (1.3-2) demonstrates the very important decomposition of a general vector into coordinate component vectors. These component vectors are dependent on the orientation of the coordinate system. Therefore they are not point vectors.

The vector $\vec{A}$ given in equation (1.3-2) can be written in matrix form:

$$\vec{A} = \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix}^{T} \begin{bmatrix} \hat{i} \\ \hat{j} \\ \hat{k} \end{bmatrix} = \begin{bmatrix} a_x & a_y & a_z \end{bmatrix} \begin{bmatrix} \hat{i} \\ \hat{j} \\ \hat{k} \end{bmatrix} \qquad (1.3\text{-}3)$$

where the superscript T indicates the transpose of the matrix.
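Equations (1.3-1) and (1.3-2) can be sketched in a few lines of Python (the helper names `scale` and `add` are ours, purely for illustration): expanding a vector as a linear combination of the rectangular base vectors returns exactly its components.

```python
# Base vectors of equation (1.3-1), written as component triples.
i_hat, j_hat, k_hat = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def scale(s, v):
    # Product of a scalar and a vector, componentwise.
    return tuple(s * c for c in v)

def add(u, v):
    # Componentwise vector addition.
    return tuple(p + q for p, q in zip(u, v))

# Equation (1.3-2) with components (a_x, a_y, a_z) = (3, -2, 5):
A = add(add(scale(3, i_hat), scale(-2, j_hat)), scale(5, k_hat))
# The coefficients of the base vectors reappear as the components of A.
```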

1.4 VECTOR MAGNITUDE

The magnitude of a vector is considered to be its absolute value. We will represent the magnitude of a vector $\vec{A}$ using the notation $|\vec{A}|$ or $A$. The magnitude is sometimes referred to as the modulus or norm of the vector. Since the rectangular components of a vector are defined to be projections of the magnitude of the vector in the directions of the coordinate axes (see Section 1.2), we can use these components to obtain the vector magnitude. From Figure 1-2 and repeated application of the Pythagorean theorem, we see that the magnitude of the vector $\vec{A}$ is given by:

$$|\vec{A}| = A = \sqrt{(a_x)^2 + (a_y)^2 + (a_z)^2} \qquad (1.4\text{-}1)$$

The magnitude of a vector is always taken as non-negative:

$$|\vec{A}| \ge 0 \qquad (1.4\text{-}2)$$

We will show later (see Example 1-6 in Section 1.14.1) that the magnitude of a point vector is a scalar.

For line vectors, the magnitude of a vector is often referred to as the length of the vector. Since, as noted previously, a point vector has no extension, point vectors cannot be considered to have length, although they do have magnitude.

If the magnitude of a vector is equal to zero, the vector is defined to be a zero vector. From equation (1.4-1) we see that if a vector $\vec{A}$ is a zero vector:

$$|\vec{A}| = 0 \qquad (1.4\text{-}3)$$

then $\vec{A}$ must have all three components equal to zero:

$$\vec{A} = \vec{0} = \langle 0, 0, 0 \rangle \qquad (1.4\text{-}4)$$

The components of a zero vector are the same in all coordinate systems, and so these components are scalars.

A zero vector has no determined direction. The direction of a zero vector is arbitrarily assigned to be the one most convenient for the given equation being considered.

1.5 DIRECTION COSINES

Using equations (1.2-1) and (1.4-1), the direction cosines for the vector $\vec{A} = a_x\,\hat{i} + a_y\,\hat{j} + a_z\,\hat{k}$ can be written as:

$$\cos\theta_x = \frac{a_x}{A} \qquad \cos\theta_y = \frac{a_y}{A} \qquad \cos\theta_z = \frac{a_z}{A} \qquad (1.5\text{-}1)$$

From equations (1.4-1) and (1.5-1) we also have:

$$A^2 = (a_x)^2 + (a_y)^2 + (a_z)^2 = A^2\left(\cos^2\theta_x + \cos^2\theta_y + \cos^2\theta_z\right) \qquad (1.5\text{-}2)$$

and so:

$$\cos^2\theta_x + \cos^2\theta_y + \cos^2\theta_z = 1 \qquad (1.5\text{-}3)$$

1.6 POSITION VECTORS

We will define a position vector $\vec{r}$ of a point P to be a vector whose magnitude equals the radial distance from the coordinate system origin to the point P, and whose direction is that of a line from the coordinate system origin to the point P. Since position vectors specify the position of a point relative to the given coordinate system being used, position vectors are coordinate system dependent, and so are not point vectors.

For a rectangular coordinate system, a position vector for a point P:

1. can be represented by a line vector extending from the coordinate system origin to the point P(x, y, z), as shown in Figure 1-3.

2. will have components numerically equal to the coordinates (x, y, z) of the point P:

$$\vec{r} = x\,\hat{i} + y\,\hat{j} + z\,\hat{k} \qquad (1.6\text{-}1)$$

3. will uniquely determine the location of the point P relative to the coordinate system.

For curvilinear coordinate systems, the coordinates of a point are not the components of the corresponding position vector (see Section 4.11.2). The location of a point is generally not uniquely determined relative to a curvilinear coordinate system by its position vector.

The magnitude of a position vector $\vec{r}$ in rectangular coordinates is given by:

$$|\vec{r}| = r = \sqrt{(x)^2 + (y)^2 + (z)^2} \qquad (1.6\text{-}2)$$

Using equation (1.5-1), the direction cosines of a position vector $\vec{r}$ in a rectangular coordinate system can be written as:

$$\cos\theta_x = \frac{x}{r} \qquad \cos\theta_y = \frac{y}{r} \qquad \cos\theta_z = \frac{z}{r} \qquad (1.6\text{-}3)$$

Figure 1-3: The line position vector $\vec{r}$ in a rectangular coordinate system for a point P(x, y, z).

1.7 UNIT VECTORS

By definition unit vectors are vectors having unit magnitude and no physical dimensions (such as dimensions of force). From equations (1.3-1) and (1.4-1) it is obvious that the rectangular coordinate base vectors $\hat{i}$, $\hat{j}$, and $\hat{k}$ are all unit vectors.

Since the main purpose of unit vectors is to provide direction information, unit vectors are sometimes called direction vectors. If $\vec{A}$ is a unit vector, it is denoted by $\hat{A}$. A unit vector $\hat{A}$ can always be obtained from a nonzero vector $\vec{A}$ by dividing $\vec{A}$ by its magnitude $A$:

$$\hat{A} = \frac{\vec{A}}{|\vec{A}|} = \frac{\vec{A}}{A} \qquad (1.7\text{-}1)$$

since we then have:

$$|\hat{A}| = \frac{|\vec{A}|}{A} = \frac{A}{A} = 1 \qquad (1.7\text{-}2)$$

From equation (1.5-1) we see that the rectangular

components of a unit vector are simply its direction cosines. Therefore, we can write the unit vector Aˆ in component form as:

Aˆ = cosθx iˆ + cosθ y ˆj + cosθz kˆ !

!

(1.7-3)

From equation (1.7-1) we also see that the direction of a unit ! vector Aˆ is the same as that of A (although its magnitude may be different). Using equation (1.7-1), we can represent any ! nonzero vector A as: ! ! A = A Aˆ ! (1.7-4) where A is a scalar magnitude multiplying the unit vector Aˆ ! (see Section 1.10). In equation (1.7-4) the vector A is written in terms of a product of two factors: one factor provides ! information about the magnitude of A , and the other factor ! provides information about the direction of A .
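The chain from coordinates to magnitude (1.6-2), direction cosines (1.6-3), and unit vector (1.7-1)-(1.7-2) can be checked numerically. A minimal sketch (variable names are ours, not from the text), using a position vector for the point P(1, 2, 2):

```python
import math

x, y, z = 1.0, 2.0, 2.0                  # coordinates of a point P
r = math.sqrt(x*x + y*y + z*z)           # magnitude, equation (1.6-2)
cosines = (x / r, y / r, z / r)          # direction cosines, equation (1.6-3)
check = sum(c * c for c in cosines)      # equals 1 by equation (1.5-3)

# Equation (1.7-1): the unit vector of r is r divided by its magnitude.
# Its components are exactly the direction cosines, and by (1.7-2) its
# own magnitude is 1.
r_hat = (x / r, y / r, z / r)
mag_r_hat = math.sqrt(sum(c * c for c in r_hat))
```

Here r comes out to 3, the squared direction cosines sum to 1, and the unit vector has magnitude 1, as the identities require.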

1.8    VECTOR ADDITION

Point vectors can be added to each other only if they are attached to the same point. Since the addition of two or more point vectors at any given point is possible, the superposition of different vector fields representing the same type of physical entity is always valid.

Because vectors have not only a magnitude but also a direction associated with them, vector addition does not follow the same rules as scalar addition. The sum of two point vectors A and B in component form is defined to be:

    A + B = ( ax î + ay ĵ + az k̂ ) + ( bx î + by ĵ + bz k̂ )
          = ( ax + bx ) î + ( ay + by ) ĵ + ( az + bz ) k̂    (1.8-1)

In the algebraic addition of vectors, the components of the individual vectors are added for each coordinate direction to produce the components of the sum vector.

The addition of vectors is commutative:

    A + B = B + A    (1.8-2)

    A + B + C = A + C + B = B + A + C = B + C + A = C + A + B = C + B + A    (1.8-3)

The addition of vectors is also associative:

    ( A + B ) + C = A + ( B + C )    (1.8-4)

Therefore vectors can be added in any order and with any grouping. We also have:

    A + 0 = 0 + A = A    (1.8-5)

Equation (1.8-5) demonstrates the existence of an additive identity for vectors.

Example 1-1

Verify the commutative law of addition for the two vectors A = ax î + ay ĵ + az k̂ and B = bx î + by ĵ + bz k̂.

Solution:

    A + B = ( ax + bx ) î + ( ay + by ) ĵ + ( az + bz ) k̂
          = ( bx + ax ) î + ( by + ay ) ĵ + ( bz + az ) k̂ = B + A

If a physical quantity has a magnitude and a direction associated with it, but does not satisfy the vector addition laws, then this physical quantity cannot be represented by a vector. This is the case for finite rotations of a rigid body (the rotations are not commutative), as illustrated in Figure 1-4. In the top row, the body on the left side is first rotated 90 degrees about the vertical axis and then 90 degrees about a horizontal axis. In the bottom row, the body on the left side is first rotated 90 degrees about a horizontal axis and then 90 degrees about the vertical axis. The final results on the right side for the two rows are obviously not equal, and so finite rotations of a rigid body are not commutative.

Figure 1-4:    Finite rotations of a solid body.
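The non-commutativity of finite rotations can be checked directly with rotation matrices. This is our own sketch, not the book's figure: `Rz` and `Rx` below are 90-degree rotation matrices about the vertical and one horizontal axis, with integer entries so the comparison is exact.

```python
def mat_mul(A, B):
    # Product of two 3x3 matrices stored as nested tuples
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

# 90-degree rotations about the vertical (x3) axis and a horizontal (x1) axis
Rz = ((0, -1, 0), (1, 0, 0), (0, 0, 1))
Rx = ((1, 0, 0), (0, 0, -1), (0, 1, 0))

# Applying Rz first and then Rx differs from Rx first and then Rz, so
# finite rotations do not commute and cannot be combined like vector sums
assert mat_mul(Rx, Rz) != mat_mul(Rz, Rx)
```

This mirrors Figure 1-4: the two orders of rotation leave the body in visibly different orientations.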

1.9    PRODUCT OF A SCALAR AND A VECTOR

Let both λ and μ be scalars. The product of a vector A = ax î + ay ĵ + az k̂ and a scalar λ is given by:

    λ A = λ ax î + λ ay ĵ + λ az k̂    (1.9-1)

which is understood to mean:

    λ A = ( λ ax ) î + ( λ ay ) ĵ + ( λ az ) k̂    (1.9-2)

or in component form:

    λ A = ( λ ax, λ ay, λ az )    (1.9-3)

The magnitude |λ A| is given by:

    |λ A| = |λ| |A|    (1.9-4)

and so multiplication of a vector by a scalar results in a vector magnitude that is scaled by the scalar factor. Since all components of the vector are multiplied by the same scalar, multiplication of a vector by a scalar does not change the direction of the vector, only the magnitude.

Multiplication by a scalar is associative for vectors:

    ( λ μ ) A = λ ( μ A ) = λ μ A    (1.9-5)

Multiplication by a scalar is commutative for vectors:

    λ A = A λ    (1.9-6)

Multiplication by a scalar is distributive for vectors:

    λ ( A + B ) = λ A + λ B    (1.9-7)

    ( λ + μ ) A = λ A + μ A    (1.9-8)

If we have a vector B such that:

    B = λ A    (1.9-9)

then the vectors A and B are parallel. If λ = 1 we have:

    B = (1) A = A    (1.9-10)

and the vectors A and B are equal. The equation:

    (1) A = A    (1.9-11)

demonstrates the existence of a multiplicative identity for vectors by a scalar.

If λ < 0 and B = λ A, the direction of B will be reversed from that of A. The vectors A and B will still be parallel, but will have opposite directions (they will be antiparallel). If λ = −1 then the vector B will be the negative of the vector A = ax î + ay ĵ + az k̂ (B can then be written as −A):

    B = ( −1 ) A = −A = −ax î − ay ĵ − az k̂    (1.9-12)

Therefore the negative of a vector is another vector having the same magnitude as the original vector, but having the opposite direction. We see that a vector can be considered to be negative only relative to the same vector not multiplied by a −1. All nonzero vectors have a positive magnitude, and so all nonzero vectors can be considered to be inherently positive.
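Equations (1.9-2) and (1.9-4) can be verified with a short sketch. This is an illustration of our own, not from the book; `scale` and `magnitude` are names we have chosen.

```python
import math

def scale(lam, a):
    # lambda * A multiplies every component, equation (1.9-2)
    return tuple(lam * c for c in a)

def magnitude(a):
    return math.sqrt(sum(c * c for c in a))

A = (1.0, -2.0, 2.0)          # |A| = 3

# |lambda A| = |lambda| |A|, equation (1.9-4), including negative lambda
assert abs(magnitude(scale(-2.0, A)) - 2.0 * magnitude(A)) < 1e-12
# lambda = -1 negates every component: the negative of A, equation (1.9-12)
assert scale(-1.0, A) == (-1.0, 2.0, -2.0)
```

A negative λ flips every component, reversing the direction while scaling the magnitude by |λ|, exactly as the text describes for antiparallel vectors.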

1.10    VECTOR SUBTRACTION

Any two point vectors can be subtracted from each other only if they are attached to the same point. The difference between any two point vectors A and B attached to the same point is defined by the equation:

    A − B = A + ( −B )    (1.10-1)

where subtraction of the vector B from A is accomplished by adding −B to A. Subtraction of a point vector is defined, therefore, as adding the negative of the vector to be subtracted. In component form we have:

    A − B = ( ax + [ −bx ] ) î + ( ay + [ −by ] ) ĵ + ( az + [ −bz ] ) k̂    (1.10-2)

and so:

    A − B = ( ax − bx ) î + ( ay − by ) ĵ + ( az − bz ) k̂    (1.10-3)

We also have:

    A − A = ( ax − ax ) î + ( ay − ay ) ĵ + ( az − az ) k̂ = 0    (1.10-4)

or

    A + ( −A ) = 0    (1.10-5)

which demonstrates the existence of an additive inverse for vectors.

Example 1-2

Evaluate the following vector operations for the vectors A = 5 î + 7 ĵ + 2 k̂ and B = î − 3 ĵ + 4 k̂:

a.    A + B
b.    A − B
c.    5 A

Solution:

a.    A + B = ( 5 + 1 ) î + ( 7 − 3 ) ĵ + ( 2 + 4 ) k̂ = 6 î + 4 ĵ + 6 k̂

b.    A − B = ( 5 − 1 ) î + ( 7 + 3 ) ĵ + ( 2 − 4 ) k̂ = 4 î + 10 ĵ − 2 k̂

c.    5 A = 5 ( 5 î + 7 ĵ + 2 k̂ ) = 25 î + 35 ĵ + 10 k̂

1.11    EQUAL POINT VECTORS

Two point vectors A and B:

    A = ax î + ay ĵ + az k̂    (1.11-1)

    B = bx î + by ĵ + bz k̂    (1.11-2)

can be equal only if they are attached to the same point. If A and B are equal, we can then write:

    A = B    (1.11-3)

or

    A − B = 0    (1.11-4)

From equation (1.4-4) we must then have:

    ax − bx = 0    ay − by = 0    az − bz = 0    (1.11-5)

or

    ax = bx    ay = by    az = bz    (1.11-6)

Therefore two point vectors are equal if they are attached to the same point and if their corresponding components (using the same coordinate system and measured in the same system of units) are equal. The conditions given in equation (1.11-6) are equivalent to the definition that two point vectors A and B are equal if and only if they are attached to the same point, have the same magnitude, and are pointing in the same direction.

Two point vectors are considered to be collinear if they are attached to the same point and are directed either in the same direction or in reverse directions. For point vectors, therefore, the term collinear is not used if the vectors are not attached to the same point. Remember that point vectors are bound vectors.

A vector equation implies the existence of three algebraic equations. For example, the equation A = B implies the three equations given in equation (1.11-6). Finally, if A, B, and C are vectors, and if A = B and A = C, then we must also have B = C.

1.12    INDICIAL NOTATION

A very convenient shorthand notation for vectors and tensors is indicial notation or tensor notation. When indicial notation is employed, all coordinate directions in 3-dimensional space are expressed using the indices 1, 2, and 3. For the rectangular coordinate system, the coordinates ( x, y, z ) become ( x1, x2, x3 ).

Figure 1-5 shows the rectangular coordinate system with the coordinate axes, vector components, and base vectors all designated using indicial notation. We can see from this figure that the unit base vectors î, ĵ, and k̂ of the rectangular coordinate system become êi (with index i = 1, 2, and 3) using indicial notation.

A subscript on a base vector for a rectangular coordinate system does not indicate that the vector itself is a component. The subscript on a base vector is simply a label meaning that the base vector pertains to the corresponding coordinate axis. Each such base vector still has three components, two of which are zero. In the case of the unit base vectors êi of the rectangular coordinate system, two of the three components will be zero and the remaining component will be one:

    ê1 = ( 1, 0, 0 )    ê2 = ( 0, 1, 0 )    ê3 = ( 0, 0, 1 )    (1.12-1)
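The one-hot base vectors of equation (1.12-1) can be sketched directly. This toy example is our own, not the book's: summing the component-scaled base vectors reproduces the component tuple itself, anticipating the indicial sum A = a1 ê1 + a2 ê2 + a3 ê3.

```python
# Unit base vectors of equation (1.12-1), keyed by their index
e = {1: (1, 0, 0), 2: (0, 1, 0), 3: (0, 0, 1)}

def from_components(a):
    # A = a1 e1 + a2 e2 + a3 e3: each base vector contributes its single
    # nonzero component, so the sum recovers the component tuple
    total = (0, 0, 0)
    for i in (1, 2, 3):
        total = tuple(t + a[i - 1] * c for t, c in zip(total, e[i]))
    return total

assert from_components((5, 7, 2)) == (5, 7, 2)
```

The subscript on each base vector here is only a label for its axis, as the text emphasizes; the vector itself still has three components.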

The nonzero component corresponds to the axis along which the base vector is directed. Equations (1.12-1) are equivalent to equations (1.3-1).

Figure 1-5:    Indicial notation is used in this figure. In all other respects this figure is identical to Figure 1-2.

Writing equation (1.3-2) in indicial notation, we obtain:

    A = a1 ê1 + a2 ê2 + a3 ê3    (1.12-2)

or

    A = Σ (i = 1 to 3) ai êi    (1.12-3)

Note that indices of base vectors are treated in the same way as indices on vector components when it comes to summation operations, even though the base vectors are not components. The base vectors themselves are being summed.

1.12.1    INDICES AND EXPONENTS

Since we will be using superscripts as both indices and exponents in the vector and tensor equations of this book, it is important that these indices and exponents not be confused with each other. Therefore we will use parentheses (or brackets) around a term to indicate that its superscript is an exponent and not an index. For example, equation (1.4-1) in indicial notation becomes:

    |A| = A = √( (a1)² + (a2)² + (a3)² ) = √( Σ (i = 1 to 3) (ai)² )    (1.12-4)

In this equation, the subscript of a is an index and the superscript of a is an exponent. The only exception that we will make to this practice of using parentheses around terms having exponents will be for certain operators and functions (such as trigonometric functions) whose meaning is clear without the use of parentheses.

1.12.2    SUMMATION CONVENTION

We now introduce the Einstein summation convention, in which an index that appears exactly twice in any term of an equation (as either a subscript or a superscript) is understood to be summed over all values of its range. Such an index is considered to be a dummy index, summation index, or bound index since it can be replaced by any other index (not already appearing in the term) without changing the value of the term. Dummy indices always disappear in the summation process. An index that is not a dummy index is termed a free index.

Using the Einstein summation convention, equation (1.12-4) becomes:

    A = √( ai ai ) = √( a1 a1 + a2 a2 + a3 a3 )    (1.12-5)

where the index i is a dummy index that assumes the values 1, 2, and 3 during the summation. Equation (1.12-5) can also be written as:

    A = √( ai ai ) = √( aj aj )    (1.12-6)

The order of factors in an equation term does not matter when indicial notation is used. For example, we can write:

    αij bjk ckl = bjk αij ckl = αij ckl bjk    (1.12-7)

We will specifically note any case where the Einstein summation convention for indices appearing twice in a term cannot apply; otherwise such occurrences will indicate summation over all values of the range of the indices. If a given index appears more than twice in the same term, the summation convention must not be used. Any free index that appears in one term of a vector equation must, of course, appear in all terms of the equation.

Many equations can be condensed by using indicial notation to represent all subscript and superscript indices as index variables. For example, the coordinates ( x1, x2, x3 ) can be written as xi, with i taking the values 1, 2, and 3. If the Einstein summation convention is also used, the form of some equations is greatly simplified.

Example 1-3

Interpret the superscripts i and j in the following terms:

a.    ( aⁱ )ʲ
b.    sinʲ aⁱ
c.    aⁱ ʲ

Solution:

a.    The superscript i is an index and the superscript j is an exponent.

b.    The superscript i is an index and the superscript j is an exponent.

c.    The superscripts i and j are both indices.

1.13    TRANSFORMATION LAWS FOR VECTOR COMPONENTS AND FOR SCALARS

A change from one rectangular coordinate system ( x1, x2, x3 ) to another rectangular coordinate system ( x1′, x2′, x3′ ) can involve both a translation of axes and a rotation of axes. Therefore, one rectangular coordinate system ( x1′, x2′, x3′ ) will generally be related to another rectangular coordinate system ( x1, x2, x3 ) by the linear transformation equations:

    x1′ = α11 x1 + α12 x2 + α13 x3 + γ1
    x2′ = α21 x1 + α22 x2 + α23 x3 + γ2    (1.13-1)
    x3′ = α31 x1 + α32 x2 + α33 x3 + γ3

or

    xi′ = αij xj + γi    (1.13-2)

The terms containing the αij coefficients in equations (1.13-1) and (1.13-2) represent rotation, and the γi terms represent translation between the xj and xi′ coordinate systems. The αij coefficients are just the direction cosines cos θij between the xj and xi′ axes, so that θij = ∠( xj, xi′ ).

Figure 1-6:    Two rectangular coordinate systems. Both systems have the same origin, but the primed system is rotated with respect to the unprimed system.

Since θij = −θ′ji (see Figure 1-6), we have:

    cos θij = cos( −θ′ji ) = cos θ′ji    (1.13-3)

and so:

    αij = cos θij = cos θ′ji = α′ji    (1.13-4)

and

    αji = α′ij    (1.13-5)

From equation (1.13-2) we can obtain the transformation equations for the coordinate differential dxj:

    dxi′ = αij dxj = ( ∂xi′ / ∂xj ) dxj    (1.13-6)

and so:

    αij = ∂xi′ / ∂xj    (1.13-7)

As we will see later (in Section 2.3), the dxj are the components of the vector differential dr of a line position vector r. The line position vector r depends on the location of the origin of the coordinate system, but the differential dr does not; dr represents an infinitesimal displacement from one position to another. This is a physical quantity that does not depend upon any coordinate system, and so dr is a point vector, even though r is a line position vector. We can write dr as:

    dr = dx1 ê1 + dx2 ê2 + dx3 ê3 = dxj êj    (1.13-8)

The transformation law for a change from one system of rectangular coordinates to another is given by equation (1.13-6) for the vector components dxj of dr, and by equation (1.13-2) for the coordinates themselves. Comparing equation (1.13-6) with equation (1.13-2), we see that the rectangular components of the vector dr are not affected by a translation of the coordinate axes. Therefore it is necessary to consider only rotation (as in Figure 1-6) in discussing the transformation of the vector components of dr from one system of rectangular coordinates to another.

The transformation equations that are inverse to those given in equation (1.13-6) are:

    dxj = α′ji dxi′ = ( ∂xj / ∂xi′ ) dxi′    (1.13-9)

where

    α′ji = ∂xj / ∂xi′    (1.13-10)

We will now consider a point vector V that can be written in component form as:

    V = V1 ê1 + V2 ê2 + V3 ê3 = Vj êj    (1.13-11)

where the components Vj follow the transformation law for vector components of an infinitesimal displacement given by equation (1.13-6). We can then write:

    Vi′ = αij Vj = ( ∂xi′ / ∂xj ) Vj    (1.13-12)

    Vj = α′ji Vi′ = ( ∂xj / ∂xi′ ) Vi′    (1.13-13)

The vector component transformation equations (1.13-12) and (1.13-13) can be written in matrix form. For example, equation (1.13-12) can be written as:

    [ V1′ ]   [ ∂x1′/∂x1  ∂x1′/∂x2  ∂x1′/∂x3 ] [ V1 ]
    [ V2′ ] = [ ∂x2′/∂x1  ∂x2′/∂x2  ∂x2′/∂x3 ] [ V2 ]    (1.13-14)
    [ V3′ ]   [ ∂x3′/∂x1  ∂x3′/∂x2  ∂x3′/∂x3 ] [ V3 ]

or

    [ V1′ ]   [ α11  α12  α13 ] [ V1 ]
    [ V2′ ] = [ α21  α22  α23 ] [ V2 ]    (1.13-15)
    [ V3′ ]   [ α31  α32  α33 ] [ V3 ]

The matrix of direction cosines is known as the rotation matrix or transformation matrix since it specifies how one coordinate system is rotated with respect to another coordinate system.

1.13.1    TRANSFORMATION LAW FOR POINT VECTOR COMPONENTS

Point vectors are defined so as to be invariant to any change of coordinate system (such a system is simply a mathematical tool). If xj and xi′ are two different rectangular coordinate systems having base vectors êj and êi′, respectively, for any point vector V we must then have:

    V = V1 ê1 + V2 ê2 + V3 ê3 = V1′ ê1′ + V2′ ê2′ + V3′ ê3′    (1.13-16)

We can write equation (1.13-16) as:

    V = Vj êj = Vi′ êi′    (1.13-17)

where the components Vj and Vi′ of V are functions of the coordinate systems xj and xi′, respectively. We will now determine the relation between the components Vj and Vi′ that must exist for V to be a point vector; that is, for equation (1.13-17) to be true.

Any vector can be expressed as a linear combination of the linearly independent base vectors êj (see Section 4.1). This includes the unit base vectors êi′, and so they must have components in the êj basis. From equation (1.7-3) we obtain:

    êi′ = cos θij êj    (1.13-18)

where cos θij are the direction cosines between the xj and xi′ axes. With equations (1.13-18) and (1.13-4) we can write:

    êi′ = αij êj    êj = α′ji êi′    (1.13-19)

Rewriting equation (1.13-17) using equations (1.13-19) and (1.13-5), we have:

    V = Vj êj = αij Vi′ êj = α′ji Vi′ êj    (1.13-20)

    V = Vi′ êi′ = α′ji Vj êi′ = αij Vj êi′    (1.13-21)

Since equal point vectors must have equal components, we have:

    Vj = α′ji Vi′    Vi′ = αij Vj    (1.13-22)

Using equations (1.13-22), (1.13-7), and (1.13-10), we obtain the transformation laws that point vector components must follow:

    Vi′ = αij Vj = ( ∂xi′ / ∂xj ) Vj    (1.13-23)

    Vj = α′ji Vi′ = ( ∂xj / ∂xi′ ) Vi′    (1.13-24)

Mathematical entities having components that follow these transformation laws are vectors.

Comparing equations (1.13-23) and (1.13-24) with equations (1.13-12) and (1.13-13), we see that the requirement for point vectors to be invariant to coordinate system change as given by equation (1.13-17) leads to the transformation law for vector components of an infinitesimal displacement presented in equations (1.13-12) and (1.13-13). Therefore, the components of point vectors satisfy the same transformation laws as those for an infinitesimal displacement.

From equations (1.13-9) and (1.13-24) we can confirm that the vector differential dr is a point vector. If the components of a point vector are specified in any coordinate system, they can be determined for any other coordinate system using equations (1.13-23) and (1.13-24).

1.13.2    TRANSFORMATION LAW FOR SCALARS

A scalar has only a single component. Since by definition a scalar is invariant to coordinate system transformation, its single component must also be independent of any coordinate system. If λ is a scalar in the xj coordinate system and λ′ is the same scalar in the xi′ coordinate system, we then must always have:

    λ = λ′ = real number    (1.13-25)

Therefore any scalar must, by definition, be a real number that transforms into itself so that it satisfies a relation of the same form as that given in equation (1.13-25).

1.14    RELATIONS AMONG DIRECTION COSINES

For rectangular coordinate systems, there exist a number of relations among the direction cosines that are very useful in deriving vector expressions. These relations will be presented in the following two sections.

1.14.1    ORTHOGONALITY CONDITIONS

We will now consider a differential distance ds. This distance is a physical quantity that is independent of the existence of any coordinate system that might be employed to describe it. Using the rectangular coordinate system ( x1, x2, x3 ) to specify ds, the square of ds can be obtained from repeated application of the Pythagorean theorem:

    ( ds )² = ( dx1 )² + ( dx2 )² + ( dx3 )²    (1.14-1)

or

    ( ds )² = Σ (j = 1 to 3) ( dxj )²    (1.14-2)

where, as noted in Section 1.13, the dxj are the components of the position vector differential dr. Equation (1.14-2) can be written as:

    ( ds )² = dxj dxj    (1.14-3)

where we are using the Einstein summation convention. From equations (1.13-8) and (1.4-1), we see that equation (1.14-1) is the magnitude squared of the vector dr, and so we have:

    |dr| = dr = √( ( dx1 )² + ( dx2 )² + ( dx3 )² ) = ds    (1.14-4)

Since the physical distance ds can be represented by a single number that is independent of any coordinate system, ds is a scalar that can be expressed not only with respect to a rectangular coordinate system xj but also with respect to any rectangular coordinate system xi′:

    ( ds )² = dxi′ dxi′    (1.14-5)

From equations (1.13-6) and (1.14-5), and using two dummy summation indices j and k, we can write:

    ( ds )² = αij dxj αik dxk    (1.14-6)

or, rearranging factors:

    ( ds )² = αij αik dxj dxk    (1.14-7)
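The transformation law (1.13-23) and the invariance of (ds)² in equations (1.14-3) and (1.14-5) can be sketched numerically. This is a minimal illustration of our own, not from the book: a rotation about the x3 axis stands in for a general direction-cosine matrix, and the function names are assumptions.

```python
import math

def rotation_x3(theta):
    # alpha[i][j] = cos(angle between x'_i and x_j) for a rotation of the
    # axes by theta about the x3 axis
    c, s = math.cos(theta), math.sin(theta)
    return ((c, s, 0.0), (-s, c, 0.0), (0.0, 0.0, 1.0))

def transform(alpha, v):
    # V'_i = alpha_ij V_j, equation (1.13-23); summation over j
    return tuple(sum(alpha[i][j] * v[j] for j in range(3)) for i in range(3))

def transform_back(alpha, vp):
    # V_j = alpha'_ji V'_i = alpha_ij V'_i, equation (1.13-24)
    return tuple(sum(alpha[i][j] * vp[i] for i in range(3)) for j in range(3))

alpha = rotation_x3(0.7)
dx = (0.3, -0.1, 0.2)              # components dx_j of a small displacement
dxp = transform(alpha, dx)         # components dx'_i in the rotated system

# (ds)^2 = dx_j dx_j equals dx'_i dx'_i: the squared distance is a scalar
assert abs(sum(d * d for d in dx) - sum(d * d for d in dxp)) < 1e-12

# The inverse transformation recovers the original components
back = transform_back(alpha, dxp)
assert all(abs(a - b) < 1e-12 for a, b in zip(dx, back))
```

That the transpose of the rotation matrix undoes the transformation is exactly what the orthogonality conditions derived next guarantee.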

From the two expressions for ( ds )² given in equations (1.14-3) and (1.14-7), we have:

    dxj dxj = αij αik dxj dxk    (1.14-8)

In order for equation (1.14-8) to be true, it is necessary that:

    αij αik = δjk    (1.14-9)

where δjk is called the Kronecker delta and is defined to have the following properties:

    δjk = 0 ( j ≠ k ),    δjk = 1 ( j = k )    (1.14-10)

From equation (1.14-9) we then have αij αik = 0 unless k = j, in which case αij αik = 1. Equation (1.14-7) can now be written as:

    ( ds )² = δjk dxj dxk    (1.14-11)

The Kronecker delta is also called the substitution operator because of its ability to change one index into another (see Example 1-4).

Example 1-4

Evaluate δij bi.

Solution:

    δij bi = bj

since from the definition of the Kronecker delta given in equation (1.14-10) we know that only when i = j will the Kronecker delta equal 1 and not 0.

Example 1-5

Show that δii = 3.

Solution:

We have:

    δii = δ11 + δ22 + δ33 = 1 + 1 + 1 = 3

For two rectangular coordinate systems xj and xi′ having the same origin (as in Figure 1-6), the orientation of the xj set of axes relative to the xi′ set of axes is completely determined by specifying three angles between the xj and xi′ axes. Therefore the transformation from one set of axes to the other has only three degrees of freedom, and so the nine direction cosines αij given in equation (1.13-15) cannot all be independent. In fact there must be only six independent relations for the nine αij. These six independent relations for the direction cosines αij can be derived from equation (1.14-9). They are:

    ( α11 )² + ( α21 )² + ( α31 )² = 1    (1.14-12)

!

α 11 α 12 + α 21 α 22 + α 31 α 32 = 0 !

(1.14-13)

!

α 11 α 13 + α 21 α 23 + α 31 α 33 = 0 !

(1.14-14)

!

(α12 )2 + (α 22 )2 + (α 32 )2 = 1!

(1.14-15)

!

(α11 )2 + (α12 )2 + (α13 )2 = 1!

(1.14-22)

!

α 12 α 13 + α 22 α 23 + α 32 α 33 = 0 !

(1.14-16)

!

α 11 α 21 + α 12 α 22 + α 13 α 23 = 0 !

(1.14-23)

!

(α13 )2 + (α 23 )2 + (α 33 )2 = 1!

(1.14-17)

!

α 21 α 31 + α 22 α 32 + α 23 α 33 = 0 !

(1.14-24)

!

(α 21 )2 + (α 22 )2 + (α 23 )2 = 1 !

(1.14-25)

!

α 31 α 11 + α 32 α 12 + α 33 α 13 = 0 !

(1.14-26)

!

(α 31 )2 + (α 32 )2 + (α 33 )2 = 1!

(1.14-27)

These six relations are known as the orthogonality conditions or orthonormality conditions for direction cosines. Equations (1.14-12), (1.14-15), and (1.14-17) are also known as the

so that α i j α k j = 0 unless k = i , in which case α i j α k j = 1 . The relation given in equation (1.14-21) yields another set of orthogonality conditions for direction cosines:

normalization conditions for direction cosines. !

By transforming equation (1.14-3) into the xi′ coordinate

system using equation (1.13-23), we obtain: !

( ds )2 = dxj dxj = α j′i dxi′ α j′k dxk′

= α j′i α j′k dxi′ dxk′ !

This set of orthogonality conditions is not independent of the (1.14-18)

through (1.14-17).

From equation (1.14-5) we then have: !

dxi′ dxi′ = α j′i α j′k dxi′ dxk′ !

(1.14-19)

Using equation (1.13-5) we can write: !

dxi′ dxi′ = α i j α k j dxi′ dxk′ !

(1.14-20)

In order for equation (1.14-20) to be true, it is necessary that: !

αij α k j = δi k !

set of orthogonality conditions given in equations (1.14-12)

(1.14-21)

Example 1-6

! Show that the magnitude of a point vector A is a scalar. Solution:

! In component form A can be written in an xj coordinate system as:

34

! A = aj eˆj

!

! The magnitude of A is given by: ! ! A = aj aj ! Transforming A to an xi′ coordinate system, we have: ! ! A = aj eˆj = ai′ eˆi′

1.14.2! OTHER RELATIONS FOR DIRECTION COSINES !

equation (1.13-15) is:

α 11 α 12 !

and

! A = ai′ ai′

!

From equation (1.13-23) we can write: !

ai′ = α i j aj

Therefore: ! ! A = ai′ ai′ = α i j aj α i k ak = α i j α i k aj ak From equation (1.14-9) we then have: ! ! A = ai′ ai′ = δ j k aj ak = aj aj ! We see that A is the same real number in both the xj and xi′ ! coordinate systems. The magnitude A is then independent

The determinant of the transformation matrix given in

α 13

α 21 α 22 α 23 = 1 ! α 31 α 32 α 33

(1.14-28)

as will be shown in Section 4.6. Expanding this determinant using the first row:

α 11

α 22 α 23 α 32 α 33

+ α 12

α 23 α 21 α 33 α 31

+ α 13

α 21 α 22 α 31 α 32

= 1 ! (1.14-29)

Comparing equation (1.14-29) with equation (1.14-22), we see that we must then have: !

α 11 =

!

α 12 =

!

α 13 =

of any coordinate system, and so the magnitude of a point ! vector A is a scalar.

α 22 α 23 α 32 α 33 α 23 α 21 α 33 α 31

α 21 α 22 α 31 α 32

= α 22 α 33 − α 32 α 23 !

(1.14-30)

= α 23 α 31 − α 21 α 33 !

(1.14-31)

= α 21 α 32 − α 22 α 31 !

(1.14-32) 35
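The orthogonality condition (1.14-9) and the cofactor relation (1.14-30) can be checked numerically for a sample rotation. This sketch is our own, not the book's; a rotation about the x3 axis is used as a representative proper rotation (determinant +1).

```python
import math

theta = 0.5
c, s = math.cos(theta), math.sin(theta)
# A proper rotation matrix about the x3 axis: a[i][j] plays alpha_ij
a = ((c, s, 0.0), (-s, c, 0.0), (0.0, 0.0, 1.0))

def delta(j, k):
    # Kronecker delta of equation (1.14-10)
    return 1.0 if j == k else 0.0

# Orthogonality: alpha_ij alpha_ik = delta_jk, equation (1.14-9)
for j in range(3):
    for k in range(3):
        total = sum(a[i][j] * a[i][k] for i in range(3))
        assert abs(total - delta(j, k)) < 1e-12

# Each direction cosine equals its cofactor, e.g. equation (1.14-30):
# alpha_11 = alpha_22 alpha_33 - alpha_32 alpha_23
assert abs(a[0][0] - (a[1][1] * a[2][2] - a[2][1] * a[1][2])) < 1e-12
```

The double loop runs through all nine (j, k) pairs, covering the six independent orthogonality conditions (1.14-12) through (1.14-17) at once.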

Expanding equation (1.14-28) by the second row and using equation (1.14-25), and expanding equation (1.14-28) by the third row and using equation (1.14-27), we may similarly derive:

    α21 = α32 α13 − α33 α12    (1.14-33)

    α22 = α33 α11 − α31 α13    (1.14-34)

    α23 = α31 α12 − α32 α11    (1.14-35)

    α31 = α12 α23 − α13 α22    (1.14-36)

    α32 = α13 α21 − α11 α23    (1.14-37)

    α33 = α11 α22 − α12 α21    (1.14-38)

1.15    COVARIANT AND CONTRAVARIANT COMPONENTS OF VECTORS

From equation (1.13-23) we have the vector component transformation law:

    Vi′ = ( ∂xi′ / ∂xj ) Vj    (1.15-1)

A second type of vector component transformation law exists. Point vector components that follow the transformation law:

    V′ⁱ = ( ∂xi′ / ∂xj ) Vʲ    (1.15-2)

are said to be contravariant components. Indices for vector components that follow this transformation law are generally written as superscripts (the only exception is for rectilinear coordinates). The transformation law given in equation (1.13-6) for components of the displacement vector dr can then be written as:

    dx′ⁱ = ( ∂xi′ / ∂xj ) dxʲ    (1.15-3)

where the dxʲ are contravariant components of the displacement vector dr. The displacement vector dr given in equation (1.13-8) becomes:

    dr = dx¹ ê1 + dx² ê2 + dx³ ê3 = dxʲ êj    (1.15-4)

Point vector components that follow the transformation law:

    Wi′ = ( ∂xj / ∂xi′ ) Wj    (1.15-5)

are said to be covariant components. Indices for covariant vector components are written as subscripts. By convention then, subscripts indicate covariance and superscripts indicate contravariance.

By the chain rule of calculus, the partial derivative of a scalar function φ is given by:

    ∂φ/∂xi′ = ( ∂φ/∂xj )( ∂xj/∂xi′ ) = ( ∂xj/∂xi′ )( ∂φ/∂xj )    (1.15-6)

From this transformation, we see that the partial derivatives of a scalar function are covariant components of a vector. This vector is known as the gradient vector (see Section 2.12).

Any point vector can be written in terms of either contravariant components or covariant components, and these components are not independent (see Section 6.5). Once either set of components is known, the other can be determined. Vectors whose components satisfy the coordinate system transformation laws given in either equation (1.15-2) or (1.15-5) are point vectors. A physical quantity that has both a magnitude and a direction associated with it, but whose components do not satisfy the coordinate transformation laws given in either equation (1.15-2) or (1.15-5), is not a point vector.

Point vectors are sometimes called contravariant vectors or covariant vectors, but these terms really just describe the components used to express the vector and not the vector itself. The type of component (contravariant or covariant) will be referred to as the component variant.

While the differences between contravariant and covariant vector components are important in most coordinate systems (for example, all curvilinear coordinate systems), the same is not true for the rectangular coordinate system. From equations (1.13-4), (1.13-7), and (1.13-10), we have:

    ∂xi′/∂xj = ∂xj/∂xi′    (1.15-7)

For the rectangular coordinate system, therefore, no difference exists between the transformation laws for contravariant components as given in equation (1.15-2) and the transformation laws for covariant components as given in equation (1.15-5). The position of the component indices is not important then, and we can write:

    Vⁱ = Vi    (1.15-8)

for point vector components in the rectangular coordinate system.

1.16! !

SCALAR OR DOT PRODUCT

The scalar product or dot product is one of several vector

products that have been defined and found to be very useful in describing physical phenomena. The scalar product of two ! ! vectors A and B is defined as: 37

!

! ! ! ! A • B = A B cosθ !

( 0 ≤ θ ≤ π )!

(1.16-1) ! where θ is the angle measured from the direction of vector A to ! !  the direction of vector B , and where A and B are the ! ! magnitudes of the vectors A and B , respectively. The scalar product for vectors is a vector operation requiring two vectors for the product to be formed. The result of a scalar product is a real number as can be seen from equation (1.16-1). In Section 1.16.6 we will show that the scalar product of two point vectors is invariant, and so the resulting real number is a scalar (hence the name scalar product; the name dot product comes from the notation used to represent the product). !

One geometric interpretation of the scalar product of two  ! ! ! vectors A and B is the magnitude of A multiplied by B cosθ , ! which is the projection of the magnitude of B on a line collinear ! with A (see Figure 1-7). A line is considered to be collinear with

Figure 1-7! !

! ! Scalar product of the vectors A and B .

The geometric interpretation of a scalar product is then

that one of the vectors in the product if projected onto the

direction).

other vector in the product. This can be written as: ! ! ! ! ! ! A • B = A ⎡⎣ projection of B on A ⎤⎦ ! ! ! ! = B ⎡⎣ projection of A on B ⎤⎦ ! (1.16-2)

!

!

a point vector if the point to which the vector is attached lies on the line and the point vector is directed along the line (in either Another geometric interpretation of the scalar product of ! ! ! two vectors A and B is the magnitude of B multiplied by ! ! A cosθ , which is the projection of the magnitude of A on a line ! collinear with B (see Figure 1-7).

Note that in equation (1.16-2) the projection of a vector is

understood to mean the projection of the vector’s magnitude. Mechanical work is an example of an entity that is well represented by the scalar product of two vectors: a force vector and a displacement vector (see Example 1-7). Work is just the 38

projection of the force in the direction of the displacement resulting from application of the force. Example 1-7

! What is the infinitesimal work dW done by a force F causing ! an infinitesimal displacement dr ? Solution:

! ! Let the angle between the force F and the displacement dr be θ . Then the component of force acting in the direction of ! ! dr is F cosθ and so the infinitesimal work done is: ! ! dW = ( F cosθ ) dr = F dr cosθ = F • dr ! !

From the definition of the scalar product given in equation (1.16-1) we see that if either A or B is a zero vector, or if A is orthogonal to B so that θ = π/2, we will have:

	A · B = 0		(1.16-3)

Therefore, if two vectors are orthogonal, their scalar product is zero. The sign of the scalar product as a function of the angle θ is given in Table 1-1.

Table 1-1	Scalar product sign values.

From the definition of the scalar product we see that the scalar product is commutative:

	A · B = B · A		(1.16-4)

Example 1-8

Verify the commutative law of the scalar product for two vectors A and B.

Solution:

	A · B = |A||B| cos θ = |B||A| cos(−θ) = |B||A| cos θ = B · A

From the geometric interpretation of the scalar product, we can also see that the scalar product is distributive with respect to addition (the sum of the projections equals the projection of the sum):

	A · (B + C) = A · B + A · C		(1.16-5)

The scalar product is not associative because A · (B · C) has no meaning, since B · C is not a vector.

Since cos θ = 1 when θ = 0, from equation (1.16-1) we have:

	A · A = |A|² = (A)²		(1.16-6)

and for the unit base vectors of the rectangular coordinate system, we have:

	î · î = ĵ · ĵ = k̂ · k̂ = 1		(1.16-7)

Since cos θ = 0 when θ = 90°, for two orthogonal vectors A and B we have:

	A · B = 0		(1.16-8)

and for the base vectors of the rectangular coordinate system, we have:

	î · ĵ = î · k̂ = ĵ · k̂ = ĵ · î = k̂ · î = k̂ · ĵ = 0		(1.16-9)

Equations (1.16-7) and (1.16-9) can be written using indicial notation:

	ê_i · ê_j = δ_ij		(1.16-10)

Example 1-9

Show that î · î = 1 and î · ĵ = 0.

Solution:

	î · î = |î||î| cos 0 = (1)(1)(1) = 1

	î · ĵ = |î||ĵ| cos(π/2) = (1)(1)(0) = 0

The scalar product of the vectors A and B can be written in terms of their components:

	A · B = (a_x î + a_y ĵ + a_z k̂) · (b_x î + b_y ĵ + b_z k̂)		(1.16-11)

or

	A · B = a_x b_x (î · î) + a_x b_y (î · ĵ) + a_x b_z (î · k̂)
	      + a_y b_x (ĵ · î) + a_y b_y (ĵ · ĵ) + a_y b_z (ĵ · k̂)
	      + a_z b_x (k̂ · î) + a_z b_y (k̂ · ĵ) + a_z b_z (k̂ · k̂)		(1.16-12)

Therefore, using equations (1.16-7) and (1.16-9):

	A · B = a_x b_x + a_y b_y + a_z b_z		(1.16-13)

From the definition of the scalar product, we also see that:

	λ (A · B) = (λA) · B = A · (λB) = (A · B) λ		(1.16-14)

where λ is any scalar. If μ is also a scalar, we have:

	(λA) · (μB) = λμ (A · B)		(1.16-15)

Note that the expression (A · B) C can be written in the form σC, where A · B = σ is a scalar.
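The component formula (1.16-13) and its agreement with the defining relation (1.16-1) are easy to check numerically. The following is a minimal sketch, not part of the book's text; the helper names `dot` and `mag` are our own:

```python
import math

def dot(a, b):
    # Component form of the scalar product, equation (1.16-13)
    return sum(ai * bi for ai, bi in zip(a, b))

def mag(v):
    # Vector magnitude via A . A = |A|^2, equation (1.16-6)
    return math.sqrt(dot(v, v))

A = (1.0, 2.0, 3.0)
B = (4.0, -1.0, 2.0)

# a_x b_x + a_y b_y + a_z b_z = 4 - 2 + 6 = 8
print(dot(A, B))

# Recover theta from (1.16-1) and confirm |A||B| cos(theta) gives the same number
theta = math.acos(dot(A, B) / (mag(A) * mag(B)))
print(abs(mag(A) * mag(B) * math.cos(theta) - dot(A, B)) < 1e-12)
```

The same `dot` call with the arguments swapped illustrates the commutative law (1.16-4).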

Example 1-10

Evaluate A · B in component form using indicial notation.

Solution:

We can write:

	A · B = a_i ê_i · b_j ê_j = a_i b_j ê_i · ê_j

Using equation (1.16-10) we have:

	A · B = a_i b_j δ_ij = a_i b_i

This is identical to equation (1.16-13).

1.16.1	VECTOR MAGNITUDE

Using equations (1.16-6) and (1.4-1) we can write:

	A · A = |A|² = (A)² = (a_x)² + (a_y)² + (a_z)²		(1.16-16)

Therefore, in a rectangular coordinate system, the magnitude of a vector A is given by:

	|A| = A = √[(a_x)² + (a_y)² + (a_z)²] = √(A · A)		(1.16-17)

or, in indicial notation:

	|A| = A = √(A · A) = √(a_j a_j)		(1.16-18)

1.16.2	DIVISION BY VECTORS

We will now show that the operation of division is not defined for vectors. Let us assume that we have a vector equation of the form:

	A · B = σ		(1.16-19)

where A is an unknown vector, B is a known nonzero vector, and σ is a known scalar. If this were an equation consisting entirely of scalars, the unknown factor could be uniquely determined by performing the operation of division. In the case of a vector equation, however, we note that there is not one unique answer for the unknown vector A. This follows from the fact that there are really two unknowns involved: the magnitude of A and the angle θ between the directions of the vectors A and B:

	|A| cos θ = σ / |B|		(1.16-20)

Many vectors A having different magnitudes and directions could satisfy equation (1.16-20). Therefore, the operation of division is not defined for vectors.

Example 1-11

If we have A · B = A · C, can we divide by A to obtain B = C?

Solution:

We can rewrite:

	A · B = A · C

as

	A · (B − C) = 0

We see then that it is not necessary that B = C, since we will also have A · (B − C) = 0 if A = 0 or if A is perpendicular to B − C. As noted above, we cannot have division by vectors.

1.16.3	VECTOR COMPONENTS

It is possible to obtain the component of a vector along the direction of any given coordinate axis by taking the scalar product of the vector with a unit base vector directed along that axis. For example, to obtain the three components of a vector A in a rectangular coordinate system x_j where:

	A = a_1 ê_1 + a_2 ê_2 + a_3 ê_3		(1.16-21)

we can calculate:

	A · ê_1 = a_1	A · ê_2 = a_2	A · ê_3 = a_3		(1.16-22)

and so the vector A can be written:

	A = (A · ê_1) ê_1 + (A · ê_2) ê_2 + (A · ê_3) ê_3		(1.16-23)

or

	A = (A · ê_j) ê_j		(1.16-24)

To obtain the three components of a vector A in a rectangular coordinate system x_i′, we can use equations (1.16-21) and (1.16-22) to write:

	A · ê_1′ = a_1′	A · ê_2′ = a_2′	A · ê_3′ = a_3′		(1.16-25)

The components of a vector depend on the coordinate system used and so are not scalars (they are real numbers, however). It is obvious then that scalar products having base vectors as a factor do not result in scalars. As noted in Section 1.3, base vectors are not point vectors.

It is possible to obtain the projection or resolute of a vector B in the direction of a vector A by first forming a unit vector Â = A/|A| (which will be in the direction of A) and by then calculating the scalar product B · Â:

	[proj B]_A = B · Â = |B||Â| cos θ = |B| cos θ		(1.16-26)

This projection is represented in Figure 1-7.
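The projection (resolute) of equation (1.16-26) can be sketched numerically as follows; this is our own illustration, not the book's, and the helper names `dot` and `project` are assumptions:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project(b, a):
    # Equation (1.16-26): form the unit vector A_hat = A/|A|,
    # then the projection of B on the direction of A is B . A_hat
    mag_a = math.sqrt(dot(a, a))
    a_hat = tuple(x / mag_a for x in a)
    return dot(b, a_hat)

B = (3.0, 4.0, 0.0)
A = (1.0, 0.0, 0.0)
print(project(B, A))   # |B| cos(theta) along the x-axis: 3.0
```

Note that the result is a real number (the projected magnitude), not a vector, consistent with the discussion above.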

1.16.4	ANGLE BETWEEN VECTORS

Given two vectors A and B, the angle θ between their directions can be obtained using the scalar product as defined in equation (1.16-1):

	cos θ = (A · B) / (|A||B|) = (A · B) / √[(A · A)(B · B)] = (A · B) / (AB)		(1.16-27)

or

	θ = cos⁻¹[(A · B) / (|A||B|)] = cos⁻¹[(A · B) / (AB)]		(1.16-28)

Equation (1.16-27) can be written in component form:

	cos θ = (a_i b_i) / [√(a_i a_i) √(b_i b_i)]		(1.16-29)

Using equation (1.16-27), the direction cosines for a vector A can be determined by forming the scalar products between A/|A| and the corresponding unit base vectors:

	cos θ_1 = (A · ê_1)/|A|	cos θ_2 = (A · ê_2)/|A|	cos θ_3 = (A · ê_3)/|A|		(1.16-30)

Example 1-12

Determine the direction cosines for a point vector A relative to a rectangular coordinate system x_j.

Solution:

Let the unit base vectors of the x_j coordinate system be ê_j. Using equation (1.16-30) we then have:

	cos θ_1 = (A · ê_1)/|A| = [(a_1 ê_1 + a_2 ê_2 + a_3 ê_3) · ê_1]/|A| = a_1/|A|

	cos θ_2 = (A · ê_2)/|A| = [(a_1 ê_1 + a_2 ê_2 + a_3 ê_3) · ê_2]/|A| = a_2/|A|

	cos θ_3 = (A · ê_3)/|A| = [(a_1 ê_1 + a_2 ê_2 + a_3 ê_3) · ê_3]/|A| = a_3/|A|

For two different rectangular coordinate systems x_j and x_i′, we can form the scalar product of equation (1.13-19) with the unit base vector ê_j:

	α_ij ê_j · ê_j = ê_i′ · ê_j		(no sum on j)	(1.16-31)

Using equations (1.16-10) and (1.13-4), we have an important method for determining the direction cosines:

	α_ij = cos θ_ij = ê_i′ · ê_j = ê_j · ê_i′		(1.16-32)

The transformation matrix of direction cosines given in equation (1.13-15) can now be written in the form:

	⎡ V_1′ ⎤   ⎡ ê_1′·ê_1  ê_1′·ê_2  ê_1′·ê_3 ⎤ ⎡ V_1 ⎤
	⎢ V_2′ ⎥ = ⎢ ê_2′·ê_1  ê_2′·ê_2  ê_2′·ê_3 ⎥ ⎢ V_2 ⎥		(1.16-33)
	⎣ V_3′ ⎦   ⎣ ê_3′·ê_1  ê_3′·ê_2  ê_3′·ê_3 ⎦ ⎣ V_3 ⎦
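The direction cosines of equation (1.16-30) can be sketched numerically. This is our own check, not the book's; the function name `direction_cosines` is an assumption:

```python
import math

def direction_cosines(a):
    # Equation (1.16-30): cos(theta_i) = a_i / |A|
    mag = math.sqrt(sum(x * x for x in a))
    return tuple(x / mag for x in a)

c = direction_cosines((1.0, 2.0, 2.0))   # here |A| = 3
print(c)

# For any vector, cos^2(t1) + cos^2(t2) + cos^2(t3) = 1
print(abs(sum(x * x for x in c) - 1.0) < 1e-12)
```

The unit-sum property follows directly from dividing (1.16-16) by |A|².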

Example 1-13

Given two rectangular coordinate systems x_j and x_i′, determine the dr components dx_i′ from dx_j.

Solution:

Let the unit base vectors of the x_j and x_i′ coordinate systems be ê_j and ê_i′, respectively. Using equation (1.16-25) we then have:

	dx_1′ = dr · ê_1′ = (dx_1 ê_1 + dx_2 ê_2 + dx_3 ê_3) · ê_1′ = dx_1 (ê_1 · ê_1′) + dx_2 (ê_2 · ê_1′) + dx_3 (ê_3 · ê_1′)

	dx_2′ = dr · ê_2′ = (dx_1 ê_1 + dx_2 ê_2 + dx_3 ê_3) · ê_2′ = dx_1 (ê_1 · ê_2′) + dx_2 (ê_2 · ê_2′) + dx_3 (ê_3 · ê_2′)

	dx_3′ = dr · ê_3′ = (dx_1 ê_1 + dx_2 ê_2 + dx_3 ê_3) · ê_3′ = dx_1 (ê_1 · ê_3′) + dx_2 (ê_2 · ê_3′) + dx_3 (ê_3 · ê_3′)

The scalar products of the unit base vectors ê_j and ê_i′ are just the direction cosines α_ij, as can be seen from equation (1.16-32). Therefore we have:

	dx_i′ = α_ij dx_j

This is equation (1.13-6).

1.16.5	INEQUALITIES

From the definition of the scalar product in equation (1.16-1) we can write:

	A · B ≤ |A · B| = |A||B||cos θ|		(1.16-34)

Using the fact that |cos θ| ≤ 1 we have:

	|A · B| ≤ |A||B|		(1.16-35)

Therefore, from equations (1.16-35) and (1.16-34):

	A · B ≤ |A||B|		(1.16-36)

Equation (1.16-36) is known as the Schwarz inequality or the Cauchy-Schwarz inequality.

We can also write:

	|A + B|² = (A + B) · (A + B) = A · A + 2 A · B + B · B		(1.16-37)

or

	|A + B|² = |A|² + 2 A · B + |B|²		(1.16-38)

Using equations (1.16-34) and (1.16-35), we have:

	|A + B|² ≤ |A|² + 2 |A · B| + |B|² ≤ |A|² + 2 |A||B| + |B|²		(1.16-39)

or

	|A + B|² ≤ (|A| + |B|)²		(1.16-40)

and so:

	|A + B| ≤ |A| + |B|		(1.16-41)

Equation (1.16-41) is known as the triangle inequality.
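Both inequalities can be exercised on random vectors. The sketch below is our own, not the book's; the helper names are assumptions:

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mag(a):
    return math.sqrt(dot(a, a))

random.seed(1)
for _ in range(100):
    A = [random.uniform(-5, 5) for _ in range(3)]
    B = [random.uniform(-5, 5) for _ in range(3)]
    S = [x + y for x, y in zip(A, B)]
    # Schwarz inequality (1.16-36), in the |A . B| form of (1.16-35)
    assert abs(dot(A, B)) <= mag(A) * mag(B) + 1e-12
    # Triangle inequality (1.16-41)
    assert mag(S) <= mag(A) + mag(B) + 1e-12
print("both inequalities held for 100 random pairs")
```

Equality in the Schwarz inequality occurs only when A and B are collinear (cos θ = ±1).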

1.16.6	INVARIANCE OF THE SCALAR PRODUCT OF POINT VECTORS

We have seen that the scalar product of any two vectors A and B is a real number. We will now show that this real number is a scalar when the vectors A and B are point vectors. We will do this by showing that the scalar product as given in equation (1.16-13) and Example 1-10 is unchanged by a transformation of coordinates and satisfies equation (1.13-25).

Let us consider two rectangular coordinate systems having the same origin, where the primed system x_i′ is rotated with respect to the unprimed system x_j (see Figure 1-6). In component form the two point vectors A and B in the two coordinate systems are given by:

	A = ⟨a_1, a_2, a_3⟩ = ⟨a_1′, a_2′, a_3′⟩		(1.16-42)

	B = ⟨b_1, b_2, b_3⟩ = ⟨b_1′, b_2′, b_3′⟩		(1.16-43)

and the scalar product in each coordinate system is given by:

	A · B = a_j b_j		(1.16-44)

	A · B = a_i′ b_i′		(1.16-45)

For the scalar product to be invariant we must have:

	A · B = a_j b_j = a_i′ b_i′		(1.16-46)

To determine if this is true, we use the transformation law for point vector components given by equation (1.13-23):

	a_i′ = α_ij a_j		(1.16-47)

	b_i′ = α_ik b_k		(1.16-48)

With equations (1.16-47) and (1.16-48) we can rewrite equation (1.16-45) as:

	A · B = a_i′ b_i′ = α_ij a_j α_ik b_k = α_ij α_ik a_j b_k		(1.16-49)

From equation (1.14-9) we have:

	A · B = a_i′ b_i′ = δ_jk a_j b_k		(1.16-50)

or

	A · B = a_i′ b_i′ = a_j b_j		(1.16-51)

Equation (1.16-51) is identical to equation (1.16-46) and has the same form as equation (1.13-25). Therefore, the scalar product of two point vectors is a scalar.
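The invariance a_j b_j = a_i′ b_i′ of equation (1.16-51) can be demonstrated numerically with any rotation. The sketch below uses a rotation about the x_3 axis; it is our own illustration, not the book's, and `rotate_z` is an assumed helper name:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rotate_z(v, t):
    # Components in the primed (rotated) system, a_i' = alpha_ij a_j,
    # for a rotation of the axes by angle t about x_3
    c, s = math.cos(t), math.sin(t)
    x, y, z = v
    return (c * x + s * y, -s * x + c * y, z)

A = (1.0, 2.0, 3.0)
B = (-2.0, 0.5, 1.0)
t = 0.7
Ap, Bp = rotate_z(A, t), rotate_z(B, t)

# a_j b_j = a_i' b_i', equation (1.16-51)
print(dot(A, B), dot(Ap, Bp))
```

The two printed values agree to floating-point precision, as the derivation above requires.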

1.16.7	MATRIX NOTATION FOR SCALAR PRODUCT

For a given basis, the components of two vectors A and B can be written using the matrix notation:

	    ⎡ a_1 ⎤        ⎡ b_1 ⎤
	A = ⎢ a_2 ⎥    B = ⎢ b_2 ⎥		(1.16-52)
	    ⎣ a_3 ⎦        ⎣ b_3 ⎦

The scalar product of the vectors A and B can then be written as:

	        ⎡ a_1 ⎤T ⎡ b_1 ⎤                   ⎡ b_1 ⎤
	A · B = ⎢ a_2 ⎥  ⎢ b_2 ⎥ = [a_1  a_2  a_3] ⎢ b_2 ⎥		(1.16-53)
	        ⎣ a_3 ⎦  ⎣ b_3 ⎦                   ⎣ b_3 ⎦

where the superscript T on a matrix indicates the transpose of the matrix.

1.17	VECTOR OR CROSS PRODUCT

Another product of vectors that has been defined, and that is useful in describing physical phenomena, is the vector product or cross product. The vector product of two vectors A and B is defined as:

	A × B = |A||B| sin θ n̂		(0 ≤ θ ≤ π)	(1.17-1)

where θ is the angle 0 ≤ θ ≤ π from the direction of vector A to the direction of vector B, and where n̂ is a unit vector perpendicular (normal) to both A and B; that is, n̂ is perpendicular to the plane containing the line collinear with A and the line collinear with B. The vector product is a vector operation requiring two vectors for the product to be formed.

From the definition of the vector product given in equation (1.17-1), we see that the product produces a vector (as opposed to the scalar product, which produces a scalar). The magnitude of the vector product of two vectors A and B is given by:

	|A × B| = |A||B| |sin θ|		(0 ≤ θ ≤ π)	(1.17-2)

or, since sin θ ≥ 0 when 0 ≤ θ ≤ π, we have:

	|A × B| = |A||B| sin θ		(0 ≤ θ ≤ π)	(1.17-3)

Obviously there are two possible directions that the unit normal vector n̂ could have and still be perpendicular to the plane containing a line collinear with A and a line collinear with B (if A and B are not themselves collinear). The choice of direction for n̂ is arbitrary, being determined only by convention, which dictates that the direction of n̂ depends upon the nature of the coordinate system being used: right-handed or left-handed (see Figure 1-8).

Figure 1-8	Right-handed and left-handed rectangular coordinate systems.

If the coordinate system is right-handed, the vector n̂ in equation (1.17-1) is chosen to be directed so that a right hand held with the fingers curling counterclockwise from the vector A to the vector B will leave the thumb pointing in the positive n̂ direction (see Figure 1-9). This is known as the right hand rule. If the coordinate system is left-handed, it is necessary to use the left hand for the vector product in order to leave the thumb pointing in the positive n̂ direction. For a left-handed system, a minus sign must then be placed in equation (1.17-1) to obtain the same result as for a right-handed system. We will use only right-handed coordinate systems in this book.

Figure 1-9	Vector product of the vectors A and B.

The vector product of any two vectors will always be a third vector orthogonal to these two vectors, unless the vector product is 0. If either A or B is zero, or if A is collinear with B so that θ = 0 or θ = π, we will have:

	A × B = 0		(1.17-4)

We will always have:

	A × A = 0		(1.17-5)

From the definition of the vector product given in equation (1.17-1), vector products of base vectors of the rectangular coordinate system can be written:

	î × ĵ = k̂	ĵ × k̂ = î	k̂ × î = ĵ		(1.17-6)

	ĵ × î = −k̂	k̂ × ĵ = −î	î × k̂ = −ĵ		(1.17-7)

	î × î = ĵ × ĵ = k̂ × k̂ = 0		(1.17-8)

Table 1-2	Vector products of base vectors of the rectangular coordinate system.

The vector product of two vectors A and B can be written in terms of their components:

	A × B = (a_x î + a_y ĵ + a_z k̂) × (b_x î + b_y ĵ + b_z k̂)		(1.17-9)

or

	A × B = a_x b_x (î × î) + a_x b_y (î × ĵ) + a_x b_z (î × k̂)
	      + a_y b_x (ĵ × î) + a_y b_y (ĵ × ĵ) + a_y b_z (ĵ × k̂)
	      + a_z b_x (k̂ × î) + a_z b_y (k̂ × ĵ) + a_z b_z (k̂ × k̂)		(1.17-10)

Using equations (1.17-6), (1.17-7), and (1.17-8), we then obtain:

	A × B = a_x b_y k̂ − a_x b_z ĵ − a_y b_x k̂ + a_y b_z î + a_z b_x ĵ − a_z b_y î		(1.17-11)

or

	A × B = (a_y b_z − a_z b_y) î + (a_z b_x − a_x b_z) ĵ + (a_x b_y − a_y b_x) k̂		(1.17-12)

Equation (1.17-12) can be written in determinant form:

	        | î    ĵ    k̂   |   | ê_1  ê_2  ê_3 |
	A × B = | a_x  a_y  a_z | = | a_1  a_2  a_3 |		(1.17-13)
	        | b_x  b_y  b_z |   | b_1  b_2  b_3 |

Expanding these determinants across the top row yields equation (1.17-12). Although the elements of determinants should all be numbers and not vectors, equation (1.17-13) is very useful as a memory aid for equation (1.17-12). Equation (1.17-13) is also useful for gaining insight into the nature of vector products. From equation (1.17-13) we see that:

	A × B = −B × A		(1.17-14)

since interchanging any two rows of a determinant changes the sign of the determinant. Therefore, the vector product is not commutative (it is anticommutative).

Finally we have:

	A × 0 = 0		(1.17-15)

Example 1-14

If A = 3î + 2ĵ + k̂ and B = î + ĵ − k̂:

a.	Find A × B.
b.	Show that the vector A × B is normal to both A and B.

Solution:

a.	Using the determinant form of the vector product, we have:

	        | î  ĵ   k̂ |
	A × B = | 3  2   1 | = (−2 − 1) î − (−3 − 1) ĵ + (3 − 2) k̂ = −3î + 4ĵ + k̂
	        | 1  1  −1 |

b.	(A × B) · A = (−3î + 4ĵ + k̂) · (3î + 2ĵ + k̂) = −9 + 8 + 1 = 0

	(A × B) · B = (−3î + 4ĵ + k̂) · (î + ĵ − k̂) = −3 + 4 − 1 = 0

	Therefore A × B is orthogonal to both A and B, since neither A nor B is equal to 0.

Example 1-15

For any two nonzero vectors A and B, show that:

	|A × B|² + (A · B)² = (A)² (B)²

Solution:

	|A × B|² + (A · B)² = |A|² |B|² sin²θ + |A|² |B|² cos²θ = (A)² (B)² sin²θ + (A)² (B)² cos²θ

or

	|A × B|² + (A · B)² = (A)² (B)²

This can also be written as:

	(A × B) · (A × B) = (A)² (B)² − (A · B)²

The vector product is distributive with respect to addition:

	A × (B + C) = A × B + A × C		(1.17-16)

This can be shown by expanding both sides of equation (1.17-16) in terms of components (see Example 1-16).

If λ is a scalar, then from the definition of the vector product in equation (1.17-1) we also have:

	λ (A × B) = (λA) × B = A × (λB) = (A × B) λ		(1.17-17)

Example 1-16

For any three nonzero vectors A, B, and C, show that:

	A × (B + C) = A × B + A × C

Solution:

Using the determinant form of the vector product, we have:

	              |     î          ĵ          k̂     |
	A × (B + C) = |    a_x        a_y        a_z    |
	              | b_x + c_x  b_y + c_y  b_z + c_z |

Expanding by the third row, we have:

	A × (B + C) = (b_x + c_x) | ĵ    k̂   | − (b_y + c_y) | î    k̂   | + (b_z + c_z) | î    ĵ   |
	                          | a_y  a_z |               | a_x  a_z |               | a_x  a_y |

or

	A × (B + C) = b_x | ĵ    k̂   | − b_y | î    k̂   | + b_z | î    ĵ   |
	                  | a_y  a_z |       | a_x  a_z |       | a_x  a_y |

	            + c_x | ĵ    k̂   | − c_y | î    k̂   | + c_z | î    ĵ   |
	                  | a_y  a_z |       | a_x  a_z |       | a_x  a_y |

Therefore we have:

	              | î    ĵ    k̂   |   | î    ĵ    k̂   |
	A × (B + C) = | a_x  a_y  a_z | + | a_x  a_y  a_z |
	              | b_x  b_y  b_z |   | c_x  c_y  c_z |

or

	A × (B + C) = A × B + A × C

Example 1-17

Show that:

	(A − B) × (A + B) = 2 (A × B)

Solution:

	(A − B) × (A + B) = A × A + A × B − B × A − B × B

and so from equation (1.17-5):

	(A − B) × (A + B) = A × B − B × A

Using equation (1.17-14):

	(A − B) × (A + B) = A × B + A × B = 2 (A × B)
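The component formula (1.17-12) and the orthogonality property checked in Example 1-14 translate directly into code. This is our own numerical sketch, not the book's; the helper names are assumptions:

```python
def cross(a, b):
    # Equation (1.17-12), the top-row expansion of determinant (1.17-13)
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# The vectors of Example 1-14
A = (3.0, 2.0, 1.0)
B = (1.0, 1.0, -1.0)
C = cross(A, B)
print(C)                        # (-3.0, 4.0, 1.0), as in Example 1-14
print(dot(C, A), dot(C, B))     # both 0: C is normal to A and B
```

Swapping the arguments reproduces the anticommutativity of equation (1.17-14): `cross(B, A)` returns the negative of `cross(A, B)`.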

1.17.1	UNIT NORMAL TO A PLANE

The vector product can be used to determine the unit normal n̂ to the plane containing the two lines collinear with the vectors A and B (if A and B are not themselves collinear). The unit normal n̂ is obtained from equation (1.17-1):

	n̂ = (A × B) / (|A||B| sin θ) = (A × B) / (AB sin θ) = (A × B) / |A × B|		(1.17-18)

Equation (1.17-18) follows from the fact that sin θ is nonnegative for the full range of θ (0 ≤ θ ≤ π) in A × B.

1.17.2	LAGRANGE'S IDENTITY

The magnitude squared of a vector product of any two nonzero vectors A and B can be expressed as Lagrange's identity:

	|A × B|² = |A|² |B|² − (A · B)²		(1.17-19)

Example 1-18

Derive Lagrange's identity:

	|A × B|² = |A|² |B|² − (A · B)²

Solution:

	|A × B|² = |A|² |B|² sin²θ = |A|² |B|² (1 − cos²θ) = |A|² |B|² − |A|² |B|² cos²θ

and so:

	|A × B|² = |A|² |B|² − (A · B)²

1.17.3	VECTOR PRODUCT OF POINT VECTORS IS A POINT VECTOR

We now wish to show that the vector product of any two point vectors A and B is a point vector. To show this we will demonstrate that the components of A × B = C follow the transformation law for point vectors given in equation (1.13-23).

Let us consider two rectangular coordinate systems having the same origin (see Figure 1-6), where the primed system x_i′ is rotated with respect to the unprimed system x_j. In component form, the vectors A, B, and C in these two coordinate systems are given by:

	A = ⟨a_1, a_2, a_3⟩ = ⟨a_1′, a_2′, a_3′⟩		(1.17-20)

	B = ⟨b_1, b_2, b_3⟩ = ⟨b_1′, b_2′, b_3′⟩		(1.17-21)

	C = ⟨c_1, c_2, c_3⟩ = ⟨c_1′, c_2′, c_3′⟩		(1.17-22)

and the vector product of A and B in these coordinate systems is given by:

	A × B = ⟨a_2 b_3 − a_3 b_2, a_3 b_1 − a_1 b_3, a_1 b_2 − a_2 b_1⟩ = ⟨c_1, c_2, c_3⟩		(1.17-23)

and

	A × B = ⟨a_2′ b_3′ − a_3′ b_2′, a_3′ b_1′ − a_1′ b_3′, a_1′ b_2′ − a_2′ b_1′⟩ = ⟨c_1′, c_2′, c_3′⟩		(1.17-24)

For the vector product to be a point vector, we must have:

	c_i′ = α_ij c_j		(1.17-25)

To determine if equation (1.17-25) is true, we use the transformation law for point vector components given by equation (1.13-23):

	a_i′ = α_ij a_j		(1.17-26)

	b_i′ = α_ik b_k		(1.17-27)

We now examine the component c_1′ of the vector product given by equation (1.17-24):

	c_1′ = a_2′ b_3′ − a_3′ b_2′		(1.17-28)

Using equations (1.17-26) and (1.17-27), we can write equation (1.17-28) as:

	c_1′ = (α_21 a_1 + α_22 a_2 + α_23 a_3)(α_31 b_1 + α_32 b_2 + α_33 b_3)
	     − (α_31 a_1 + α_32 a_2 + α_33 a_3)(α_21 b_1 + α_22 b_2 + α_23 b_3)		(1.17-29)

or

	c_1′ = (α_22 α_33 − α_23 α_32)(a_2 b_3 − a_3 b_2)
	     + (α_23 α_31 − α_21 α_33)(a_3 b_1 − a_1 b_3)
	     + (α_21 α_32 − α_22 α_31)(a_1 b_2 − a_2 b_1)		(1.17-30)

From equations (1.14-30), (1.14-31), and (1.14-32), we can then write equation (1.17-30) as:

	c_1′ = α_11 (a_2 b_3 − a_3 b_2) + α_12 (a_3 b_1 − a_1 b_3) + α_13 (a_1 b_2 − a_2 b_1)		(1.17-31)

and so from equation (1.17-23):

	c_1′ = α_11 c_1 + α_12 c_2 + α_13 c_3		(1.17-32)

Following a similar procedure for the components c_2′ and c_3′, we have finally:

	c_i′ = α_ij c_j		(1.17-33)

Therefore, the components of the vector product of two point vectors transform as vector components of a point vector, and so the vector product of two point vectors is a point vector.
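The transformation law (1.17-33) can be checked numerically for a rotation: the cross product of the rotated components must equal the rotated cross product. This sketch is ours, not the book's; it assumes a proper rotation about x_3 (for which equations (1.14-30) through (1.14-32) hold), and the helper names are assumptions:

```python
import math

def cross(a, b):
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def apply(alpha, v):
    # c_i' = alpha_ij c_j
    return tuple(sum(alpha[i][j] * v[j] for j in range(3)) for i in range(3))

t = 0.4
c, s = math.cos(t), math.sin(t)
alpha = [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]  # rotation about x_3

A = (1.0, -2.0, 0.5)
B = (2.0, 1.0, 3.0)
lhs = cross(apply(alpha, A), apply(alpha, B))  # C' computed in the primed frame
rhs = apply(alpha, cross(A, B))                # alpha_ij c_j, equation (1.17-33)
print(all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs)))
```

For a left-handed transformation (det α = −1) the two sides would differ in sign, which is why the book restricts itself to right-handed systems.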

1.17.4	PERMUTATION SYMBOLS

The symbols ε_ijk and ε^ijk are known as permutation symbols, alternating symbols, or Levi-Civita symbols. These symbols are a function of the permutation or order of indices of mathematical entities. They are important in vector and tensor analysis, and are useful in working with determinants. The characteristics of ε_ijk and ε^ijk are given in Table 1-3.

Table 1-3	ε_ijk and ε^ijk values.

The indices 1, 2, and 3 are considered to be in cyclic order if they are in any one of the following orders: (123), (231), or (312), and in anti-cyclic order if they are in any of the following orders: (132), (213), or (321) (see Figure 1-10).

Figure 1-10	Cyclic order and anti-cyclic order of indices.

From Table 1-3 we have:

	ε_123 = ε_231 = ε_312 = ε^123 = ε^231 = ε^312 = 1		(1.17-34)

	ε_132 = ε_213 = ε_321 = ε^132 = ε^213 = ε^321 = −1		(1.17-35)

There are 27 values of ε_ijk and 27 values of ε^ijk, with all but the six values given in equations (1.17-34) and (1.17-35) being equal to zero. We also have, for any values of the indices:

	ε_ijk = ε_kij = ε_jki = ε^ijk = ε^kij = ε^jki		(1.17-36)

	ε_ijk = −ε_jik = −ε_ikj = −ε_kji = −ε^jik = −ε^ikj = −ε^kji		(1.17-37)

From equations (1.17-36) and (1.17-37) we see that a cyclic permutation of the indices does not change the sign of the permutation symbol, while an anti-cyclic permutation of the indices does change the sign of the permutation symbol:

	ε_ijk = ε_kij = ε_jki = −ε_jik = −ε_ikj = −ε_kji		(1.17-38)

The permutation symbols can be calculated using the equation:

	ε_ijk = ε^ijk = ½ (i − j)(j − k)(k − i)		(1.17-39)

where i, j, and k take any of the values 1, 2, and 3.

Example 1-19

Show that:

	ε_321 = −1

Solution:

Using equation (1.17-39):

	ε_321 = ½ (3 − 2)(2 − 1)(1 − 3) = ½ (1)(1)(−2) = −1

Example 1-20

Show that:

	δ_ij ε_ijk = 0

Solution:

From the definition of the Kronecker delta given in equation (1.14-10), and of the permutation symbol given in Table 1-3, we have:

	δ_ij = 0	if i ≠ j

	ε_ijk = 0	if i = j

Therefore, for any value of k we will always have:

	δ_ij ε_ijk = 0
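Formula (1.17-39) makes the permutation symbol trivial to compute. The sketch below is ours, not the book's; the function name `eps` is an assumption:

```python
def eps(i, j, k):
    # Equation (1.17-39); valid for indices taking the values 1, 2, 3.
    # The product (i-j)(j-k)(k-i) is always even, so integer division is exact.
    return (i - j) * (j - k) * (k - i) // 2

# Cyclic orders, equation (1.17-34)
print(eps(1, 2, 3), eps(2, 3, 1), eps(3, 1, 2))   # 1 1 1
# Anti-cyclic orders, equation (1.17-35)
print(eps(1, 3, 2), eps(2, 1, 3), eps(3, 2, 1))   # -1 -1 -1
# Any repeated index gives zero
print(eps(1, 1, 2), eps(2, 2, 2))                  # 0 0
```

Of the 27 values, exactly six are nonzero, as stated above.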

1.17.5	VECTOR PRODUCT IN INDICIAL NOTATION

Vector products of base vectors of the rectangular coordinate system are given in equations (1.17-6), (1.17-7), and (1.17-8). These equations can all be rewritten in indicial notation using the permutation symbol ε_ijk as defined in Table 1-3:

	ê_i × ê_j = ε_ijk ê_k		(1.17-40)

Example 1-21

Show that:

	ê_i × ê_j = ε_ijk ê_k

a.	for i = 1 and j = 1
b.	for i = 1 and j = 2
c.	for i = 2 and j = 1

Solution:

a.	ê_1 × ê_1 = ε_111 ê_1 + ε_112 ê_2 + ε_113 ê_3 = 0

	Since all these permutation symbols have two equal indices, they are all equal to zero.

b.	ê_1 × ê_2 = ε_121 ê_1 + ε_122 ê_2 + ε_123 ê_3 = ε_123 ê_3 = ê_3

c.	ê_2 × ê_1 = ε_211 ê_1 + ε_212 ê_2 + ε_213 ê_3 = ε_213 ê_3 = −ê_3

From equation (1.17-40) we can write:

	(ê_i × ê_j) · ê_k = ε_ijk ê_k · ê_k		(no sum on k)	(1.17-41)

Therefore we have:

	ε_ijk = (ê_i × ê_j) · ê_k = ε^ijk		(1.17-42)

Equation (1.17-42) is sometimes used as the definition of permutation symbols.

If we have three point vectors:

	A = a_i ê_i	B = b_j ê_j	C = c_k ê_k		(1.17-43)

where

	C = A × B		(1.17-44)

then the vector product is given by:

	C = A × B = a_i ê_i × b_j ê_j = a_i b_j ê_i × ê_j		(1.17-45)

Using equation (1.16-24), C can be written as:

	C = (C · ê_k) ê_k		(1.17-46)

Therefore, from equations (1.17-45) and (1.17-46) we have:

	A × B = C = (C · ê_k) ê_k = a_i b_j (ê_i × ê_j) · ê_k ê_k		(1.17-47)

Using equation (1.17-42) we can now write the vector product of two vectors A and B using the permutation symbol:

	A × B = ε_ijk a_i b_j ê_k		(1.17-48)

We also can write equation (1.17-13) as:

	        | ê_1  ê_2  ê_3 |
	A × B = | a_1  a_2  a_3 | = ε_ijk a_i b_j ê_k		(1.17-49)
	        | b_1  b_2  b_3 |
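The indicial form (1.17-48) can be implemented directly and compared against the familiar component formula. This is our own sketch, not the book's; indices run 0..2 in the code, shifted by one from the book's 1..3, and the helper names are assumptions:

```python
def eps(i, j, k):
    # Equation (1.17-39), for indices 1, 2, 3
    return (i - j) * (j - k) * (k - i) // 2

def cross_eps(a, b):
    # (A x B)_k = eps_ijk a_i b_j, equation (1.17-48)
    return tuple(
        sum(eps(i + 1, j + 1, k + 1) * a[i] * b[j]
            for i in range(3) for j in range(3))
        for k in range(3)
    )

# The vectors of Example 1-14
A = (3.0, 2.0, 1.0)
B = (1.0, 1.0, -1.0)
print(cross_eps(A, B))   # (-3.0, 4.0, 1.0), matching the determinant form
```

The double sum is just the indicial expansion of the determinant of equation (1.17-49).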

Note that equation (1.17-49) is simply the formula for expanding any 3 × 3 determinant, and so this equation provides justification for the determinant methodology given in equation (1.17-13).

1.18	SCALAR TRIPLE PRODUCT

If A, B, and C are vectors, the products A · (B × C) and (A × B) · C are known as scalar triple products. Scalar triple products always include both a vector product and a scalar product. To calculate the value of a scalar triple product, the vector product must be determined first so that both factors in the remaining scalar product are vectors. The alternatives (A · B) × C and A × (B · C) have no meaning, since both factors in a vector product must be vectors. Therefore, parentheses are not necessary for a scalar triple product, and we can write A · B × C and A × B · C.

In the rectangular coordinate system, the vector product B × C in the scalar triple product A · B × C is given by:

	        | î    ĵ    k̂   |
	B × C = | b_x  b_y  b_z | = î | b_y  b_z | − ĵ | b_x  b_z | + k̂ | b_x  b_y |		(1.18-1)
	        | c_x  c_y  c_z |     | c_y  c_z |     | c_x  c_z |     | c_x  c_y |

We then have:

	A · B × C = a_x | b_y  b_z | − a_y | b_x  b_z | + a_z | b_x  b_y |		(1.18-2)
	                | c_y  c_z |       | c_x  c_z |       | c_x  c_y |

or

	            | a_x  a_y  a_z |
	A · B × C = | b_x  b_y  b_z |		(1.18-3)
	            | c_x  c_y  c_z |

and so the scalar triple product of three vectors in the rectangular coordinate system can be calculated from the determinant of the components of the vectors.

The scalar triple product A × B · C can also be written as:

	A × B · C = c_x | a_y  a_z | − c_y | a_x  a_z | + c_z | a_x  a_y |		(1.18-4)
	                | b_y  b_z |       | b_x  b_z |       | b_x  b_y |

We then have:

	            | a_x  a_y  a_z |
	A × B · C = | b_x  b_y  b_z |		(1.18-5)
	            | c_x  c_y  c_z |

Comparing the scalar triple product in equation (1.18-5) to that in equation (1.18-3), we have:

	A · B × C = A × B · C		(1.18-6)

Therefore the dot and cross operators in a scalar triple product can be interchanged without changing the value of the scalar triple product. Equation (1.18-6) is the fundamental identity for the scalar triple product.

From determinant theory we can write equation (1.18-5) in the form:

	| a_x  a_y  a_z |   | b_x  b_y  b_z |   | c_x  c_y  c_z |
	| b_x  b_y  b_z | = | c_x  c_y  c_z | = | a_x  a_y  a_z |		(1.18-7)
	| c_x  c_y  c_z |   | a_x  a_y  a_z |   | b_x  b_y  b_z |

Therefore we have:

	A · B × C = A × B · C = B · C × A = B × C · A = C · A × B = C × A · B		(1.18-8)

Since we can also write:

	| a_x  a_y  a_z |     | a_x  a_y  a_z |     | b_x  b_y  b_z |     | c_x  c_y  c_z |
	| b_x  b_y  b_z | = − | c_x  c_y  c_z | = − | a_x  a_y  a_z | = − | b_x  b_y  b_z |		(1.18-9)
	| c_x  c_y  c_z |     | b_x  b_y  b_z |     | c_x  c_y  c_z |     | a_x  a_y  a_z |

we have:

	A · B × C = −A · C × B = −A × C · B = −B · A × C = −B × A · C = −C · B × A = −C × B · A		(1.18-10)

Therefore any cyclic permutation of the factors in a scalar triple product will not change the value of the product, while any non-cyclic permutation of the factors will change only the product sign.

From equations (1.17-5) and (1.18-8), we can also conclude that if any two of the vectors in a scalar triple product are equal, the scalar triple product will be zero:

	A · A × B = A × A · B = A · B × A = A × B · A = A · B × B = A × B · B = 0		(1.18-11)

Example 1-22

For any two nonzero vectors A and B, use equation (1.18-3) to show that:

	A × B · A = 0

Solution:

Using equations (1.18-6) and (1.18-3) we can write:

	                        | a_x  a_y  a_z |
	A × B · A = A · B × A = | b_x  b_y  b_z | = 0
	                        | a_x  a_y  a_z |

since, from determinant theory, we know that a determinant will have a value of zero if any two rows of the determinant are equal.
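The interchange identity (1.18-6) and the vanishing property (1.18-11) can be checked on concrete vectors. This sketch is ours, not the book's; the helper names are assumptions:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

A = (1.0, 2.0, 3.0)
B = (0.0, 1.0, 4.0)
C = (5.0, 6.0, 0.0)

lhs = dot(A, cross(B, C))   # A . (B x C)
rhs = dot(cross(A, B), C)   # (A x B) . C
print(lhs, rhs)             # equal, equation (1.18-6)

# Repeated vector gives zero, equation (1.18-11)
print(dot(cross(A, B), A))  # 0.0
```

Cyclic permutations of `A`, `B`, `C` in either form leave the value unchanged, per equation (1.18-8).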

1.18.1	SCALAR TRIPLE PRODUCT IN BOX NOTATION

The scalar triple product is also known as the box product. The box product is often written using the notation:

	[A B C] = A · B × C = A × B · C		(1.18-12)

Equations (1.18-8), (1.18-10), and (1.18-11) can all be rewritten using the box notation:

	[A B C] = [B C A] = [C A B]		(1.18-13)

	[A B C] = −[A C B] = −[B A C] = −[C B A]		(1.18-14)

	[A A B] = [A B A] = [A B B] = 0		(1.18-15)

1.18.2	SCALAR TRIPLE PRODUCT IN INDICIAL NOTATION

Using indicial notation, the scalar triple product A × B · C can be written:

	(A × B) · C = a_i b_j (ê_i × ê_j) · c_k ê_k = a_i b_j c_k (ê_i × ê_j) · ê_k		(1.18-16)

Using equation (1.17-42) we have:

	A × B · C = ε_ijk a_i b_j c_k		(1.18-17)

From equations (1.18-3) and (1.18-17) we obtain:

	            | a_1  a_2  a_3 |
	A × B · C = | b_1  b_2  b_3 | = ε_ijk a_i b_j c_k		(1.18-18)
	            | c_1  c_2  c_3 |

From matrix theory we know that a determinant of a matrix is equal to the determinant of the transpose of the matrix. Therefore we also have:

	| a_1  a_2  a_3 |   | a_1  b_1  c_1 |
	| b_1  b_2  b_3 | = | a_2  b_2  c_2 | = ε_ijk a_i b_j c_k		(1.18-19)
	| c_1  c_2  c_3 |   | a_3  b_3  c_3 |
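Equation (1.18-18), equating the ε_ijk sum with the 3 × 3 determinant, is straightforward to verify numerically. The sketch below is ours, not the book's; the helper names are assumptions:

```python
def eps(i, j, k):
    # Equation (1.17-39), for indices 1, 2, 3
    return (i - j) * (j - k) * (k - i) // 2

def triple_eps(a, b, c):
    # A x B . C = eps_ijk a_i b_j c_k, equation (1.18-17)
    return sum(eps(i + 1, j + 1, k + 1) * a[i] * b[j] * c[k]
               for i in range(3) for j in range(3) for k in range(3))

def det3(m):
    # 3x3 determinant expanded across the top row, as in equation (1.18-2)
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A, B, C = (1.0, 2.0, 3.0), (0.0, 1.0, 4.0), (5.0, 6.0, 0.0)
print(triple_eps(A, B, C), det3([A, B, C]))   # equal, equation (1.18-18)
```

Passing the transposed rows to `det3` gives the same value, consistent with equation (1.18-19).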

1.18.3  PERMUTATION SYMBOL RELATIONS

We can write equation (1.18-18) in a slightly different form by using different notation for the row elements of the determinant: ai = A1i, bj = A2j, ck = A3k. Letting {Alm} be the matrix of components, we then have:

                | A11  A12  A13 |
    det {Alm} = | A21  A22  A23 | = εijk A1i A2j A3k                 (1.18-20)
                | A31  A32  A33 |

or, transposing the matrix:

                | A11  A21  A31 |
    det {Alm} = | A12  A22  A32 | = εijk Ai1 Aj2 Ak3                 (1.18-21)
                | A13  A23  A33 |

Therefore we have:

    det {Alm} = εijk A1i A2j A3k = εijk Ai1 Aj2 Ak3                  (1.18-22)

Equations (1.18-20) and (1.18-21) can be written in the form:

                     | A1p  A2p  A3p |   | A1p  A1q  A1r |
    εpqr det {Alm} = | A1q  A2q  A3q | = | A2p  A2q  A2r |           (1.18-23)
                     | A1r  A2r  A3r |   | A3p  A3q  A3r |

We can then write:

    εpqr det {Alm} = εijk Aip Ajq Akr = εijk Api Aqj Ark             (1.18-24)

We also have:

                          | Aip  Ajp  Akp |   | Aip  Aiq  Air |
    εijk εpqr det {Alm} = | Aiq  Ajq  Akq | = | Ajp  Ajq  Ajr |      (1.18-25)
                          | Air  Ajr  Akr |   | Akp  Akq  Akr |

If we now let A = Kronecker delta in equation (1.18-25), we have:

                          | δip  δjp  δkp |   | δip  δiq  δir |
    εijk εpqr det {δlm} = | δiq  δjq  δkq | = | δjp  δjq  δjr |      (1.18-26)
                          | δir  δjr  δkr |   | δkp  δkq  δkr |

where the elements of the determinant are all Kronecker deltas. We see then that the matrix {δlm} is the identity matrix I:

                | δ11  δ12  δ13 |   | 1  0  0 |
    det {δlm} = | δ21  δ22  δ23 | = | 0  1  0 | = 1                  (1.18-27)
                | δ31  δ32  δ33 |   | 0  0  1 |

Therefore, equation (1.18-26) becomes:

                | δip  δjp  δkp |   | δip  δiq  δir |
    εijk εpqr = | δiq  δjq  δkq | = | δjp  δjq  δjr |                (1.18-28)
                | δir  δjr  δkr |   | δkp  δkq  δkr |

Following similar logic to that above we obtain:

           | δi1  δj1  δk1 |   | δi1  δi2  δi3 |
    εijk = | δi2  δj2  δk2 | = | δj1  δj2  δj3 |                     (1.18-29)
           | δi3  δj3  δk3 |   | δk1  δk2  δk3 |

Using equation (1.18-28) with p = k, we have:

                | δik  δiq  δir |
    εijk εkqr = | δjk  δjq  δjr |                                    (1.18-30)
                | δkk  δkq  δkr |

Expanding the determinant using the third row, we have:

                    | δiq  δir |       | δik  δir |       | δik  δiq |
    εijk εkqr = δkk | δjq  δjr | − δkq | δjk  δjr | + δkr | δjk  δjq |   (1.18-31)

or

                  | δiq  δir |   | δiq  δir |   | δir  δiq |
    εijk εkqr = 3 | δjq  δjr | − | δjq  δjr | + | δjr  δjq |         (1.18-32)

Interchanging columns of the third determinant, we obtain:

                  | δiq  δir |   | δiq  δir |   | δiq  δir |
    εijk εkqr = 3 | δjq  δjr | − | δjq  δjr | − | δjq  δjr |         (1.18-33)

and so:

                | δiq  δir |
    εijk εkqr = | δjq  δjr | = δiq δjr − δir δjq                     (1.18-34)

Setting q = l and r = m, we have the ε–δ identity:

    εijk εklm = δil δjm − δim δjl = εijk εlmk = εijk εmkl            (1.18-35)

Setting l = i we have:

    εijk εkim = εjki εmki = δii δjm − δim δji                        (1.18-36)

or

    εjki εmki = 3 δjm − δjm = 2 δjm                                  (1.18-37)

Interchanging the notation for the i and m indices:

    δij = (1/2) εjkm εikm = (1/2) εikm εjkm = (1/2) εmik εmjk = (1/2) εkmi εkmj   (1.18-38)

Setting i = j:

    εikm εikm = 2 δii = 6                                            (1.18-39)

Example 1-23

Show that εijk εjki = 6.

Solution:

From equation (1.18-35) we have:

    εijk εjki = εijk εkij = δii δjj − δij δji = (δii)² − δii

From Example 1-5:

    εijk εjki = (3)² − 3 = 6
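The ε–δ identity (1.18-35) and the contraction (1.18-39) can be verified by brute force over all index values. A short Python sketch (a check for illustration, not part of the text):

```python
# Brute-force verification of the epsilon-delta identity (1.18-35):
#   eps_ijk eps_klm = delta_il delta_jm - delta_im delta_jl
# and of the full contraction (1.18-39): eps_ikm eps_ikm = 6.
def eps(i, j, k):
    # permutation symbol (0-based indices)
    if (i, j, k) in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
        return 1
    if (i, j, k) in ((0, 2, 1), (2, 1, 0), (1, 0, 2)):
        return -1
    return 0

def delta(i, j):
    # Kronecker delta
    return 1 if i == j else 0

ok = all(
    sum(eps(i, j, k) * eps(k, l, m) for k in range(3))
    == delta(i, l)*delta(j, m) - delta(i, m)*delta(j, l)
    for i in range(3) for j in range(3)
    for l in range(3) for m in range(3))

total = sum(eps(i, j, k)**2
            for i in range(3) for j in range(3) for k in range(3))

print(ok, total)   # True 6
```

The identity holds for all 81 combinations of the free indices, and the sum over the six nonzero permutations reproduces equation (1.18-39).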

Multiplying equation (1.18-24) by εpqr, we have:

    εpqr εpqr det {Alm} = εpqr εijk Aip Ajq Akr = εpqr εijk Api Aqj Ark   (1.18-40)

Using equation (1.18-39), we can write:

    det {Alm} = (1/6) εpqr εijk Aip Ajq Akr = (1/6) εpqr εijk Api Aqj Ark   (1.18-41)

1.18.4  PRODUCT OF TWO SCALAR TRIPLE PRODUCTS

We can derive a relation for the product of two scalar triple products. From equation (1.18-17), we have:

    A × B • C = εijk ai bj ck                                        (1.18-42)

and

    D × E • F = εpqr dp eq fr                                        (1.18-43)

Therefore

    (A × B • C)(D × E • F) = εijk ai bj ck εpqr dp eq fr             (1.18-44)

or

    (A × B • C)(D × E • F) = εijk εpqr ai bj ck dp eq fr             (1.18-45)

Using equation (1.18-28) we can write:

                             | δip  δiq  δir |
    (A × B • C)(D × E • F) = | δjp  δjq  δjr | ai bj ck dp eq fr     (1.18-46)
                             | δkp  δkq  δkr |

Multiplying the determinant by the factors ai, bj, and ck:

                             | ap  aq  ar |
    (A × B • C)(D × E • F) = | bp  bq  br | dp eq fr                 (1.18-47)
                             | cp  cq  cr |

and multiplying the determinant by the factors dp, eq, and fr:

                             | ap dp  aq eq  ar fr |
    (A × B • C)(D × E • F) = | bp dp  bq eq  br fr |                 (1.18-48)
                             | cp dp  cq eq  cr fr |

From the indicial form of the scalar product given in Example 1-10, we have:

                             | A • D   A • E   A • F |
    (A × B • C)(D × E • F) = | B • D   B • E   B • F |               (1.18-49)
                             | C • D   C • E   C • F |

This equation can be used to derive an expression for [A B C]²:

               | A • A   A • B   A • C |
    [A B C]² = | B • A   B • B   B • C |                             (1.18-50)
               | C • A   C • B   C • C |

This determinant is known as the Gram determinant.
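Equation (1.18-50) can be checked numerically. In this Python sketch (the vectors are arbitrary choices, not from the text) the square of the box product is compared with the Gram determinant of pairwise scalar products:

```python
# Check that [A B C]^2 equals the Gram determinant of (1.18-50),
# using arbitrary illustrative vectors.
def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def det3(r1, r2, r3):
    # 3x3 determinant expanded along the first row
    return (r1[0]*(r2[1]*r3[2] - r2[2]*r3[1])
          - r1[1]*(r2[0]*r3[2] - r2[2]*r3[0])
          + r1[2]*(r2[0]*r3[1] - r2[1]*r3[0]))

A, B, C = (1, 2, 3), (4, 5, 6), (2, 7, 1)

box  = dot(cross(A, B), C)                       # [A B C]
gram = det3((dot(A, A), dot(A, B), dot(A, C)),   # Gram determinant
            (dot(B, A), dot(B, B), dot(B, C)),
            (dot(C, A), dot(C, B), dot(C, C)))

print(box**2, gram)   # 1089 1089
```

Note that the Gram determinant is insensitive to the order of A, B, C, consistent with [A B C]² being the same for any permutation.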

1.18.5  SCALAR TRIPLE PRODUCTS OF COPLANAR VECTORS

If A, B, and C are three non-zero vectors of a given point in a vector field, then the direction of the vector product B × C will be orthogonal to the directions of both B and C. If we also have:

    A • B × C = 0                                                    (1.18-51)

then the direction of A must be orthogonal to the direction of the vector product B × C. If we consider the projections of the three point vectors A, B, and C along straight lines having the same directions as these vectors, we see that the projections of the vectors A, B, and C must then all lie in the same plane. Such vectors are referred to as being coplanar.

1.18.6  ABSOLUTE VALUE OF A SCALAR TRIPLE PRODUCT OF BASE VECTORS

From equation (1.18-13) we see that the absolute value of a scalar triple product does not depend on the order of the three vectors. From the definition of the scalar triple product, we have:

    |[î ĵ k̂]| = 1                                                    (1.18-52)

and so the absolute value of the scalar triple product of the rectangular coordinate system base vectors is always unity.

1.19  VECTOR TRIPLE PRODUCT

If A, B, and C are vectors, the products A × (B × C) and (A × B) × C are known as vector triple products. For the vector triple product, we have:

    A × (B × C) ≠ (A × B) × C                                        (1.19-1)

and so the parentheses are necessary. Vector multiplication is not associative.

Example 1-24

Show that parentheses are necessary for the vector triple product î × ĵ × ĵ.

Solution:

    (î × ĵ) × ĵ = k̂ × ĵ = −î

    î × (ĵ × ĵ) = î × 0 = 0

and so the two possible vector triple products of î × ĵ × ĵ give different results. Therefore, the desired vector triple product must be indicated by using parentheses.

A fundamental identity for the vector triple product is:

    A × (B × C) = (A • C) B − (A • B) C                              (1.19-2)

This identity is verified in Example 1-25. From this identity we see that the vector triple product A × (B × C) is a linear combination of the vectors B and C since A • C and A • B are both scalars. In other words, the vector triple product A × (B × C) can be resolved into component vectors along the vectors B and C.

Example 1-25

For any three nonzero vectors A, B, and C, verify the vector triple product identity:

    A × (B × C) = (A • C) B − (A • B) C

Solution:

Let D = B × C. Then from equation (1.17-48) we have:

    D = B × C = εijk bi cj êk

Therefore

    A × (B × C) = A × D = εpkq ap εijk bi cj êq

or, moving indices in cyclic order:

    A × (B × C) = εqpk εkij ap bi cj êq

From the ε–δ identity given in equation (1.18-35):

    A × (B × C) = (δqi δpj − δqj δpi) ap bi cj êq
                = (δqi bi δpj cj − δqj cj δpi bi) ap êq
                = (bq cp − bp cq) ap êq

or

    A × (B × C) = (ap cp) bq êq − (ap bp) cq êq

and so finally:

    A × (B × C) = (A • C) B − (A • B) C

Example 1-26

For any three nonzero vectors A, B, and C, show that:

    [A × (B × C)] × C = (A • C) B × C

Solution:

From equation (1.19-2) we have:

    A × (B × C) = (A • C) B − (A • B) C

Therefore

    [A × (B × C)] × C = (A • C) B × C − (A • B) C × C

or since C × C = 0:

    [A × (B × C)] × C = (A • C) B × C
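The identity (1.19-2) is easy to confirm numerically. This Python sketch (an illustration with arbitrary vectors, not one of the book's examples) compares both sides component by component:

```python
# Numeric spot-check of A x (B x C) = (A . C) B - (A . B) C, eq. (1.19-2),
# with arbitrary integer vectors.
def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def scale(s, u):
    return tuple(s*x for x in u)

def sub(u, v):
    return tuple(x - y for x, y in zip(u, v))

A, B, C = (1, 2, 3), (4, 5, 6), (2, 7, 1)

lhs = cross(A, cross(B, C))                         # A x (B x C)
rhs = sub(scale(dot(A, C), B), scale(dot(A, B), C)) # (A.C) B - (A.B) C

print(lhs, rhs)   # (12, -129, 82) (12, -129, 82)
```

As the text notes, the result lies in the plane of B and C; its scalar product with B × C is zero.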

1.19.1  JACOBI IDENTITY

Another identity for vector triple products can be derived by using equation (1.17-14) to write:

    (A × B) × C = −C × (A × B)                                       (1.19-3)

From equations (1.19-3) and (1.19-2), we then have:

    (A × B) × C = −(C • B) A + (C • A) B                             (1.19-4)

or, reordering terms:

    (A × B) × C = (A • C) B − (B • C) A                              (1.19-5)

It is also possible to derive the following relation for vector triple products using equation (1.19-2):

    A × (B × C) + B × (C × A) + C × (A × B) = 0                      (1.19-6)

This relation is known as the Jacobi identity.

Example 1-27

For any three nonzero vectors A, B, and C show that:

    A × (B × C) + B × (C × A) + C × (A × B) = 0

Solution:

From equation (1.19-2) we can write:

    A × (B × C) = (A • C) B − (A • B) C

    B × (C × A) = (B • A) C − (B • C) A = (A • B) C − (B • C) A

    C × (A × B) = (C • B) A − (C • A) B = (B • C) A − (A • C) B

Adding these three equations, we have:

    A × (B × C) + B × (C × A) + C × (A × B) = 0

1.19.2  EXTENDED LAGRANGE'S IDENTITY

The vector triple product can be used to derive the vector equation:

    (A × B) • (C × D) = (A • C)(B • D) − (A • D)(B • C)              (1.19-7)

We can show this by initially considering (A × B) as a single vector (resulting from the vector product) and using equation (1.18-6) to write:

    (A × B) • C × D = (A × B) × C • D = [(A × B) × C] • D            (1.19-8)

We then have from equation (1.19-5):

    (A × B) • (C × D) = [(A • C) B − (B • C) A] • D                  (1.19-9)

or

    (A × B) • (C × D) = (A • C)(B • D) − (A • D)(B • C)              (1.19-10)

which is equation (1.19-7). This equation is known as the extended Lagrange's identity (see Section 1.17.2), and can be written in the form:

                        | A • C   A • D |
    (A × B) • (C × D) = | B • C   B • D |                            (1.19-11)
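The extended Lagrange identity (1.19-7) can likewise be confirmed numerically. A Python sketch with four arbitrary vectors (not taken from the text):

```python
# Numeric spot-check of (A x B) . (C x D) = (A.C)(B.D) - (A.D)(B.C),
# eq. (1.19-7), with arbitrary integer vectors.
def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

A, B, C, D = (1, 2, 3), (4, 5, 6), (2, 7, 1), (0, 1, 5)

lhs = dot(cross(A, B), cross(C, D))
rhs = dot(A, C)*dot(B, D) - dot(A, D)*dot(B, C)

print(lhs, rhs)   # -168 -168
```

Setting C = A and D = B reduces this to Lagrange's identity |A × B|² = |A|²|B|² − (A • B)², derived in Example 1-28.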

Example 1-28

Derive Lagrange's identity:

    |A × B|² = |A|² |B|² − (A • B)²

from the extended Lagrange's identity:

                        | A • C   A • D |
    (A × B) • (C × D) = | B • C   B • D |

Solution:

    |A × B|² = (A × B) • (A × B)

Using the extended Lagrange's identity (1.19-11), we then have:

                        | A • A   A • B |   | A • A   A • B |
    (A × B) • (A × B) = | B • A   B • B | = | A • B   B • B |

or

    |A × B|² = |A|² |B|² − (A • B)²

Example 1-29

Show that:

    (A × B) • (B × C) + (A • B)(B • C) = 2 (A • B)(B • C) − (A • C)(B)²

Solution:

Using equation (1.19-10) with C = B and D = C and then adding to both sides of equation (1.19-10) the term (A • B)(B • C), we have:

    (A × B) • (B × C) + (A • B)(B • C) = 2 (A • B)(B • C) − (A • C)(B)²

Vector triple product relations can also be used to derive relations for the product (A × B) × (C × D). Considering the product (A × B) × (C × D) to consist of the three vectors (A × B), C, and D, and using equation (1.19-2), we have:

    (A × B) × (C × D) = (A × B • D) C − (A × B • C) D                (1.19-12)

or

    (A × B) × (C × D) = [A B D] C − [A B C] D                        (1.19-13)

We can also derive:

    (A × B) × (A × C) = [A B C] A                                    (1.19-14)

as shown in Example 1-30, and

    [A×B  C×D  E×F] = [A C D][B E F] − [A E F][B C D]                (1.19-15)

as shown in Example 1-31.

Example 1-30

For any three nonzero vectors A, B, and C, no two of which are parallel, show that:

    (A × B) × (A × C) = [A B C] A

Solution:

From equation (1.19-13) we can write:

    (A × B) × (A × C) = [A B C] A − [A B A] C

Using equation (1.18-15) we have:

    [A B A] = 0

and so:

    (A × B) × (A × C) = [A B C] A

which is equation (1.19-14).

By considering the vector triple product of the three vectors A, B, and (C × D), and using equation (1.19-5), we can write:

    (A × B) × (C × D) = (A • C × D) B − (B • C × D) A                (1.19-16)

or

    (A × B) × (C × D) = [A C D] B − [B C D] A                        (1.19-17)

Subtracting equation (1.19-17) from equation (1.19-13), we have the following relation for any four nonzero vectors:

    [B C D] A − [A C D] B + [A B D] C − [A B C] D = 0                (1.19-18)

This equation can also be written in the form:

    (A • B × C) D = (D • B × C) A + (A • D × C) B + (A • B × D) C    (1.19-19)

Example 1-31

For any six nonzero vectors A, B, C, D, E, and F, show that:

    [A×B  C×D  E×F] = [A C D][B E F] − [A E F][B C D]

Solution:

    [A×B  C×D  E×F] = ((A × B) × (C × D)) • (E × F)

Using equation (1.19-17) we have:

    (A × B) × (C × D) = [A C D] B − [B C D] A

Therefore:

    ((A × B) × (C × D)) • (E × F) = ([A C D] B − [B C D] A) • (E × F)

or

    [A×B  C×D  E×F] = [A C D][B E F] − [A E F][B C D]

Example 1-32

For any six nonzero vectors A, B, C, D, E, and F, show that:

    [A×B  C×D  E×F] = [A B D][C E F] − [A B C][D E F]

and

    [A×B  C×D  E×F] = [A B E][C D F] − [A B F][C D E]

Solution:

From equation (1.18-14) we have:

    [A×B  C×D  E×F] = −[C×D  A×B  E×F]

Using equation (1.19-15) we can write:

    [A×B  C×D  E×F] = −[C A B][D E F] + [C E F][D A B]

or with equation (1.18-13):

    [A×B  C×D  E×F] = [A B D][C E F] − [A B C][D E F]

Similarly, from equation (1.18-14) we have:

    [A×B  C×D  E×F] = −[E×F  C×D  A×B]

Using equation (1.19-15) we can write:

    [A×B  C×D  E×F] = −[E C D][F A B] + [E A B][F C D]

or with equation (1.18-13):

    [A×B  C×D  E×F] = [A B E][C D F] − [A B F][C D E]

Example 1-33

For any three nonzero vectors A, B, and C, show that:

    [A B C]² = [A×B  B×C  C×A]

Solution:

Using equation (1.19-15) we can write:

    [A×B  B×C  C×A] = [B C A][A B C] − [A C A][B B C]

From equation (1.18-15) we have:

    [A×B  B×C  C×A] = [B C A][A B C]

and using equation (1.18-13):

    [A×B  B×C  C×A] = [A B C][A B C]

or

    [A B C]² = [A×B  B×C  C×A]

The vector triple product makes it possible to write any vector A in terms of two component vectors: one that is parallel to and one that is orthogonal to an arbitrary unit vector n̂ (see Example 1-34). The vector A is given by:

    A = (A • n̂) n̂ + n̂ × (A × n̂)                                     (1.19-20)

The basic products involving vectors are summarized in Table 1-4.

Example 1-34

Show that for the arbitrary vectors A and n̂, we can write A in the form:

    A = (A • n̂) n̂ + n̂ × (A × n̂)

Solution:

Using equation (1.19-2) we have:

    (A • n̂) n̂ + n̂ × (A × n̂) = (A • n̂) n̂ + (n̂ • n̂) A − (n̂ • A) n̂
                             = (n̂ • n̂) A = |n̂|² A = A
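The decomposition (1.19-20) can be checked numerically. In this Python sketch (the vector A and unit vector n̂ are arbitrary choices, not from the text), the parallel and orthogonal parts are built separately and recombined:

```python
# Numeric check of A = (A . n) n + n x (A x n), eq. (1.19-20),
# for an arbitrary A and unit vector n.
def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def scale(s, u):
    return tuple(s*x for x in u)

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

A = (1.0, 2.0, 3.0)
n = (1/3, 2/3, 2/3)                    # a unit vector: |n| = 1

parallel   = scale(dot(A, n), n)       # (A . n) n
orthogonal = cross(n, cross(A, n))     # n x (A x n)
recomposed = add(parallel, orthogonal)

print(recomposed)          # approximately (1.0, 2.0, 3.0)
print(dot(orthogonal, n))  # approximately 0: the second part is orthogonal to n
```

The two parts are respectively parallel and perpendicular to n̂, and their sum reproduces A to within floating-point rounding.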

Table 1-4.  Selected products of vectors and their types.

1.20  RECIPROCAL VECTORS

We will now consider two sets of linearly independent vectors ei and e′j. If the two vector sets are such that:

    ei • e′j = δij                                                   (1.20-1)

then they are called reciprocal vector sets. Since the vector sets are each linearly independent, we have:

    e1 • e2 × e3 ≠ 0        e′1 • e′2 × e′3 ≠ 0                      (1.20-2)

If the relations:

            e2 × e3                 e3 × e1                 e1 × e2
    e′1 = ────────────     e′2 = ────────────     e′3 = ────────────    (1.20-3)
          e1 • e2 × e3            e1 • e2 × e3            e1 • e2 × e3

hold, the two sets of vectors ei and e′j constitute reciprocal vector sets (see Example 1-35).

From equation (1.20-3) we have for two reciprocal sets of vectors:

                (e3 × e1) × (e1 × e2)
    e′2 × e′3 = ─────────────────────                                (1.20-4)
                    [e1 e2 e3]²

Using equations (1.19-13), (1.18-13), and (1.18-15), we have:

                [e3 e1 e2] e1   [e3 e1 e1] e2       e1
    e′2 × e′3 = ───────────── − ───────────── = ──────────           (1.20-5)
                 [e1 e2 e3]²     [e1 e2 e3]²    [e1 e2 e3]

Therefore:

                      e′1 • e1
    e′1 • e′2 × e′3 = ──────────                                     (1.20-6)
                      [e1 e2 e3]

From equations (1.20-1) and (1.20-2) we then obtain:

                          1
    [e′1 e′2 e′3] = ────────── ≠ 0                                   (1.20-7)
                    [e1 e2 e3]

Example 1-35

Show that equation (1.20-1) is obtained whenever the vector relations given in equation (1.20-3) exist.

Solution:

    e1 • e′1 = e1 • (e2 × e3)/(e1 • e2 × e3) = (e1 • e2 × e3)/(e1 • e2 × e3) = 1

    e1 • e′2 = e1 • (e3 × e1)/(e1 • e2 × e3) = (e1 • e3 × e1)/(e1 • e2 × e3) = 0

    e1 • e′3 = e1 • (e1 × e2)/(e1 • e2 × e3) = (e1 • e1 × e2)/(e1 • e2 × e3) = 0

    e2 • e′1 = (e2 • e2 × e3)/(e1 • e2 × e3) = 0

    e2 • e′2 = (e2 • e3 × e1)/(e1 • e2 × e3) = (e1 • e2 × e3)/(e1 • e2 × e3) = 1

    e2 • e′3 = (e2 • e1 × e2)/(e1 • e2 × e3) = 0

    e3 • e′1 = (e3 • e2 × e3)/(e1 • e2 × e3) = 0

    e3 • e′2 = (e3 • e3 × e1)/(e1 • e2 × e3) = 0

    e3 • e′3 = (e3 • e1 × e2)/(e1 • e2 × e3) = (e1 • e2 × e3)/(e1 • e2 × e3) = 1

Example 1-36

Find the reciprocal vectors for the orthogonal unit base vectors î, ĵ, and k̂.

Solution:

Let the reciprocal vectors of î, ĵ, and k̂ be î′, ĵ′, and k̂′. We then have from equation (1.20-3):

    î′ = (ĵ × k̂)/(î • ĵ × k̂) = î/(î • î) = î

    ĵ′ = (k̂ × î)/(î • ĵ × k̂) = ĵ/(î • î) = ĵ

    k̂′ = (î × ĵ)/(î • ĵ × k̂) = k̂/(î • î) = k̂

Therefore orthogonal unit base vectors are their own reciprocal vectors.
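For a non-orthogonal basis, the reciprocal set of (1.20-3) can be built and checked numerically. A Python sketch (the basis below is a hypothetical choice, not from the text) verifies (1.20-1) and (1.20-7):

```python
# Build the reciprocal set of a hypothetical basis e1, e2, e3 from
# eq. (1.20-3), then verify ei . e'j = delta_ij (1.20-1) and
# [e'1 e'2 e'3] = 1/[e1 e2 e3] (1.20-7).
def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def scale(s, u):
    return tuple(s*x for x in u)

e1, e2, e3 = (1.0, 1.0, 0.0), (0.0, 1.0, 1.0), (1.0, 0.0, 1.0)
box = dot(e1, cross(e2, e3))          # [e1 e2 e3]

r1 = scale(1/box, cross(e2, e3))      # e'1
r2 = scale(1/box, cross(e3, e1))      # e'2
r3 = scale(1/box, cross(e1, e2))      # e'3

products = [[dot(e, r) for r in (r1, r2, r3)] for e in (e1, e2, e3)]
rbox = dot(r1, cross(r2, r3))         # [e'1 e'2 e'3]

print(products)    # the identity matrix: ei . e'j = delta_ij
print(box, rbox)   # 2.0 0.5, so rbox = 1/box
```

Since this basis is not orthonormal, the reciprocal vectors differ from the originals, unlike the î, ĵ, k̂ case of Example 1-36.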

of polar vectors must change sign so that the polar vector itself will not change direction. This is characteristic of polar

1.21!

PSEUDOVECTORS AND PSEUDOSCALARS

vectors. !

Axial vectors are so named because they involve an axis or

An important classification that is used for vectors

perpendicular along which the axial vector is directed. For a

distinguishes those vectors representing entities having an

rotation, both the axis of rotation and the sense of rotation are

intrinsic sense of direction from those vectors representing

intrinsic. It is only the direction of the rotation vector along the

entities having a direction defined only by convention. Among

axis that must be established by convention. For a normal to a

the first kind, whose direction is intrinsic, are velocity and

surface, the perpendicular to the surface is intrinsic. It is again

acceleration vectors. These vectors are called polar vectors or

only the direction of the normal vector along the perpendicular

true vectors. Among the second kind, whose direction is

that must be established by convention.

defined only by convention, are rotation and normal vectors.

!

These vectors are referred to as axial vectors or pseudovectors.

“handedness” of the coordinate system can be examined using

!

The behavior of axial vectors under a change in the

the normal vector to a plane containing lines collinear with two 70

! ! polar vectors A and B that are not themselves collinear. This

! ! where A and B are point vectors. We see that the scalar

normal is given by: ! ! ! ! n = A × B! (1.21-2) ! ! Since A and B are polar vectors, all their components change ! sign under a parity transformation. The vector product of A ! and B involves only products of components as given in

product of two polar point vectors or two axial point vectors

equation (1.17-12): ! ! A × B = ay bz − az by iˆ + ( az bx − ax bz ) ˆj + ax by − ay bx kˆ ! (1.21-3)

(

)

(

)

 and so there will be no change in sign of the axial vector n as a result of the change in “handedness” of the coordinate system. Therefore, axial vectors change direction under an inversion of the coordinate system. This is characteristic of axial vectors. ! ! ! If A and B are both axial vectors, then the normal vector  n computed from equation (1.21-2) will also be an axial vector ! since no sign change occurs from a parity transformation. If A ! is an axial vector and B is a polar vector, a sign change will result from a parity transformation, and so the vector product will be a polar vector. !

The behavior of scalars under a parity transformation can

be examined using equation (1.16-13): ! ! ! A • B = ax bx + ay by + az bz !

(1.21-4)

will not change under a parity transformation. The result of such scalar products are, therefore, scalars. ! ! ! If A is an axial point vector and B is a polar point vector, ! ! however, the scalar product of A and B will change sign when there is a parity transformation. The sign of this “scalar” is then defined by convention and so the scalar product of an axial and a polar vector yields a pseudoscalar. From the above discussion we see that the scalar triple product of three polar point vectors ! ! ! ! ! ! A , B , and C will produce the pseudoscalar A × B • C . The results of different products of point vectors are summarized in Table 1-5. !

Because of their different behavior under coordinate axes

inversion, pseudovectors cannot be equated to vectors in an equation. Likewise, pseudoscalars cannot be equated to scalars. The base vectors eˆi have the sign properties of (and so can be considered to be) polar vectors since their direction and position are defined by association with a given coordinate system and cannot change. The vector product eˆ1 × eˆ2 is then an axial vector, and so the triple scalar product eˆ1 × eˆ2 • eˆ3 is the product of an axial and a polar vector, and is dependent on the “handedness” of the coordinate system.

71

! !

If the coordinate system is right-handed, we will have: eˆ1 × eˆ2 • eˆ3 > 0 !

(1.21-5)

and, if the coordinate system is left-handed, we will have: ! !

eˆ1 × eˆ2 • eˆ3 < 0 !

(1.21-6)

! Since the base vectors ei for a rectangular coordinate

system are unit orthogonal vectors, we can write equations (1.21-5) and (1.21-6), respectively, as: !

eˆ1 × eˆ2 • eˆ3 = 1 !

(1.21-7)

!

eˆ1 × eˆ2 • eˆ3 = −1 !

(1.21-8)

Table 1-5! Selected products of point vectors and their results. 72

Chapter 2

Differentiation and Integration of Vector Functions

    ∇f = (∂f/∂xi) êi

A scalar function f is a mathematical function that can be defined for some set of continuum points. The scalar function f provides a correspondence between the coordinates of a point P(x, y, z) belonging to this set of continuum points and the scalar that is an attribute of the physical field particle abstracted to the point P(x, y, z). Scalar functions describe scalar fields, therefore, and we will use these terms interchangeably. Using the rectangular coordinate system, we can write a scalar function f in the form f(x, y, z).

A vector function A is a mathematical function that can be defined for some set of continuum points. The vector function A provides a correspondence between the coordinates of a point P(x, y, z) belonging to this set of continuum points and the point vector that is an attribute of the physical field point particle abstracted to the point P(x, y, z). Vector functions describe vector fields, therefore, and we will use these terms interchangeably. Using the rectangular coordinate system, we can write a vector function A in the form:

    A(x, y, z) = ax(x, y, z) î + ay(x, y, z) ĵ + az(x, y, z) k̂      (2-1)

Scalar and vector functions facilitate the application of differential and integral calculus to physical problems. In this chapter we will consider differential and integral operations that can be applied to functions describing vector fields.

2.1  ORDINARY DIFFERENTIATION OF VECTORS

We will now let A(t) be a vector function of a scalar parameter t. For example, the parameter t can represent time. In component form for a given point P(x, y, z) of the vector field, we can write A(t) as:

    A(t) = ax(t) î + ay(t) ĵ + az(t) k̂                              (2.1-1)

We will assume that A(t) is a single-valued function of t. We will also assume that the three component functions ax(t), ay(t), and az(t) of A(t) are each continuous for all values for which they are defined so that they are differentiable and have continuous first derivatives. The vector function A(t) will then also be continuous, differentiable, and will have continuous first derivatives. We will make the same assumptions regarding differentiability and continuity for all vector fields dealt with in this book.

When the parameter t changes by a very small amount Δt, the vector function A(t) will change by ΔA, where:

    ΔA = A(t + Δt) − A(t)                                            (2.1-2)

The vector ΔA can represent a very small change in magnitude, in direction, or in both magnitude and direction of the vector function A(t). Since A(t) is continuous, we have:

    lim(Δt→0) A(t + Δt) → A(t)                                       (2.1-3)

The ordinary derivative of the vector function A(t) with respect to a scalar t in the rectangular coordinate system can then be obtained using equation (2.1-2):

    dA(t)/dt = lim(Δt→0) ΔA/Δt = lim(Δt→0) [A(t + Δt) − A(t)]/Δt     (2.1-4)

This derivative represents the differential change in the vector function A(t) as the parameter t changes by an infinitesimal amount. For the derivative dA(t)/dt to exist, the limit in equation (2.1-4) must be unique, independent of the manner in which Δt → 0.

Writing equation (2.1-4) in component form, we have:

    dA(t)/dt = lim(Δt→0) [ax(t + Δt) î + ay(t + Δt) ĵ + az(t + Δt) k̂]/Δt
             − lim(Δt→0) [ax(t) î + ay(t) ĵ + az(t) k̂]/Δt           (2.1-5)

The rectangular unit base vectors î, ĵ, and k̂ are constant both in magnitude (being unit vectors) and in direction (being directed along linear rectangular coordinate axes). Therefore derivatives of these base vectors are zero. We can then rewrite equation (2.1-5) as:

    dA(t)/dt = lim(Δt→0) {[ax(t + Δt) − ax(t)]/Δt} î
             + lim(Δt→0) {[ay(t + Δt) − ay(t)]/Δt} ĵ
             + lim(Δt→0) {[az(t + Δt) − az(t)]/Δt} k̂                (2.1-6)

or

    dA(t)/dt = (dax(t)/dt) î + (day(t)/dt) ĵ + (daz(t)/dt) k̂        (2.1-7)

We may also write the differential dA(t) as:

    dA(t) = dax(t) î + day(t) ĵ + daz(t) k̂                          (2.1-8)

For the rectangular coordinate system, therefore, the components of the derivative of a vector function A(t) are just the derivatives of the components of the vector function. Higher order derivatives such as d²A(t)/dt² can be obtained by using procedures similar to those used to determine dA(t)/dt. When discussing the derivative of a vector function, it is common to refer to it as simply the derivative of a vector.

The magnitude of the derivative dA(t)/dt is given by:

    |dA(t)/dt| = √[(dax(t)/dt)² + (day(t)/dt)² + (daz(t)/dt)²]       (2.1-9)

Note that this is not the same as d|A(t)|/dt which is given by:

    d|A(t)|/dt = (d/dt) √[(ax(t))² + (ay(t))² + (az(t))²]            (2.1-10)

In general |dA(t)/dt| is not equal to d|A(t)|/dt.

Example 2-1

Given A(t) = sin t î + cos t ĵ + 2t k̂, find:

a.  dA/dt

b.  |dA/dt|

c.  d|A|/dt

Solution:

a.  dA/dt = cos t î − sin t ĵ + 2 k̂

b.  |dA/dt| = √(cos²t + sin²t + 4) = √5

c.  d|A|/dt = (d/dt) √(sin²t + cos²t + 4t²) = (d/dt) √(1 + 4t²) = 4t/√(1 + 4t²)

2.1.1  VECTOR DIFFERENTIATION RULES

If A(t), B(t), and C(t) are vector functions and f(t) is a scalar function, and if all these functions are continuously differentiable, then the following differentiation rules follow from equation (2.1-7):

    (d/dt)(A ± B) = dA/dt ± dB/dt                                    (2.1-11)

    (d/dt)(f A) = f dA/dt + (df/dt) A                                (2.1-12)

    (d/dt)(A • B) = A • dB/dt + dA/dt • B                            (2.1-13)

    (d/dt)(A × B) = A × dB/dt + dA/dt × B                            (2.1-14)

    (d/dt)[A B C] = [A B dC/dt] + [A dB/dt C] + [dA/dt B C]          (2.1-15)

    (d/dt)[A × (B × C)] = A × [B × dC/dt] + A × [dB/dt × C]
                        + dA/dt × [B × C]                            (2.1-16)

Since the vector product is not commutative, the order of factors in a vector product must be maintained when taking any derivative of a vector product.
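The product rules (2.1-13) and (2.1-14) can be checked against finite differences. In this Python sketch the vector functions A(t) and B(t) are hypothetical choices, and a central difference approximates the left-hand side derivatives:

```python
import math

# Numerical spot-check of the product rules (2.1-13) and (2.1-14)
# for hypothetical vector functions A(t) and B(t).
def A(t):  return (math.sin(t), math.cos(t), 2.0*t)
def dA(t): return (math.cos(t), -math.sin(t), 2.0)
def B(t):  return (t*t, 1.0, math.exp(t))
def dB(t): return (2.0*t, 0.0, math.exp(t))

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

t, h = 0.7, 1e-6

# central-difference derivative of the scalar product A . B
num_dot  = (dot(A(t+h), B(t+h)) - dot(A(t-h), B(t-h))) / (2*h)
rule_dot = dot(A(t), dB(t)) + dot(dA(t), B(t))

# central-difference derivative of each component of A x B
num_cross  = tuple((c1 - c0) / (2*h)
                   for c1, c0 in zip(cross(A(t+h), B(t+h)),
                                     cross(A(t-h), B(t-h))))
rule_cross = tuple(x + y for x, y in zip(cross(A(t), dB(t)),
                                         cross(dA(t), B(t))))

print(abs(num_dot - rule_dot) < 1e-6)                                # True
print(all(abs(n - r) < 1e-6 for n, r in zip(num_cross, rule_cross))) # True
```

Reversing the order of the factors in the cross-product rule flips the sign of both terms, which is why the order must be maintained, as the text notes.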

Example 2-2

If A and B are vector functions, show that:

    (d/dt)(A • B) = A • dB/dt + dA/dt • B

Solution:

Writing the scalar product in component form, we have:

    (d/dt)(A • B) = (d/dt)(ax bx + ay by + az bz)

or

    (d/dt)(A • B) = ax dbx/dt + (dax/dt) bx + ay dby/dt + (day/dt) by + az dbz/dt + (daz/dt) bz

Reordering the terms, we have:

    (d/dt)(A • B) = ax dbx/dt + ay dby/dt + az dbz/dt + (dax/dt) bx + (day/dt) by + (daz/dt) bz

and so:

    (d/dt)(A • B) = A • dB/dt + dA/dt • B

Example 2-3

If A(t) is an arbitrary vector function, show that:

    A • dA/dt = |A| d|A|/dt        and        d|A|/dt = Â • dA/dt

Solution:

Since the scalar product is commutative, we can write:

    A • dA/dt = (1/2)[A • dA/dt + dA/dt • A] = (1/2) d(A • A)/dt = (1/2) d(|A|²)/dt = |A| d|A|/dt

We then have:

    d|A|/dt = (1/|A|)[A • dA/dt] = Â • dA/dt

Example 2-4

If |A(t)| is a constant so that only the direction of A(t) changes with time, what is the direction of dA/dt relative to A?

Solution:

From Example 2-3 we can write:

    d|A|/dt = 0 = Â • dA/dt

Therefore dA/dt must be perpendicular to A.
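The results of Examples 2-1 and 2-3 can be confirmed numerically for A(t) = sin t î + cos t ĵ + 2t k̂. A Python sketch (the evaluation point t = 0.3 is an arbitrary choice):

```python
import math

# Check Example 2-1 parts b and c, and the Example 2-3 identity
# A . dA/dt = |A| d|A|/dt, for A(t) = (sin t, cos t, 2t).
def A(t):  return (math.sin(t), math.cos(t), 2.0*t)
def dA(t): return (math.cos(t), -math.sin(t), 2.0)

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

t, h = 0.3, 1e-6
mag_of_deriv = norm(dA(t))                            # |dA/dt| = sqrt(5)
deriv_of_mag = (norm(A(t+h)) - norm(A(t-h))) / (2*h)  # d|A|/dt ~ 4t/sqrt(1+4t^2)

print(abs(mag_of_deriv - math.sqrt(5)) < 1e-12)                     # True
print(abs(deriv_of_mag - 4*t/math.sqrt(1 + 4*t*t)) < 1e-6)          # True
print(abs(dot(A(t), dA(t)) - norm(A(t)) * deriv_of_mag) < 1e-6)     # True
```

The first two checks show concretely that |dA/dt| and d|A|/dt differ, as noted after equation (2.1-10); the third reproduces the Example 2-3 identity.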

2.1.2  DERIVATIVE OF A POINT VECTOR IS A POINT VECTOR

Using a rectangular coordinate system xj, we will now show that the derivative of a point vector A(t) is a point vector. The vector A(t) can be written in component form as:

    A(t) = aj(t) êj                                                  (2.1-17)

and so equation (2.1-7) becomes:

    dA/dt = (daj/dt) êj                                              (2.1-18)

Using equation (1.15-5), we can transform the point vector components aj to a rectangular coordinate system x′i:

    a′i = (∂xj/∂x′i) aj                                              (2.1-19)

Therefore

    da′i/dt = (d/dt)[(∂xj/∂x′i) aj]                                  (2.1-20)

Since the coordinate directions are constant for rectangular coordinate systems (and not a function of any parameter t), we have:

    da′i/dt = (∂xj/∂x′i) daj/dt                                      (2.1-21)

and so the components daj/dt of a derivative of a point vector A(t) transform as covariant components of a point vector as given by equation (1.15-5). In a rectangular coordinate system, therefore, the derivative of a point vector is a point vector.

2.2  PARTIAL DIFFERENTIATION OF VECTORS

If the components of a vector function depend upon more than one variable, partial derivatives of the vector function can be computed by allowing only one of these variables to vary at a time, and by then following a procedure similar to that given in equation (2.1-7). Therefore if we have:

    A(u, v, w) = ax(u, v, w) î + ay(u, v, w) ĵ + az(u, v, w) k̂      (2.2-1)

where u, v, and w are scalars, then we can write:

    dA = (∂A/∂u) du + (∂A/∂v) dv + (∂A/∂w) dw                        (2.2-2)

where

    ∂A(u, v, w)/∂u = lim(Δu→0) [A(u + Δu, v, w) − A(u, v, w)]/Δu     (2.2-3)

    ∂A(u, v, w)/∂v = lim(Δv→0) [A(u, v + Δv, w) − A(u, v, w)]/Δv     (2.2-4)

! !

! ! ! ∂ A ( u, v, w ) A ( u, v, w + Δw ) − A ( u, v, w ) ! (2.2-5) = lim Δw → 0 ∂w Δw ! ! ! If A ( u, v, w ) , B ( u, v, w ) , and C ( u, v, w ) are vector functions

and f ( u, v, w ) is a scalar function, and if all these functions are

Similar partial derivative formulas can be written for differentiation with respect to v and w .

2.3!

continuously differentiable, then based upon equations (2.2-2)

POSITION VECTOR FUNCTION AND SPACE CURVES

! We will now consider a position vector function r ( t ) as

through (2.2-5) we have the following partial differentiation

!

formulas:

represented in a rectangular coordinate system: ! r ( t ) = x iˆ + y ˆj + z kˆ ! !

!

! ! d ! ! ∂A ∂B ( A ± B ) = ∂u ± ∂u ! ∂u

!

! ! d ∂A ∂ f ! ( f A ) = f ∂u + ∂u A ! ∂u

!

!

!

! !

(2.2-6)

(2.3-1)

where the x , y , and z components are given by the parametric (2.2-7)

! ! ! ∂B ∂A ! ∂ ! ! ( A • B ) = A • ∂u + ∂u • B ! ∂u

(2.2-8)

! ! ! ∂B ∂A ! ∂ ! ! ( A × B ) = A × ∂u + ∂u × B ! ∂u

!

(2.2-9)

! ! ! ! ! ! ⎡ ! ∂C ⎤ ! ⎡ ∂ B ! ⎤ ∂ A ! ! ∂ ! ⎡⎣ A × ( B × C ) ⎤⎦ = A × ⎢ B × + A × × C + × B ⎡ ⎢ ∂u ⎥ ∂u ⎣ × C ⎤⎦ ∂u ∂u ⎥⎦ ⎣ ⎣ ⎦

(2.2-11)

x = x (t ) !

y = y (t ) !

z = z (t ) !

(2.3-2)

We then have: !

   ∂  ⎡   ∂C ⎤ ⎡  ∂ B  ⎤ ⎡ ∂ A   ⎤ ⎡ A B C ⎤⎦ = ⎢ A B ⎥ + ⎢ A ∂u C ⎥ + ⎢ ∂u B C ⎥ ! (2.2-10) ∂u ⎣ ∂u ⎣ ⎦ ⎣ ⎦ ⎣ ⎦

!

equations:

!

! r ( t ) = x ( t ) iˆ + y ( t ) ˆj + z ( t ) kˆ !

(2.3-3)

! As t varies, the position vector function r ( t ) will define a

succession of position vectors that will correspond to a

succession of points Pi ( x, y, z ) , i = 1, 2,! . These points will describe a curve C in space known as a space curve (see Figure ! 2-1). Therefore the position vector function r ( t ) defines the space curve. We will assume that the position vector components x ( t ) , y ( t ) , and z ( t ) exist and have continuous derivatives in the neighborhood of the space curve C . 79
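The product rule (2.2-7) for a scalar function times a vector function can be checked numerically by varying one variable while holding the other fixed. The functions f and A below are assumptions chosen for illustration, not from the text:

```python
import math

# Illustrative scalar field f(u, v) and vector function A(u, v) (assumed).
def f(u, v):
    return u * u + math.sin(v)

def A(u, v):
    return (math.cos(u * v), u + v, math.exp(-u))

def fA(u, v):
    s = f(u, v)
    return tuple(s * a for a in A(u, v))

def d_du(g, u, v, h=1e-6):
    # Central difference in u with v held fixed: a partial derivative,
    # in the spirit of equation (2.2-3).
    gp, gm = g(u + h, v), g(u - h, v)
    return tuple((p - m) / (2 * h) for p, m in zip(gp, gm))

u, v = 0.4, 1.1
lhs = d_du(fA, u, v)                                   # d(f A)/du
dfdu = (f(u + 1e-6, v) - f(u - 1e-6, v)) / 2e-6        # df/du
rhs = tuple(f(u, v) * da + dfdu * a                    # f dA/du + (df/du) A
            for da, a in zip(d_du(A, u, v), A(u, v)))
err = max(abs(x - y) for x, y in zip(lhs, rhs))
```

The two sides of (2.2-7) agree to finite-difference accuracy; the same pattern verifies (2.2-8) and (2.2-9).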

Figure 2-1	A space curve C as defined by position vectors r(t) in a rectangular coordinate system. The separation of points on the curve C is greatly exaggerated in this figure for clarity.

As t changes by a very small amount Δt, the resulting very small change in the position vector function r(t) is given by Δr, where:

	Δr = r(t + Δt) − r(t)	(2.3-4)

The position vector then becomes r + Δr (see Figure 2-2).

The derivative dr/dt of the position vector function r(t) that defines the space curve C is given by:

	dr/dt = lim Δt→0 Δr/Δt = lim Δt→0 [r(t + Δt) − r(t)]/Δt	(2.3-5)

If, for example, t is time, then dr/dt is the velocity of a point particle. The path of the point particle defines the space curve C. As Δr approaches zero (when Δt → 0), the direction of Δr approaches that of the tangent to the space curve C (see Figure 2-2).

Figure 2-2	Change in position vector along a curve.

The derivative dr/dt is, therefore, the tangent vector to the curve C at the point P specified by r(t). From equations (2.3-3) and (2.3-5), we have:

	dr = dx î + dy ĵ + dz k̂	(2.3-6)

and

	dr/dt = (dx/dt) î + (dy/dt) ĵ + (dz/dt) k̂	(2.3-7)

where dr represents an infinitesimal displacement along the space curve C.

For the following discussions we will assume that the space curve C is smooth so that tangents to the curve exist. We will also assume that dr/dt ≠ 0. This will ensure that only one value of t corresponds to each point of the space curve. Therefore the space curve will not intersect itself (except at the end points if the curve is closed). In other words, only if t1 = t2 will we have:

	r(t1) = r(t2)	(2.3-8)

Such a curve is termed a simple curve or a Jordan curve.

2.4	UNIT TANGENT VECTOR

Since dr/dt is tangent to the space curve C, the unit tangent vector T̂ to the space curve C is given by:

	T̂ = (dr/dt)/|dr/dt| = (dr/dt)/(|dr|/dt) = dr/|dr|	(2.4-1)

The unit tangent vector T̂ is a point vector. This can be seen from equation (2.4-1) since we know that dr is a point vector (see Section 1.13). Using indicial notation together with equations (1.13-8) and (1.15-8), we can write the position vector differential dr relative to a rectangular coordinate system as:

	dr = dx¹ ê1 + dx² ê2 + dx³ ê3 = dx1 ê1 + dx2 ê2 + dx3 ê3 = dxⁱ êi = dxj êj	(2.4-2)

From equation (1.14-4) the differential displacement along the space curve C is given by:

	|dr| = √(dr • dr) = √[(dx1)² + (dx2)² + (dx3)²] = ds	(2.4-3)

We will now replace the parameter t in equation (2.3-3) with s(t), where s is arc length along the space curve C, and where s is a function of t. The space curve C is then defined by:

	r(s) = x(s) î + y(s) ĵ + z(s) k̂	(2.4-4)

The vector derivative of this vector function is:

	dr/ds = lim Δs→0 Δr/Δs = lim Δs→0 [r(s + Δs) − r(s)]/Δs	(2.4-5)

where ds is given by equation (2.4-3). When Δs → 0, |Δr| and Δs become equal in magnitude and so |dr| = ds as shown by equation (2.4-3). Therefore dr/ds is a unit vector. Using equations (2.4-1) and (2.4-3), we can write:

	T̂ = dr/|dr| = (dr/dt)/(ds/dt) = dr/ds	(2.4-6)

The direction of the tangent vector T̂ will always correspond to that of increasing arc length s. The derivative ds/dt represents the magnitude of the change in arc length resulting from an infinitesimal change in the parameter t. From equations (2.4-6) and (2.4-4), we have:

	T̂ = dr/ds = (dx/ds) î + (dy/ds) ĵ + (dz/ds) k̂	(2.4-7)

Note that although dr/ds and dr/dt are both tangent vectors to C, dr/ds is a unit vector while dr/dt may not be a unit vector.

Example 2-5

Determine the tangent vector and unit tangent vector to the space curve C defined by the parametric equations:

	x = t + 1	y = 4t²	z = 3t + 2

Solution:

The position vector r(t) is given by:

	r = (t + 1) î + 4t² ĵ + (3t + 2) k̂

The tangent vector to the curve C is:

	dr/dt = î + 8t ĵ + 3 k̂

The unit tangent vector T̂ to the curve C is:

	T̂ = (dr/dt)/|dr/dt| = [î + 8t ĵ + 3 k̂]/√(10 + 64t²)

2.5	UNIT PRINCIPAL NORMAL VECTOR

Since T̂ is a unit vector, we can write:

	T̂ • T̂ = 1	(2.5-1)

Differentiating with respect to arc length s, we have:

	d(T̂ • T̂)/ds = 0	(2.5-2)

or

	T̂ • dT̂/ds = 0	(2.5-3)
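The closed-form unit tangent of Example 2-5 can be confirmed numerically: a finite-difference Δr/Δt, normalized, should reproduce [î + 8t ĵ + 3 k̂]/√(10 + 64t²). A minimal sketch:

```python
import math

# Curve from Example 2-5: x = t + 1, y = 4 t^2, z = 3t + 2.
def r(t):
    return (t + 1.0, 4.0 * t * t, 3.0 * t + 2.0)

def unit(u):
    m = math.sqrt(sum(c * c for c in u))
    return tuple(c / m for c in u)

def tangent(t, h=1e-6):
    # Delta-r / Delta-t approximates dr/dt; normalizing gives T-hat.
    rp, rm = r(t + h), r(t - h)
    return unit(tuple((p - m) / (2 * h) for p, m in zip(rp, rm)))

t = 0.5
T_numeric = tangent(t)
# Closed form from the example: (1, 8t, 3) / sqrt(10 + 64 t^2)
denom = math.sqrt(10.0 + 64.0 * t * t)
T_exact = (1.0 / denom, 8.0 * t / denom, 3.0 / denom)
diff = max(abs(a - b) for a, b in zip(T_numeric, T_exact))
```

The numerical and analytic tangents agree to round-off, and since the curve is polynomial the central difference is essentially exact.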

Therefore dT̂/ds must be orthogonal to T̂. Since T̂ is tangent to the space curve C, dT̂/ds must be orthogonal to the space curve C. We will now define the vector K to be:

	K = dT̂/ds = d²r/ds²	(2.5-4)

where we have used equation (2.4-6). From equation (2.5-4) we see that K is a measure of how fast the unit tangent vector T̂ changes with distance along the space curve C. Since T̂ is a unit vector, however, only the direction of T̂ can change along the curve. For this reason K is known as the curvature vector of the space curve. Since K is the derivative of a point vector, it is a point vector. If C is a straight line, then T̂ remains collinear to C and so K = 0.

The vector dT̂/ds is orthogonal to the space curve C and so the curvature vector K = dT̂/ds is used to define a unit normal vector to the space curve C known as the unit principal normal vector N̂:

	N̂ = K/|K| = (dT̂/ds)/|dT̂/ds| = (1/κ) dT̂/ds = (1/κ) d²r/ds²	(2.5-5)

where N̂ is a unit vector having the same direction as the curvature vector K, and where κ is a scalar given by:

	κ = |K| = |dT̂/ds| = √(dT̂/ds • dT̂/ds) = |d²r/ds²| = √(d²r/ds² • d²r/ds²)	(2.5-6)

and so κ is the magnitude of the curvature vector. We then have κ ≥ 0, where κ is the curvature of the space curve.

From equation (2.5-5) we have:

	dT̂/ds = κ N̂ = K	(2.5-7)

and so N̂ is a point vector. Using equations (2.5-7) and (2.5-3), we obtain:

	T̂ • K = T̂ • κ N̂ = 0	(2.5-8)

or

	T̂ • N̂ = 0	(2.5-9)

The unit tangent vector T̂ is then orthogonal to the unit principal normal vector N̂ as expected.

Forming the scalar product of equation (2.5-7) with N̂, we have:

	κ N̂ • N̂ = κ = K • N̂ = N̂ • K = N̂ • dT̂/ds	(2.5-10)

Therefore κ is a scalar invariant of the space curve C. Differentiating equation (2.5-9), we have:

	T̂ • dN̂/ds + dT̂/ds • N̂ = 0	(2.5-11)

and so we can use equations (2.5-10) and (2.5-11) to express κ as:

	κ = N̂ • dT̂/ds = − T̂ • dN̂/ds	(2.5-12)

We can also use equation (2.5-7) to write:

	T̂ × dT̂/ds = T̂ × κ N̂ = κ T̂ × N̂	(2.5-13)

Since T̂ and dT̂/ds are orthogonal vectors and since T̂ is a unit vector, using equations (2.5-6) and (2.4-6) we have:

	κ = |dT̂/ds| = |T̂ × dT̂/ds| = |dr/ds × d²r/ds²|	(2.5-14)

2.6	CURVATURE OF A SPACE CURVE

We can use equation (2.5-6) to write:

	κ = |K| = |d²r/ds²| = |dT̂/ds| = lim Δs→0 |ΔT̂|/Δs	(2.6-1)

As we move a very small distance Δs along a space curve C, the very small change ΔT̂ that occurs in the unit tangent vector T̂ to the curve C must be a change in direction only since T̂ is a unit vector. Therefore the greater the change in the direction of T̂ over a given arc length s, the larger will be κ. This is expected since κ is the curvature of the space curve.

From equation (2.5-9) we see that the unit principal normal N̂ is perpendicular to the space curve. From equation (2.5-5) we have:

	N̂ = (1/κ) dT̂/ds = (1/κ) lim Δs→0 ΔT̂/Δs	(2.6-2)

and since κ ≥ 0, N̂ will have the same direction as dT̂/ds. Since Δs > 0, the unit principal normal N̂ will always point in the direction of curvature of the space curve C at any given point.

For κ ≠ 0 we can define:

	R = 1/κ	(2.6-3)

where R is called the radius of curvature of C at any given point P. If the normal to the curve C at a point P is extended inward a distance R from the curve (in the plane of the curve at the given point), the end point of this normal (radius) is known as the center of curvature for C at P (see Figure 2-3). The plane of the curve in which the unit vectors T̂ and N̂ are acting (that is, in which lines collinear to T̂ and N̂ lie) is known as the osculating plane (kissing plane).
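The limit definition κ = lim |ΔT̂|/Δs of equation (2.6-1) can be checked on a circle of radius a, for which the curvature should be 1/a by equation (2.6-3). A minimal numerical sketch:

```python
import math

a = 2.0  # circle radius; by equation (2.6-3) we expect curvature 1/a = 0.5

def r(t):
    return (a * math.cos(t), a * math.sin(t))

def T(t, h=1e-6):
    # Unit tangent from a central difference of r(t).
    dp = tuple((p - m) / (2 * h) for p, m in zip(r(t + h), r(t - h)))
    mag = math.hypot(dp[0], dp[1])
    return tuple(c / mag for c in dp)

t, dt = 0.3, 1e-4
dT = tuple(p - m for p, m in zip(T(t + dt), T(t - dt)))
ds = a * 2 * dt                          # arc length of a circle: ds = a dt
kappa = math.hypot(dT[0], dT[1]) / ds    # |Delta T-hat| / Delta s
```

The computed `kappa` matches 1/a to high accuracy, confirming that the turning rate of the unit tangent per unit arc length is the curvature.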

Figure 2-3	Osculating plane with the circle of curvature for the space curve C at the point P. The center of the circle of curvature is located at point O.

For κ = 0 we see from equation (2.6-3) that C will have an infinite radius of curvature, and so will be a straight line. We then have from equation (2.5-6):

	d²r/ds² = 0	(2.6-4)

which has a solution of:

	r = D s + E	(2.6-5)

where D and E are constant vectors (independent of position along the space curve). Equations (2.6-4) and (2.6-5) are vector equations for a straight line.

2.7	UNIT BINORMAL VECTOR

The unit binormal vector B̂ for the space curve C is defined by the equation:

	B̂ = T̂ × N̂	(2.7-1)

Since T̂ and N̂ are point vectors, B̂ is also a point vector (see Section 1.17.3). The fact that B̂ is a unit vector can be verified by computing its magnitude:

	|B̂| = |T̂ × N̂| = |T̂| |N̂| sin(π/2) = 1	(2.7-2)

Using equation (2.7-1) we can write:

	dB̂/ds = dT̂/ds × N̂ + T̂ × dN̂/ds = κ N̂ × N̂ + T̂ × dN̂/ds = T̂ × dN̂/ds	(2.7-3)

where we have used equation (2.5-7).

Since N̂ is a unit vector, N̂ • N̂ = 1. Taking the derivative with respect to arc length s, we obtain:

	d(N̂ • N̂)/ds = 0	(2.7-4)

and so:

	N̂ • dN̂/ds = 0	(2.7-5)

Therefore dN̂/ds is orthogonal to N̂, and so from equation (2.7-1) we see that dN̂/ds must be coplanar with B̂ and T̂. We then can write:

	dN̂/ds = λ T̂ + τ B̂	(2.7-6)

From equation (2.7-3) and using T̂ × T̂ = 0 we have:

	dB̂/ds = T̂ × (λ T̂ + τ B̂) = T̂ × τ B̂ = − τ B̂ × T̂	(2.7-7)

Since the unit point vectors B̂, T̂, and N̂ of a space curve C are orthogonal, from equation (2.7-1) we have N̂ = B̂ × T̂, and so equation (2.7-7) becomes:

	dB̂/ds = − τ N̂	(2.7-8)

where τ is called the torsion of the space curve C. For τ ≠ 0 we can define:

	σ = 1/τ	(2.7-9)

where σ is called the radius of torsion.

Forming the scalar product of equation (2.7-8) with N̂, we have:

	τ = − N̂ • dB̂/ds	(2.7-10)

Therefore the torsion τ is a scalar invariant of the space curve C. From equation (2.7-10) and the definition of B̂ in equation (2.7-1), we can write:

	τ = − N̂ • d(T̂ × N̂)/ds	(2.7-11)

or

	τ = − N̂ • dT̂/ds × N̂ − N̂ • T̂ × dN̂/ds	(2.7-12)

and so from equation (1.18-11):

	τ = − N̂ • T̂ × dN̂/ds	(2.7-13)

The torsion τ can be expressed in terms of derivatives of the position vector with respect to arc length s. From equations (2.5-5) and (2.6-3), we have:

	N̂ = R d²r/ds²	(2.7-14)

and so equation (2.7-13) can be written using equation (2.4-6):

	τ = − (R)² d²r/ds² • (dr/ds × d³r/ds³) = (R)² dr/ds • (d²r/ds² × d³r/ds³)	(2.7-15)

or using box notation:

	τ = (R)² [dr/ds  d²r/ds²  d³r/ds³]	(2.7-16)

With equations (2.6-3) and (2.5-6) we can then write:

	τ = (1/(κ)²) [dr/ds  d²r/ds²  d³r/ds³] = [dr/ds  d²r/ds²  d³r/ds³]/(d²r/ds² • d²r/ds²)	(2.7-17)

The unit binormal vector B̂ can also be expressed in terms of derivatives of the position vector with respect to arc length s:

	B̂ = T̂ × N̂ = dr/ds × R d²r/ds² = R dr/ds × d²r/ds²	(2.7-18)

Any space curve C that lies entirely in one plane is referred to as a plane curve. A space curve that is not a plane curve is termed a twisted curve. If C is a plane curve, then τ = 0. This can be seen by considering the vectors T̂ and N̂ that are acting in the plane of the curve (the osculating plane). The vector T̂ is tangent to the curve and the vector N̂ is proportional to the change dT̂/ds in T̂ along the curve (see Figure 2-3). From B̂ = T̂ × N̂, we see that the vector B̂ is always normal to the plane of the curve. For a curve that remains in the same plane, the direction of B̂ is constant (the magnitude of B̂ is also constant since B̂ is a unit vector). For a plane curve, we then have from equation (2.7-8):

	dB̂/ds = 0 = − τ N̂	(2.7-19)

Since |N̂| = 1 we must then have τ = 0 for a plane curve. We see then that the torsion τ provides a measure of the spatial rate of the curve C twisting out of the osculating plane.

Example 2-6

For the space curve defined by the parametric equations:

	x = 3t	y = 4 cos t	z = 4 sin t

Find:
a.	the unit tangent T̂
b.	the curvature κ
c.	the unit principal normal N̂
d.	the unit binormal B̂
e.	the torsion τ

Solution:

The position vector r is given by r = 3t î + 4 cos t ĵ + 4 sin t k̂.

a.	dr/dt = 3 î − 4 sin t ĵ + 4 cos t k̂

	ds/dt = |dr/dt| = √(9 + 16 sin²t + 16 cos²t) = 5

	T̂ = dr/ds = (dr/dt)/(ds/dt) = (3/5) î − (4/5) sin t ĵ + (4/5) cos t k̂

b.	dT̂/dt = −(4/5) cos t ĵ − (4/5) sin t k̂

	dT̂/ds = (dT̂/dt)(dt/ds) = −(4/25) cos t ĵ − (4/25) sin t k̂

	κ = |dT̂/ds| = √[(−4/25)² cos²t + (−4/25)² sin²t] = 4/25

c.	N̂ = (1/κ) dT̂/ds = − cos t ĵ − sin t k̂

d.	B̂ = T̂ × N̂ =
	| î	ĵ	k̂ |
	| 3/5	−(4/5) sin t	(4/5) cos t |
	| 0	− cos t	− sin t |
	= (4/5) î + (3/5) sin t ĵ − (3/5) cos t k̂

e.	dB̂/dt = (3/5) cos t ĵ + (3/5) sin t k̂

	dB̂/ds = (dB̂/dt)(dt/ds) = (3/25) cos t ĵ + (3/25) sin t k̂

	τ = − N̂ • dB̂/ds = (3/25) cos²t + (3/25) sin²t = 3/25

2.8	FRENET-SERRET FORMULAS

Since the unit point vectors B̂, T̂, and N̂ of a space curve C are orthogonal, they can be used as base vectors to define a rectangular coordinate system. In this case, the coordinate axes cannot have any arbitrary orientation but are restricted by the intrinsic physical properties of the space curve that define the point vectors B̂, T̂, and N̂. For this reason these particular vectors can be both base vectors of a rectangular coordinate system and point vectors.

The point vectors B̂, T̂, and N̂ will form a right-handed orthogonal triad as can be seen using equations (2.7-1) and (1.21-5):

	B̂ • B̂ = 1 = B̂ • T̂ × N̂ = B̂ × T̂ • N̂ > 0	(2.8-1)

From equations (2.7-1) and (1.17-6), we find that these base vectors also satisfy the equations:

	B̂ = T̂ × N̂	N̂ = B̂ × T̂	T̂ = N̂ × B̂	(2.8-2)

The rectangular coordinate system for which the point vectors B̂, T̂, and N̂ are the base vectors is referred to as the trihedral at a point P of the space curve C. The trihedral moves along the space curve as the parameter t or s varies, and so we have a moving coordinate system (see Figure 2-4).

Figure 2-4	The orthogonal triad of B̂, T̂, and N̂ for a space curve C at a point P. The curve parameter s increases along C from left to right.

The trihedral at a point P defines three planes: the osculating plane, the normal plane, and the rectifying plane. The osculating plane contains any two contiguous points of the space curve. Therefore the osculating plane contains two consecutive tangents to the space curve and is considered to be the plane in which the curve exists at each point. The vectors acting in and normal to the osculating, normal, and rectifying planes are given in Table 2-1.

Table 2-1	Planes defined by orthogonal triad vectors.

We will now derive an important relation among the vectors of the orthogonal triad by using:

	N̂ = B̂ × T̂	(2.8-3)

from equation (2.8-2) to obtain:

	dN̂/ds = dB̂/ds × T̂ + B̂ × dT̂/ds	(2.8-4)

With equations (2.7-8) and (2.5-7) we then have:

	dN̂/ds = − τ (N̂ × T̂) + B̂ × (κ N̂) = τ (T̂ × N̂) − κ (N̂ × B̂)	(2.8-5)

or, finally from equation (2.8-2):

	dN̂/ds = τ B̂ − κ T̂	(2.8-6)

Equations (2.5-7), (2.8-6), and (2.7-8) together are known as the Frenet-Serret formulas:

	dT̂/ds = κ N̂	dN̂/ds = τ B̂ − κ T̂	dB̂/ds = − τ N̂	(2.8-7)

These equations are very important in studies of the differential geometry of curves. They can be written in matrix form as:

	⎡ dT̂/ds ⎤   ⎡  0	κ	0 ⎤ ⎡ T̂ ⎤
	⎢ dN̂/ds ⎥ = ⎢ −κ	0	τ ⎥ ⎢ N̂ ⎥	(2.8-8)
	⎣ dB̂/ds ⎦   ⎣  0	−τ	0 ⎦ ⎣ B̂ ⎦

The Frenet-Serret formulas express derivatives of the orthogonal triad of the trihedral in terms of linear combinations of members of the orthogonal triad. Note that any vector of a vector field can be expressed in terms of this orthogonal triad since the triad can represent base vectors. The transformation matrix given in equation (2.8-8) is antisymmetric.

From the Frenet-Serret formulas, we see that the vectors dT̂/ds and dB̂/ds are both collinear with the unit normal vector N̂. The vector dN̂/ds is orthogonal to N̂ as can be seen from equation (2.7-5). Using equation (2.8-6) we can write:

	|dN̂/ds| = √(dN̂/ds • dN̂/ds) = √[(τ)² + (κ)²]	(2.8-9)

Defining cT as:

	cT = |dN̂/ds|	(2.8-10)

we have:

	(cT)² = (τ)² + (κ)²	(2.8-11)

This is Lancret’s Law and cT is called the total curvature of the space curve at a point P.

Since the curvature κ and the torsion τ of a space curve are scalars, they must express intrinsic (invariant) properties of the space curve. It can also be shown that any two curves having the same values of κ and τ are congruent. Both κ and τ can be expressed in terms of arc length:

	κ = κ(s)	τ = τ(s)	(2.8-12)

Equations (2.8-12) are then intrinsic equations of a space curve since they are independent of any coordinate system.
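The Frenet-Serret formula dN̂/ds = τ B̂ − κ T̂ and Lancret’s Law can be verified numerically on the curve of Example 2-6, where T̂, N̂, B̂, κ = 4/25, and τ = 3/25 are all known in closed form and ds/dt = 5:

```python
import math

# Curve from Example 2-6: x = 3t, y = 4 cos t, z = 4 sin t (ds/dt = 5).
def T(t):
    return (3 / 5, -(4 / 5) * math.sin(t), (4 / 5) * math.cos(t))

def N(t):
    return (0.0, -math.cos(t), -math.sin(t))

def B(t):
    return (4 / 5, (3 / 5) * math.sin(t), -(3 / 5) * math.cos(t))

kappa, tau = 4 / 25, 3 / 25

def d_ds(f, t, h=1e-6):
    # d/ds = (dt/ds) d/dt, with ds/dt = 5 for this curve.
    return tuple((p - m) / (2 * h * 5.0) for p, m in zip(f(t + h), f(t - h)))

t = 0.8
dN = d_ds(N, t)
# Frenet-Serret: dN/ds should equal tau * B - kappa * T.
expected = tuple(tau * b - kappa * tt for b, tt in zip(B(t), T(t)))
err = max(abs(x - y) for x, y in zip(dN, expected))
# Lancret's Law (2.8-11): total curvature cT = sqrt(tau^2 + kappa^2) = 5/25.
cT = math.sqrt(kappa ** 2 + tau ** 2)
```

The finite-difference dN̂/ds matches τ B̂ − κ T̂ componentwise, and cT = 1/5 for this curve.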

The Darboux vector ω is defined as:

	ω = τ T̂ + κ B̂	(2.8-13)

With the Darboux vector, we can write the Frenet-Serret formulas in symmetrical form:

	dT̂/ds = ω × T̂	dN̂/ds = ω × N̂	dB̂/ds = ω × B̂	(2.8-14)

2.9	POSITION VECTOR DERIVATIVES

A number of space curve variables can be expressed in terms of derivatives of the position vector r(t) with respect to time t. These derivatives describe the motion of a point particle along the space curve. For example, the curvature κ and the torsion τ of a space curve can be written as:

	κ = |dr/dt × d²r/dt²|/|dr/dt|³	(2.9-1)

	τ = (dr/dt × d²r/dt² • d³r/dt³)/|dr/dt × d²r/dt²|²	(2.9-2)

To show this, we will first introduce notation whereby differentiation with respect to time is represented by means of a dot. For example:

	dr/dt = ṙ	d²r/dt² = r̈	d³r/dt³ = r⃛	(2.9-3)

Example 2-7

Show that:

	d³r/ds³ = (dκ/ds) N̂ + κ τ B̂ − (κ)² T̂

Solution:

From equation (2.5-5) we have:

	d²r/ds² = κ N̂

Therefore:

	d³r/ds³ = (dκ/ds) N̂ + κ dN̂/ds

From equation (2.8-6) we then have:

	d³r/ds³ = (dκ/ds) N̂ + κ (τ B̂ − κ T̂) = (dκ/ds) N̂ + κ τ B̂ − (κ)² T̂
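The time-derivative formulas (2.9-1) and (2.9-2) for κ and τ can be spot-checked on the curve of Example 2-6, for which κ = 4/25 and τ = 3/25:

```python
import math

# Curve from Example 2-6: r(t) = (3t, 4 cos t, 4 sin t).
# Equations (2.9-1) and (2.9-2) should give kappa = 4/25 and tau = 3/25.
def rdot(t):
    return (3.0, -4 * math.sin(t), 4 * math.cos(t))

def rddot(t):
    return (0.0, -4 * math.cos(t), -4 * math.sin(t))

def rdddot(t):
    return (0.0, 4 * math.sin(t), -4 * math.cos(t))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def mag(u):
    return math.sqrt(dot(u, u))

t = 1.3
c = cross(rdot(t), rddot(t))
kappa = mag(c) / mag(rdot(t)) ** 3       # equation (2.9-1)
tau = dot(c, rdddot(t)) / mag(c) ** 2    # equation (2.9-2)
```

Both values are independent of t, as they must be for this curve of constant curvature and torsion.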

From equation (2.4-6) we have:

	ṙ = (dr/ds)(ds/dt) = ṡ T̂	(2.9-4)

Using equation (2.8-7):

	r̈ = s̈ T̂ + ṡ dT̂/dt = s̈ T̂ + (ṡ)² dT̂/ds = s̈ T̂ + (ṡ)² κ N̂	(2.9-5)

Differentiating once more:

	r⃛ = s⃛ T̂ + s̈ ṡ dT̂/ds + [2 ṡ s̈ κ + (ṡ)² κ̇] N̂ + (ṡ)³ κ dN̂/ds	(2.9-6)

or using equation (2.8-7):

	r⃛ = s⃛ T̂ + s̈ ṡ κ N̂ + [2 ṡ s̈ κ + (ṡ)² κ̇] N̂ + (ṡ)³ κ (τ B̂ − κ T̂)	(2.9-7)

Rearranging terms, we have:

	r⃛ = [s⃛ − (ṡ)³ (κ)²] T̂ + [3 s̈ ṡ κ + (ṡ)² κ̇] N̂ + (ṡ)³ κ τ B̂	(2.9-8)

From equations (2.9-4), (2.9-5), and (2.8-2), we obtain:

	ṙ × r̈ = (ṡ)³ κ B̂	(2.9-9)

and using equation (2.9-8):

	ṙ × r̈ • r⃛ = (ṡ)⁶ (κ)² τ	(2.9-10)

Since ṡ is positive (it is the magnitude of velocity), we have from equation (2.9-4):

	|ṙ| = ṡ |T̂| = ṡ	(2.9-11)

From equations (2.9-9) and (2.9-10) we obtain the relations:

	κ = |ṙ × r̈|/(ṡ)³ = |ṙ × r̈|/|ṙ|³	(2.9-12)

	τ = (ṙ × r̈ • r⃛)/[(ṡ)⁶ (κ)²] = [ṙ r̈ r⃛]/|ṙ × r̈|²	(2.9-13)

Equations (2.9-12) and (2.9-13) are the same as equations (2.9-1) and (2.9-2), respectively.

Example 2-8

Find the curvature κ and radius of curvature R of the circle given by the parametric equation:

	r(t) = 2 cos t î + 2 sin t ĵ

Solution:

	ṙ = dr/dt = − 2 sin t î + 2 cos t ĵ

	r̈ = d²r/dt² = − 2 cos t î − 2 sin t ĵ

and so using equation (2.9-12):

	κ = |ṙ × r̈|/|ṙ|³ = |(4 sin²t + 4 cos²t) k̂|/|− 2 sin t î + 2 cos t ĵ|³ = 4/(4 sin²t + 4 cos²t)^(3/2) = 4/8 = 1/2

	R = 1/κ = 2

The unit base vectors T̂, N̂, and B̂ of the trihedral can also be written in terms of time derivatives of the position vector r. From equations (2.9-4) and (2.9-11), we have:

	T̂ = ṙ/ṡ = ṙ/|ṙ|	(2.9-14)

Rewriting equation (2.9-9) using equation (2.9-12), we have:

	B̂ = (ṙ × r̈)/[(ṡ)³ κ] = (ṙ × r̈)/|ṙ × r̈|	(2.9-15)

Using N̂ = B̂ × T̂ we then have:

	N̂ = [(ṙ × r̈) × ṙ]/[|ṙ × r̈| |ṙ|]	(2.9-16)

or, since ṙ is orthogonal to ṙ × r̈, the sine of the angle between ṙ and ṙ × r̈ has a magnitude of one, and we can write:

	N̂ = [(ṙ × r̈) × ṙ]/|(ṙ × r̈) × ṙ|	(2.9-17)

2.10	ARC LENGTH

The distance s along a space curve C can be obtained by integrating the differential of arc length ds:

	s = ∫ ds = ∫ (ds/dt) dt = ∫ |dr/dt| dt	(2.10-1)

To calculate the arc length from t1 to t2, we then write:

	s = ∫ from t1 to t2 (ds/dt) dt = ∫ from t1 to t2 |dr/dt| dt = ∫ from t1 to t2 √(dr/dt • dr/dt) dt	(2.10-2)

Therefore we have:

	s = ∫ from t1 to t2 √[(dx/dt)² + (dy/dt)² + (dz/dt)²] dt	(2.10-3)

2.11	PARTICLE MOTION ALONG A SPACE CURVE

We will now consider the motion of a point particle. The mathematical study of particle motion without regard for the originating force of the motion is referred to as kinematics. A point particle is an abstraction of a field particle to a point as noted in Section 1.1. The kinematics of a point particle is the analysis of the motion of a point in space traveling along a path that defines a space curve C.
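Equation (2.10-3) can be illustrated on the curve of Example 2-6, where |dr/dt| = √(9 + 16) = 5 is constant, so the arc length from t1 to t2 is simply 5 (t2 − t1). A minimal numerical integration sketch:

```python
import math

# Arc length of x = 3t, y = 4 cos t, z = 4 sin t via equation (2.10-3).
# The integrand is constant: sqrt(9 + 16 sin^2 t + 16 cos^2 t) = 5.
def speed(t):
    return math.sqrt(9 + 16 * math.sin(t) ** 2 + 16 * math.cos(t) ** 2)

def arc_length(t1, t2, n=1000):
    # Simple trapezoidal integration of ds/dt over [t1, t2].
    h = (t2 - t1) / n
    total = 0.5 * (speed(t1) + speed(t2))
    for k in range(1, n):
        total += speed(t1 + k * h)
    return total * h

s = arc_length(0.0, 2.0)   # expect 5 * (2 - 0) = 10
```

For curves whose speed is not constant, the same trapezoidal sum approximates (2.10-3) with error decreasing as n grows.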

It is possible to express the velocity v and acceleration a of a point particle in terms of the variables discussed in previous sections. Let r be the position vector for the moving point particle, s be a measure of distance (arc length) along the space curve C traveled by the particle, and t be time. The space curve C is called the trajectory of the particle.

The velocity v of the point particle is a point vector given by:

	v = dr/dt = (dr/ds)(ds/dt) = (ds/dt) T̂ = ṡ T̂ = ṙ	(2.11-1)

and its magnitude v is given by:

	v = |v| = |dr/dt| = (ds/dt) |T̂| = ds/dt = ṡ	(2.11-2)

The scalar magnitude v is called the speed of the point particle. We can write the vector velocity v as:

	v = v T̂	(2.11-3)

The velocity v can then be represented by a vector tangent to the curve C at the point of the particle’s location at time t. The direction of the point vector v is that of the instantaneous velocity at time t and the magnitude of v is the speed v = ds/dt. From equation (2.11-1), we can also write:

	v = dr/dt = d(r r̂)/dt = (dr/dt) r̂ + r dr̂/dt	(2.11-4)

The acceleration a of the particle is given by:

	a = dv/dt = v̇ = r̈ = d(v T̂)/dt = (dv/dt) T̂ + v dT̂/dt = (dv/dt) T̂ + v (dT̂/ds)(ds/dt)	(2.11-5)

and so:

	a = (dv/dt) T̂ + (v)² dT̂/ds	(2.11-6)

From equation (2.5-7), we then have:

	a = (dv/dt) T̂ + (v)² κ N̂ = (d²s/dt²) T̂ + κ (ds/dt)² N̂ = s̈ T̂ + κ (ṡ)² N̂	(2.11-7)

Using equation (2.6-3), we can write equation (2.11-7) as:

	a = (dv/dt) T̂ + [(v)²/R] N̂ = aT T̂ + aN N̂	(2.11-8)

where aT and aN are the tangential and normal components, respectively, of the acceleration a. Note that the particle acceleration a has no binormal component and so acts only in the osculating plane (see Figure 2-4). Also note that aT = dv/dt is not the magnitude of a, but is the rate of change in the magnitude of v. Similarly, aN is the rate of change in direction of v (the centripetal acceleration). Only if the particle motion does not deviate from a straight line will we have a = aT.
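The decomposition a = aT T̂ + aN N̂ can be illustrated with uniform circular motion, where the speed is constant so aT = 0 and the entire acceleration is centripetal, aN = (v)²/R:

```python
import math

# Uniform circular motion r(t) = (R cos wt, R sin wt): speed v = R w is
# constant, so aT = dv/dt = 0 and aN = v^2 / R = R w^2 per (2.11-8), (2.11-9).
R, w = 2.0, 3.0

def accel(t):
    # Second time derivative of r(t).
    return (-R * w * w * math.cos(w * t), -R * w * w * math.sin(w * t))

t = 0.6
v = R * w                     # constant speed
aN_expected = v * v / R       # centripetal (normal) component
a_mag = math.hypot(accel(t)[0], accel(t)[1])
```

The magnitude of the full acceleration equals the normal component alone, confirming that aT vanishes for constant-speed motion.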

Using equations (2.11-8) and (2.6-3) we have:

	aT = dv/dt = d²s/dt²	aN = (v)²/R = κ (v)² = κ (ds/dt)²	(2.11-9)

and

	|a| = a = √[(aT)² + (aN)²]	(2.11-10)

If the trajectory of the particle is a circle, the radius of curvature R of the space curve C is simply the radius of the circle. From equations (2.11-3) and (2.11-8), we see that the velocity vector v for the moving particle is tangent to its space curve C, while the acceleration vector a has both a tangential and a normal component.

The unit tangent vector T̂ to the space curve of a moving point particle can be written using equations (2.11-1) and (2.11-3):

	T̂ = dr/ds = ṙ/ṡ = v/v	(2.11-11)

From equations (2.9-12), (2.9-13), (2.11-1), (2.11-2), and (2.11-5), we also have:

	κ = |v × a|/(v)³	(2.11-12)

	τ = (v × a • ȧ)/|v × a|²	(2.11-13)

2.12	GRADIENT

Much of the power of vector analysis is a direct result of operations that can be performed using a vector differential operator known as the gradient operator or Hamiltonian operator ∇. The operator symbol ∇ is called del or nabla. The name “del” comes from the capital Greek letter delta (Δ). The name “nabla” comes from an ancient Assyrian harp that has a form similar to ∇.

The gradient operator is defined in the rectangular coordinate system as:

	∇ = (∂/∂x) î + (∂/∂y) ĵ + (∂/∂z) k̂ = (∂/∂x1) ê1 + (∂/∂x2) ê2 + (∂/∂x3) ê3 = (∂/∂xi) êi	(2.12-1)

The gradient operator is a vector differential operator that operates on a scalar function to determine the rate of change of the function with respect to spatial coordinates. For example, let us consider a scalar field that can be described by a scalar function f(x, y, z), where f is defined and continuously differentiable with respect to position coordinates for every point P(x, y, z) of the field. Such a scalar field is known as a

smooth scalar field. The gradient of the scalar function f is

scalar field f at a given point P as calculated using two

then given by:

different coordinate systems xj and xi′ , then we will have ! ! ∇f = ∇ ′f . ! ! Since f is a scalar function and ∇f is a vector function,

!

! ⎡ ∂ ˆ ∂ ˆ ∂ ˆ⎤ ∇f = ⎢ i+ j+ k f! ∂y ∂z ⎥⎦ ⎣ ∂x

(2.12-2)

or ! ∂f ˆ ∂f ˆ ∂f ˆ ∂f ∇f = i+ j+ k= eˆi ! (2.12-3) ∂x ∂y ∂z ∂xi ! From equation (2.12-3) we see that ∇f is a function of f ( x, y, z ) ! at the point P ( x, y, z ) for which ∇f is being computed. ! ! The gradient operator ∇ has no defined direction or

!

magnitude. Nevertheless its components transform as do point ! vector components, and so the gradient operator ∇ is considered to be a point vector. We can see this by using the ! chain rule to transform the components of ∇ given in equation (2.12-1) from a rectangular coordinate system xj ∂ ∂ xi′

=

∂ ∂xj ∂xj ∂ xi′

=

∂xj



∂ xi′ ∂xj

!

a scalar field is termed a potential vector field, and the scalar field is termed the scalar potential of the corresponding vector field.

! The vector ∇ is an operator and so it requires an operand ! upon which to act. By convention, the ∇ operator only operates

!

on operands following it, and not on those preceding it. ! Application of the gradient operator ∇ must be restricted to ! situations that provide an operand for ∇ . Moreover products ! that include ∇ as a factor are not commutative.

to a

rectangular coordinate system xi′ : !

the operation of a gradient on a scalar field f produces a ! vector field ∇f . A vector field that is defined by the gradient of

(2.12-4)

2.12.1! PHYSICAL INTERPRETATION OF THE GRADIENT !

The derivatives of a scalar field f with respect to each of

the rectangular coordinates x , y , and z in equation (2.12-3)

which is the same as the transformation law for covariant

measure how rapidly the scalar field f changes with a change

vector components of a point vector given in equation (1.15-5).

in each coordinate. We know from differential calculus that, for

The gradient operator is therefore invariant under coordinate ! ! system transformation. If ∇f and ∇ ′f are the gradients of a

an infinitesimal change in position, the infinitesimal change in a scalar field f is given by: 96

!

df =

∂f ∂f ∂f ∂f dx + dy + dz = dx i ! ∂x ∂y ∂z ∂xi

(2.12-5)

We can rewrite this expression for the differential df as: !

⎡∂ f ˆ ∂ f ˆ ∂ f df = ⎢ i+ j+ ∂y ∂z ⎣ ∂x

⎤ kˆ ⎥ • ⎡⎣ dx iˆ + dy ˆj + dz kˆ ⎤⎦ ! (2.12-6) ⎦

Using equation (2.12-3), we then have: ! ! df = ∇f • d r ! ! (2.12-7) ! where d r is the position vector differential. The infinitesimal change df in f resulting from the change in position specified ! ! by d r is, therefore, given by the projection of ∇f in the ! direction of d r . We see then that the gradient of a scalar field is a point vector that describes how the scalar field changes with position at each point.

Points P ( x, y, z ) of the scalar field f that are determined ! by the position vector r : !

!

! r ( s ) = x ( s ) iˆ + y ( s ) ˆj + z ( s ) kˆ !

(2.12-8)

will define a space curve C . The parameter s measures arc distance along the space curve. The change in f resulting from an infinitesimal change in position s along the space curve C can then be written as:

    df/ds = (∂f/∂x)(dx/ds) + (∂f/∂y)(dy/ds) + (∂f/∂z)(dz/ds) = (∂f/∂xᵢ)(dxᵢ/ds)      (2.12-9)

or

    df/ds = [(∂f/∂x) î + (∂f/∂y) ĵ + (∂f/∂z) k̂] • [(dx/ds) î + (dy/ds) ĵ + (dz/ds) k̂]      (2.12-10)

The first factor in equation (2.12-10) is simply ∇f as given by equation (2.12-3). The second factor in equation (2.12-10) is a function of the direction of the space curve C. We can write equation (2.12-10) as:

    df/ds = ∇f • dr/ds = dr/ds • ∇f = (dr/ds • ∇) f               (2.12-11)

or using equation (2.4-7):

    df/ds = ∇f • T̂ = T̂ • ∇f = (T̂ • ∇) f                           (2.12-12)

where T̂ is the unit tangent vector to the space curve C. We note that equation (2.12-12) can also be written as:

    df/dr = ∇f • T̂ = (T̂ • ∇) f                                    (2.12-13)

From equation (2.12-12) we see that df/ds, the change in f due to an infinitesimal change in position s along the space curve C (in the direction of T̂), is equal to the projection of ∇f in the direction of the unit tangent vector T̂ to C. We can generalize equation (2.12-13) by replacing T̂ with a unit vector ĉ. The derivative of a scalar field f in the direction of a unit vector ĉ is then given by:

    df/dc = ∇f • ĉ                                                (2.12-14)

Therefore, the projection or component of the gradient of a scalar field in a certain direction (defined by a unit vector) equals the derivative of the scalar field in the same direction. For this reason ∇f • ĉ is known as the directional derivative of the scalar field f in the direction of ĉ. Since the directional derivative is defined in terms of a scalar product of point vectors, the directional derivative is a scalar. If we let ĉ be a unit normal vector n̂, equation (2.12-14) becomes:

    df/dn = ∇f • n̂                                                (2.12-15)

where dn is now an infinitesimal change in the direction of n̂, and df/dn is a directional derivative.

We can write equation (2.12-14) as:

    df/dc = |∇f| cos θ                                            (2.12-16)

where θ is the angle between the directions of ∇f and the unit vector ĉ. The directional dependence of the derivative df/dc is evident from equation (2.12-16). We also can see that the spatial rate of change of the field f will be a maximum when θ = 0. This occurs when ĉ has the same direction as ∇f. We then have:

    [df/dc]max = |∇f| = √[(∂f/∂x)² + (∂f/∂y)² + (∂f/∂z)²]         (2.12-17)

Therefore, the gradient vector ∇f points in the direction of the maximum spatial rate of increase of f, and this spatial rate is equal to the magnitude |∇f|. The gradient of a scalar field then results in a vector field where the magnitude of the vector ∇f at each point of the field represents the maximum rate of change of the scalar field at that point.

We can also conclude that not only the magnitude, but also the direction of the gradient of a scalar field must be independent of any coordinate system since they both represent physical properties of the scalar field. Any point where ∇f = 0 is called a stationary point of the field.

Example 2-9

Show that the directional derivative of f in the direction of a unit vector ĉ is given by equation (2.12-14) for the case ĉ = ĵ, where ĵ is a base vector for a rectangular coordinate system.

Solution:

    ∇f • ĉ = ∇f • ĵ = [(∂f/∂x) î + (∂f/∂y) ĵ + (∂f/∂z) k̂] • ĵ = ∂f/∂y

The unit vector ĵ is directed along the y-axis, and ∂f/∂y does indeed give the change in f along the y direction.
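The directional derivative lends itself to a quick numerical check. The following Python sketch is illustrative only (it is not part of the text, and the helper names grad and directional_derivative are ours); it verifies equation (2.12-14) with central differences, using the scalar field of Example 2-10 and the direction ĉ = ĵ of Example 2-9:

```python
import math

def grad(f, p, h=1e-6):
    """Central-difference estimate of the gradient of f at p = (x, y, z)."""
    g = []
    for i in range(3):
        dp, dm = list(p), list(p)
        dp[i] += h
        dm[i] -= h
        g.append((f(*dp) - f(*dm)) / (2 * h))
    return g

def directional_derivative(f, p, c, h=1e-6):
    """df/dc: rate of change of f at p along the unit vector c."""
    fwd = [pi + h * ci for pi, ci in zip(p, c)]
    bck = [pi - h * ci for pi, ci in zip(p, c)]
    return (f(*fwd) - f(*bck)) / (2 * h)

f = lambda x, y, z: 3 * x * y + y * z**2   # scalar field of Example 2-10
p = (3.0, 2.0, 1.0)
c = (0.0, 1.0, 0.0)                        # the base vector j, as in Example 2-9

g = grad(f, p)
proj = sum(gi * ci for gi, ci in zip(g, c))   # projection of grad f on c
dd = directional_derivative(f, p, c)
print(proj, dd)   # both ≈ ∂f/∂y = 3x + z² = 10 at (3, 2, 1)
```

Both numbers agree to finite-difference accuracy, as equation (2.12-14) requires.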

Example 2-10

Find the maximum rate of change of f(x, y, z) = 3xy + yz² at the point (3, 2, 1).

Solution:

    ∇f = (∂f/∂x) î + (∂f/∂y) ĵ + (∂f/∂z) k̂ = 3y î + (3x + z²) ĵ + 2yz k̂

The maximum rate of change of f(x, y, z) = 3xy + yz² is given by equation (2.12-17):

    |∇f| = √[(∂f/∂x)² + (∂f/∂y)² + (∂f/∂z)²]

    |∇f| = √[(9y²) + (9x² + 6xz² + z⁴) + (4y²z²)]

At the point (3, 2, 1) the maximum rate of change of f(x, y, z) = 3xy + yz² is then:

    |∇f| = √[(36) + (81 + 18 + 1) + (16)] = √152 ≈ 12.3

2.12.2  NORMAL TO A SURFACE

We will now consider a surface defined by the scalar function:

    f(s) = λ                                                      (2.12-18)

where λ is a constant. Such a surface having a constant value is called an isotimic surface. It is possible to show that ∇f is normal to such a surface. We have for a point P on the surface:

    df/ds = (∂f/∂x)(dx/ds) + (∂f/∂y)(dy/ds) + (∂f/∂z)(dz/ds) = 0      (2.12-19)

This equation states that the surface does not change in the immediate vicinity of the point P. Letting r(s) be the position vector function for points on the surface f(s) = λ, we have from equations (2.12-11), (2.12-12), and (2.12-19):

    df/ds = ∇f • dr/ds = ∇f • T̂ = 0                               (2.12-20)

For any given point P on the surface, the vector T̂ is tangent to the surface. From equation (2.12-20) we see that, for this same point P, ∇f must be orthogonal to T̂ and, therefore, normal to the surface (since T̂ ≠ 0 and ∇f ≠ 0). By computing the gradient to an isotimic surface then, the normal to the surface can be obtained. The unit normal n̂ to an isotimic surface at a point P is then given by:

    n̂ = ± ∇f/|∇f|                                                 (2.12-21)

The sign in equation (2.12-21) is chosen to give n̂ an outward direction if such a distinction is possible.
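The normal-to-a-surface construction can also be checked numerically. The sketch below is illustrative only (not from the text; grad is our helper): for the isotimic sphere f = x² + y² + z² = a², the unit normal of equation (2.12-21) should reduce to (x/a, y/a, z/a), the result derived analytically in Example 2-11:

```python
import math

def grad(f, p, h=1e-6):
    # Central-difference gradient estimate at p = (x, y, z)
    g = []
    for i in range(3):
        dp, dm = list(p), list(p)
        dp[i] += h
        dm[i] -= h
        g.append((f(*dp) - f(*dm)) / (2 * h))
    return g

f = lambda x, y, z: x * x + y * y + z * z   # isotimic surfaces f = a² are spheres
p = (1.0, 2.0, 2.0)                         # a point on the sphere of radius a = 3

g = grad(f, p)
mag = math.sqrt(sum(gi * gi for gi in g))
n = [gi / mag for gi in g]                  # unit normal by equation (2.12-21)
print(n)   # ≈ (x/a, y/a, z/a) = [1/3, 2/3, 2/3]
```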

Example 2-11

Find the unit normal n̂ to the sphere of constant radius a:

    x² + y² + z² = a²

Solution:

Let:

    f = x² + y² + z²

We have for this isotimic surface:

    ∇f = 2x î + 2y ĵ + 2z k̂

    n̂ = ∇f/|∇f| = (2x î + 2y ĵ + 2z k̂)/√(4x² + 4y² + 4z²) = (2x î + 2y ĵ + 2z k̂)/√(4a²)

or

    n̂ = (x/a) î + (y/a) ĵ + (z/a) k̂

2.12.3  GRADIENT OPERATIONS

The gradient operator ∇ is both a vector and a differential operator. If f and g are scalar functions and λ is a scalar constant, we can then write:

    ∇(λ f) = λ ∇f                                                 (2.12-22)

    ∇(f + g) = ∇f + ∇g                                            (2.12-23)

    ∇(f g) = f ∇g + g ∇f                                          (2.12-24)

    ∇[f/g] = (g ∇f − f ∇g)/g²                                     (2.12-25)

    ∇(f ⁿ) = n f ⁿ⁻¹ ∇f                                           (2.12-26)

Example 2-12

Show that:

    ∇(f g) = f ∇g + g ∇f

Solution:

    ∇(f g) = [∂(f g)/∂x] î + [∂(f g)/∂y] ĵ + [∂(f g)/∂z] k̂

    ∇(f g) = [f ∂g/∂x + g ∂f/∂x] î + [f ∂g/∂y + g ∂f/∂y] ĵ + [f ∂g/∂z + g ∂f/∂z] k̂

    ∇(f g) = f [(∂g/∂x) î + (∂g/∂y) ĵ + (∂g/∂z) k̂] + g [(∂f/∂x) î + (∂f/∂y) ĵ + (∂f/∂z) k̂]

and so:

    ∇(f g) = f ∇g + g ∇f
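The product rule of Example 2-12 can be confirmed numerically as well. The following sketch is illustrative only (not from the text; the field choices and the helper grad are ours), comparing a finite-difference ∇(fg) against f∇g + g∇f at one point:

```python
def grad(F, p, h=1e-5):
    # Central-difference gradient estimate at p = (x, y, z)
    g = []
    for i in range(3):
        dp, dm = list(p), list(p)
        dp[i] += h
        dm[i] -= h
        g.append((F(*dp) - F(*dm)) / (2 * h))
    return g

f = lambda x, y, z: x * y + z
g_ = lambda x, y, z: x + y * z
p = (1.0, 2.0, 3.0)

lhs = grad(lambda x, y, z: f(x, y, z) * g_(x, y, z), p)   # grad(f g)
rhs = [f(*p) * dg + g_(*p) * df                           # f grad g + g grad f
       for df, dg in zip(grad(f, p), grad(g_, p))]
print(lhs, rhs)   # both ≈ [19, 22, 17]
```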

2.12.4  SUBSTANTIVE DERIVATIVE

If we have a function f = f(x, y, z, t) where t is time, then in place of equation (2.12-5), we can write:

    df = (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz + (∂f/∂t) dt        (2.12-27)

or

    df/dt = ∂f/∂t + (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt) + (∂f/∂z)(dz/dt)      (2.12-28)

Equation (2.12-28) can be rewritten in the form:

    df/dt = ∂f/∂t + ∇f • dr/dt = ∂f/∂t + v • ∇f                   (2.12-29)

This derivative is known as the substantive derivative, total derivative, or material derivative Df/Dt:

    Df/Dt = ∂f/∂t + v • ∇f                                        (2.12-30)

The substantive derivative describes the variation that occurs both with a change of time and a change of position.

If a scalar field g(f) is a function of f, we can write ∇g as:

    ∇g = (∂g/∂f) ∇f                                               (2.12-31)

2.12.5  GRADIENT OPERATIONS FOR POSITION VECTORS

Important expressions can be derived for ∇r, ∇(rⁿ), and ∇f(r) where r is a position vector. From the definition of a position vector, we have:

    r = [x² + y² + z²]^(1/2)                                      (2.12-32)

and

    rⁿ = [x² + y² + z²]^(n/2)                                     (2.12-33)

Therefore the expression for ∇r is given by:

    ∇r = [(∂/∂x) î + (∂/∂y) ĵ + (∂/∂z) k̂] [x² + y² + z²]^(1/2)    (2.12-34)

or

    ∇r = (x î + y ĵ + z k̂)/[x² + y² + z²]^(1/2) = r/r = r̂         (2.12-35)

The expression for ∇(rⁿ) is given by:

    ∇(rⁿ) = [(∂/∂x) î + (∂/∂y) ĵ + (∂/∂z) k̂] [x² + y² + z²]^(n/2)      (2.12-36)

then

    ∇(rⁿ) = n [x î + y ĵ + z k̂] [x² + y² + z²]^(n/2 − 1)          (2.12-37)

or

    ∇(rⁿ) = n {(x î + y ĵ + z k̂)/[x² + y² + z²]^(1/2)} [x² + y² + z²]^((n−1)/2)      (2.12-38)

and so:

    ∇(rⁿ) = n rⁿ⁻¹ ∇r = n rⁿ⁻¹ r̂ = n rⁿ⁻² r                       (2.12-39)

For n = −1, we then have:

    ∇[1/r] = −r/r³ = −r̂/r²                                        (2.12-40)

The expression for ∇f(r) is given by:

    ∇f(r) = [(∂/∂x) î + (∂/∂y) ĵ + (∂/∂z) k̂] f(r) = [(∂r/∂x)(∂/∂r) î + (∂r/∂y)(∂/∂r) ĵ + (∂r/∂z)(∂/∂r) k̂] f(r)      (2.12-41)

or

    ∇f(r) = [df(r)/dr] ∇r = [df(r)/dr] (r/r) = [df(r)/dr] r̂       (2.12-42)
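The identity ∇(rⁿ) = n rⁿ⁻² r of equation (2.12-39) is easy to spot-check numerically. This Python sketch is illustrative only (not from the text; the helper grad and the choice n = 3 are ours):

```python
import math

def grad(F, p, h=1e-6):
    # Central-difference gradient estimate at p = (x, y, z)
    g = []
    for i in range(3):
        dp, dm = list(p), list(p)
        dp[i] += h
        dm[i] -= h
        g.append((F(*dp) - F(*dm)) / (2 * h))
    return g

n = 3
p = (1.0, 2.0, 2.0)
r = math.sqrt(sum(x * x for x in p))           # r = 3 at this point
numeric = grad(lambda x, y, z: math.sqrt(x*x + y*y + z*z) ** n, p)
analytic = [n * r ** (n - 2) * x for x in p]   # n r^(n-2) r, equation (2.12-39)
print(numeric, analytic)   # both ≈ [9, 18, 18]
```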

Example 2-13

If D is a constant vector and r is a position vector, show that:

    ∇(D • r) = D

Solution:

    ∇(D • r) = [∂(D • r)/∂x] î + [∂(D • r)/∂y] ĵ + [∂(D • r)/∂z] k̂

and so:

    ∇(D • r) = [D • ∂r/∂x] î + [D • ∂r/∂y] ĵ + [D • ∂r/∂z] k̂

or

    ∇(D • r) = [D • î] î + [D • ĵ] ĵ + [D • k̂] k̂

From equation (1.16-23), we then have:

    ∇(D • r) = D

Example 2-14

If c is a constant vector, r is a position vector, and e is the base of natural logarithms, show that:

    ∇e^(c • r) = c e^(c • r)

Solution:

    ∇e^(c • r) = [∂e^(c • r)/∂x] î + [∂e^(c • r)/∂y] ĵ + [∂e^(c • r)/∂z] k̂

or

    ∇e^(c • r) = e^(c • r) [∂(c • r)/∂x] î + e^(c • r) [∂(c • r)/∂y] ĵ + e^(c • r) [∂(c • r)/∂z] k̂

and so:

    ∇e^(c • r) = {[c • ∂r/∂x] î + [c • ∂r/∂y] ĵ + [c • ∂r/∂z] k̂} e^(c • r)

or

    ∇e^(c • r) = {[c • î] î + [c • ĵ] ĵ + [c • k̂] k̂} e^(c • r)

From equation (1.16-23), we then have:

    ∇e^(c • r) = c e^(c • r)

Example 2-15

If A and B are constant vectors and r is a position vector, show that:

    ∇(A • B × r) = A × B

Solution:

Writing the vectors in component form and using equations (1.18-5) and (1.18-8), we have:

                | a₁  a₂  a₃ |
    A • B × r = | b₁  b₂  b₃ |
                | x₁  x₂  x₃ |

and so:

                    | a₁  a₂  a₃ |
    ∇(A • B × r) = ∇| b₁  b₂  b₃ |
                    | x₁  x₂  x₃ |

Since the components of A and B are constants, the gradient will operate only on the components of r, and we obtain:

                   | a₁  a₂  a₃ |
    ∇(A • B × r) = | b₁  b₂  b₃ |
                   | î   ĵ   k̂  |

or

    ∇(A • B × r) = A × B
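The result of Example 2-15 can be checked numerically: since A • B × r is linear in r, its gradient is the constant vector A × B everywhere. The sketch below is illustrative only (not from the text; the helpers cross, dot, and grad are ours):

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def grad(F, p, h=1e-6):
    # Central-difference gradient estimate at p = (x, y, z)
    g = []
    for i in range(3):
        dp, dm = list(p), list(p)
        dp[i] += h
        dm[i] -= h
        g.append((F(*dp) - F(*dm)) / (2 * h))
    return g

A = [1.0, 2.0, 3.0]
B = [4.0, 5.0, 6.0]
triple = lambda x, y, z: dot(A, cross(B, [x, y, z]))   # A • (B × r)

numeric = grad(triple, (0.7, -0.3, 1.9))   # any point works: the gradient is constant
print(numeric, cross(A, B))                # both ≈ [-3, 6, -3]
```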

2.13  VECTOR LINE INTEGRALS

A vector line integral is the integral of a vector function along a curve in space. We will let A be a vector function that describes a vector field:

    A = ax î + ay ĵ + az k̂                                        (2.13-1)

A vector line integral of the vector function A along a curve from point P₁ to point P₂ is:

    ∫_P₁^P₂ A ds = ∫_P₁^P₂ [ax î + ay ĵ + az k̂] ds                (2.13-2)

where ds is an infinitesimal element of arc length. Since the base vectors î, ĵ, and k̂ are constant (having unit magnitude and constant direction), we can rewrite equation (2.13-2) as:

    ∫_P₁^P₂ A ds = î ∫_P₁^P₂ ax ds + ĵ ∫_P₁^P₂ ay ds + k̂ ∫_P₁^P₂ az ds      (2.13-3)

Vector line integrals generally depend upon the path chosen to go from a point P₁ to a point P₂. Only for a certain kind of vector field is a vector line integral independent of the path chosen, and so dependent upon only the end points P₁ and P₂ (see Section 3.12).

If we have two vector functions A and B such that:

    A = dB/ds                                                     (2.13-4)

then we can write:

    ∫ A ds = ∫ (dB/ds) ds = B + D                                 (2.13-5)

where D is a constant vector (not dependent on s). If λ is a scalar constant, we can write:

    ∫ λ A ds = λ ∫ A ds                                           (2.13-6)

We also have:

    ∫ D • A ds = D • ∫ A ds                                       (2.13-7)

    ∫ D × A ds = D × ∫ A ds                                       (2.13-8)

    ∫_P₁^P₂ A ds + ∫_P₂^P₃ A ds = ∫_P₁^P₃ A ds                    (2.13-9)

    ∫ [A + B] ds = ∫ A ds + ∫ B ds                                (2.13-10)

    ∫ A • dr = ∫ A • (dr/ds) ds = ∫ A • T̂ ds                      (2.13-11)

    ∫ A • dr = ∫ [ax (dx/ds) + ay (dy/ds) + az (dz/ds)] ds        (2.13-12)

    ∫ A × dr = î ∫ (ay dz − az dy) + ĵ ∫ (az dx − ax dz) + k̂ ∫ (ax dy − ay dx)      (2.13-13)
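A line integral of the form (2.13-11) can be evaluated numerically by summing A • (dr/dt) dt along a parametrized curve. The sketch below is illustrative only (not from the text): it takes A = ∇(xyz), so the integral is path-independent and must equal f(r(b)) − f(r(a)) for f = xyz, here π/8 along a helix:

```python
import math

def A(x, y, z):
    # A = grad(x y z), so the line integral is path-independent here
    return (y * z, x * z, x * y)

def r(t):
    # a helical path r(t) = (cos t, sin t, t)
    return (math.cos(t), math.sin(t), t)

def drdt(t, h=1e-6):
    # tangent vector dr/dt by central differences
    p, m = r(t + h), r(t - h)
    return tuple((a - b) / (2 * h) for a, b in zip(p, m))

a, b, N = 0.0, math.pi / 4, 20000
dt = (b - a) / N
total = 0.0
for k in range(N):   # midpoint rule for the line integral of A • dr, (2.13-11)
    t = a + (k + 0.5) * dt
    total += sum(ai * di for ai, di in zip(A(*r(t)), drdt(t))) * dt
print(total)   # ≈ pi/8 = f(r(b)) - f(r(a)) for f = x y z
```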

2.14  VECTOR SURFACE INTEGRALS

A surface S can be described by a position vector r(u, v) that is a function of two parameters u and v:

    r(u, v) = x(u, v) î + y(u, v) ĵ + z(u, v) k̂                   (2.14-1)

The parameters u and v serve as coordinates that specify all the points of the surface S. Holding one of these parameters constant while letting the other vary over its range produces a coordinate curve on the surface. We now have:

    dr = (∂r/∂u) du + (∂r/∂v) dv                                  (2.14-2)

Each very small area element ΔS of the surface S has a direction associated with it. For a closed surface, this direction is taken to be that of the outward pointing normal to ΔS. For an open surface, the direction of ΔS is arbitrary. Since a very small element of surface area then has both a direction and a magnitude, it can be represented by a vector ΔS. As ΔS → dS, we can represent the vector differential of surface area dS in the form of a parallelogram element:

    dS = [(∂r/∂u) du] × [(∂r/∂v) dv] = (∂r/∂u) × (∂r/∂v) du dv    (2.14-3)

where dS is normal to the surface since it is equal to the cross product of two vectors that are tangent to the surface.

If n̂ is a unit vector normal to the area element ΔS, the vector area element ΔS can be written as:

    ΔS = n̂ ΔS                                                     (2.14-4)

and so the vector differential of surface area dS can be written:

    dS = n̂ dS                                                     (2.14-5)

A surface S is considered to be smooth if the unit normal vector n̂(u, v) varies continuously across S. A surface S is considered to be piecewise smooth or sectionally smooth if it consists of a finite number of smooth surfaces joined along curves where n̂ changes discontinuously. We will consider only those surfaces that are either smooth or piecewise smooth. A surface integral over a piecewise smooth surface can be defined as the sum of integrals over the individual smooth sections of the surface. Note that the surface S will not, in general, be plane. In fact, if the unit vector n̂ varies across S, then S cannot be plane.

Any surface can be considered to consist of an infinite number of differential surface areas dS. Integrals over dS are vector surface integrals. Examples of vector surface integrals are the double integrals:

    ∫∫_S f dS = ∫∫_S f n̂ dS                                       (2.14-6)

    ∫∫_S A • dS = ∫∫_S A • n̂ dS                                   (2.14-7)

    ∫∫_S A × dS = ∫∫_S A × n̂ dS                                   (2.14-8)

where f is a scalar function and A is a vector function. Both functions must be defined for the set of points on the surface S. The integrand A • n̂ in equation (2.14-7) represents the normal component of A. This normal component is integrated over the surface S. Vector surface integrals generally depend upon the surface chosen to integrate over. Only for a certain kind of vector field is a vector surface integral independent of the surface chosen, and dependent only upon the boundary of the surface (see Section 3.7.1).

In parametric form, equations (2.14-6) through (2.14-8) become:

    ∫∫_S f dS = ∫∫_S f (∂r/∂u) × (∂r/∂v) du dv                    (2.14-9)

    ∫∫_S A • dS = ∫∫_S A • (∂r/∂u) × (∂r/∂v) du dv                (2.14-10)

    ∫∫_S A × dS = ∫∫_S A × [(∂r/∂u) × (∂r/∂v)] du dv              (2.14-11)

If a surface S is isotimic so that it can be described by a scalar function f that is equal to a scalar constant λ for a given time as in equation (2.12-18), then the normal to S is given by ∇f. In such cases, we have from equation (2.12-21):

    n̂ = ± ∇f/|∇f|                                                 (2.14-12)

where the sign depends on the direction of the outward normal.

As an example, if we have a surface S specified in explicit form by:

    x = g(y, z)                                                   (2.14-13)

which can be rewritten in implicit form as:

    f = x − g(y, z)                                               (2.14-14)

then we can write:

    ∇f = î − (∂g/∂y) ĵ − (∂g/∂z) k̂                                (2.14-15)

and so:

    n̂ = ± ∇f/|∇f| = ± [î − (∂g/∂y) ĵ − (∂g/∂z) k̂] / √[1 + (∂g/∂y)² + (∂g/∂z)²]      (2.14-16)

Similarly, if we have surfaces specified by:

    y = h(x, z)                                                   (2.14-17)

we obtain:

    n̂ = ± ∇f/|∇f| = ± [− (∂h/∂x) î + ĵ − (∂h/∂z) k̂] / √[(∂h/∂x)² + 1 + (∂h/∂z)²]      (2.14-18)

and if we have surfaces specified by:

    z = m(x, y)                                                   (2.14-19)

we obtain:

    n̂ = ± ∇f/|∇f| = ± [− (∂m/∂x) î − (∂m/∂y) ĵ + k̂] / √[(∂m/∂x)² + (∂m/∂y)² + 1]

2.15  VECTOR VOLUME INTEGRALS

The differential of volume dV in the rectangular coordinate system is given by:

    dV = dx dy dz                                                 (2.15-1)

The volume differential does not have a direction associated with it. Representation of the differential of volume dV in curvilinear coordinate systems is given in Section 8.7.2.

Vector volume integrals have the form of triple integrals:

    ∫∫∫_V A dV                                                    (2.15-2)

or

    ∫∫∫_V A dV = î ∫∫∫_V ax dV + ĵ ∫∫∫_V ay dV + k̂ ∫∫∫_V az dV    (2.15-3)

Chapter 3
Vector Field Methods

    ∫∫∫_V ∇ • B dV = ∫∫_S B • dS

Many ingenious mathematical tools have been developed for solving problems involving physical vector fields. We will present some of the more important of these tools in this chapter using the rectangular coordinate system. Included in our discussions will be gradient operator functions and vector integral theorems. We will review vector field classifications and vector potentials.

3.1  DIVERGENCE

We will now consider a vector field that can be described by a vector function B for a set of points:

    B = bx î + by ĵ + bz k̂                                        (3.1-1)

We will assume that B is single-valued and continuously differentiable for every point P(x, y, z) of the vector field. We can then use the definition of the gradient operator ∇ given in equation (2.12-1) to write the following expression in the rectangular coordinate system:

    ∇ • B = [(∂/∂x) î + (∂/∂y) ĵ + (∂/∂z) k̂] • [bx î + by ĵ + bz k̂]      (3.1-2)

or

    ∇ • B = ∂bx/∂x + ∂by/∂y + ∂bz/∂z                              (3.1-3)

Equation (3.1-3) is termed the divergence of B and can be calculated for every point P(x, y, z) of the vector field that is represented by the vector function B. Equation (3.1-3) can then be considered to be the divergence of that part of the vector field described by the vector function B. From equation (3.1-3) we can see that the divergence is a vector differential operation on a vector function that yields a scalar function. By calculating the divergence, therefore, we obtain a scalar field from a vector field. The divergence is often written using the notation:

    ∇ • B ≡ div B                                                 (3.1-4)

3.1.1  PHYSICAL INTERPRETATION OF THE DIVERGENCE

The physical significance of the divergence will now be explained using an integral that is equivalent to the definition of divergence given in equation (3.1-3). This equivalence will be shown in the next section. For the discussions to follow, it will be helpful to think in terms of the vector field that the vector function B describes. Therefore we will refer to the divergence of a vector field B rather than the divergence of a vector function B when considering ∇ • B.

The integral definition of the divergence of a vector field B at a point P(x, y, z) is:

    ∇ • B = lim_(ΔV→0) (1/ΔV) ∫∫_S B • n̂ dS                       (3.1-5)

where ΔV is a volume element of a very small region of the vector field B that contains the point P(x, y, z) and that is bounded by the closed surface S. The unit normal vector n̂ is directed outward from a differential surface area dS of the volume element ΔV. From equation (3.1-5) we see that the divergence is independent of the shape of the volume element ΔV that contains the point P(x, y, z). The volume element ΔV must, however, continue to contain the point P(x, y, z) as ΔV → 0. We then obtain the divergence of the vector field B at the point P(x, y, z). The divergence ∇ • B can vary point by point throughout the vector field B. The integral definition of the divergence of a vector field is independent of the choice of coordinate system.

The integrand B • n̂ in equation (3.1-5) is the component of the vector field B normal to the differential area dS. If we think of the field B as representing the momentum density of a fluid having a density ρ and flowing with a velocity υ, we can write:

    B = ρ υ                                                       (3.1-6)

The vector B is then a flux vector. We can write:

    B • n̂ = ρ υ • n̂                                               (3.1-7)

where B • n̂ equals the rate of flow of the fluid (quantity per unit time per unit area) through the area dS. The integral in equation (3.1-5) then represents the sum of the flow of the fluid per unit time through all the differential areas of the surface S that enclose the volume element ΔV. The righthand side of equation (3.1-5) is known as the volume flux of B through the surface S. The divergence then equals the net flux through a surface per unit volume (quantity of fluid flowing per unit time per unit volume) at the point for which it is calculated. If the divergence is positive, the fluid flow is diverging (expanding) away from a source of the vector field (hence the name divergence). If the divergence is negative, the fluid flow is converging into a sink of the vector field. The divergence of a vector field provides a measure of the strength of the source or sink of the field. If the divergence is zero, then either the inflow of fluid is exactly balanced by the outflow of fluid or else there is no fluid flow.
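The integral definition (3.1-5) can be imitated numerically: compute the net outward flux of B through a small cube about a point and divide by the cube's volume. The sketch below is illustrative only (not from the text; the field B and the helper div_box are ours), and it reproduces the analytic divergence 3x + y at (1, 2, 3):

```python
def B(x, y, z):
    # a sample vector field; analytically div B = 2x + x + y = 3x + y
    return (x * x, x * y, y * z)

def div_box(Bf, p, eps=1e-4):
    """Net outward flux of Bf through a small cube centered at p, divided by
    the cube's volume: the integral definition (3.1-5), with a one-point
    quadrature on each face. Only the face-normal component contributes."""
    x, y, z = p
    h = eps / 2.0
    area = eps * eps
    flux = (Bf(x + h, y, z)[0] - Bf(x - h, y, z)[0]) * area \
         + (Bf(x, y + h, z)[1] - Bf(x, y - h, z)[1]) * area \
         + (Bf(x, y, z + h)[2] - Bf(x, y, z - h)[2]) * area
    return flux / eps ** 3

print(div_box(B, (1.0, 2.0, 3.0)))   # ≈ 3x + y = 5
```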

3.1.2  EQUIVALENCE OF DEFINITIONS OF THE DIVERGENCE

To show that the integral definition of the divergence given in equation (3.1-5) is equivalent to the definition given in equation (3.1-3), we will examine the rate of outward flow of a vector field B from a volume element ΔV.

We will use the very small volume element ΔV shown in Figure 3-1 where:

    ΔV = Δx Δy Δz                                                 (3.1-8)

Figure 3-1  A very small volume element ΔV with the by ĵ component of the vector field B at the point P(x, y, z) shown.

If we examine the outward flow rate of B through the area elements Sy+ and Sy− of the very small volume element ΔV, we need consider only the component of B normal to these two areas. This is the ĵ component. Note that a component of B that is tangential to a surface area element does not pass through the area element. Since we will be letting ΔV → 0, we can use just the first two terms of a Taylor series expansion to approximate the component of B flowing through the Sy+ area element:

    [by + (∂by/∂y)(Δy/2)] ĵ                                       (3.1-9)

and through the Sy− area element:

    [by − (∂by/∂y)(Δy/2)] ĵ                                       (3.1-10)

The next term of the Taylor series expansions involves (Δy)² and so can be disregarded. The area elements Sy+ and Sy− have outward directions (normals) of +ĵ and −ĵ, respectively, and magnitudes of Δx Δz. Therefore the flow rate of the field B through these areas consists of the terms:

    ∫∫_(Sy+) B • n̂ dS = {[by + (∂by/∂y)(Δy/2)] ĵ} • {+ĵ} Δx Δz    (3.1-11)

    ∫∫_(Sy−) B • n̂ dS = {[by − (∂by/∂y)(Δy/2)] ĵ} • {−ĵ} Δx Δz    (3.1-12)

The total outward flow rate of B through ΔV parallel to the y-axis is then given by the sum:

    ∫∫_(Sy) B • n̂ dS = ∫∫_(Sy+) B • n̂ dS + ∫∫_(Sy−) B • n̂ dS      (3.1-13)

or

    ∫∫_(Sy) B • n̂ dS = (∂by/∂y) Δx Δy Δz = (∂by/∂y) ΔV            (3.1-14)

From a similar consideration of the outward rate of flow along the x and z directions, and by summing all contributions, we have finally the total outward flow through ΔV:

    ∫∫_S B • n̂ dS = [∂bx/∂x + ∂by/∂y + ∂bz/∂z] ΔV                 (3.1-15)

Letting ΔV → 0, we have the divergence of the vector field B at point P(x, y, z):

    lim_(ΔV→0) (1/ΔV) ∫∫_S B • n̂ dS = ∂bx/∂x + ∂by/∂y + ∂bz/∂z = ∇ • B      (3.1-16)

and so the expressions for the divergence given in equations (3.1-3) and (3.1-5) are equivalent. We note again that as the volume element ΔV approaches zero, it continues to contain the point P(x, y, z) at which ∇ • B is to be evaluated.

3.1.3  DIVERGENCE OPERATIONS

Following the rules of vector and differential operations, we can write the divergence operations:

    ∇ • (A + B) = ∇ • A + ∇ • B                                   (3.1-17)

    ∇ • (f A) = A • ∇f + f ∇ • A                                  (3.1-18)

where A and B are vector functions and f is a scalar function.

Example 3-1

Show that ∇ • (f A) = A • ∇f + f ∇ • A.

Solution:

    ∇ • (f A) = [(∂/∂x) î + (∂/∂y) ĵ + (∂/∂z) k̂] • [f ax î + f ay ĵ + f az k̂]

    ∇ • (f A) = ∂(f ax)/∂x + ∂(f ay)/∂y + ∂(f az)/∂z

    = ax ∂f/∂x + f ∂ax/∂x + ay ∂f/∂y + f ∂ay/∂y + az ∂f/∂z + f ∂az/∂z

    = [ax î + ay ĵ + az k̂] • [(∂f/∂x) î + (∂f/∂y) ĵ + (∂f/∂z) k̂]
      + f [(∂/∂x) î + (∂/∂y) ĵ + (∂/∂z) k̂] • [ax î + ay ĵ + az k̂]

and so:

    ∇ • (f A) = A • ∇f + f ∇ • A

The scalar product is not commutative if one of the vectors in the scalar product is ∇ since ∇ is an operator:

    ∇ • A ≠ A • ∇                                                 (3.1-19)

The left side of equation (3.1-19) is the divergence of A, while the right side is an operator. For example, if f is a scalar we have:

    (A • ∇) f = [ax î + ay ĵ + az k̂] • [(∂/∂x) î + (∂/∂y) ĵ + (∂/∂z) k̂] f
              = ax ∂f/∂x + ay ∂f/∂y + az ∂f/∂z                    (3.1-20)

but

    (∇ • A) f = {[(∂/∂x) î + (∂/∂y) ĵ + (∂/∂z) k̂] • [ax î + ay ĵ + az k̂]} f
              = [∂ax/∂x + ∂ay/∂y + ∂az/∂z] f                      (3.1-21)

and so:

    (A • ∇) f = A • ∇f ≠ (∇ • A) f                                (3.1-22)

If we have (A • ∇) B, then as in equation (3.1-19):

    (A • ∇) B ≠ (∇ • A) B                                         (3.1-23)

However, we can write:

    (A • ∇) B = [ax ∂/∂x + ay ∂/∂y + az ∂/∂z] B                   (3.1-24)

or

    (A • ∇) B = ax ∂B/∂x + ay ∂B/∂y + az ∂B/∂z                    (3.1-25)

Note that A • ∇ is a scalar function operating on the vector B.

If the vector field A is a function of a scalar field f so that A = A(f), then we have:

    ∇ • A(f) = dA/df • ∇f                                         (3.1-26)

as is shown in Example 3-2.

Example 3-2

If a vector field A is a function of a scalar field f, show that:

    ∇ • A(f) = dA/df • ∇f

Solution:

Using equation (1.16-23), we have:

    ∇ • A(f) = [(∂/∂x) î + (∂/∂y) ĵ + (∂/∂z) k̂] • [(A • î) î + (A • ĵ) ĵ + (A • k̂) k̂]

or

    ∇ • A(f) = ∂(A • î)/∂x + ∂(A • ĵ)/∂y + ∂(A • k̂)/∂z

    ∇ • A(f) = (dA/df)(∂f/∂x) • î + (dA/df)(∂f/∂y) • ĵ + (dA/df)(∂f/∂z) • k̂

    ∇ • A(f) = dA/df • [(∂f/∂x) î + (∂f/∂y) ĵ + (∂f/∂z) k̂]

and so:

    ∇ • A(f) = dA/df • ∇f

3.1.4  DIVERGENCE OF POSITION VECTORS

We can calculate the divergence of the position vectors r and r̂:

    ∇ • r = [(∂/∂x) î + (∂/∂y) ĵ + (∂/∂z) k̂] • [x î + y ĵ + z k̂]      (3.1-27)

or

    ∇ • r = ∂x/∂x + ∂y/∂y + ∂z/∂z = 3                             (3.1-28)

Using equations (3.1-18) and (3.1-28), we can write:

    ∇ • r = ∇ • (r r̂) = r̂ • ∇r + r ∇ • r̂ = 3                      (3.1-29)

From equation (2.12-14), we have:

    r̂ • ∇r = ∇r • r̂ = dr/dr = 1                                   (3.1-30)

and so equation (3.1-29) becomes:

    1 + r ∇ • r̂ = 3                                               (3.1-31)

or

    ∇ • r̂ = 2/r                                                   (3.1-32)

Using equation (2.12-7), we can also write:

    (dr • ∇) f = dr • ∇f = ∇f • dr = df                           (3.1-33)

Other useful identities are:

    (A • ∇) r = A                                                 (3.1-34)

    ∇ • (A × r) = 0                                               (3.1-35)

    ∇ • [rⁿ r] = (n + 3) rⁿ                                       (3.1-36)

    ∇ • [rⁿ A] = n rⁿ⁻² (r • A)                                   (3.1-37)

    ∇ • [f(r) r] = r df(r)/dr + 3 f(r)                            (3.1-38)

Example 3-3

If r is a position vector, compare the following expressions:

a.  A • ∇r
b.  (A • ∇) r

Solution:

a.  From equation (3.1-22) we have:

    (A • ∇) r = A • ∇r

Using equation (2.12-35), we then have:

    A • ∇r = A • r̂

b.  From equation (3.1-25) and the fact that rectangular coordinates are mutually independent, we can write:

    (A • ∇) r = ax ∂r/∂x + ay ∂r/∂y + az ∂r/∂z = ax (∂x/∂x) î + ay (∂y/∂y) ĵ + az (∂z/∂z) k̂ = ax î + ay ĵ + az k̂

and so:

    (A • ∇) r = A

Comparing the expressions A • ∇r and (A • ∇) r, we see that we have a scalar A • ∇r = A • r̂ and a vector (A • ∇) r = A.

Example 3-4

If r is a position vector, show that:

    ∇ • [f(r) r] = r df(r)/dr + 3 f(r)

Solution:

Using equation (3.1-18), we have:

    ∇ • [f(r) r] = r • ∇f(r) + f(r) ∇ • r

Using equations (2.12-31) and (3.1-29), we have:

    ∇ • [f(r) r] = r • (df(r)/dr) ∇r + 3 f(r)

Using equation (2.12-35), we have:

    ∇ • [f(r) r] = (df(r)/dr) r • r̂ + 3 f(r)

and so:

    ∇ • [f(r) r] = r df(r)/dr + 3 f(r)
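The position-vector identities above can be spot-checked with a finite-difference divergence. The sketch below is illustrative only (not from the text; the helpers div, rhat, and r2_r are ours): it confirms ∇ • r̂ = 2/r of equation (3.1-32) and ∇ • [rⁿ r] = (n + 3) rⁿ of equation (3.1-36) for n = 2:

```python
import math

def div(F, p, h=1e-6):
    # Central-difference divergence: sum over i of dF_i/dx_i at p
    total = 0.0
    for i in range(3):
        dp, dm = list(p), list(p)
        dp[i] += h
        dm[i] -= h
        total += (F(*dp)[i] - F(*dm)[i]) / (2 * h)
    return total

def rhat(x, y, z):
    rr = math.sqrt(x * x + y * y + z * z)
    return (x / rr, y / rr, z / rr)

def r2_r(x, y, z):
    rr2 = x * x + y * y + z * z      # r^n r with n = 2
    return (rr2 * x, rr2 * y, rr2 * z)

p = (1.0, 2.0, 2.0)                  # a point with r = 3
print(div(rhat, p))   # ≈ 2/r = 2/3, equation (3.1-32)
print(div(r2_r, p))   # ≈ (n + 3) r^n = 5 * 9 = 45, equation (3.1-36)
```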

Example 3-5

If r is a position vector, show that:

    ∇ • [rⁿ r] = (n + 3) rⁿ

Solution:

    ∇ • [rⁿ r] = ∇ • {[x² + y² + z²]^(n/2) [x î + y ĵ + z k̂]}

and so, using equation (3.1-29):

    ∇ • [rⁿ r] = (n/2) 2x [x² + y² + z²]^(n/2 − 1) x
               + (n/2) 2y [x² + y² + z²]^(n/2 − 1) y
               + (n/2) 2z [x² + y² + z²]^(n/2 − 1) z
               + 3 [x² + y² + z²]^(n/2)

or

    ∇ • [rⁿ r] = n rⁿ⁻² r² + 3 rⁿ = n rⁿ + 3 rⁿ

and we have:

    ∇ • [rⁿ r] = (n + 3) rⁿ

3.2  CURL

We will again consider a set of points of a vector field that can be described by a vector function B. If B is defined, single-valued, and continuously differentiable for every point P(x, y, z) of the set, then we can calculate the vector product:

    ∇ × B = [∂bz/∂y − ∂by/∂z] î + [∂bx/∂z − ∂bz/∂x] ĵ + [∂by/∂x − ∂bx/∂y] k̂      (3.2-1)

This vector product can be symbolically represented as:

            | î      ĵ      k̂     |
    ∇ × B = | ∂/∂x   ∂/∂y   ∂/∂z  |                               (3.2-2)
            | bx     by     bz    |

where the operators in the second row of the determinant are understood to operate on terms in the third row.

Equation (3.2-1) is termed the curl of B and can be calculated for every point P(x, y, z) of the vector field that is represented by the vector function B. Therefore equation (3.2-1) can be considered to be the curl of that part of the vector field described by the vector function B. Common notations for the curl of B are: ∇ × B, curl B, and rot B.
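The component formula (3.2-1) translates directly into a finite-difference check. The sketch below is illustrative only (not from the text; the helper curl and the sample field are ours):

```python
def curl(F, p, h=1e-6):
    """Central-difference curl of F at p, following equation (3.2-1)."""
    def d(i, j):   # partial of component F_i with respect to coordinate x_j
        dp, dm = list(p), list(p)
        dp[j] += h
        dm[j] -= h
        return (F(*dp)[i] - F(*dm)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

B = lambda x, y, z: (y, z * z, x * y)
c = curl(B, (1.0, 2.0, 3.0))
print(c)   # analytic curl: (x - 2z, -y, -1) = (-5, -2, -1) at (1, 2, 3)
```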

In indicial notation ∇ × B can be written for a rectangular coordinate system as:

    ∇ × B = eᵢ (∂/∂xᵢ) × bⱼ eⱼ = (∂bⱼ/∂xᵢ) eᵢ × eⱼ = εᵢⱼₖ (∂bⱼ/∂xᵢ) eₖ      (3.2-3)

where we have used equation (1.17-40).

3.2.1  PHYSICAL INTERPRETATION OF THE CURL

The physical significance of the curl will now be explained using an integral that is equivalent to the definition of the curl given in equation (3.2-1). This equivalence will be shown in the next section.

The integral definition of the curl of a vector field B at a point P(x, y, z) is:

    n̂ • (∇ × B) = lim_(ΔS→0) (1/ΔS) ∮_C B • dr                    (3.2-4)

where ΔS is a very small element of area that contains the point P(x, y, z) and that is bounded by the simple closed curve C. The unit normal vector n̂ to the area element ΔS is in the direction prescribed by the righthand rule where the fingers indicate the sense of circulation along C. The normal component of ∇ × B is equal to the limit as ΔS → 0 of the contour integral in equation (3.2-4). From this equation we see that the curl is independent of the shape of the area element ΔS. The only requirement is that as ΔS → 0, it must continue to contain the point P(x, y, z) at which the vector product ∇ × B is to be evaluated. The integral definition of the curl of a vector field is independent of the choice of coordinate system.

In equation (3.2-4) B • dr is the projection of the vector field B onto dr, which is tangent to the curve C at a point P(x, y, z). If we think of the field B as representing the momentum density of a fluid having a density ρ and flowing with a velocity υ, we can write B = ρ υ, where the vector B is then a flux vector. We can then consider B • dr to equal the rate of flow of a fluid along the curve C (quantity per unit time per unit length). For this reason the integral:

    Γ = ∮_C B • dr                                                (3.2-5)

is considered to be the circulation of the fluid around the closed curve C. The integral definition of the curl given in equation (3.2-4) can then be understood to represent the circulation per unit area of the vector field B along the closed curve C bounding ΔS as ΔS → 0. Consequently, curl B is a measure of the rotation or vorticity of the vector field B about an axis perpendicular to an infinitesimally small area enclosed by C that contains the point P(x, y, z) at which the curl is being calculated. The curl of a vector field is a measure of the rotation of the field in the immediate vicinity of the point for which the curl is calculated. The direction of rotation of the field determines the sign of the curl: by the righthand rule, circulation that is counterclockwise as viewed from the tip of n̂ gives a positive normal component, and clockwise circulation gives a negative one. The curl calculated at a given point of a vector field does not provide information concerning the rotation of the field as a whole.

counterclockwise is negative). The curl calculated at a given point of a vector field does not provide information concerning the rotation of the field as a whole.
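The circulation-per-unit-area reading of (3.2-4) can be illustrated numerically. A minimal sketch, assuming the hypothetical rigid-rotation field B = (−y, x, 0), whose curl is (0, 0, 2): the line integral around a small circle, divided by the enclosed area, should approach 2.

```python
# Discretized circulation integral (3.2-5) around a small circle in the
# xy-plane, then divided by the enclosed area as in (3.2-4).
import math

def circulation(B, center, r, n=4000):
    """Approximate the line integral of B around a circle of radius r."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        x = center[0] + r * math.cos(t)
        y = center[1] + r * math.sin(t)
        # dr is tangent to the circle (parametric step)
        dx = -r * math.sin(t) * (2 * math.pi / n)
        dy = r * math.cos(t) * (2 * math.pi / n)
        bx, by, _ = B((x, y, 0.0))
        total += bx * dx + by * dy
    return total

B = lambda p: (-p[1], p[0], 0.0)          # rigid rotation about z
r = 0.01
gamma = circulation(B, (0.5, 0.5), r)
per_area = gamma / (math.pi * r * r)      # should approach n̂·(∇×B) = 2
```

Shrinking r further leaves per_area essentially unchanged, consistent with the limit in (3.2-4) being independent of the shape and size of ΔS.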

3.2.2  EQUIVALENCE OF DEFINITIONS OF THE CURL

To show that the integral definition of curl given in equation (3.2-4) is equivalent to the definition given in equation (3.2-1), we will now consider only the î component of the curl. We will do this by using an area element ΔS_yz = Δy Δz having a normal in the î direction and bounded by a curve c_yz (see Figure 3-2). Since we will be letting ΔS → 0, the line integral of the scalar product B • dr for each side of the rectangle can then be represented using only the first two terms of a Taylor series expansion. The very small change Δr in the position vector when going from point P(x, y, z) to a point on the curve c_yz is given by:

$$\Delta\mathbf{r} = \Delta x\,\hat{i} + \Delta y\,\hat{j} + \Delta z\,\hat{k} \qquad (3.2\text{-}6)$$

Figure 3-2  Small rectangular area ΔS_yz = Δy Δz bounded by the curve c_yz. The vector field B is specified at the point P(x, y, z). The curve c_yz is composed of four segments: c_yz = c₁ + c₂ + c₃ + c₄.

As ΔS → 0, Δr becomes the differential dr. For the integral given in equation (3.2-4), the very small change in the vector field B from the point P(x, y, z) to the curve c_yz along the sides of ΔS is used in the calculations:

$$\int_{c_1} \mathbf{B}\cdot d\mathbf{r} = \left[b_y - \frac{\partial b_y}{\partial z}\frac{\Delta z}{2}\right]\hat{j}\cdot\left[+(\Delta y)\,\hat{j}\right] = \left[b_y - \frac{\partial b_y}{\partial z}\frac{\Delta z}{2}\right](\Delta y) \qquad (3.2\text{-}7)$$

$$\int_{c_2} \mathbf{B}\cdot d\mathbf{r} = \left[b_z + \frac{\partial b_z}{\partial y}\frac{\Delta y}{2}\right]\hat{k}\cdot\left[+(\Delta z)\,\hat{k}\right] = \left[b_z + \frac{\partial b_z}{\partial y}\frac{\Delta y}{2}\right](\Delta z) \qquad (3.2\text{-}8)$$

$$\int_{c_3} \mathbf{B}\cdot d\mathbf{r} = \left[b_y + \frac{\partial b_y}{\partial z}\frac{\Delta z}{2}\right]\hat{j}\cdot\left[-(\Delta y)\,\hat{j}\right] = \left[b_y + \frac{\partial b_y}{\partial z}\frac{\Delta z}{2}\right](-\Delta y) \qquad (3.2\text{-}9)$$

$$\int_{c_4} \mathbf{B}\cdot d\mathbf{r} = \left[b_z - \frac{\partial b_z}{\partial y}\frac{\Delta y}{2}\right]\hat{k}\cdot\left[-(\Delta z)\,\hat{k}\right] = \left[b_z - \frac{\partial b_z}{\partial y}\frac{\Delta y}{2}\right](-\Delta z) \qquad (3.2\text{-}10)$$

Summing the terms from each side of the rectangle, we have:

$$\oint_{c_{yz}} \mathbf{B}\cdot d\mathbf{r} = \left[\frac{\partial b_z}{\partial y} - \frac{\partial b_y}{\partial z}\right]\Delta y\,\Delta z = \left[\frac{\partial b_z}{\partial y} - \frac{\partial b_y}{\partial z}\right]\Delta S_{yz} \qquad (3.2\text{-}11)$$

and letting ΔS → 0, we have ΔS_yz = Δy Δz → 0, and so for the î component:

$$\frac{\partial b_z}{\partial y} - \frac{\partial b_y}{\partial z} = \lim_{\Delta S\to 0}\frac{1}{\Delta S_{yz}}\oint_{c_{yz}} \mathbf{B}\cdot d\mathbf{r} \qquad (3.2\text{-}12)$$

From equation (3.2-12), we see that for the î component of curl B the expressions given in equations (3.2-1) and (3.2-4) are equivalent. Equivalence can also be shown for the ĵ and k̂ components by a similar procedure.

From equations (3.2-12), (3.2-4), and (3.2-1), we see that |curl B| corresponds to an orientation of the area element ΔS such that the scalar product n̂ • (∇ × B) is a maximum:

$$\left[\hat{n}\cdot\left(\nabla\times\mathbf{B}\right)\right]_{\max} = \left|\hat{n}\right|\left|\nabla\times\mathbf{B}\right|\cos\theta\,\Big|_{\max} = \left|\nabla\times\mathbf{B}\right| \qquad (3.2\text{-}13)$$

Therefore the cosine of the angle θ between n̂ and ∇ × B must be equal to one. The curl of a vector field at a point P(x, y, z) can then be obtained by orienting an area element ΔS at P(x, y, z) such that, in equation (3.2-4), the line integral along C around ΔS is a maximum.

3.2.3  CURL OPERATIONS

Following the rules of vector and differential operations, we can write:

$$\nabla\times(\mathbf{A}+\mathbf{B}) = \nabla\times\mathbf{A} + \nabla\times\mathbf{B} \qquad (3.2\text{-}14)$$

where A and B are vector functions.

The vector product is not commutative (or anticommutative) if one of the vectors in the product is the operator ∇:

$$\nabla\times\mathbf{A} \neq -\,\mathbf{A}\times\nabla \qquad (3.2\text{-}15)$$

The left side of equation (3.2-15) is the curl of A, while the right side is an operator. For example, if f is a scalar function we have for the right side:

$$\left(\mathbf{A}\times\nabla\right)f = \left\{\left[a_y\frac{\partial}{\partial z} - a_z\frac{\partial}{\partial y}\right]\hat{i} - \left[a_x\frac{\partial}{\partial z} - a_z\frac{\partial}{\partial x}\right]\hat{j} + \left[a_x\frac{\partial}{\partial y} - a_y\frac{\partial}{\partial x}\right]\hat{k}\right\} f \qquad (3.2\text{-}16)$$

or

$$\left(\mathbf{A}\times\nabla\right)f = \left[a_y\frac{\partial f}{\partial z} - a_z\frac{\partial f}{\partial y}\right]\hat{i} - \left[a_x\frac{\partial f}{\partial z} - a_z\frac{\partial f}{\partial x}\right]\hat{j} + \left[a_x\frac{\partial f}{\partial y} - a_y\frac{\partial f}{\partial x}\right]\hat{k} \qquad (3.2\text{-}17)$$

and so:

$$\left(\mathbf{A}\times\nabla\right)f = \mathbf{A}\times\nabla f \qquad (3.2\text{-}18)$$

If f is a scalar function, we have:

$$\nabla\times\left(f\mathbf{A}\right) = \nabla f\times\mathbf{A} + f\,\nabla\times\mathbf{A} \qquad (3.2\text{-}19)$$

as shown in Example 3-6. If the vector field A is a function of a scalar field f so that A = A(f), we have:

$$\nabla\times\mathbf{A}(f) = \nabla f\times\frac{d\mathbf{A}}{df} \qquad (3.2\text{-}20)$$

as shown in Example 3-7.

Example 3-6

Show that:

$$\nabla\times\left(f\mathbf{A}\right) = \nabla f\times\mathbf{A} + f\,\nabla\times\mathbf{A}$$

Solution:

$$\nabla\times\left(f\mathbf{A}\right) = \begin{vmatrix}\hat{i} & \hat{j} & \hat{k}\\[4pt] \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\[4pt] f a_x & f a_y & f a_z\end{vmatrix} = \left[\frac{\partial (f a_z)}{\partial y} - \frac{\partial (f a_y)}{\partial z}\right]\hat{i} - \left[\frac{\partial (f a_z)}{\partial x} - \frac{\partial (f a_x)}{\partial z}\right]\hat{j} + \left[\frac{\partial (f a_y)}{\partial x} - \frac{\partial (f a_x)}{\partial y}\right]\hat{k}$$

Expanding the derivatives of the products:

$$\nabla\times\left(f\mathbf{A}\right) = \left[a_z\frac{\partial f}{\partial y} - a_y\frac{\partial f}{\partial z}\right]\hat{i} - \left[a_z\frac{\partial f}{\partial x} - a_x\frac{\partial f}{\partial z}\right]\hat{j} + \left[a_y\frac{\partial f}{\partial x} - a_x\frac{\partial f}{\partial y}\right]\hat{k} + f\left[\frac{\partial a_z}{\partial y} - \frac{\partial a_y}{\partial z}\right]\hat{i} - f\left[\frac{\partial a_z}{\partial x} - \frac{\partial a_x}{\partial z}\right]\hat{j} + f\left[\frac{\partial a_y}{\partial x} - \frac{\partial a_x}{\partial y}\right]\hat{k}$$

and so:

$$\nabla\times\left(f\mathbf{A}\right) = \nabla f\times\mathbf{A} + f\,\nabla\times\mathbf{A}$$

Example 3-7

If the vector field A is a function of a scalar field f, show that:

$$\nabla\times\mathbf{A}(f) = \nabla f\times\frac{d\mathbf{A}}{df}$$

Solution:

Using equation (1.16-23), we have:

$$\nabla\times\mathbf{A}(f) = \begin{vmatrix}\hat{i} & \hat{j} & \hat{k}\\[4pt] \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\[4pt] \mathbf{A}\cdot\hat{i} & \mathbf{A}\cdot\hat{j} & \mathbf{A}\cdot\hat{k}\end{vmatrix}$$

This equation is equivalent to:

$$\nabla\times\mathbf{A}(f) = \left[\frac{\partial(\mathbf{A}\cdot\hat{k})}{\partial y} - \frac{\partial(\mathbf{A}\cdot\hat{j})}{\partial z}\right]\hat{i} - \left[\frac{\partial(\mathbf{A}\cdot\hat{k})}{\partial x} - \frac{\partial(\mathbf{A}\cdot\hat{i})}{\partial z}\right]\hat{j} + \left[\frac{\partial(\mathbf{A}\cdot\hat{j})}{\partial x} - \frac{\partial(\mathbf{A}\cdot\hat{i})}{\partial y}\right]\hat{k}$$

Since A depends on position only through f, the chain rule gives:

$$\nabla\times\mathbf{A}(f) = \left[\frac{d\mathbf{A}}{df}\cdot\hat{k}\,\frac{\partial f}{\partial y} - \frac{d\mathbf{A}}{df}\cdot\hat{j}\,\frac{\partial f}{\partial z}\right]\hat{i} - \left[\frac{d\mathbf{A}}{df}\cdot\hat{k}\,\frac{\partial f}{\partial x} - \frac{d\mathbf{A}}{df}\cdot\hat{i}\,\frac{\partial f}{\partial z}\right]\hat{j} + \left[\frac{d\mathbf{A}}{df}\cdot\hat{j}\,\frac{\partial f}{\partial x} - \frac{d\mathbf{A}}{df}\cdot\hat{i}\,\frac{\partial f}{\partial y}\right]\hat{k}$$

This equation is equivalent to:

$$\nabla\times\mathbf{A}(f) = -\begin{vmatrix}\dfrac{d\mathbf{A}}{df}\cdot\hat{i} & \dfrac{d\mathbf{A}}{df}\cdot\hat{j} & \dfrac{d\mathbf{A}}{df}\cdot\hat{k}\\[4pt] \dfrac{\partial f}{\partial x} & \dfrac{\partial f}{\partial y} & \dfrac{\partial f}{\partial z}\\[4pt] \hat{i} & \hat{j} & \hat{k}\end{vmatrix} = \begin{vmatrix}\hat{i} & \hat{j} & \hat{k}\\[4pt] \dfrac{\partial f}{\partial x} & \dfrac{\partial f}{\partial y} & \dfrac{\partial f}{\partial z}\\[4pt] \dfrac{d\mathbf{A}}{df}\cdot\hat{i} & \dfrac{d\mathbf{A}}{df}\cdot\hat{j} & \dfrac{d\mathbf{A}}{df}\cdot\hat{k}\end{vmatrix}$$

or, using equation (1.16-23) again:

$$\nabla\times\mathbf{A}(f) = \nabla f\times\frac{d\mathbf{A}}{df}$$

3.2.4  CURL OF POSITION VECTORS

We can calculate the curl of the position vector r:

$$\nabla\times\mathbf{r} = \begin{vmatrix}\hat{i} & \hat{j} & \hat{k}\\[4pt] \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\[4pt] x & y & z\end{vmatrix} \qquad (3.2\text{-}21)$$

or

$$\nabla\times\mathbf{r} = \left[\frac{\partial z}{\partial y} - \frac{\partial y}{\partial z}\right]\hat{i} - \left[\frac{\partial z}{\partial x} - \frac{\partial x}{\partial z}\right]\hat{j} + \left[\frac{\partial y}{\partial x} - \frac{\partial x}{\partial y}\right]\hat{k} \qquad (3.2\text{-}22)$$

Since the coordinates x, y, and z are all mutually independent, we obtain:

$$\nabla\times\mathbf{r} = \mathbf{0} \qquad (3.2\text{-}23)$$

Other useful identities are:

$$\nabla\times\left(\mathbf{A}\times\mathbf{r}\right) = 2\,\mathbf{A} \qquad (\mathbf{A}\ \text{a constant vector}) \qquad (3.2\text{-}24)$$

$$\nabla\times\left[(r)^n\,\mathbf{r}\right] = \mathbf{0} \qquad (3.2\text{-}25)$$

Example 3-8

By using indicial notation, show that equation (3.2-23) is correct:

$$\nabla\times\mathbf{r} = \mathbf{0}$$

Solution:

From equation (3.2-3) we have:

$$\nabla\times\mathbf{r} = \varepsilon_{ijk}\,\frac{\partial x_j}{\partial x_i}\,\hat{e}_k$$

Since the x_j are all independent, we have:

$$\nabla\times\mathbf{r} = \varepsilon_{ijk}\,\delta_{ij}\,\hat{e}_k$$

Then, using the results of Example 1-20:

$$\nabla\times\mathbf{r} = \mathbf{0}$$

Example 3-9

If r is a position vector and A is a constant vector, show that:

$$\nabla\times\left(\mathbf{A}\times\mathbf{r}\right) = 2\,\mathbf{A}$$

Solution:

$$\mathbf{A}\times\mathbf{r} = \begin{vmatrix}\hat{i} & \hat{j} & \hat{k}\\ a_x & a_y & a_z\\ x & y & z\end{vmatrix} = \left(a_y z - a_z y\right)\hat{i} - \left(a_x z - a_z x\right)\hat{j} + \left(a_x y - a_y x\right)\hat{k}$$

We then have:

$$\nabla\times\left(\mathbf{A}\times\mathbf{r}\right) = \begin{vmatrix}\hat{i} & \hat{j} & \hat{k}\\[4pt] \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\[4pt] a_y z - a_z y & -\left(a_x z - a_z x\right) & a_x y - a_y x\end{vmatrix}$$

or

$$\nabla\times\left(\mathbf{A}\times\mathbf{r}\right) = \left(a_x + a_x\right)\hat{i} + \left(a_y + a_y\right)\hat{j} + \left(a_z + a_z\right)\hat{k}$$

and so:

$$\nabla\times\left(\mathbf{A}\times\mathbf{r}\right) = 2\,\mathbf{A}$$
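Both position-vector results can be spot-checked numerically. A minimal sketch, assuming the hypothetical constant vector A = (1, 2, 3) and an arbitrary evaluation point:

```python
# Central-difference curl, used to verify (3.2-23) and (3.2-24):
# the curl of r vanishes, and the curl of A×r (A constant) equals 2A.
def curl(F, p, h=1e-5):
    def d(comp, axis):
        q1, q2 = list(p), list(p)
        q1[axis] += h
        q2[axis] -= h
        return (F(q1)[comp] - F(q2)[comp]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

A = (1.0, 2.0, 3.0)  # arbitrary constant vector
cross = lambda u, v: (u[1] * v[2] - u[2] * v[1],
                      u[2] * v[0] - u[0] * v[2],
                      u[0] * v[1] - u[1] * v[0])

p0 = (0.3, -0.7, 1.1)
curl_r = curl(lambda p: tuple(p), p0)          # (3.2-23): expect 0
curl_Axr = curl(lambda p: cross(A, p), p0)     # (3.2-24): expect 2A
```

Since both fields are linear in position, the central differences are exact up to rounding, and the results match the identities to floating-point precision.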

3.2.5  INFINITESIMAL ROTATION AND THE CURL

We will now consider a point particle that is moving in a circular path around a fixed axis. We will let this axis be collinear with a coordinate system axis. The origin of the coordinate system is at some point O on the axis, as shown in Figure 3-3. The radius of rotation is a, and the linear velocity tangential to the circular space curve formed by the particle's path is v. The angle between the rotation axis and the direction of the position vector r of the point particle is θ. We then have:

$$a = \left|\mathbf{r}\right|\sin\theta = r\sin\theta \qquad (3.2\text{-}26)$$

Figure 3-3  Point particle rotating in a circular path. The area that is enclosed by the path of the particle is shaded.

In a very small time interval Δt, the point particle will rotate about the axis through a very small angle Δβ. The position vector of the point particle will change by Δr during Δt. We then will have:

$$\lim_{\Delta t\to 0}\frac{\left|\Delta\mathbf{r}\right|}{\Delta t} = \lim_{\Delta t\to 0}\frac{a\,\Delta\beta}{\Delta t} \qquad (3.2\text{-}27)$$

since the direction of Δr will approach that of the tangent to the space curve as Δt → 0 (see Section 2.3). For a point particle having an infinitesimal rotation, we then obtain:

$$\frac{dr}{dt} = a\,\frac{d\beta}{dt} = a\,\omega \qquad (3.2\text{-}28)$$

where the angular speed ω is given by:

$$\omega = \frac{d\beta}{dt} \qquad (3.2\text{-}29)$$

From equations (3.2-28) and (3.2-26), we have:

$$\frac{dr}{dt} = v = r\,\omega\sin\theta \qquad (3.2\text{-}30)$$

We can then write:

$$\left|\mathbf{v}\right| = \left|\mathbf{r}\right|\left|\boldsymbol{\omega}\right|\sin\theta \qquad (3.2\text{-}31)$$

and so:

$$\mathbf{v} = \boldsymbol{\omega}\times\mathbf{r} \qquad (3.2\text{-}32)$$

where ω is the angular velocity and v is orthogonal to both ω and r. We note that although angular velocity is a point vector, angular displacement is not (see Section 1.8).

We will now consider the case where the angular velocity ω is constant. Taking the curl of v in equation (3.2-32), we obtain:

$$\nabla\times\mathbf{v} = \nabla\times\left(\boldsymbol{\omega}\times\mathbf{r}\right) \qquad (3.2\text{-}33)$$

In component form we have:

$$\nabla\times\mathbf{v} = \nabla\times\begin{vmatrix}\hat{i} & \hat{j} & \hat{k}\\ \omega_x & \omega_y & \omega_z\\ x & y & z\end{vmatrix} \qquad (3.2\text{-}34)$$

or

$$\nabla\times\mathbf{v} = \nabla\times\left[\left(\omega_y z - \omega_z y\right)\hat{i} + \left(\omega_z x - \omega_x z\right)\hat{j} + \left(\omega_x y - \omega_y x\right)\hat{k}\right] \qquad (3.2\text{-}35)$$

or

$$\nabla\times\mathbf{v} = \begin{vmatrix}\hat{i} & \hat{j} & \hat{k}\\[4pt] \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\[4pt] \omega_y z - \omega_z y & \omega_z x - \omega_x z & \omega_x y - \omega_y x\end{vmatrix} \qquad (3.2\text{-}36)$$

Since the angular velocity is constant, we can write:

$$\nabla\times\mathbf{v} = 2\left(\omega_x\,\hat{i} + \omega_y\,\hat{j} + \omega_z\,\hat{k}\right) = 2\,\boldsymbol{\omega} \qquad (3.2\text{-}37)$$

Therefore, we have finally:

$$\nabla\times\mathbf{v} = \nabla\times\left(\boldsymbol{\omega}\times\mathbf{r}\right) = 2\,\boldsymbol{\omega} \qquad (3.2\text{-}38)$$

or

$$\boldsymbol{\omega} = \frac{1}{2}\,\nabla\times\mathbf{v} = \frac{1}{2}\,\operatorname{curl}\mathbf{v} \qquad (3.2\text{-}39)$$

The curl can then be seen to have physical interpretations of both circulation and rotation. For a vector field B, curl B provides a measure of the rotation in the neighborhood of the point being considered. Therefore, it is possible to have curl B = 0 at a point in the field without it being necessary that curl B = 0 for all points of the field. If curl B ≠ 0 for an entire field B, the field is considered to be a rotational field, vortex field, or eddy field.
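The local character of the rotation measured by (3.2-39) can be illustrated with a field whose streamlines are straight lines. A minimal sketch, assuming the hypothetical shear flow v = (y, 0, 0): its half-curl is ω = (0, 0, −1/2), so a fluid element carries local rotation even though nothing in the flow moves in circles.

```python
# omega = (1/2) curl v, per (3.2-39), evaluated for a straight shear
# flow.  The field and point are illustrative choices, not from the text.
def curl(F, p, h=1e-5):
    def d(comp, axis):
        q1, q2 = list(p), list(p)
        q1[axis] += h
        q2[axis] -= h
        return (F(q1)[comp] - F(q2)[comp]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

v = lambda p: (p[1], 0.0, 0.0)   # shear flow: speed grows with y
w = tuple(0.5 * c for c in curl(v, (0.0, 0.0, 0.0)))  # local angular velocity
```

This is the sense in which the curl at a point says nothing about the rotation of the field as a whole: the shear flow is "rotational" at every point while its streamlines never close.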

3.3  CURL AND DIVERGENCE IDENTITIES

A number of important identities involving the operations of curl and divergence can be derived. If A is a vector function and f is a scalar function, and if both functions are continuously differentiable, we can write:

$$\nabla\times\nabla f = \left[\frac{\partial^2 f}{\partial y\,\partial z} - \frac{\partial^2 f}{\partial z\,\partial y}\right]\hat{i} - \left[\frac{\partial^2 f}{\partial x\,\partial z} - \frac{\partial^2 f}{\partial z\,\partial x}\right]\hat{j} + \left[\frac{\partial^2 f}{\partial x\,\partial y} - \frac{\partial^2 f}{\partial y\,\partial x}\right]\hat{k} \qquad (3.3\text{-}1)$$

or

$$\nabla\times\nabla f = \mathbf{0} \qquad (3.3\text{-}2)$$

We also have:

$$\nabla\cdot\nabla\times\mathbf{A} = \begin{vmatrix}\dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\[4pt] \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\[4pt] a_x & a_y & a_z\end{vmatrix} \qquad (3.3\text{-}3)$$

or

$$\nabla\cdot\nabla\times\mathbf{A} = \frac{\partial^2 a_z}{\partial x\,\partial y} + \frac{\partial^2 a_x}{\partial y\,\partial z} + \frac{\partial^2 a_y}{\partial z\,\partial x} - \frac{\partial^2 a_y}{\partial x\,\partial z} - \frac{\partial^2 a_z}{\partial y\,\partial x} - \frac{\partial^2 a_x}{\partial z\,\partial y} \qquad (3.3\text{-}4)$$

and so:

$$\nabla\cdot\nabla\times\mathbf{A} = 0 \qquad (3.3\text{-}5)$$

To derive an identity for ∇ • (A × B), we note that we can operate on A with ∇ while holding B constant (we will use the notation ∇_A for this), and we can then operate on B with ∇ while holding A constant (we will use the notation ∇_B for this). This is simply the procedure for taking the derivative of a product. We therefore have:

$$\nabla\cdot\left(\mathbf{A}\times\mathbf{B}\right) = \nabla_A\cdot\left(\mathbf{A}\times\mathbf{B}\right) + \nabla_B\cdot\left(\mathbf{A}\times\mathbf{B}\right) = \nabla_A\cdot\left(\mathbf{A}\times\mathbf{B}\right) - \nabla_B\cdot\left(\mathbf{B}\times\mathbf{A}\right) \qquad (3.3\text{-}6)$$

By changing the order of the terms in equation (3.3-6), we can eliminate the need for subscripts:

$$\nabla\cdot\left(\mathbf{A}\times\mathbf{B}\right) = \mathbf{B}\cdot\nabla\times\mathbf{A} - \mathbf{A}\cdot\nabla\times\mathbf{B} \qquad (3.3\text{-}7)$$

Note that since ∇ is an operator, we generally will find:

$$\left(\nabla\times\mathbf{A}\right)\cdot\mathbf{A} \neq 0 \qquad (3.3\text{-}8)$$

which appears to be contrary to equation (1.18-11), but is nevertheless true. The gradient operator ∇ changes A in the vector product ∇ × A into a different vector before the scalar product with A is executed.

Example 3-10

Show that (∇ × A) • A ≠ 0 at the point P(1, 1, 1) when A is given by A = y î + z ĵ + x k̂.

Solution:

$$\nabla\times\mathbf{A} = \begin{vmatrix}\hat{i} & \hat{j} & \hat{k}\\[4pt] \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\[4pt] y & z & x\end{vmatrix} = -\,\hat{i} - \hat{j} - \hat{k}$$

Then:

$$\left(\nabla\times\mathbf{A}\right)\cdot\mathbf{A} = \left(-\,\hat{i} - \hat{j} - \hat{k}\right)\cdot\left(y\,\hat{i} + z\,\hat{j} + x\,\hat{k}\right) = -\,y - z - x$$

At the point P(1, 1, 1):

$$\left(\nabla\times\mathbf{A}\right)\cdot\mathbf{A} = -1 - 1 - 1 = -3 \neq 0$$

To derive an identity for ∇ × (A × B), we can write:

$$\nabla\times\left(\mathbf{A}\times\mathbf{B}\right) = \nabla_A\times\left(\mathbf{A}\times\mathbf{B}\right) + \nabla_B\times\left(\mathbf{A}\times\mathbf{B}\right) \qquad (3.3\text{-}9)$$

or

$$\nabla\times\left(\mathbf{A}\times\mathbf{B}\right) = \nabla_A\times\left(\mathbf{A}\times\mathbf{B}\right) - \nabla_B\times\left(\mathbf{B}\times\mathbf{A}\right) \qquad (3.3\text{-}10)$$

Using equation (1.19-2), we have:

$$\nabla_A\times\left(\mathbf{A}\times\mathbf{B}\right) = \left(\mathbf{B}\cdot\nabla_A\right)\mathbf{A} - \left(\nabla_A\cdot\mathbf{A}\right)\mathbf{B}, \qquad \nabla_B\times\left(\mathbf{B}\times\mathbf{A}\right) = \left(\mathbf{A}\cdot\nabla_B\right)\mathbf{B} - \left(\nabla_B\cdot\mathbf{B}\right)\mathbf{A} \qquad (3.3\text{-}11)$$

so that:

$$\nabla\times\left(\mathbf{A}\times\mathbf{B}\right) = \left(\nabla_B\cdot\mathbf{B}\right)\mathbf{A} - \left(\nabla_A\cdot\mathbf{A}\right)\mathbf{B} + \left(\mathbf{B}\cdot\nabla_A\right)\mathbf{A} - \left(\mathbf{A}\cdot\nabla_B\right)\mathbf{B} \qquad (3.3\text{-}12)$$

and so we have finally:

$$\nabla\times\left(\mathbf{A}\times\mathbf{B}\right) = \left(\nabla\cdot\mathbf{B}\right)\mathbf{A} - \left(\nabla\cdot\mathbf{A}\right)\mathbf{B} + \left(\mathbf{B}\cdot\nabla\right)\mathbf{A} - \left(\mathbf{A}\cdot\nabla\right)\mathbf{B} \qquad (3.3\text{-}13)$$

Example 3-11

If ω is a constant vector, show that:

$$\nabla\times\mathbf{v} = \nabla\times\left(\boldsymbol{\omega}\times\mathbf{r}\right) = 2\,\boldsymbol{\omega}$$

Solution:

From equation (3.3-13), we have:

$$\nabla\times\left(\boldsymbol{\omega}\times\mathbf{r}\right) = \left(\nabla\cdot\mathbf{r}\right)\boldsymbol{\omega} - \left(\nabla\cdot\boldsymbol{\omega}\right)\mathbf{r} + \left(\mathbf{r}\cdot\nabla\right)\boldsymbol{\omega} - \left(\boldsymbol{\omega}\cdot\nabla\right)\mathbf{r}$$

Since ω is constant, we have:

$$\nabla\times\left(\boldsymbol{\omega}\times\mathbf{r}\right) = \left(\nabla\cdot\mathbf{r}\right)\boldsymbol{\omega} - \left(\boldsymbol{\omega}\cdot\nabla\right)\mathbf{r}$$

From equation (3.1-28) we have ∇ • r = 3, and from equation (3.1-34) we have (ω • ∇) r = ω. Therefore:

$$\nabla\times\mathbf{v} = \nabla\times\left(\boldsymbol{\omega}\times\mathbf{r}\right) = 3\,\boldsymbol{\omega} - \boldsymbol{\omega} = 2\,\boldsymbol{\omega}$$

as was given in equation (3.2-37).

To derive an identity for ∇(A • B), we can write:

$$\nabla\left(\mathbf{A}\cdot\mathbf{B}\right) = \nabla_A\left(\mathbf{A}\cdot\mathbf{B}\right) + \nabla_B\left(\mathbf{A}\cdot\mathbf{B}\right) \qquad (3.3\text{-}14)$$

From equation (1.19-2) we have:

$$\mathbf{A}\times\left(\nabla_B\times\mathbf{B}\right) = \nabla_B\left(\mathbf{A}\cdot\mathbf{B}\right) - \left(\mathbf{A}\cdot\nabla_B\right)\mathbf{B} \qquad (3.3\text{-}15)$$

$$\mathbf{B}\times\left(\nabla_A\times\mathbf{A}\right) = \nabla_A\left(\mathbf{A}\cdot\mathbf{B}\right) - \left(\mathbf{B}\cdot\nabla_A\right)\mathbf{A} \qquad (3.3\text{-}16)$$

or

$$\nabla_B\left(\mathbf{A}\cdot\mathbf{B}\right) = \mathbf{A}\times\left(\nabla_B\times\mathbf{B}\right) + \left(\mathbf{A}\cdot\nabla_B\right)\mathbf{B} \qquad (3.3\text{-}17)$$

$$\nabla_A\left(\mathbf{A}\cdot\mathbf{B}\right) = \mathbf{B}\times\left(\nabla_A\times\mathbf{A}\right) + \left(\mathbf{B}\cdot\nabla_A\right)\mathbf{A} \qquad (3.3\text{-}18)$$

Using equations (3.3-17) and (3.3-18) to rewrite equation (3.3-14), we have:

$$\nabla\left(\mathbf{A}\cdot\mathbf{B}\right) = \mathbf{A}\times\left(\nabla_B\times\mathbf{B}\right) + \left(\mathbf{A}\cdot\nabla_B\right)\mathbf{B} + \mathbf{B}\times\left(\nabla_A\times\mathbf{A}\right) + \left(\mathbf{B}\cdot\nabla_A\right)\mathbf{A} \qquad (3.3\text{-}19)$$

and so we have finally:

$$\nabla\left(\mathbf{A}\cdot\mathbf{B}\right) = \left(\mathbf{A}\cdot\nabla\right)\mathbf{B} + \left(\mathbf{B}\cdot\nabla\right)\mathbf{A} + \mathbf{A}\times\left(\nabla\times\mathbf{B}\right) + \mathbf{B}\times\left(\nabla\times\mathbf{A}\right) \qquad (3.3\text{-}20)$$

An alternative derivation of this identity, starting from equation (1.19-5), is given in Example 3-12.

Other useful identities are:

$$\left(\nabla\times\mathbf{A}\right)\times\mathbf{A} = \left(\mathbf{A}\cdot\nabla\right)\mathbf{A} - \frac{1}{2}\,\nabla\left(A\right)^2 \qquad (3.3\text{-}22)$$

$$\mathbf{A}\times\left(\nabla\times\mathbf{A}\right) = \frac{1}{2}\,\nabla\left(A\right)^2 - \left(\mathbf{A}\cdot\nabla\right)\mathbf{A} \qquad (3.3\text{-}23)$$

$$\left(\mathbf{A}\times\nabla\right)\times\mathbf{A} = \frac{1}{2}\,\nabla\left(A\right)^2 - \mathbf{A}\left(\nabla\cdot\mathbf{A}\right) \qquad (3.3\text{-}24)$$

$$\nabla\times\left(f\,\nabla g\right) = \nabla f\times\nabla g \qquad (3.3\text{-}25)$$

$$\nabla\cdot\left(\nabla f\times\nabla g\right) = 0 \qquad (3.3\text{-}26)$$

$$\nabla\times\left(f\,\nabla f\right) = \mathbf{0} \qquad (3.3\text{-}27)$$

$$\left(\mathbf{A}\times\nabla\right)\cdot\mathbf{B} = \mathbf{A}\cdot\left(\nabla\times\mathbf{B}\right) \qquad (3.3\text{-}28)$$
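Identity (3.3-7) is easy to spot-check at a point with central differences. A minimal sketch; the polynomial fields and the evaluation point are arbitrary illustrative choices:

```python
# Numerical check of ∇·(A×B) = B·(∇×A) − A·(∇×B) at one point.
def partial(F, comp, axis, p, h=1e-5):
    """Central difference of component `comp` of F along `axis`."""
    q1, q2 = list(p), list(p)
    q1[axis] += h
    q2[axis] -= h
    return (F(q1)[comp] - F(q2)[comp]) / (2 * h)

def curl(F, p):
    return (partial(F, 2, 1, p) - partial(F, 1, 2, p),
            partial(F, 0, 2, p) - partial(F, 2, 0, p),
            partial(F, 1, 0, p) - partial(F, 0, 1, p))

def div(F, p):
    return sum(partial(F, i, i, p) for i in range(3))

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
cross = lambda u, v: (u[1] * v[2] - u[2] * v[1],
                      u[2] * v[0] - u[0] * v[2],
                      u[0] * v[1] - u[1] * v[0])

A = lambda p: (p[1] * p[2], p[0] ** 2, p[2])       # arbitrary test fields
B = lambda p: (p[0], p[0] * p[1], p[1] ** 2)
p0 = (0.4, -1.2, 0.8)

lhs = div(lambda q: cross(A(q), B(q)), p0)
rhs = dot(B(p0), curl(A, p0)) - dot(A(p0), curl(B, p0))
```

The two sides agree to well inside the finite-difference error, as the identity requires.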

Example 3-12

Show that:

$$\nabla\left(\mathbf{A}\cdot\mathbf{B}\right) = \left(\mathbf{A}\cdot\nabla\right)\mathbf{B} + \left(\mathbf{B}\cdot\nabla\right)\mathbf{A} + \mathbf{A}\times\left(\nabla\times\mathbf{B}\right) + \mathbf{B}\times\left(\nabla\times\mathbf{A}\right)$$

Solution:

To derive an identity for ∇(A • B) we can write:

$$\nabla\left(\mathbf{A}\cdot\mathbf{B}\right) = \nabla_A\left(\mathbf{A}\cdot\mathbf{B}\right) + \nabla_B\left(\mathbf{A}\cdot\mathbf{B}\right)$$

From equation (1.19-5) we have:

$$\mathbf{A}\times\left(\nabla_B\times\mathbf{B}\right) = \nabla_B\left(\mathbf{A}\cdot\mathbf{B}\right) - \left(\mathbf{A}\cdot\nabla_B\right)\mathbf{B}$$

or

$$\nabla_B\left(\mathbf{A}\cdot\mathbf{B}\right) = \left(\mathbf{A}\cdot\nabla\right)\mathbf{B} + \mathbf{A}\times\left(\nabla\times\mathbf{B}\right)$$

and similarly:

$$\nabla_A\left(\mathbf{A}\cdot\mathbf{B}\right) = \left(\mathbf{B}\cdot\nabla\right)\mathbf{A} + \mathbf{B}\times\left(\nabla\times\mathbf{A}\right)$$

We then have:

$$\nabla\left(\mathbf{A}\cdot\mathbf{B}\right) = \left(\mathbf{A}\cdot\nabla\right)\mathbf{B} + \left(\mathbf{B}\cdot\nabla\right)\mathbf{A} + \mathbf{A}\times\left(\nabla\times\mathbf{B}\right) + \mathbf{B}\times\left(\nabla\times\mathbf{A}\right)$$

Example 3-13

Show that:

$$\left(\nabla\times\mathbf{A}\right)\times\mathbf{A} = \left(\mathbf{A}\cdot\nabla\right)\mathbf{A} - \frac{1}{2}\,\nabla\left(A\right)^2$$

Solution:

$$\nabla\times\mathbf{A} = \begin{vmatrix}\hat{i} & \hat{j} & \hat{k}\\[4pt] \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\[4pt] a_x & a_y & a_z\end{vmatrix} = \left[\frac{\partial a_z}{\partial y} - \frac{\partial a_y}{\partial z}\right]\hat{i} + \left[\frac{\partial a_x}{\partial z} - \frac{\partial a_z}{\partial x}\right]\hat{j} + \left[\frac{\partial a_y}{\partial x} - \frac{\partial a_x}{\partial y}\right]\hat{k}$$

Then:

$$\left(\nabla\times\mathbf{A}\right)\times\mathbf{A} = \begin{vmatrix}\hat{i} & \hat{j} & \hat{k}\\[4pt] \dfrac{\partial a_z}{\partial y} - \dfrac{\partial a_y}{\partial z} & \dfrac{\partial a_x}{\partial z} - \dfrac{\partial a_z}{\partial x} & \dfrac{\partial a_y}{\partial x} - \dfrac{\partial a_x}{\partial y}\\[4pt] a_x & a_y & a_z\end{vmatrix}$$

Expanding:

$$\left(\nabla\times\mathbf{A}\right)\times\mathbf{A} = \left[\frac{\partial a_x}{\partial z}a_z - \frac{\partial a_z}{\partial x}a_z - \frac{\partial a_y}{\partial x}a_y + \frac{\partial a_x}{\partial y}a_y\right]\hat{i} + \left[\frac{\partial a_y}{\partial x}a_x - \frac{\partial a_x}{\partial y}a_x - \frac{\partial a_z}{\partial y}a_z + \frac{\partial a_y}{\partial z}a_z\right]\hat{j} + \left[\frac{\partial a_z}{\partial y}a_y - \frac{\partial a_y}{\partial z}a_y - \frac{\partial a_x}{\partial z}a_x + \frac{\partial a_z}{\partial x}a_x\right]\hat{k}$$

Adding and subtracting a_x ∂a_x/∂x for the î component, a_y ∂a_y/∂y for the ĵ component, and a_z ∂a_z/∂z for the k̂ component, we have:

$$\left(\nabla\times\mathbf{A}\right)\times\mathbf{A} = \left[a_x\frac{\partial a_x}{\partial x} + a_y\frac{\partial a_x}{\partial y} + a_z\frac{\partial a_x}{\partial z}\right]\hat{i} + \left[a_x\frac{\partial a_y}{\partial x} + a_y\frac{\partial a_y}{\partial y} + a_z\frac{\partial a_y}{\partial z}\right]\hat{j} + \left[a_x\frac{\partial a_z}{\partial x} + a_y\frac{\partial a_z}{\partial y} + a_z\frac{\partial a_z}{\partial z}\right]\hat{k} - \left[a_x\frac{\partial a_x}{\partial x} + a_y\frac{\partial a_y}{\partial x} + a_z\frac{\partial a_z}{\partial x}\right]\hat{i} - \left[a_x\frac{\partial a_x}{\partial y} + a_y\frac{\partial a_y}{\partial y} + a_z\frac{\partial a_z}{\partial y}\right]\hat{j} - \left[a_x\frac{\partial a_x}{\partial z} + a_y\frac{\partial a_y}{\partial z} + a_z\frac{\partial a_z}{\partial z}\right]\hat{k}$$

or

$$\left(\nabla\times\mathbf{A}\right)\times\mathbf{A} = \left(\mathbf{A}\cdot\nabla\right)\mathbf{A} - \frac{1}{2}\,\nabla\left(\mathbf{A}\cdot\mathbf{A}\right) = \left(\mathbf{A}\cdot\nabla\right)\mathbf{A} - \frac{1}{2}\,\nabla\left(A\right)^2$$

Example 3-14

Show that:

$$\nabla\times\left(f\,\nabla g\right) = \nabla f\times\nabla g$$

Solution:

From equation (3.2-19) we have:

$$\nabla\times\left(f\,\nabla g\right) = \nabla f\times\nabla g + f\,\nabla\times\nabla g$$

From equation (3.3-2) we have ∇ × ∇g = 0. Therefore we have:

$$\nabla\times\left(f\,\nabla g\right) = \nabla f\times\nabla g$$

Example 3-15

Show that:

$$\nabla\cdot\left(\nabla f\times\nabla g\right) = 0$$

Solution:

From equation (3.3-25) we have ∇ × (f ∇g) = ∇f × ∇g. Letting A = f ∇g, we can write:

$$\nabla\times\mathbf{A} = \nabla f\times\nabla g$$

and so:

$$\nabla\cdot\left(\nabla f\times\nabla g\right) = \nabla\cdot\left(\nabla\times\mathbf{A}\right)$$

From equation (3.3-5) we then have:

$$\nabla\cdot\left(\nabla f\times\nabla g\right) = 0$$

Example 3-16

Show that:

$$\nabla\times\left(f\,\nabla f\right) = \mathbf{0}$$

Solution:

From equation (3.2-19) we have:

$$\nabla\times\left(f\,\nabla f\right) = \nabla f\times\nabla f + f\,\nabla\times\nabla f$$

From equations (1.17-5) and (3.3-2) we have ∇f × ∇f = 0 and ∇ × ∇f = 0. Therefore we have:

$$\nabla\times\left(f\,\nabla f\right) = \mathbf{0}$$

Example 3-17

Show that:

$$\left(\mathbf{A}\times\nabla\right)\cdot\mathbf{B} = \mathbf{A}\cdot\left(\nabla\times\mathbf{B}\right)$$

Solution:

We have:

$$\mathbf{A}\times\nabla = \left(a_y\frac{\partial}{\partial z} - a_z\frac{\partial}{\partial y}\right)\hat{i} + \left(a_z\frac{\partial}{\partial x} - a_x\frac{\partial}{\partial z}\right)\hat{j} + \left(a_x\frac{\partial}{\partial y} - a_y\frac{\partial}{\partial x}\right)\hat{k}$$

Therefore, using equation (1.16-22):

$$\left(\mathbf{A}\times\nabla\right)\cdot\mathbf{B} = \left(a_y\frac{\partial}{\partial z} - a_z\frac{\partial}{\partial y}\right)b_x + \left(a_z\frac{\partial}{\partial x} - a_x\frac{\partial}{\partial z}\right)b_y + \left(a_x\frac{\partial}{\partial y} - a_y\frac{\partial}{\partial x}\right)b_z$$

Reordering this equation, we have:

$$\left(\mathbf{A}\times\nabla\right)\cdot\mathbf{B} = a_x\left(\frac{\partial b_z}{\partial y} - \frac{\partial b_y}{\partial z}\right) + a_y\left(\frac{\partial b_x}{\partial z} - \frac{\partial b_z}{\partial x}\right) + a_z\left(\frac{\partial b_y}{\partial x} - \frac{\partial b_x}{\partial y}\right)$$

or

$$\left(\mathbf{A}\times\nabla\right)\cdot\mathbf{B} = \mathbf{A}\cdot\left(\nabla\times\mathbf{B}\right)$$
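The reordering step in Example 3-17 can be mirrored numerically: evaluate the left side exactly as the operator expansion is written, and the right side as A • (∇ × B). A minimal sketch with an arbitrary constant vector and test field:

```python
# Numerical check of (A×∇)·B = A·(∇×B), following Example 3-17's
# expansion term by term.  The vector a, field B, and point p0 are
# hypothetical illustrative choices.
def partial(F, comp, axis, p, h=1e-5):
    q1, q2 = list(p), list(p)
    q1[axis] += h
    q2[axis] -= h
    return (F(q1)[comp] - F(q2)[comp]) / (2 * h)

a = (2.0, -1.0, 0.5)                              # constant vector A
B = lambda p: (p[1] ** 2, p[0] * p[2], p[0] + p[1])
p0 = (1.0, 0.5, -0.3)
d = lambda comp, axis: partial(B, comp, axis, p0)

# left side: the operator A×∇ applied componentwise to B
lhs = ((a[1] * d(0, 2) - a[2] * d(0, 1)) +
       (a[2] * d(1, 0) - a[0] * d(1, 2)) +
       (a[0] * d(2, 1) - a[1] * d(2, 0)))
# right side: A·(∇×B)
rhs = (a[0] * (d(2, 1) - d(1, 2)) +
       a[1] * (d(0, 2) - d(2, 0)) +
       a[2] * (d(1, 0) - d(0, 1)))
```

The two sums contain exactly the same six terms in different order, so they agree to rounding error, which is the content of the reordering argument above.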

3.4  LAPLACIAN

The Laplacian operator ∇² is defined as:

$$\nabla^2 = \nabla\cdot\nabla \qquad (3.4\text{-}1)$$

or, in rectangular coordinates:

$$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \qquad (3.4\text{-}2)$$

3.4.1  LAPLACIAN OF A SCALAR FIELD

We will consider a set of points of a scalar field that is described by a scalar function f. If the function f is defined and continuously differentiable for every point P(x, y, z) of the set, then the Laplacian of f at a point P(x, y, z) is given in the rectangular coordinate system by:

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2} \qquad (3.4\text{-}3)$$

Therefore the Laplacian is a second order differential operator that can act upon a scalar field. From equation (3.4-3) we see that ∇²f is a function of the scalar field for the point at which ∇²f is being computed. Associated with any region of a scalar field for which a function f is defined and differentiable is another scalar field defined by ∇²f.

From equation (3.4-1) we see that ∇²f is the divergence of the gradient of f:

$$\nabla\cdot\nabla f = \nabla^2 f \qquad (3.4\text{-}4)$$

The Laplacian is a measure of the changing gradient of a field around the point P(x, y, z). Therefore the Laplacian is a measure of the change in the change of a scalar field. We can view the Laplacian as a measure of the uniformity of a scalar field.

If f and g are scalar functions, we have:

$$\nabla\cdot\left(f\,\nabla g\right) = f\,\nabla^2 g + \nabla f\cdot\nabla g \qquad (3.4\text{-}5)$$

$$\nabla^2\left(f\,g\right) = f\,\nabla^2 g + 2\,\nabla f\cdot\nabla g + g\,\nabla^2 f \qquad (3.4\text{-}6)$$

Example 3-18

If f and g are scalar functions, show that:

$$\nabla\cdot\left(f\,\nabla g\right) = f\,\nabla^2 g + \nabla f\cdot\nabla g$$

Solution:

From equation (3.1-18) we have:

$$\nabla\cdot\left(f\,\nabla g\right) = \nabla f\cdot\nabla g + f\,\nabla\cdot\nabla g$$

or

$$\nabla\cdot\left(f\,\nabla g\right) = f\,\nabla^2 g + \nabla f\cdot\nabla g$$
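The sum-of-second-derivatives form (3.4-3) translates directly into the standard 7-point finite-difference stencil. A minimal sketch, assuming the hypothetical test function f = x² + y² + z², whose Laplacian is exactly 6 everywhere:

```python
# Discrete Laplacian per (3.4-3): one second central difference per axis.
# The test function and point are illustrative choices, not from the text.
def laplacian(f, p, h=1e-4):
    total = 0.0
    for axis in range(3):
        q1, q2 = list(p), list(p)
        q1[axis] += h
        q2[axis] -= h
        total += (f(q1) - 2.0 * f(p) + f(q2)) / (h * h)
    return total

f = lambda p: p[0] ** 2 + p[1] ** 2 + p[2] ** 2
val = laplacian(f, (0.3, -0.2, 0.9))   # analytic value: 6
```

Since the stencil measures how much f at a point differs from the average of its neighbors, this also illustrates the "departure from uniformity" reading of the Laplacian given above.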

Example 3-19

If f and g are scalar functions, show that:

$$\nabla^2\left(f\,g\right) = f\,\nabla^2 g + 2\,\nabla f\cdot\nabla g + g\,\nabla^2 f$$

Solution:

From equation (2.12-24) we have:

$$\nabla^2\left(f\,g\right) = \nabla\cdot\nabla\left(f\,g\right) = \nabla\cdot\left(f\,\nabla g + g\,\nabla f\right)$$

From equation (3.4-5) we have:

$$\nabla^2\left(f\,g\right) = f\,\nabla^2 g + 2\,\nabla f\cdot\nabla g + g\,\nabla^2 f$$

3.4.2  LAPLACE'S AND POISSON'S EQUATIONS

If we have:

$$\nabla^2 f = 0 \qquad (3.4\text{-}7)$$

the departure from uniformity of the function f at the point P(x, y, z) will then be a minimum. Moreover, there will be no maxima or minima in a region where equation (3.4-7) is valid. Equation (3.4-7) is known as Laplace's equation. A scalar function f is said to be harmonic in a region if it is continuous, has continuous second partial derivatives, and satisfies Laplace's equation (3.4-7) in the region. If the Laplacian of f is not equal to zero:

$$\nabla^2 f \neq 0 \qquad (3.4\text{-}8)$$

then we have Poisson's equation.

3.4.3  FIELD SOURCES AND SINKS

Poisson's equation is often written in the form:

$$\nabla^2 f = \pm\,4\pi\rho \qquad (3.4\text{-}9)$$

where ρ is the strength of a source (plus sign) or of a sink (minus sign) per unit volume. Poisson's equation arises when a field contains a source or sink (see Section 1.1.2). If B = ∇f, equation (3.4-9) can be rewritten as:

$$\nabla\cdot\mathbf{B} = \pm\,4\pi\rho \qquad (3.4\text{-}10)$$

If the divergence of B is positive, ρ is the strength density of a source of the vector field B, and if the divergence of B is negative, ρ is the strength density of a sink of the vector field B. The strength density ρ is a measure of how much the source or sink can affect the vector field B.

3.4.4  LAPLACIAN OF A VECTOR FIELD

The Laplacian can also be applied to a vector field A. From equation (1.19-2), we have:

$$\nabla\times\left(\nabla\times\mathbf{A}\right) = \nabla\left(\nabla\cdot\mathbf{A}\right) - \left(\nabla\cdot\nabla\right)\mathbf{A} \qquad (3.4\text{-}11)$$

or

$$\nabla\times\left(\nabla\times\mathbf{A}\right) = \nabla\left(\nabla\cdot\mathbf{A}\right) - \nabla^2\mathbf{A} \qquad (3.4\text{-}12)$$

We will now define:

$$\nabla\times\nabla\times\mathbf{A} \equiv \nabla\times\left(\nabla\times\mathbf{A}\right) \qquad (3.4\text{-}13)$$

so that we have:

$$\nabla\times\nabla\times\mathbf{A} = \nabla\left(\nabla\cdot\mathbf{A}\right) - \nabla^2\mathbf{A} \qquad (3.4\text{-}14)$$

Therefore:

$$\nabla^2\mathbf{A} = \nabla\left(\nabla\cdot\mathbf{A}\right) - \nabla\times\left(\nabla\times\mathbf{A}\right) \qquad (3.4\text{-}15)$$

Expressions for ∇²(∇f) and ∇ • (∇²A) can be derived using equation (3.4-15):

$$\nabla^2\left(\nabla f\right) = \nabla\left(\nabla\cdot\nabla f\right) - \nabla\times\left(\nabla\times\nabla f\right) \qquad (3.4\text{-}16)$$

or, since ∇ × ∇f = 0 from equation (3.3-2):

$$\nabla^2\left(\nabla f\right) = \nabla\left(\nabla^2 f\right) \qquad (3.4\text{-}17)$$

Similarly, we have:

$$\nabla\cdot\left(\nabla^2\mathbf{A}\right) = \nabla\cdot\left[\nabla\left(\nabla\cdot\mathbf{A}\right) - \nabla\times\left(\nabla\times\mathbf{A}\right)\right] \qquad (3.4\text{-}18)$$

Letting B = ∇ × A, we can rewrite equation (3.4-18) as:

$$\nabla\cdot\left(\nabla^2\mathbf{A}\right) = \nabla^2\left(\nabla\cdot\mathbf{A}\right) - \nabla\cdot\left(\nabla\times\mathbf{B}\right) \qquad (3.4\text{-}19)$$

or, since ∇ • ∇ × B = 0 from equation (3.3-5):

$$\nabla\cdot\left(\nabla^2\mathbf{A}\right) = \nabla^2\left(\nabla\cdot\mathbf{A}\right) \qquad (3.4\text{-}20)$$

As noted previously, if we have ∇²f = 0 in a region, then f is considered to be harmonic in the region. Similarly, if we have:

$$\nabla^4 f = \nabla^2\left(\nabla^2 f\right) = 0 \qquad (3.4\text{-}21)$$

in a region, then f is considered to be biharmonic in the region. Every harmonic function is biharmonic, but not every biharmonic function is harmonic.

3.4.5  LAPLACIAN OF A POSITION VECTOR

If r is a position vector, the Laplacian of (r)ⁿ can be determined using equation (2.12-39):

$$\nabla^2 (r)^n = \nabla\cdot\left[\nabla (r)^n\right] = \nabla\cdot\left[n\,(r)^{n-2}\,\mathbf{r}\right] \qquad (3.4\text{-}22)$$

and equation (3.1-18):

$$\nabla^2 (r)^n = n\,\mathbf{r}\cdot\nabla (r)^{n-2} + n\,(r)^{n-2}\,\nabla\cdot\mathbf{r} \qquad (3.4\text{-}23)$$

From equations (2.12-39) and (3.1-28), we then obtain:

$$\nabla^2 (r)^n = n\left[r\,\hat{r}\cdot\left(n-2\right)(r)^{n-3}\,\hat{r} + 3\,(r)^{n-2}\right] \qquad (3.4\text{-}24)$$

or

$$\nabla^2 (r)^n = n\left[r\left(n-2\right)(r)^{n-3} + 3\,(r)^{n-2}\right] \qquad (3.4\text{-}25)$$

and so:

$$\nabla^2 (r)^n = n\left(n+1\right)(r)^{n-2} \qquad (3.4\text{-}26)$$

Laplace's equation is then obtained if n = 0 or n = −1. Therefore we have:

$$\nabla^2\left[\frac{1}{r}\right] = 0 \qquad \left(r \neq 0\right) \qquad (3.4\text{-}27)$$

Other useful identities are:

$$\nabla^2\left(\mathbf{A}\cdot\mathbf{r}\right) = 2\,\nabla\cdot\mathbf{A} + \mathbf{r}\cdot\nabla^2\mathbf{A} \qquad (3.4\text{-}28)$$

$$\nabla^2\,\mathbf{r} = \mathbf{0} \qquad (3.4\text{-}29)$$

$$\nabla^2\left[(r)^n\,\mathbf{A}\right] = n\left(n+1\right)(r)^{n-2}\,\mathbf{A} \qquad (\mathbf{A}\ \text{a constant vector}) \qquad (3.4\text{-}30)$$

$$\nabla^2\left[(r)^n\,\mathbf{r}\right] = n\left(n+3\right)(r)^{n-2}\,\mathbf{r} \qquad (3.4\text{-}31)$$

Example 3-20

Calculate ∇²(r)², where r is a position vector.

Solution:

Using equation (3.4-26), we have:

$$\nabla^2 (r)^2 = 2\left(2+1\right)(r)^0 = 6$$

Example 3-21

If r is a position vector, show that:

$$\nabla^2\left(\mathbf{A}\cdot\mathbf{r}\right) = 2\,\nabla\cdot\mathbf{A} + \mathbf{r}\cdot\nabla^2\mathbf{A}$$

Solution:

$$\nabla\left(\mathbf{A}\cdot\mathbf{r}\right) = \frac{\partial}{\partial x}\left(\mathbf{A}\cdot\mathbf{r}\right)\hat{i} + \frac{\partial}{\partial y}\left(\mathbf{A}\cdot\mathbf{r}\right)\hat{j} + \frac{\partial}{\partial z}\left(\mathbf{A}\cdot\mathbf{r}\right)\hat{k}$$

or

$$\nabla\left(\mathbf{A}\cdot\mathbf{r}\right) = \left[\frac{\partial\mathbf{A}}{\partial x}\cdot\mathbf{r}\right]\hat{i} + \left[\frac{\partial\mathbf{A}}{\partial y}\cdot\mathbf{r}\right]\hat{j} + \left[\frac{\partial\mathbf{A}}{\partial z}\cdot\mathbf{r}\right]\hat{k} + \left[\mathbf{A}\cdot\hat{i}\right]\hat{i} + \left[\mathbf{A}\cdot\hat{j}\right]\hat{j} + \left[\mathbf{A}\cdot\hat{k}\right]\hat{k}$$

From equation (1.16-23), we have:

$$\nabla\left(\mathbf{A}\cdot\mathbf{r}\right) = \left[\frac{\partial\mathbf{A}}{\partial x}\cdot\mathbf{r}\right]\hat{i} + \left[\frac{\partial\mathbf{A}}{\partial y}\cdot\mathbf{r}\right]\hat{j} + \left[\frac{\partial\mathbf{A}}{\partial z}\cdot\mathbf{r}\right]\hat{k} + \mathbf{A}$$

Taking the divergence:

$$\nabla^2\left(\mathbf{A}\cdot\mathbf{r}\right) = \frac{\partial}{\partial x}\left[\frac{\partial\mathbf{A}}{\partial x}\cdot\mathbf{r}\right] + \frac{\partial}{\partial y}\left[\frac{\partial\mathbf{A}}{\partial y}\cdot\mathbf{r}\right] + \frac{\partial}{\partial z}\left[\frac{\partial\mathbf{A}}{\partial z}\cdot\mathbf{r}\right] + \nabla\cdot\mathbf{A}$$

or

$$\nabla^2\left(\mathbf{A}\cdot\mathbf{r}\right) = \mathbf{r}\cdot\left[\frac{\partial^2\mathbf{A}}{\partial x^2} + \frac{\partial^2\mathbf{A}}{\partial y^2} + \frac{\partial^2\mathbf{A}}{\partial z^2}\right] + \left[\frac{\partial\mathbf{A}}{\partial x}\cdot\hat{i} + \frac{\partial\mathbf{A}}{\partial y}\cdot\hat{j} + \frac{\partial\mathbf{A}}{\partial z}\cdot\hat{k}\right] + \nabla\cdot\mathbf{A}$$

Therefore:

$$\nabla^2\left(\mathbf{A}\cdot\mathbf{r}\right) = \mathbf{r}\cdot\nabla^2\mathbf{A} + \left[\frac{\partial a_x}{\partial x} + \frac{\partial a_y}{\partial y} + \frac{\partial a_z}{\partial z}\right] + \nabla\cdot\mathbf{A}$$

and so:

$$\nabla^2\left(\mathbf{A}\cdot\mathbf{r}\right) = 2\,\nabla\cdot\mathbf{A} + \mathbf{r}\cdot\nabla^2\mathbf{A}$$

134

! where the direction of dS is taken to be outward. If nˆ is a unit

Therefore !

! ! ! ! ⎡ ∂a ∂ay ∂az ⎤ ! ! ∇ 2 ( A • r ) = r • ∇ 2A + ⎢ x + + +∇• A ∂y ∂z ⎥⎦ ⎣ ∂x

vector directed outward from the surface S , Gauss’s theorem given in equation (3.5-1) can be written as:

and so: ! ! ! ! ! ! ! ∇ 2 ( A • r ) = 2 ∇ • A + r • ∇ 2A

3.5! !

!

∫∫∫

! ! ∇ • B dV =

V

S

surface integral) can be changed into a triple integral over the

GAUSS’S THEOREM

enclosed volume (volume integral), and conversely. The

Integral theorems involving vectors are among the most

integrals in equation (3.5-2) are independent of the choice of coordinate system.

theorems generally have been developed for regions of space

!

that are simply connected. A region is simply connected if each simple closed curve within the region can be contracted into a single point without encountering any points not in the region. Otherwise the region is multiply connected. We will assume that all curves, surfaces, and volumes discussed in this chapter exist in simply connected regions of space.

! If V is a volume bounded by a closed surface S , and if B

is a continuous vector function continuously differentiable in

V , then the vector integral theorem known as Gauss’s theorem or the divergence theorem is: !

∫∫∫

V

! ! ∇ • B dV =

∫∫

S

! ! B • dS !

(3.5-2)

Using Gauss’s theorem, a double integral over a surface (vector

powerful relations in applied mathematics. These integral

!

∫∫

! B • nˆ dS !

(3.5-1)

Equation (3.5-2) can be derived directly from the integral ! definition of the divergence of a vector field B at a point

P ( x, y, z ) in a volume element ΔV given by equation (3.1-5):

!

! ! ∇ • B = lim

ΔV→ 0

1 ΔV

∫∫

! B • nˆ dS !

(3.5-3)

S

When we sum over all the cubical volume elements ΔV in ! a finite volume V , the contributions of B • nˆ from all adjacent !

internal surface elements will cancel because of opposing ! directions of the unit normal vector nˆ . Therefore only the B • nˆ contributions from the exterior surface elements of the volume

V will remain (see Figure 3-4). These contributions, summed over the external area S , yield equation (3.5-2) from the integral definition of the divergence. 135

Figure 3-4  All contributions of B • n̂ from adjacent internal surface elements of ΔV cancel. The only contributions that remain are those from the exterior surface elements (longer arrow).

3.5.1  PHYSICAL INTERPRETATION OF GAUSS'S THEOREM

The physical implications of equation (3.5-2) can be understood by again considering that we have a fluid flowing through a region. At every point of the region, the flux vector B = ρυ represents the outward rate of flow of the fluid per unit area. From the discussions in Section 3.1.1, we know that the surface integral in equation (3.5-2) represents the sum of the outward flow of the fluid per unit time through all the differential areas of the surface S that encloses the volume V. The volume integral in equation (3.5-2) is equivalent to the net outward flow of the fluid per unit time from the volume V. Obviously these two terms must be equal.

Example 3-22

For a fluid of density ρ and flow velocity υ, derive the equation of continuity:

$$\frac{\partial\rho}{\partial t} + \nabla\cdot\left(\rho\,\boldsymbol{\upsilon}\right) = 0$$

Solution:

We will consider a very small volume element ΔV having a surface ΔS through which the fluid is freely flowing. We will let this volume element remain constant with time. The total quantity Q of the fluid within ΔV at any time is given by:

$$Q = \iiint_{\Delta V} \rho\,dV$$

The rate of increase of Q within ΔV is:

$$\frac{\partial Q}{\partial t} = \iiint_{\Delta V} \frac{\partial\rho}{\partial t}\,dV$$

since ΔV remains constant. Assuming there are no sources or sinks of the fluid within ΔV, ∂Q/∂t must equal the rate of influx of the fluid through the surface ΔS bounding ΔV. We then have:

$$\frac{\partial Q}{\partial t} = -\iint_{\Delta S} \rho\,\boldsymbol{\upsilon}\cdot\hat{n}\,dS = -\iint_{\Delta S} \rho\,\boldsymbol{\upsilon}\cdot d\mathbf{S}$$

where n̂ is the outward directed unit normal vector to ΔS. Using Gauss's theorem given in equation (3.5-2), we can write:

$$\frac{\partial Q}{\partial t} = -\iiint_{\Delta V} \nabla\cdot\left(\rho\,\boldsymbol{\upsilon}\right) dV$$

Therefore:

$$\iiint_{\Delta V} \frac{\partial\rho}{\partial t}\,dV = -\iiint_{\Delta V} \nabla\cdot\left(\rho\,\boldsymbol{\upsilon}\right) dV$$

or

$$\iiint_{\Delta V} \left[\frac{\partial\rho}{\partial t} + \nabla\cdot\left(\rho\,\boldsymbol{\upsilon}\right)\right] dV = 0$$

Since the dimensions of the volume element ΔV are arbitrary, we have finally:

$$\frac{\partial\rho}{\partial t} + \nabla\cdot\left(\rho\,\boldsymbol{\upsilon}\right) = 0$$

Example 3-23

Evaluate the integral:

$$\iint_S \mathbf{r}\cdot d\mathbf{S}$$

where S is a closed surface.

Solution:

Using Gauss's theorem we can write:

$$\iint_S \mathbf{r}\cdot d\mathbf{S} = \iiint_V \nabla\cdot\mathbf{r}\,dV$$

where V is the volume enclosed by S. From equation (3.1-28) we have ∇ • r = 3. We then have:

$$\iint_S \mathbf{r}\cdot d\mathbf{S} = 3\iiint_V dV = 3V$$
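Example 3-23 can be confirmed by brute force on a specific surface. A minimal sketch, assuming the hypothetical choice S = boundary of the unit cube [0, 1]³, for which 3V = 3:

```python
# Direct surface summation of ∮ r·dS over the six faces of the unit
# cube, to compare with the Gauss's-theorem value 3V = 3 from
# Example 3-23.  Midpoint quadrature on each face.
def flux_of_r(n=50):
    h = 1.0 / n
    total = 0.0
    for axis in range(3):                      # pair of faces per axis
        for side, nsign in ((1.0, 1.0), (0.0, -1.0)):
            others = [k for k in range(3) if k != axis]
            for i in range(n):
                for j in range(n):
                    p = [0.0, 0.0, 0.0]
                    p[axis] = side
                    p[others[0]] = (i + 0.5) * h
                    p[others[1]] = (j + 0.5) * h
                    # on this face, r·n̂ = ±(coordinate along axis)
                    total += nsign * p[axis] * h * h
    return total

val = flux_of_r()   # should equal 3·(volume of unit cube) = 3
```

Only the three faces away from the origin contribute (r • n̂ = 1 there); the faces through the origin contribute nothing, and the total is 3V exactly.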

3.5.2!  CURL THEOREM

A relation similar to Gauss's theorem given in equation (3.5-2), but involving a vector product, can be derived. If the vector function B is written in terms of the vector product of a vector function A and an arbitrary constant vector D :

    B = A × D     (3.5-4)

then equation (3.5-2) can be written:

    ∫∫∫_V ∇ • ( A × D ) dV = ∫∫_S ( A × D ) • n̂ dS     (3.5-5)

Using equation (3.3-7) and rearranging a triple product:

    ∫∫∫_V ( D • ∇ × A − A • ∇ × D ) dV = ∫∫_S ( D • n̂ × A ) dS     (3.5-6)

Since D is a constant vector, we can write:

    D • [ ∫∫∫_V ∇ × A dV − ∫∫_S n̂ × A dS ] = 0     (3.5-7)

Finally, since D is an arbitrary vector, we must have:

    ∫∫∫_V ∇ × A dV = ∫∫_S n̂ × A dS     (3.5-8)

or

    ∫∫∫_V ∇ × A dV = − ∫∫_S A × dS     (3.5-9)

Equation (3.5-9) is known as the curl theorem. We can use the curl theorem to obtain another integral definition of the curl:

    ∇ × A = lim_{ΔV→0} (1/ΔV) ∫∫_S n̂ × A dS     (3.5-10)

This equation can be compared to equation (3.2-4).

Using Gauss's theorem we can also show that:

    ∫∫∫_V B • ( ∇ × A ) dV = ∫∫∫_V A • ( ∇ × B ) dV + ∫∫_S ( A × B ) • dS     (3.5-11)

as is done in Example 3-24.

Example 3-24

Using Gauss's theorem, show that:

    ∫∫∫_V B • ( ∇ × A ) dV = ∫∫∫_V A • ( ∇ × B ) dV + ∫∫_S ( A × B ) • dS

Solution:

From equation (3.3-7) we have:

    ∇ • ( A × B ) = B • ∇ × A − A • ∇ × B

We then have:

    ∫∫∫_V B • ( ∇ × A ) dV = ∫∫∫_V A • ( ∇ × B ) dV + ∫∫∫_V ∇ • ( A × B ) dV

Using Gauss's theorem we can write:

    ∫∫∫_V B • ( ∇ × A ) dV = ∫∫∫_V A • ( ∇ × B ) dV + ∫∫_S ( A × B ) • dS
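The curl theorem can be verified numerically on a simple region. The sketch below (Python with NumPy; the test field and all names are our illustrative choices) evaluates both sides of equation (3.5-8) on the unit cube for a linear field, for which the midpoint rule is exact up to rounding:

```python
import numpy as np

# Minimal numerical check of the curl theorem  ∭_V ∇×A dV = ∬_S n̂×A dS
# on the unit cube, using the test field A = (-y, x, 0), for which
# ∇×A = (0, 0, 2) is constant.
n = 40
h = 1.0 / n
mid = (np.arange(n) + 0.5) * h               # midpoints of the face grid

def A(p):
    x, y, z = p
    return np.array([-y, x, 0.0])

volume_integral = np.array([0.0, 0.0, 2.0])  # (∇×A) times the cube volume V = 1

surface_integral = np.zeros(3)
for axis in range(3):                        # face pairs x=0/1, y=0/1, z=0/1
    for side, sign in ((0.0, -1.0), (1.0, 1.0)):
        normal = np.zeros(3)
        normal[axis] = sign                  # outward unit normal n̂
        for u in mid:
            for v in mid:
                p = np.zeros(3)
                p[axis] = side
                p[(axis + 1) % 3] = u
                p[(axis + 2) % 3] = v
                surface_integral += np.cross(normal, A(p)) * h * h

print(volume_integral, surface_integral)     # surface ≈ [0, 0, 2]
```

Swapping in any continuously differentiable field (with its curl integrated numerically as well) gives the same agreement, to quadrature accuracy.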

3.6!  GRADIENT THEOREM

If f is a scalar function that is continuously differentiable and D is an arbitrary constant vector, we can use equation (3.1-18) to write:

    ∇ • ( f D ) = D • ∇f + f ∇ • D     (3.6-1)

or, since D is a constant vector:

    ∇ • ( f D ) = D • ∇f     (3.6-2)

Applying Gauss's theorem given in equation (3.5-1) to the vector f D , we have:

    ∫∫∫_V ∇ • ( f D ) dV = ∫∫_S f D • dS     (3.6-3)

Using equation (3.6-2), we then obtain:

    ∫∫∫_V D • ∇f dV = ∫∫_S f D • dS     (3.6-4)

Since D is a constant vector, we can rewrite equation (3.6-4) as:

    D • [ ∫∫∫_V ∇f dV − ∫∫_S f dS ] = 0     (3.6-5)

Finally, since D is an arbitrary vector, we must have:

    ∫∫∫_V ∇f dV = ∫∫_S f dS     (3.6-6)

Equation (3.6-6) is known as the gradient theorem.

We will now let P be any given point in a volume V bounded by a closed surface S , and we will assume that the scalar function f and the vector function ∇f are defined and continuous for all points within this volume. Using the mean-value theorem for triple integrals, we know that there must exist a point P∗ within V such that:

    ∇f = (1/V) ∫∫∫_V ∇f dV     (at P∗ )     (3.6-7)

Using the gradient theorem given in equation (3.6-6), we can rewrite equation (3.6-7) as:

    ∇f = (1/V) ∫∫_S f dS     (at P∗ )     (3.6-8)

If V is now taken to be a small volume element ΔV that is allowed to approach zero, the points P and P∗ must become essentially the same point within ΔV , and so ∇f at P is then given by:

    ∇f = lim_{ΔV→0} (1/ΔV) ∫∫_S f dS     (3.6-9)

Equation (3.6-9) is the integral definition of the gradient of a scalar field. This definition of the gradient is independent of the choice of coordinate system.

Example 3-25

If S is a closed surface, show that:

    ∫∫_S dS⃗ = 0⃗

where dS⃗ = n̂ dS is the vector surface element.

Solution:

From the gradient theorem given in (3.6-6), we have:

    ∫∫_S f dS⃗ = ∫∫∫_V ∇f dV

Letting f = 1 , we obtain ∇f = 0⃗ , and so:

    ∫∫_S dS⃗ = ∫∫∫_V 0⃗ dV = 0⃗

Note the difference in the integrals:

    ∫∫_S dS⃗ = 0⃗         ∫∫_S dS = S

One integral is a zero vector and the other is neither a vector nor zero.

3.7!  STOKES'S THEOREM

If B is a continuously differentiable vector function and r is a position vector, we can write Stokes's theorem as:

    ∫∫_S ( ∇ × B ) • dS = ∮_C B • dr     (3.7-1)

where S is a surface enclosed by a simple closed curve C . From equation (2.4-6) we then have:

    ∫∫_S ( ∇ × B ) • n̂ dS = ∮_C B • T̂ ds     (3.7-2)

where the space curve C is traversed in the positive (counterclockwise) direction when n̂ is an outward pointing unit normal vector (right-hand rule). Using Stokes's theorem, a line integral over a closed curve can be changed into a double integral (vector surface integral) over the surface bounded by the closed curve, and conversely.

Stokes's theorem as given in equation (3.7-1) can be obtained directly from the integral definition of the curl given in equation (3.2-4):

    n̂ • ( ∇ × B ) = lim_{ΔS→0} (1/ΔS) ∮_C B • dr     (3.7-3)

We will let S be an area bounded by the simple closed curve C that encloses the point P at which ∇ × B is being evaluated. To derive Stokes's theorem, we begin by dividing the area S into many very small area elements ΔS (see Figure 3-5). The normal vectors n̂ for all the infinitesimal area elements will be directed outward from the surfaces ΔS .

Figure 3-5!  Portion of an area element S showing division into many small area elements ΔS . Circular arcs in each of the area elements indicate the integration directions for B • dr .

We can now sum together the contributions of B • dr for all small area elements ΔS as ΔS → 0 in the area S . Those contributions from inner adjacent line segments of the area element boundaries will then cancel because of opposing directions of the vector dr (see Figure 3-6). As a result, only the contributions from line segments that lie on the exterior perimeter of the area S (which is C ) will remain after integrating over the surface elements. Therefore we obtain Stokes's theorem equation (3.7-1) from the integral definition of the curl given in equation (3.7-3).

Figure 3-6!  Integration paths for the scalar product B • dr . All internal summations of B • dr cancel due to the opposing directions of dr , leaving only the contributions along the exterior boundary C .

3.7.1!  PHYSICAL INTERPRETATION OF STOKES'S THEOREM

The line integral in Stokes's theorem given in equation (3.7-1) is the circulation of the flux vector B = ρ υ around the boundary curve C as was shown in Section 3.2.1. The surface integral in equation (3.7-1) is obtained by summing the curl given in equation (3.7-3) over all area elements ΔS . This surface integral represents the sum of the rotations of the vector field across the area S enclosed by the curve C . All parts of this rotation will self-cancel except for components parallel to the boundary of the surface. This surface integral is then equal to the circulation around the closed curve C bounding S . Therefore Stokes's theorem states that the circulation of the flow around C is equal to the vorticity through the surface enclosed by C .

We see from Stokes's theorem given in equation (3.7-1) that, if the closed curve C remains fixed, the shape of the surface bounded by the closed curve can change without changing the value of the line integral. This is only true when the vector integrand is the curl of a vector field.
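A circulation-equals-vorticity check is easy to run numerically. The sketch below (Python, standard library only; the test field is our illustrative choice) compares the two sides of Stokes's theorem for a polynomial field over the unit disk, where the curve integral is taken counterclockwise around the unit circle:

```python
import math

# Numerical check of Stokes's theorem on the unit disk in the x-y plane
# for the test field B = (-y³, x³, 0), where (∇×B)·k̂ = 3(x² + y²) = 3r².
n = 2000

# Line integral ∮ B·dr around the unit circle (midpoint rule in θ)
line = 0.0
dth = 2.0 * math.pi / n
for i in range(n):
    th = (i + 0.5) * dth
    x, y = math.cos(th), math.sin(th)
    dx, dy = -math.sin(th) * dth, math.cos(th) * dth
    line += (-y ** 3) * dx + (x ** 3) * dy

# Surface integral ∬ 3r² · r dr dθ over the disk (midpoint rule in r;
# the θ integral is 2π since the integrand is independent of θ)
m = 500
dr = 1.0 / m
surf = 0.0
for j in range(m):
    r = (j + 0.5) * dr
    surf += 3.0 * r ** 3 * dr * 2.0 * math.pi

print(line, surf)   # both ≈ 3π/2 ≈ 4.7124
```

Replacing the flat disk by any other surface spanning the same circle leaves the line integral, and hence the surface integral, unchanged, which is the surface-independence property noted above.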

3.7.2!  OTHER FORMS OF STOKES'S THEOREM

Two other forms of Stokes's theorem can be derived. Letting B = f D in Stokes's theorem given in equation (3.7-1), where f is a scalar function and D is an arbitrary constant vector, we have:

    ∫∫_S ∇ × ( f D ) • dS = ∮_C f D • dr     (3.7-4)

Using the vector relation given in equation (3.2-19), we can rewrite equation (3.7-4) in the form:

    ∫∫_S ( ∇f × D + f ∇ × D ) • dS = ∮_C f D • dr     (3.7-5)

Since D is a constant vector, we have:

    ∫∫_S ( ∇f × D ) • dS = ∮_C f D • dr     (3.7-6)

or, changing the scalar triple product according to equation (1.18-6):

    − ∫∫_S D • ( ∇f × dS ) = ∮_C f D • dr     (3.7-7)

Again since D is a constant vector:

    − D • [ ∫∫_S ∇f × dS + ∮_C f dr ] = 0     (3.7-8)

Finally, since D is an arbitrary vector:

    ∫∫_S ∇f × dS = − ∮_C f dr     (3.7-9)

or

    ∫∫_S n̂ × ∇f dS = ∮_C f T̂ ds     (3.7-10)

which is another form of Stokes's theorem.

We can also let B = D × A in Stokes's theorem as given in equation (3.7-1), where D is again an arbitrary constant vector. We then have:

    ∫∫_S ∇ × ( D × A ) • dS = ∮_C ( D × A ) • dr     (3.7-11)

Changing the triple products:

    ∫∫_S n̂ • ∇ × ( D × A ) dS = ∮_C D • ( A × dr )     (3.7-12)

or

    ∫∫_S ( n̂ × ∇ ) • ( D × A ) dS = ∮_C D • ( A × dr )     (3.7-13)

Changing a triple product again:

    ∫∫_S D • ( n̂ × ∇ ) × A dS = − ∮_C D • ( A × dr )     (3.7-14)

Since D is a constant vector:

    D • [ ∫∫_S ( n̂ × ∇ ) × A dS + ∮_C A × dr ] = 0     (3.7-15)

Finally, since D is an arbitrary vector:

    ∫∫_S ( n̂ × ∇ ) × A dS = − ∮_C A × dr     (3.7-16)

or

    ∫∫_S ( n̂ × ∇ ) × A dS = ∮_C T̂ × A ds     (3.7-17)

which again is a form of Stokes's theorem.

From Stokes's theorem we can also obtain:

    ∫∫_S f ( ∇ × A ) • dS = ∫∫_S ( A × ∇f ) • dS + ∮_C f A • dr     (3.7-18)

as shown in Example 3-28. Using this equation to write:

    ∫∫_S g ( ∇ × A ) • dS = ∫∫_S ( A × ∇g ) • dS + ∮_C g A • dr     (3.7-19)

and letting A = ∇f :

    ∫∫_S g ( ∇ × ∇f ) • dS = ∫∫_S ( ∇f × ∇g ) • dS + ∮_C g ∇f • dr     (3.7-20)

Since ∇ × ∇f = 0⃗ , from equation (2.12-7) we have finally:

    ∫∫_S ( ∇f × ∇g ) • dS = − ∮_C g ∇f • dr = − ∮_C g df     (3.7-21)

Example 3-26

Using Stokes's theorem, show that:

    ∇ × ∇f = 0⃗

Solution:

Using Stokes's theorem as given in equation (3.7-1) with B = ∇f , we have:

    ∫∫_S ( ∇ × ∇f ) • dS = ∮_C ∇f • dr

where r is a position vector and C is a simple closed curve. From equation (2.12-7) we then obtain:

    ∫∫_S ( ∇ × ∇f ) • dS = ∮_C df = 0

Since this equation is valid for any surface S enclosed by a simple curve C , we must have:

    ∇ × ∇f = 0⃗

Example 3-27

Using Stokes's theorem, show that:

    ∮_C dr = 0⃗

where r is a position vector and C is a simple closed curve.

Solution:

Using Stokes's theorem as given in equation (3.7-9):

    ∮_C f dr = − ∫∫_S ∇f × dS

Letting f = 1 , we obtain ∇f = 0⃗ , and so:

    ∮_C dr = − ∫∫_S 0⃗ × dS = 0⃗

Example 3-28

Using Stokes's theorem, show that:

    ∫∫_S f ( ∇ × A ) • dS = ∫∫_S ( A × ∇f ) • dS + ∮_C f A • dr

Solution:

From equation (3.2-19) we have:

    ∇ × ( f A ) = ∇f × A + f ∇ × A

We then have:

    ∫∫_S ∇ × ( f A ) • dS = ∫∫_S ( ∇f × A ) • dS + ∫∫_S ( f ∇ × A ) • dS

or

    ∫∫_S ∇ × ( f A ) • dS = − ∫∫_S ( A × ∇f ) • dS + ∫∫_S f ( ∇ × A ) • dS

Using Stokes's theorem we can write:

    ∮_C f A • dr = ∫∫_S ∇ × ( f A ) • dS

Therefore:

    ∫∫_S f ( ∇ × A ) • dS = ∫∫_S ( A × ∇f ) • dS + ∮_C f A • dr

3.8!  GREEN'S THEOREMS

There are four different vector integral theorems that are collectively known as Green's theorems. These theorems can all be derived using either Stokes's theorem or Gauss's theorem.

3.8.1!  GREEN'S THEOREM IN THE PLANE

Stokes's theorem can be used to derive Green's theorem in the plane. We will restrict ourselves to a region R in the x − y plane and let the position vector r be given by:

    r = x î + y ĵ     (3.8-1)

and the vector function B be expressed as:

    B = M î + N ĵ     (3.8-2)

where M and N are continuously differentiable functions. If S is an area in the region R , then we have from Stokes's theorem given in equation (3.7-1):

    ∫∫_S ( ∇ × B ) • k̂ dS = ∮_C B • dr     (3.8-3)

Equation (3.8-3) is Green's theorem in the plane expressed in vector notation for the rectangular coordinate system. We can rewrite this equation as:

    ∫∫_S ( ∇ × B ) • k̂ dS = ∮_C ( M î + N ĵ ) • ( dx î + dy ĵ )     (3.8-4)

where the curl of B is given by:

    ∇ × B = − (∂N/∂z) î + (∂M/∂z) ĵ + [ ∂N/∂x − ∂M/∂y ] k̂     (3.8-5)

Therefore we have:

    ∫∫_S [ ∂N/∂x − ∂M/∂y ] dS = ∮_C ( M dx + N dy )     (3.8-6)

In the x − y plane we can write:

    dS = dx dy     (3.8-7)

and so we have:

    ∫∫_S [ ∂N/∂x − ∂M/∂y ] dx dy = ∮_C ( M dx + N dy )     (3.8-8)

Equation (3.8-8) is known as Green's theorem in the plane or Green's lemma. Using Green's theorem in the plane, a line integral around a closed path in a plane can be changed into a double integral over the enclosed area, and conversely.

Example 3-29

Show that:

    ∫∫_S [ (∂f/∂x)(∂g/∂y) − (∂f/∂y)(∂g/∂x) ] dx dy = ∮_C f dg

Solution:

Letting

    M = f ∂g/∂x         N = f ∂g/∂y

and using Green's theorem in the plane, we have:

    ∫∫_S [ ∂/∂x ( f ∂g/∂y ) − ∂/∂y ( f ∂g/∂x ) ] dx dy = ∮_C f [ (∂g/∂x) dx + (∂g/∂y) dy ]

or

    ∫∫_S [ (∂f/∂x)(∂g/∂y) + f ∂²g/∂x∂y − (∂f/∂y)(∂g/∂x) − f ∂²g/∂y∂x ] dx dy = ∮_C f [ (∂g/∂x) dx + (∂g/∂y) dy ]

and so:

    ∫∫_S [ (∂f/∂x)(∂g/∂y) − (∂f/∂y)(∂g/∂x) ] dx dy = ∮_C f dg

If the contour integral in equation (3.8-3) is path independent, then we must have:

    ∮_C B • dr = ∮_C ( M dx + N dy ) = 0     (3.8-9)

From Green's theorem in the plane given in equation (3.8-8), we see that we then must have:

    ∫∫_S [ ∂N/∂x − ∂M/∂y ] dx dy = 0     (3.8-10)

Equation (3.8-10) will be true in general only if we have:

    ∂N/∂x = ∂M/∂y     (3.8-11)

(see Section 3.12).

3.8.2!  GREEN'S FIRST THEOREM

By using Gauss's theorem, it is possible to derive additional Green's theorems. Letting

    B = p ∇q     (3.8-12)

where p and q are continuously differentiable scalar functions, we can obtain the following integral relation from Gauss's theorem given in equation (3.5-2):

    ∫∫∫_V ∇ • ( p ∇q ) dV = ∫∫_S ( p ∇q ) • n̂ dS     (3.8-13)

From equation (3.1-18), we have:

    ∇ • ( p ∇q ) = p ∇ • ∇q + ∇p • ∇q = p ∇²q + ∇p • ∇q     (3.8-14)

From equations (3.8-14) and (3.8-13), we then have:

    ∫∫∫_V ( p ∇²q + ∇p • ∇q ) dV = ∫∫_S p ∇q • dS     (3.8-15)

which is Green's first theorem.
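Green's first theorem can be checked numerically on a simple region. The sketch below (Python, standard library only; the potentials p and q are our illustrative choices, picked so both sides have the exact value 2/3) evaluates the volume side of equation (3.8-15) by a midpoint-rule grid on the unit cube and the surface side on the one cube face where ∇q • n̂ is nonzero:

```python
# Numerical check of Green's first theorem
#   ∭_V (p∇²q + ∇p•∇q) dV = ∬_S p ∇q•n̂ dS
# on the unit cube, with the illustrative choices p = x², q = y².
n = 60
h = 1.0 / n
mid = [(i + 0.5) * h for i in range(n)]

# Volume side:  p∇²q + ∇p•∇q = x²·2 + (2x, 0, 0)•(0, 2y, 0) = 2x²
lhs = sum(2.0 * x * x * h ** 3 for x in mid for y in mid for z in mid)

# Surface side: ∇q = (0, 2y, 0), so only the face y = 1 contributes;
# there the outward normal is +ĵ and ∇q•n̂ = 2.
rhs = sum(x * x * 2.0 * h * h for x in mid for z in mid)

print(lhs, rhs)   # both ≈ 2/3
```

The same grid approach extends directly to Green's second theorem by subtracting the p↔q interchanged computation.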

3.8.3!  GREEN'S SECOND THEOREM

Using equation (2.12-15), we can write:

    ∇q • n̂ = dq/dn     (3.8-16)

which is the directional derivative of q in the direction of n̂ . From equations (3.8-15), (3.8-13), and (3.8-16), we then have:

    ∫∫∫_V ( p ∇²q + ∇p • ∇q ) dV = ∫∫_S p (dq/dn) dS     (3.8-17)

By interchanging p and q in equation (3.8-12), we can also obtain:

    ∫∫∫_V ( q ∇²p + ∇q • ∇p ) dV = ∫∫_S q (dp/dn) dS     (3.8-18)

Subtracting equation (3.8-18) from equation (3.8-17), we have finally:

    ∫∫∫_V ( p ∇²q − q ∇²p ) dV = ∫∫_S [ p (dq/dn) − q (dp/dn) ] dS     (3.8-19)

Equation (3.8-19) is known as Green's second theorem.

In equation (3.8-19), V is a closed volume bounded by the surface S , and both p and q are scalar functions with continuous second derivatives within V . The derivative d/dn is with respect to the normal to dS . Green's second theorem can also be written in the form:

    ∫∫∫_V ( p ∇²q − q ∇²p ) dV = ∫∫_S ( p ∇q − q ∇p ) • n̂ dS     (3.8-20)

where we have used equation (3.8-16).

3.8.4!  GREEN'S THIRD THEOREM

It is possible to derive an equation similar to Green's second theorem given in equation (3.8-19) with the vector functions A and B replacing the scalar functions p and q . This equation, known as Green's third theorem (or Green's third identity), is given by:

    ∫∫∫_V ( A • ∇²B − B • ∇²A ) dV = ∫∫_S [ A × ( ∇ × B ) + A ( ∇ • B ) ] • dS − ∫∫_S [ B × ( ∇ × A ) + B ( ∇ • A ) ] • dS     (3.8-21)

Green's third theorem can be obtained by first applying Gauss's theorem given in equation (3.5-1) to the vectors A ( ∇ • B ) and A × ( ∇ × B ) :

    ∫∫∫_V ∇ • [ A ( ∇ • B ) ] dV = ∫∫_S A ( ∇ • B ) • dS     (3.8-22)

and

    ∫∫∫_V ∇ • [ A × ( ∇ × B ) ] dV = ∫∫_S [ A × ( ∇ × B ) ] • dS     (3.8-23)

We next interchange the roles of A and B in equations (3.8-22) and (3.8-23) and then subtract the new equations from the original equations (equations before interchange):

    ∫∫∫_V ∇ • [ A ( ∇ • B ) − B ( ∇ • A ) ] dV = ∫∫_S [ A ( ∇ • B ) − B ( ∇ • A ) ] • dS     (3.8-24)

and

    ∫∫∫_V ∇ • [ A × ( ∇ × B ) − B × ( ∇ × A ) ] dV = ∫∫_S [ A × ( ∇ × B ) − B × ( ∇ × A ) ] • dS     (3.8-25)

Using the vector identity given in equation (3.1-18) with ∇ • B replacing the scalar f , we have:

    ∇ • [ A ( ∇ • B ) ] = A • ∇ ( ∇ • B ) + ( ∇ • B ) ( ∇ • A )     (3.8-26)

Using the vector identity given in equation (3.3-7) with ∇ × B replacing the vector B , we have:

    ∇ • [ A × ( ∇ × B ) ] = ( ∇ × B ) • ( ∇ × A ) − A • ∇ × ( ∇ × B )     (3.8-27)

The vectors A and B are next interchanged in equations (3.8-26) and (3.8-27) to obtain two additional equations. Equations (3.8-26) and (3.8-27) (before and after interchange of A and B ) can then be used to replace terms in equations (3.8-24) and (3.8-25), giving the following two relations:

    ∫∫∫_V [ A • ∇ ( ∇ • B ) − B • ∇ ( ∇ • A ) ] dV = ∫∫_S [ A ( ∇ • B ) − B ( ∇ • A ) ] • dS     (3.8-28)

and

    ∫∫∫_V [ B • ∇ × ( ∇ × A ) − A • ∇ × ( ∇ × B ) ] dV = ∫∫_S [ A × ( ∇ × B ) − B × ( ∇ × A ) ] • dS     (3.8-29)

Equation (3.8-29) can be rewritten using the vector identity that is given in equation (3.4-14):

    ∫∫∫_V [ A • ∇²B − B • ∇²A − A • ∇ ( ∇ • B ) + B • ∇ ( ∇ • A ) ] dV = ∫∫_S [ A × ( ∇ × B ) − B × ( ∇ × A ) ] • dS     (3.8-30)

Adding equations (3.8-28) and (3.8-30), we have Green's third theorem as given in equation (3.8-21).

Green's third theorem can also be written in the form:

    ∫∫∫_V [ A • ∇ × ( ∇ × B ) − B • ∇ × ( ∇ × A ) ] dV = ∫∫_S [ B × ( ∇ × A ) − A × ( ∇ × B ) ] • dS     (3.8-31)

Example 3-30

Show that:

    ∫∫∫_V [ A • ∇ × ( ∇ × B ) − B • ∇ × ( ∇ × A ) ] dV = ∫∫_S [ B × ( ∇ × A ) − A × ( ∇ × B ) ] • dS

Solution:

Green's third theorem as given in equation (3.8-21) is:

    ∫∫∫_V ( A • ∇²B − B • ∇²A ) dV = ∫∫_S [ A × ( ∇ × B ) + A ( ∇ • B ) ] • dS − ∫∫_S [ B × ( ∇ × A ) + B ( ∇ • A ) ] • dS

Using equation (3.4-12), we can rewrite the first volume integral above:

    ∫∫∫_V ( A • ∇²B − B • ∇²A ) dV = ∫∫∫_V [ A • ∇ ( ∇ • B ) − B • ∇ ( ∇ • A ) ] dV − ∫∫∫_V [ A • ∇ × ( ∇ × B ) − B • ∇ × ( ∇ × A ) ] dV

Since ∇ • A and ∇ • B are scalars, we have from equation (3.1-18):

    ∇ • [ A ( ∇ • B ) ] = A • ∇ ( ∇ • B ) + ( ∇ • B ) ( ∇ • A )

    ∇ • [ B ( ∇ • A ) ] = B • ∇ ( ∇ • A ) + ( ∇ • A ) ( ∇ • B )

Therefore:

    A • ∇ ( ∇ • B ) − B • ∇ ( ∇ • A ) = ∇ • [ A ( ∇ • B ) ] − ∇ • [ B ( ∇ • A ) ]

From Gauss's theorem we then have:

    ∫∫∫_V [ A • ∇ ( ∇ • B ) − B • ∇ ( ∇ • A ) ] dV = ∫∫_S [ A ( ∇ • B ) − B ( ∇ • A ) ] • dS

and so we obtain:

    ∫∫∫_V ( A • ∇²B − B • ∇²A ) dV = ∫∫_S [ A ( ∇ • B ) − B ( ∇ • A ) ] • dS − ∫∫∫_V [ A • ∇ × ( ∇ × B ) − B • ∇ × ( ∇ × A ) ] dV

Substituting this into Green's third theorem:

    ∫∫_S [ A ( ∇ • B ) − B ( ∇ • A ) ] • dS − ∫∫∫_V [ A • ∇ × ( ∇ × B ) − B • ∇ × ( ∇ × A ) ] dV
        = ∫∫_S [ A × ( ∇ × B ) − B × ( ∇ × A ) ] • dS + ∫∫_S [ A ( ∇ • B ) − B ( ∇ • A ) ] • dS

The first and last integrals cancel and we have:

    ∫∫∫_V [ A • ∇ × ( ∇ × B ) − B • ∇ × ( ∇ × A ) ] dV = ∫∫_S [ B × ( ∇ × A ) − A × ( ∇ × B ) ] • dS
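The pointwise identity ∇×(∇×A) = ∇(∇•A) − ∇²A, used above to pass between equations (3.8-29) and (3.8-30), can be spot-checked numerically. The sketch below (Python with NumPy; the sample field and point are our illustrative choices) builds every operator from central differences:

```python
import numpy as np

# Finite-difference spot check of the identity ∇×(∇×A) = ∇(∇•A) − ∇²A
# for the illustrative field A = (x²y, y²z, z²x).
h = 1e-4
p0 = np.array([0.3, -0.7, 1.1])

def A(p):
    x, y, z = p
    return np.array([x * x * y, y * y * z, z * z * x])

def partial(F, p, j):
    e = np.zeros(3); e[j] = h
    return (F(p + e) - F(p - e)) / (2 * h)       # central difference in x_j

def curl(F, p):
    d = [partial(F, p, j) for j in range(3)]     # d[j] = ∂F/∂x_j
    return np.array([d[1][2] - d[2][1], d[2][0] - d[0][2], d[0][1] - d[1][0]])

def div(F, p):
    return sum(partial(F, p, j)[j] for j in range(3))

def laplacian(F, p):
    out = np.zeros(3)
    for j in range(3):
        e = np.zeros(3); e[j] = h
        out += (F(p + e) - 2.0 * F(p) + F(p - e)) / h ** 2
    return out

lhs = curl(lambda q: curl(A, q), p0)             # ∇×(∇×A)
rhs = np.array([partial(lambda q: div(A, q), p0, j) for j in range(3)]) - laplacian(A, p0)
print(lhs, rhs)   # both ≈ (2z, 2x, 2y) = (2.2, 0.6, -1.4)
```

For this polynomial field the central differences are exact up to rounding, so the two sides agree to roughly single-precision accuracy.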

3.9!  VECTOR FIELD CLASSIFICATIONS

Vector fields are generally classified into two primary categories: solenoidal and irrotational. Based upon these two categories, vector fields can be divided into four types. Before discussing these four types, the characteristics of solenoidal fields and irrotational fields will be presented. For the following discussions, we will assume that the vector fields can be described by vector functions over the region of interest. We will also assume that the vector functions being considered are continuous and are continuously differentiable.

3.9.1!  SOLENOIDAL FIELDS

A vector field described by a vector function B is solenoidal if we have:

    ∇ • B = 0     (3.9-1)

Equation (3.9-1) indicates that the flow entering a region of space is exactly balanced by the flow leaving it. Therefore, no sources or sinks of B exist within the region. Hence we have the name solenoidal, which is from the Greek and means tube (as in fluid flowing through a tube).

Since ∇ • ( ∇ × A ) = 0 as given in equation (3.3-5), we have for a solenoidal field:

    B = ∇ × A     (3.9-2)

where A is termed a vector potential. Therefore, a vector function B describing a solenoidal field can always be expressed as the curl of another vector function A . The vector A is not unique, however, since A can always be replaced by A + ∇g (where g is any scalar) without changing equation (3.9-2). This follows from the relation ∇ × ∇g = 0⃗ as obtained from equation (3.3-2). We then have:

    B = ∇ × A = ∇ × ( A + ∇g )     (3.9-3)

From Gauss's theorem given in equation (3.5-1) together with equation (3.9-1), we also have for a solenoidal field:

    ∫∫∫_V ∇ • B dV = ∫∫_S B • dS = 0     (3.9-4)

3.9.2!  IRROTATIONAL FIELDS

A vector field described by a vector function B is irrotational if we have:

    ∇ × B = 0⃗     (3.9-5)

Using equations (3.9-5) and ∇ × ∇f = 0⃗ , we can write:

    B = ∇f     (3.9-6)

where f is a scalar potential that is determined to within an arbitrary constant. Therefore, any vector field that can be represented as the gradient of a scalar potential will be irrotational, and so all potential fields are irrotational. An irrotational vector field is called lamellar. Using Stokes's theorem given in equation (3.7-1), we also have for an irrotational field:

    ∫∫_S ( ∇ × B ) • dS = ∮_C B • dr = 0     (3.9-7)

If a vector field described by a vector function B is irrotational, the integral of B along a curve C in the field will be independent of path and B will describe a conservative vector field (see Section 3.12). The scalar product B • dr is then an exact differential. It is often useful, therefore, to determine if a vector function is irrotational before performing an integration of the vector along a curve. The characteristic properties of solenoidal and irrotational vector fields are summarized in Table 3-1.

Table 3-1!  Properties of solenoidal and irrotational vector fields. A is a vector potential and f is a scalar potential.

3.9.3!  VECTOR FIELD TYPES

There are four types of vector fields based on the two primary field categories of solenoidal and irrotational. These four field types are defined in Table 3-2.

Table 3-2!  Vector field types for a vector field B .

A type 1 vector field is irrotational and solenoidal. Therefore, from equations (3.9-1) and (3.9-6), we have:

    ∇ • B = ∇ • ∇f = ∇²f = 0     (3.9-8)

where f is a scalar potential. Vector fields of type 1 are encountered in situations where Laplace's equation pertains. Such vector fields are said to be Laplacian fields. An example of a type 1 vector field is the electric field that exists around a stationary point charge in space.

A type 2 vector field is irrotational but not solenoidal. Therefore, from equation (3.9-6) we have:

    ∇ • B = ∇ • ∇f = ∇²f ≠ 0     (3.9-9)

Vector fields of type 2 are encountered in situations where Poisson's equation pertains. An example of a type 2 vector field is the gravitational field that exists due to a material body.

A type 3 vector field is solenoidal, but not irrotational. A vector field having a nonzero curl is called a vortex field. From equation (3.9-2), we have:

    B = ∇ × A     (3.9-10)

where A is a vector potential. We then can write:

    ∇ × B = ∇ × ( ∇ × A ) ≠ 0⃗     (3.9-11)

This field does not have a scalar potential. From equation (3.4-14) we have:

    ∇ × ( ∇ × A ) = ∇ ( ∇ • A ) − ∇²A ≠ 0⃗     (3.9-12)

By requiring that:

    ∇ • A = 0     (3.9-13)

in equation (3.9-12), we have:

    ∇²A ≠ 0⃗     (3.9-14)

This equation is similar to the Poisson's equation (3.9-9), but has the vector potential A in place of the scalar potential f . From equation (3.9-3) we see that it is always possible to select a scalar g to ensure that equation (3.9-13) is true. Fields of type 3 are encountered, for example, in electromagnetics.

A type 4 vector field is not irrotational and is not solenoidal. This is a general vector field such as is encountered, for example, in elastodynamics.

Example 3-31

If A and B are both irrotational vector fields, show that A × B is a solenoidal vector field.

Solution:

Since A and B are irrotational vector fields, we have:

    ∇ × A = 0⃗         ∇ × B = 0⃗

From equation (3.3-7) we then have:

    ∇ • ( A × B ) = B • ( ∇ × A ) − A • ( ∇ × B ) = B • 0⃗ − A • 0⃗ = 0

and so A × B is a solenoidal vector field.
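The four-type classification can be made concrete by computing ∇ • B and ∇ × B numerically for representative fields. The sketch below (Python with NumPy; all sample fields and the test point are our illustrative choices, not taken from the text) estimates both quantities by central differences:

```python
import numpy as np

# Classify sample fields into the four types of Table 3-2 by estimating
# ∇•B and ∇×B with central differences at a test point.
h = 1e-5
p0 = np.array([0.4, -0.2, 0.7])

def div_curl(B, p):
    J = np.zeros((3, 3))                       # J[i, j] = ∂B_i/∂x_j
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (B(p + e) - B(p - e)) / (2 * h)
    d = J[0, 0] + J[1, 1] + J[2, 2]
    c = np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])
    return d, c

fields = {
    "type 1 (point-charge field r/|r|^3)": lambda p: p / np.dot(p, p) ** 1.5,
    "type 2 (gradient field, f = x^2+y^2+z^2)": lambda p: 2.0 * p,
    "type 3 (vortex field (0, -2x, 0))": lambda p: np.array([0.0, -2.0 * p[0], 0.0]),
    "type 4 (general field (xy, y, zx))": lambda p: np.array([p[0] * p[1], p[1], p[2] * p[0]]),
}
results = {}
for name, B in fields.items():
    d, c = div_curl(B, p0)
    results[name] = (d, np.linalg.norm(c))
    print(f"{name}: div = {d:.4f}, |curl| = {np.linalg.norm(c):.4f}")
```

The type 1 field shows vanishing divergence and curl (away from the origin), type 2 a nonzero divergence only, type 3 a nonzero curl only, and type 4 both.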

3.10!  HELMHOLTZ'S PARTITION THEOREM

Let us consider a general vector field described by a continuously differentiable vector function B for which:

    ∇ × B ≠ 0⃗     (3.10-1)

    ∇ • B ≠ 0     (3.10-2)

This vector field is, therefore, a field of type 4. Equations (3.10-1) and (3.10-2) can be written as:

    ∇ × B = ± 4π ψ     (3.10-3)

    ∇ • B = ± 4π σ     (3.10-4)

where the scalar σ is a source (plus sign) or a sink (minus sign) of the vector field and ψ is a vector term representing a vortex.

Helmholtz's partition theorem states that a vector field B can be represented as the sum of an irrotational vector field B_I and a solenoidal vector field B_S such that:

    B = B_I + B_S     (3.10-5)

where

    ∇ × B_I = 0⃗     (3.10-6)

    ∇ • B_S = 0     (3.10-7)

providing that this curl and divergence exist everywhere in the field B , and that they vanish sufficiently rapidly at infinity. Of course if B is irrotational, then B_S = 0⃗ , and if B is solenoidal, then B_I = 0⃗ .

From equations (3.10-6) and (3.10-7), we have:

    B_I = ∇ϕ     (3.10-8)

    B_S = ∇ × A     (3.10-9)

so that:

    B = ∇ϕ + ∇ × A = B_I + B_S     (3.10-10)

where ϕ is a scalar potential and A is a vector potential. From equations (3.10-4) and (3.10-10), we have:

    ∇ • B = ∇ • ∇ϕ + ∇ • ( ∇ × A ) = ± 4π σ     (3.10-11)

or, using the vector identity given in equation (3.3-5):

    ∇²ϕ = ± 4π σ     (3.10-12)

Similarly, from equations (3.10-3) and (3.10-10), we have:

    ∇ × B = ∇ × ∇ϕ + ∇ × ( ∇ × A ) = ± 4π ψ     (3.10-13)

or, using the vector identities given in equations (3.3-2) and (3.4-14):

    ∇ ( ∇ • A ) − ∇²A = ± 4π ψ     (3.10-14)

As we did in equation (3.9-13), we can require that:

    ∇ • A = 0     (3.10-15)

and so equation (3.10-14) becomes:

    ∇²A = ∓ 4π ψ     (3.10-16)

Equations (3.10-12) and (3.10-16) are Poisson's equations. A technique for solving Poisson's equations to obtain the potentials ϕ and A is presented in Appendix B using some equations developed in later chapters. The potentials can then be used in equation (3.10-10) to determine the vector function B . In Appendix B we also show that the vector field is uniquely determined in a volume V if the vector field is specified over the surface S enclosing V .

In summary, therefore, any vector field that vanishes at infinity generally can be written as the sum of an irrotational vector field and a solenoidal vector field. The vector field is then expressed in terms of its scalar sources or sinks and its vortices. If the potentials ϕ and A can be determined, it is possible to obtain the vector function B using equation (3.10-10).
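The defining properties of the two parts of the partition can be confirmed numerically once potentials are chosen. The sketch below (Python with NumPy; the potentials ϕ and A are arbitrary illustrative choices) builds B_I = ∇ϕ and B_S = ∇ × A by central differences and checks that the first is irrotational and the second solenoidal:

```python
import numpy as np

# Sketch of the Helmholtz split B = ∇ϕ + ∇×A, checked by central
# differences: the ∇ϕ part should be irrotational and the ∇×A part
# solenoidal.
h = 1e-5
p0 = np.array([0.6, -0.3, 0.9])

def grad(f, p):
    return np.array([(f(p + e) - f(p - e)) / (2 * h) for e in np.eye(3) * h])

def jac(F, p):
    # columns are ∂F/∂x_j, so J[i, j] = ∂F_i/∂x_j
    return np.column_stack([(F(p + e) - F(p - e)) / (2 * h) for e in np.eye(3) * h])

def curl(F, p):
    J = jac(F, p)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

def div(F, p):
    return np.trace(jac(F, p))

phi = lambda p: p[0] ** 2 * p[1]                      # scalar potential ϕ = x²y
A = lambda p: np.array([0.0, 0.0, p[0] ** 2 * p[1]])  # vector potential (0, 0, x²y)

B_I = lambda p: grad(phi, p)   # irrotational part ∇ϕ
B_S = lambda p: curl(A, p)     # solenoidal part ∇×A

print(curl(B_I, p0), div(B_S, p0))   # ≈ (0, 0, 0) and ≈ 0
```

Recovering ϕ and A from a given B requires solving the Poisson equations (3.10-12) and (3.10-16); the check above only verifies the structural properties of the two parts.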

3.11! !

OTHER POTENTIALS

! Using Helmholtz’ partition theorem, a vector function B

describing a vector field that vanishes at infinity can be written ! in terms of a scalar potential ϕ and a vector potential A as in

!

! ! ! ! B = ∇ϕ + ∇ × A !

(3.11-1)

!

! ! ∇• A = 0 !

(3.11-2)

In some cases, however, it is useful to write equation (3.11-1) in the form:

! ! ! ! ! ! ! B = ∇ϕ + ∇ × χ + ∇ × ∇ × ζ ! (3.11-3) ! ! where χ and ζ are two vector potentials. To derive equation ! (3.11-3) from equation (3.11-1), let A be the sum of two vectors: ! ! ! ! A = χ + χ∗ ! (3.11-4) ! If we then use Helmholtz’ partition theorem to write χ ∗ in !

terms of potentials, we have: ! ! ! ! ! χ ∗ = ∇ξ + ∇ × ζ ! !

! ! ∇ •ζ = 0 !

(3.11-5) (3.11-6)

where ξ is a scalar potential. From equations (3.11-1), (3.11-4), and (3.11-5), we can write: ! ! ! ! ! ! ! ! ! B = ∇ϕ + ∇ × χ + ∇ × ∇ξ + ∇ × ∇ × ζ ! !

(3.11-7)

or, using the vector identity given in equation (3.3-2): ! ! ! ! ! ! ! ! B = ∇ϕ + ∇ × χ + ∇ × ∇ × ζ ! (3.11-8) which is equation (3.11-3).

equations (3.10-10) and (3.10-15): 154

!

! A vector field that can be described by a vector function B

satisfying the relation: ! ! ! ! B× ∇× B = 0! !

(

)

2

Solution: Using equations (3.3-2) and (3.3-25), we can write: ! ! ! ! ! ! ! ! ! ! ∇ × B = ∇ × ∇ϕ + ∇ × λ ∇η = ∇λ × ∇η = µ B

(

)

Therefore:            ! ∇ϕ • ∇ × B = ∇ϕ • ∇λ × ∇η = ∇ϕ • µ B = µ ∇ϕ • ∇ϕ + λ ∇η

(

(3.11-11)

then, as is shown in Example 3-32, the abnormality factor µ is given by the equation: ! ! ! ∇ϕ • ∇λ × ∇η ! ! µ= ! 2 (3.11-12) 2 2 ! ∇ϕ − ( λ ) ∇η Example 3-32

!

(3.11-9)

is called a Beltrami field. For such a field, we see from equation ! ! ! (3.11-9) that B and ∇ × B must be collinear. Therefore we have: ! ! ! ! ∇× B = µB! (3.11-10) ! where µ is known as the abnormality factor. If we write B using potentials in the form:    B = ∇ϕ + λ ∇η ! !

! ! ! ∇ϕ • ∇λ × ∇η µ= ! 2 2 ! ∇ϕ − ( λ ) ∇η

or !

! ! ! ! ∇ϕ • ∇λ × ∇η = µ ⎡⎢ ∇ϕ ⎣

!

! = µ ⎡⎢ ∇ϕ ⎣ !

! = µ ⎡⎢ ∇ϕ ⎣

!

! ! + λ ∇ϕ • ∇η ⎤⎥ ⎦

2

2

2

! ! ! + λ B − λ ∇η • ∇η ⎤⎥ ⎦

(

We also have: ! ! ! ! ! ! µ B • ∇η = ∇λ × ∇η • ∇η

satisfies the relation: ! ! ! ! ∇× B= µB

or using equation (1.18-11): ! ! ! B • ∇η = 0

(

)

! ! 2 2 ! + λ B • ∇η − ( λ ) ∇η ⎤⎥ ⎦

! Show that if a vector field B described by: ! ! ! B = ∇ϕ + λ ∇η !

then we must have:

)

)

We then have: 155

! ! ! ! ∇ϕ • ∇λ × ∇η = µ ⎡⎢ ∇ϕ ⎣

!

2

2 2 ! − ( λ ) ∇η ⎤⎥ ⎦

!



3.12  CONSERVATIVE VECTOR FIELDS

We will now consider the integral:

    ∫_{P₁}^{P₂} ∇f • dr    (3.12-1)

where f is a scalar function, P₁ and P₂ are two points on a space curve C, r is the position vector that defines C, and ∇f is the gradient of f. If ∇f exists in a region, it defines a vector field for the region. Letting

    B = ∇f    (3.12-2)

then the scalar f is, by definition, a scalar potential and B is a vector potential field. Therefore equation (3.12-1) can be written as:

    ∫_{P₁}^{P₂} ∇f • dr = ∫_{P₁}^{P₂} B • dr    (3.12-3)

The integrals in equation (3.12-3) are line integrals. Using equation (2.12-7), we have:

    ∫_{P₁}^{P₂} ∇f • dr = ∫_{P₁}^{P₂} df = f_{P₂} − f_{P₁}    (3.12-4)

The integrand ∇f • dr is the exact differential df. Therefore a line integral of a vector potential field is independent of path and is a function only of the end points P₁ and P₂. Both integrals in equation (3.12-3) are then independent of path. Any vector field defined by the gradient of a scalar potential is called a conservative vector field. The line integral of any vector potential field will always be independent of path. An example of a conservative vector field is a gravitational field.

If the integration in equation (3.12-1) is taken around a closed curve, we will have P₁ = P₂ and so:

    ∫_{P₁}^{P₁} ∇f • dr = ∫_{P₁}^{P₁} df = f(P₁) − f(P₁) = 0    (3.12-5)

or

    ∮_C ∇f • dr = 0    (3.12-6)

We see, therefore, that around a closed path the line integral of the gradient of a scalar potential will always be zero (assuming that the potential exists on the path and is single valued).

We will now consider a vector function B representing a conservative vector field defined by the gradient of a scalar potential f. We will write B in the form:

    B(x, y, z) = M(x, y, z) î + N(x, y, z) ĵ + P(x, y, z) k̂    (3.12-7)

From equation (3.12-2) we must then have:

    ∇f = M(x, y, z) î + N(x, y, z) ĵ + P(x, y, z) k̂    (3.12-8)

or

    ∂f/∂x = M(x, y, z),    ∂f/∂y = N(x, y, z),    ∂f/∂z = P(x, y, z)    (3.12-9)

From differential calculus and equation (3.12-9) we also have:

    df = (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz = M dx + N dy + P dz    (3.12-10)

Using equation (3.12-9) we can write:

    ∂M/∂y = ∂²f/∂y∂x,    ∂M/∂z = ∂²f/∂z∂x    (3.12-11)

    ∂N/∂x = ∂²f/∂x∂y,    ∂N/∂z = ∂²f/∂z∂y    (3.12-12)

    ∂P/∂x = ∂²f/∂x∂z,    ∂P/∂y = ∂²f/∂y∂z    (3.12-13)

We can then conclude that, when a vector function B of the form given in equation (3.12-7) represents a conservative vector field, we have:

    ∂N/∂x = ∂M/∂y,    ∂P/∂y = ∂N/∂z,    ∂P/∂x = ∂M/∂z    (3.12-14)

Using equation (3.12-7), the curl of B is given by:

    ∇ × B = [∂P/∂y − ∂N/∂z] î − [∂P/∂x − ∂M/∂z] ĵ + [∂N/∂x − ∂M/∂y] k̂    (3.12-15)

From equations (3.12-14) and (3.12-15) we see that for B to be a conservative vector field we must have:

    ∇ × B = 0    (3.12-16)

This is to be expected since we know that:

    ∇ × B = ∇ × ∇f = 0    (3.12-17)

from equations (3.12-2) and (3.3-2).

Example 3-33

Show that if a vector field B is given by:

    B = 4xy³ î + 6x²y² ĵ + 5 k̂

then B is conservative.

Solution:

We have:

    B = 4xy³ î + 6x²y² ĵ + 5 k̂ = M î + N ĵ + P k̂

so that:

    ∂P/∂y = 0 = ∂N/∂z

    ∂P/∂x = 0 = ∂M/∂z

    ∂N/∂x = 12xy² = ∂M/∂y

Therefore B is conservative.
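The field of Example 3-33 can also be checked numerically. The sketch below (plain Python, standard library only) verifies that B is the gradient of the scalar potential f = 2x²y³ + 5z; this potential is not stated in the text and is obtained here by integrating equation (3.12-9), so it should be read as an illustration rather than part of the example.

```python
# Check numerically that B = 4xy^3 i + 6x^2y^2 j + 5 k (Example 3-33) is
# conservative by verifying B = grad f for the potential f = 2x^2 y^3 + 5z.

def B(x, y, z):
    return (4*x*y**3, 6*x**2*y**2, 5.0)

def f(x, y, z):
    return 2*x**2*y**3 + 5*z

def grad(g, x, y, z, h=1e-6):
    # Central-difference approximation of the gradient of g.
    return ((g(x + h, y, z) - g(x - h, y, z)) / (2*h),
            (g(x, y + h, z) - g(x, y - h, z)) / (2*h),
            (g(x, y, z + h) - g(x, y, z - h)) / (2*h))

pt = (1.3, -0.7, 2.0)   # an arbitrary test point
for b, gf in zip(B(*pt), grad(f, *pt)):
    assert abs(b - gf) < 1e-5   # B = grad f, so B is conservative
```

Because a potential exists, the line integral of B between any two points is independent of path, as equation (3.12-4) states.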

3.13  VECTOR INTEGRAL DEFINITIONS

The integral definitions of gradient, divergence, and curl can be written as follows:

The gradient of a scalar field f is defined by equation (3.6-9):

    ∇f = lim_{ΔV→0} (1/ΔV) ∬_S n̂ f dS    (3.13-1)

The divergence of a vector field B is defined by equation (3.1-5):

    ∇ • B = lim_{ΔV→0} (1/ΔV) ∬_S n̂ • B dS    (3.13-2)

The curl of a vector field B is defined by equation (3.5-10):

    ∇ × B = lim_{ΔV→0} (1/ΔV) ∬_S n̂ × B dS    (3.13-3)

These three integral definitions of vector operations all involve integration of a quantity over an area. In each case the operation is defined in terms of the normal to an area. The gradient operator ∇ and the unit normal n̂ perform similar functions in these definitions.
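Equation (3.13-2) can be illustrated numerically: evaluate the net outward flux of a field through a small cube centered at a point and divide by the cube's volume. The field and evaluation point below are arbitrary choices made for this sketch (plain Python), not taken from the text.

```python
# Numerical illustration of the integral definition of divergence,
# equation (3.13-2): div B ~ (1/dV) * (net flux of B through a small cube).
# Field chosen for the check: B = (x^2, x*y, y*z), for which div B = 3x + y.

def B(x, y, z):
    return (x*x, x*y, y*z)

def div_via_flux(x0, y0, z0, a=1e-3):
    """Approximate div B at (x0, y0, z0) from the flux through the six
    faces of a cube of side a, using the face-center (midpoint) value."""
    h = a / 2.0
    area = a * a
    flux = (B(x0 + h, y0, z0)[0] - B(x0 - h, y0, z0)[0]) * area \
         + (B(x0, y0 + h, z0)[1] - B(x0, y0 - h, z0)[1]) * area \
         + (B(x0, y0, z0 + h)[2] - B(x0, y0, z0 - h)[2]) * area
    return flux / (a ** 3)

print(div_via_flux(1.0, 2.0, 0.5))   # ≈ 3*1.0 + 2.0 = 5.0
```

Shrinking the cube side `a` realizes the limit ΔV → 0 in equation (3.13-2); the same construction with n̂ f dS or n̂ × B dS illustrates equations (3.13-1) and (3.13-3).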

Chapter 4

Vectors in Curvilinear Coordinates

    dr = h₁ dx¹ ê₁ + h₂ dx² ê₂ + h₃ dx³ ê₃

While point vectors are coordinate system independent, they are specified for any given coordinate system by components that depend upon that coordinate system. Base vectors are used to resolve point vectors into components along the coordinate axes. Base vectors must be linearly independent, but they will be dependent upon the coordinate system. Base vectors are generally not point vectors.

In this chapter we will consider base vectors and the components of point vectors in curvilinear coordinate systems in 3-dimensional space.

4.1  LINEAR RELATIONS AMONG VECTORS

If the vectors eᵢ are base vectors for a 3-dimensional coordinate system, these vectors must be linearly independent or else they could represent at most a 2-dimensional coordinate system (in a plane). A necessary and sufficient condition for three vectors to be linearly independent is that the three vectors not be coplanar. There cannot be more than two independent vectors in any given plane. If any two eᵢ are collinear, they will be linearly dependent, and then the three eᵢ would be coplanar. If the three vectors eᵢ are mutually orthogonal, they obviously will be linearly independent since they cannot be coplanar.

If a point vector A written in terms of contravariant components as in equation (1.15-4):

    A = a¹ e₁ + a² e₂ + a³ e₃ = aⁱ eᵢ    (4.1-1)

is equal to 0 only in the trivial case when all the components aⁱ are zero, then the vectors eᵢ are linearly independent. None of these base vectors can then be expressed as a linear combination of the other base vectors. The base vectors eᵢ in equation (4.1-1) need not be unit vectors.

If the vector A is equal to 0 when at least one of the components aⁱ is not zero (say a¹), then we can write:

    e₁ = −(a²/a¹) e₂ − (a³/a¹) e₃    (4.1-2)

In this case the vectors eᵢ are linearly dependent, and e₁ is coplanar with e₂ and e₃. In other words e₁ can be obtained from some combination of the vectors e₂ and e₃.

If the three vectors eᵢ are coplanar and therefore linearly dependent, we have:

    e₁ • e₂ × e₃ = 0    (4.1-3)

as given in equation (1.18-51).

If the three vectors eᵢ are linearly independent, we will have:

    e₁ • e₂ × e₃ ≠ 0    (4.1-4)

If three linearly independent vectors eᵢ are base vectors for a right-handed coordinate system, we can use equation (1.21-5) to write equation (4.1-4) as:

    e₁ • e₂ × e₃ > 0    (4.1-5)

or, if they are base vectors for a left-handed coordinate system, we can use equation (1.21-6) to write equation (4.1-4) as:

    e₁ • e₂ × e₃ < 0    (4.1-6)

Example 4-1

Determine if the vectors eᵢ are independent where:

    e₁ = î − 2ĵ,    e₂ = 3ĵ,    e₃ = ĵ + k̂

Solution:

    e₂ × e₃ = 3ĵ × (ĵ + k̂) = 3î

    e₁ • e₂ × e₃ = (î − 2ĵ) • (3î) = 3

Therefore the eᵢ are independent.

4.2  UNIQUENESS OF VECTOR COMPONENTS

If eᵢ are base vectors for a point vector A = aⁱ eᵢ, then the components aⁱ of A = aⁱ eᵢ are unique. This can be shown by assuming for the moment that another set of components bⁱ exists such that:

    A = aⁱ eᵢ = bⁱ eᵢ    (4.2-1)

or

    A = a¹ e₁ + a² e₂ + a³ e₃ = b¹ e₁ + b² e₂ + b³ e₃    (4.2-2)

We will then have:

    (a¹ − b¹) e₁ + (a² − b²) e₂ + (a³ − b³) e₃ = 0    (4.2-3)

Since the vectors eᵢ are base vectors and so are linearly independent, the coefficients of equation (4.2-3) must all be zero. Therefore we have:

    a¹ = b¹,    a² = b²,    a³ = b³    (4.2-4)

and so no other set of components can exist for the vector A = aⁱ eᵢ. Only one unique set of components can exist for any vector for a given base.

The components of a vector can be defined simply as the coefficients of expansion of the vector in terms of a given base. This definition can replace the geometrical definition using vector magnitude projections on coordinate axes. Any vector is completely determined by its components with respect to a given base.
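The uniqueness of the expansion coefficients can be seen numerically. The sketch below (plain Python) uses the basis of Example 4-1 and an arbitrarily chosen vector A; the components are obtained with scalar triple products (Cramer's rule), and reconstructing A from them confirms that the expansion is the coefficients' only possible set of values.

```python
# Expansion coefficients a^i of A = a^i e_i in the non-orthogonal basis of
# Example 4-1, computed with scalar triple products (Cramer's rule).

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui*vi for ui, vi in zip(u, v))

def triple(u, v, w):             # u . (v x w)
    return dot(u, cross(v, w))

e1, e2, e3 = (1, -2, 0), (0, 3, 0), (0, 1, 1)   # basis of Example 4-1
A = (2, 1, 3)                                    # an arbitrary point vector

D = triple(e1, e2, e3)                           # = 3, nonzero: independent
a1 = triple(A, e2, e3) / D
a2 = triple(e1, A, e3) / D
a3 = triple(e1, e2, A) / D

# Reconstruct A from its components: a1*e1 + a2*e2 + a3*e3.
recon = tuple(a1*x + a2*y + a3*z for x, y, z in zip(e1, e2, e3))
print((a1, a2, a3), recon)       # components (2, 2/3, 3) reconstruct A
```

The nonzero triple product D is exactly the condition e₁ • e₂ × e₃ ≠ 0 of equation (4.1-4).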

4.3  ORTHOGONALIZATION

If d₁, d₂, and d₃ are three nonzero and non-coplanar vectors, it is possible to construct a set of three orthogonal unit vectors ê₁, ê₂, and ê₃ from them. The process through which this construction is accomplished is known as the Gram-Schmidt orthogonalization.

Under this process the first vector ê₁ is easily obtained:

    ê₁ = d₁ / |d₁|    (4.3-1)

To obtain the second vector ê₂ we write:

    e₂ = d₂ + β₁ ê₁    (4.3-2)

where β₁ is chosen so that ê₂ is orthogonal to ê₁. Therefore we must have:

    e₂ • ê₁ = d₂ • ê₁ + β₁ ê₁ • ê₁ = 0    (4.3-3)

or

    β₁ = − d₂ • ê₁    (4.3-4)

so that:

    e₂ = d₂ − (d₂ • ê₁) ê₁    (4.3-5)

    ê₂ = e₂ / |e₂|    (4.3-6)

Finally, to obtain the third vector ê₃, we write:

    e₃ = d₃ + β₂ ê₁ + β₃ ê₂    (4.3-7)

where β₂ and β₃ are chosen so that ê₃ is orthogonal to ê₁ and ê₂. Therefore we must have:

    e₃ • ê₁ = d₃ • ê₁ + β₂ ê₁ • ê₁ + β₃ ê₂ • ê₁ = 0    (4.3-8)

    e₃ • ê₂ = d₃ • ê₂ + β₂ ê₁ • ê₂ + β₃ ê₂ • ê₂ = 0    (4.3-9)

or

    β₂ = − d₃ • ê₁    (4.3-10)

    β₃ = − d₃ • ê₂    (4.3-11)

so that:

    e₃ = d₃ − (d₃ • ê₁) ê₁ − (d₃ • ê₂) ê₂    (4.3-12)

    ê₃ = e₃ / |e₃|    (4.3-13)
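The Gram-Schmidt process above takes only a few lines of code. The sketch below (plain Python) implements equations (4.3-1) through (4.3-13); the three input vectors are arbitrary non-coplanar choices, not taken from the text.

```python
# A minimal sketch of the Gram-Schmidt orthogonalization of Section 4.3.
import math

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def scale(c, u):
    return [c*a for a in u]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def normalize(u):
    return scale(1.0 / math.sqrt(dot(u, u)), u)

def gram_schmidt(d1, d2, d3):
    e1 = normalize(d1)                                   # (4.3-1)
    e2 = normalize(sub(d2, scale(dot(d2, e1), e1)))      # (4.3-5), (4.3-6)
    e3 = normalize(sub(sub(d3, scale(dot(d3, e1), e1)),
                       scale(dot(d3, e2), e2)))          # (4.3-12), (4.3-13)
    return e1, e2, e3

e1, e2, e3 = gram_schmidt([1, 1, 0], [1, 0, 1], [0, 1, 1])
print(dot(e1, e2), dot(e1, e3), dot(e2, e3))   # all ≈ 0
```

Each step subtracts from dᵢ its projections onto the unit vectors already built, which is exactly what the β coefficients of equations (4.3-4), (4.3-10), and (4.3-11) accomplish.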

4.4  COORDINATE SYSTEMS

For a 3-dimensional coordinate system, a coordinate curve is obtained when two of the coordinates are held fixed while the third coordinate is allowed to vary continuously. This curve is the intersection of the two coordinate surfaces defined by holding the two coordinates constant.

At any given point P we can construct coordinate axes using the coordinate curves. These axes are defined by base vectors. Base vectors are generally not point vectors. Every 3-dimensional coordinate system has a basis consisting of three base vectors, although these base vectors may not be orthogonal. If each base vector is tangent to its coordinate curve, the base vectors are known as covariant base vectors eᵢ. If each base vector is perpendicular to its coordinate surface, the base vectors are known as contravariant base vectors eʲ (see Section 4.7).

A coordinate system whose coordinate curves are all straight lines is called a rectilinear coordinate system. A coordinate system whose coordinate curves are not necessarily all straight lines is called a curvilinear coordinate system.

4.4.1  RECTILINEAR

Covariant base vectors of a rectilinear coordinate system will always be tangent to and collinear with the rectilinear axes. When the base vectors of a rectilinear coordinate system are orthogonal, the coordinate system is called a rectangular coordinate system or a Cartesian coordinate system. When the base vectors of a rectilinear coordinate system are not orthogonal, the coordinate system is called an oblique coordinate system (see Figures 4-1 and 4-2). Curvilinear coordinate systems are considered to include rectilinear coordinate systems as special cases.

Figure 4-1.  Rectangular coordinate system xⱼ. The unit covariant base vectors êⱼ are orthogonal vectors.

Figure 4-2.  Oblique coordinate system xᵢ′. The covariant base vectors eᵢ′ are not orthogonal and may not be unit vectors.

4.4.2  CURVILINEAR

Covariant base vectors will be tangent to but not collinear with coordinate curves that are not straight lines (see Figure 4-3). Base vectors for curvilinear coordinate systems having coordinate curves that are not straight lines are considered to be local because base vectors will generally vary from point to point along the coordinate curves. Base vectors for curvilinear coordinate systems are therefore generally not constant. A different set of base vectors will correspond to each different point on a coordinate curve.

Figure 4-3.  A curvilinear coordinate system xᵢ′. The covariant base vectors eᵢ′ are not necessarily orthogonal vectors or unit vectors.

As noted above, base vectors are generally not point vectors. If the base vectors of a curvilinear coordinate system are always orthogonal, the coordinate system is referred to as an orthogonal curvilinear coordinate system.

The coordinate surfaces of an orthogonal curvilinear coordinate system must always intersect at right angles. Therefore the base vectors of orthogonal curvilinear coordinate systems will always intersect at right angles even though they will generally vary along the coordinate curves. Two examples of orthogonal curvilinear coordinate systems are the cylindrical and spherical coordinate systems (see Table 4-1 and Appendix C).

Table 4-1.  Three important coordinate systems and their coordinate curves.

4.5  MAPPINGS

We will now consider two curvilinear coordinate systems xⱼ and xᵢ′ having the same origin. The transformation F from one of the coordinate systems to the other is called a mapping:

    (x₁′, x₂′, x₃′) = F(x₁, x₂, x₃)    (4.5-1)

We will assume that, for all useful coordinate systems, this mapping is always bijective; that is, it is a one-to-one mapping of points from one coordinate system to the other. The functions that constitute the transformation F are assumed to be single-valued, continuous, and to have continuous derivatives. If xⱼ is a rectangular coordinate system and if F is linear, then xᵢ′ will be a rectilinear coordinate system; if F is not linear, then xᵢ′ will be a curvilinear coordinate system. When the transformation F is linear, it is referred to as an affine transformation.

Examples of transformations of nonlinear coordinate systems are the transformation from the cylindrical coordinate system (R, φ, z) to the rectangular coordinate system (x, y, z):

    x = R cos φ,    y = R sin φ,    z = z    (4.5-2)

and the transformation from the spherical coordinate system (R, θ, φ) to the rectangular coordinate system (x, y, z):

    x = R sin θ cos φ,    y = R sin θ sin φ,    z = R cos θ    (4.5-3)

The cylindrical coordinate system (R, φ, z) is shown in Figure 4-4 and the spherical coordinate system is shown in Figure 4-5.

Figure 4-4.  Cylindrical coordinate system and base vectors.

Figure 4-5.  Spherical coordinate system and base vectors.

For the cylindrical coordinate system, the base vectors êR and êφ are not constant (they change in direction, not magnitude), and for the spherical coordinate system, the base vectors êR, êφ, and êθ are all not constant (they change in direction, not magnitude).

The transformation from the rectangular coordinate system (x, y, z) to the cylindrical coordinate system (R, φ, z) is given by:

    R = √(x² + y²),    φ = tan⁻¹(y/x),    z = z    (4.5-4)

where

    0 ≤ R < ∞,    0 ≤ φ ≤ 2π,    −∞ < z < ∞    (4.5-5)

The transformation from the rectangular coordinate system (x, y, z) to the spherical coordinate system (R, θ, φ) is given by:

    R = √(x² + y² + z²),    θ = tan⁻¹(√(x² + y²)/z),    φ = tan⁻¹(y/x)    (4.5-6)

where

    0 ≤ R < ∞,    0 ≤ θ ≤ π,    0 ≤ φ ≤ 2π    (4.5-7)

4.6  BASE VECTORS

If three nonzero vectors e₁, e₂, and e₃ are linearly independent, they can form a basis for some coordinate system in 3-dimensional space. Each of the vectors eⱼ is then a base vector for the coordinate system since any vector A in 3-dimensional space written using contravariant components can be expressed as a linear combination of the three vectors e₁, e₂, and e₃ as in equation (4.1-1):

    A = a¹ e₁ + a² e₂ + a³ e₃ = aⁱ eᵢ    (4.6-1)

If the base vectors eⱼ are mutually orthogonal, we have:

    eᵢ • eⱼ = 0,    i ≠ j    (4.6-2)

and both the base and the corresponding coordinate system are defined as orthogonal. If the base vectors eⱼ are not unit vectors, then they may not be dimensionless.

If the base vectors eⱼ are unit vectors and satisfy the conditions:

    eᵢ • eⱼ = δᵢⱼ    (4.6-3)

then both the base and the corresponding coordinate system are defined as orthonormal. We see that an orthonormal base is a special case of an orthogonal base. Since orthonormal base vectors are all unit vectors, equation (4.6-3) can be written as:

    êᵢ • êⱼ = δᵢⱼ    (4.6-4)

Unit base vectors are always dimensionless.

The conditions specified by equation (4.6-4) are called the orthonormality conditions. If the base is orthonormal, we can write equation (4.6-1) in terms of unit base vectors:

    A = aⁱ êᵢ    (4.6-5)

The unit base vectors î, ĵ, and k̂ of the rectangular coordinate system form an orthonormal base. Two examples of curvilinear coordinate systems having orthonormal bases are the cylindrical and the spherical coordinate systems.

For a right-handed orthonormal coordinate system xᵢ′, we have from equation (1.21-7):

    ê₁′ • ê₂′ × ê₃′ = 1    (4.6-6)

and, for a left-handed orthonormal coordinate system xᵢ′, we have similarly from equation (1.21-8):

    ê₁′ • ê₂′ × ê₃′ = −1    (4.6-7)

From equations (4.1-4), (4.6-6), and (4.6-7), we see that base vectors of orthonormal coordinate systems will always be linearly independent.

We will now consider two orthonormal coordinate systems xⱼ and xᵢ′ having the same origin with the primed coordinate system rotated with respect to the unprimed coordinate system. If these two coordinate systems have base vectors êⱼ and êᵢ′, respectively, we can use equation (1.13-19) to express êᵢ′ in terms of êⱼ:

    êᵢ′ = αᵢⱼ êⱼ    (4.6-8)

where αᵢⱼ are the direction cosines between the two sets of axes.

If xⱼ and xᵢ′ are both right-handed or both left-handed orthonormal coordinate systems, we can use equation (4.6-6) and the expression for a triple product in terms of its components given in equation (1.18-5) to write:

                       | α₁₁  α₁₂  α₁₃ |
    ê₁′ • ê₂′ × ê₃′ =  | α₂₁  α₂₂  α₂₃ | = 1    (4.6-9)
                       | α₃₁  α₃₂  α₃₃ |

If either orthonormal coordinate system xⱼ or xᵢ′ is right-handed while the other is left-handed, we have from equations (4.6-7) and (1.18-5):

                       | α₁₁  α₁₂  α₁₃ |
    ê₁′ • ê₂′ × ê₃′ =  | α₂₁  α₂₂  α₂₃ | = −1    (4.6-10)
                       | α₃₁  α₃₂  α₃₃ |

Therefore, if xⱼ and xᵢ′ are both either right-handed or left-handed orthonormal coordinate systems, we must have:

    det{αᵢⱼ} = 1    (4.6-11)

and if either orthonormal coordinate system xⱼ or xᵢ′ is right-handed while the other is left-handed, we must have:

    det{αᵢⱼ} = −1    (4.6-12)

Using equation (1.13-19), we can also express êⱼ in terms of êᵢ′:

    êⱼ = αⱼ′ᵢ êᵢ′    (4.6-13)

where from equation (1.13-5):

    αⱼ′ᵢ = αᵢⱼ    (4.6-14)
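The determinant conditions of equations (4.6-11) and (4.6-12) can be checked numerically. The sketch below (plain Python; the rotation angle is an arbitrary choice) builds a direction-cosine matrix for a rotation about the z axis and then reverses one axis to produce a left-handed primed system.

```python
# Numerical check of equations (4.6-11) and (4.6-12): det{alpha} = +1 when
# both orthonormal systems have the same handedness, -1 otherwise.
import math

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

phi = 0.7   # arbitrary rotation angle about the z axis
alpha = [[ math.cos(phi), math.sin(phi), 0.0],   # row i gives e_i' in terms of e_j
         [-math.sin(phi), math.cos(phi), 0.0],
         [ 0.0,           0.0,           1.0]]
print(det3(alpha))                                # ≈ +1: both right-handed

alpha_lh = [alpha[0], alpha[1], [0.0, 0.0, -1.0]]  # reverse the third axis
print(det3(alpha_lh))                              # ≈ -1: handedness differs
```

The rows of `alpha` are the direction cosines of equation (4.6-8); reversing one row's sign flips the handedness of the primed base without disturbing orthonormality.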

Finally, we note that for base vectors of two different coordinate systems xᵢ and xⱼ′, we will generally have:

    êᵢ • êⱼ′ ≠ δᵢⱼ    (4.6-15)

4.7  RECIPROCAL BASE VECTORS

We will now consider base vectors eʲ that are the reciprocal set to the covariant base vectors eᵢ (see Section 1.20). The base vectors eʲ are known as reciprocal base vectors. The vectors eᵢ need not be orthonormal or even orthogonal. Since the vectors eᵢ are base vectors, however, they must be linearly independent and so we must have:

    [e₁ e₂ e₃] ≠ 0    (4.7-1)

From equation (1.20-7) we also must have:

    [e¹ e² e³] = 1 / [e₁ e₂ e₃] ≠ 0    (4.7-2)

From equation (4.1-4) we then know that the reciprocal base vectors eʲ are also linearly independent.

Any point vector A that can be expressed as a linear combination of the covariant base vectors eᵢ can also then be expressed as a linear combination of the reciprocal base vectors eʲ:

    A = a₁ e¹ + a₂ e² + a₃ e³ = aⱼ eʲ    (4.7-3)

where we have used subscript indices for these components since they are covariant components of A (as will be shown in Section 4.11). We now have for the vector A:

    A = aⁱ eᵢ = aⱼ eʲ    (4.7-4)

where the vector components aⁱ and aⱼ are contravariant and covariant vector components, respectively, of A.

From the definition of reciprocal vector sets given in equation (1.20-1), we have:

    eᵢ • eʲ = δᵢʲ    (4.7-5)

Equation (4.7-5) can be considered the definition of a reciprocal basis, conjugate basis, or dual basis. Since equation (4.7-5) is a vector equation, it is valid for all curvilinear coordinate systems.

In keeping with the nomenclature for components, the base vectors eᵢ will be referred to as covariant base vectors and the base vectors eʲ will be referred to as contravariant base vectors (see Section 4.4). In the rectangular coordinate system, covariant and contravariant base vectors are identical (see Example 1-36).

It is possible to express the reciprocal base vectors eʲ in terms of the base vectors eᵢ. From equations (1.20-3) and (4.7-2) we have:

    e¹ = (e₂ × e₃) / [e₁ e₂ e₃] = [e¹ e² e³] e₂ × e₃    (4.7-6)

    e² = (e₃ × e₁) / [e₁ e₂ e₃] = [e¹ e² e³] e₃ × e₁    (4.7-7)

    e³ = (e₁ × e₂) / [e₁ e₂ e₃] = [e¹ e² e³] e₁ × e₂    (4.7-8)

It is also possible to express the base vectors eᵢ in terms of their reciprocal base vectors eʲ. From equations (1.20-3) and (4.7-2) we have:

    e₁ = (e² × e³) / [e¹ e² e³] = [e₁ e₂ e₃] e² × e³    (4.7-9)

    e₂ = (e³ × e¹) / [e¹ e² e³] = [e₁ e₂ e₃] e³ × e¹    (4.7-10)

    e₃ = (e¹ × e²) / [e¹ e² e³] = [e₁ e₂ e₃] e¹ × e²    (4.7-11)

Example 4-2

Write ∇ • B in indicial notation for a curvilinear coordinate system.

Solution:

From the gradient definition in equation (2.12-1) we have:

    ∇ • B = eᵢ • ∂(bⱼ eʲ)/∂xᵢ = eᵢ • (∂bⱼ/∂xᵢ) eʲ

From equation (4.7-5) we then have:

    ∇ • B = δᵢʲ ∂bⱼ/∂xᵢ = ∂bᵢ/∂xᵢ
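Equations (4.7-6) through (4.7-8) translate directly into code. The sketch below (plain Python) builds the reciprocal base vectors for an arbitrarily chosen oblique basis and verifies the defining relation eᵢ • eʲ = δᵢʲ of equation (4.7-5).

```python
# Build the reciprocal (contravariant) base vectors e^j from a covariant
# basis e_i via equations (4.7-6)-(4.7-8), then check e_i . e^j = delta_i^j.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

e1, e2, e3 = (1.0, 1.0, 0.0), (0.0, 1.0, 1.0), (1.0, 0.0, 1.0)  # oblique basis
V = dot(e1, cross(e2, e3))                    # triple product [e1 e2 e3] = 2

r1 = tuple(c / V for c in cross(e2, e3))      # e^1, equation (4.7-6)
r2 = tuple(c / V for c in cross(e3, e1))      # e^2, equation (4.7-7)
r3 = tuple(c / V for c in cross(e1, e2))      # e^3, equation (4.7-8)

for ei in (e1, e2, e3):
    print([round(dot(ei, rj), 12) for rj in (r1, r2, r3)])  # identity rows
```

Since the chosen basis is not orthogonal, the covariant and contravariant base vectors differ here, unlike the rectangular case noted in the text.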

4.8  TRANSFORMATIONS BETWEEN BASE VECTORS

Base vectors will generally vary from point to point along the coordinate curves of curvilinear coordinate systems. For a curvilinear coordinate system then, base vectors at any given point will generally be different from the base vectors at the coordinate system origin. To determine the transformation from one curvilinear coordinate system to another, we will consider the position vector differential dr as expressed with respect to two different curvilinear coordinate systems. In doing so it will be helpful to think of the components of a vector, not geometrically, but as coefficients of expansion of the vector in terms of a given base.

Since position vectors that are expressed in terms of covariant base vectors will have contravariant components that may be coordinates (see Example 4-10), the coordinates xⱼ of a point and the partial derivatives ∂xⱼ/∂xᵢ′ are often presented in textbooks using contravariant notation (superscripts in the form xʲ and ∂xʲ/∂x′ⁱ, respectively). We will generally follow this practice in this book, with the only exceptions being for some (but not all) rectangular coordinate systems.

We will now assume that the position vector differential dr expressed with respect to two different curvilinear coordinate systems xⁱ and x′ʲ can be written in the same form as is equation (2.4-2) for a rectangular coordinate system:

    dr = dx¹ e₁ + dx² e₂ + dx³ e₃ = dxⁱ eᵢ    (4.8-1)

    dr = dx′¹ e₁′ + dx′² e₂′ + dx′³ e₃′ = dx′ʲ eⱼ′    (4.8-2)

This assumption will be verified in Section 7.1.3. Note that the displacement vector dr in equations (4.8-1) and (4.8-2) is the same point vector dr represented in two different coordinate systems having base vectors eᵢ and eⱼ′. Also note that the base vectors are not necessarily unit vectors.

The vector differential dr in these two curvilinear coordinate systems can be expressed as:

    dr = (∂r/∂x¹) dx¹ + (∂r/∂x²) dx² + (∂r/∂x³) dx³ = (∂r/∂xⁱ) dxⁱ    (4.8-3)

    dr = (∂r/∂x′¹) dx′¹ + (∂r/∂x′²) dx′² + (∂r/∂x′³) dx′³ = (∂r/∂x′ʲ) dx′ʲ    (4.8-4)

Comparing equations (4.8-3) and (4.8-4) with equations (4.8-1) and (4.8-2), respectively, we see that the base vectors eᵢ and eⱼ′ are given by:

    eᵢ = ∂r/∂xⁱ    (4.8-5)

    eⱼ′ = ∂r/∂x′ʲ    (4.8-6)

We therefore can write:

    eᵢ = (∂r/∂x′ʲ)(∂x′ʲ/∂xⁱ)    (4.8-7)

    eⱼ′ = (∂r/∂xⁱ)(∂xⁱ/∂x′ʲ)    (4.8-8)

and so the transformations from one set of base vectors (for a curvilinear coordinate system) to another set of base vectors (for another curvilinear coordinate system) are:

    eᵢ = (∂x′ʲ/∂xⁱ) eⱼ′    (4.8-9)

    eⱼ′ = (∂xⁱ/∂x′ʲ) eᵢ    (4.8-10)

Since the vectors eᵢ and eⱼ′ are base vectors for curvilinear coordinate systems, these base vectors need not be either orthonormal or orthogonal. The transformation from one set of base vectors to the other can be seen to have the same form as the transformation of covariant components of vectors given in equation (1.15-5). Base vectors are not covariant components of vectors, however, nor are they generally point vectors.

Example 4-3

Show that eᵢ = ∂r/∂xⁱ provides the correct base vectors for the rectangular coordinate system.

Solution:

The position vector in rectangular coordinates is:

    r = x î + y ĵ + z k̂

Therefore:

    e₁ = ∂r/∂x = î,    e₂ = ∂r/∂y = ĵ,    e₃ = ∂r/∂z = k̂

These are the correct base vectors for the rectangular coordinate system.

The transformation from base vectors of a rectangular coordinate system to base vectors of a cylindrical coordinate system can be obtained using equations (4.8-10) and (4.5-2). We have then:

    ⎡ êR ⎤   ⎡  cos φ   sin φ   0 ⎤ ⎡ êx ⎤
    ⎢ êφ ⎥ = ⎢ −sin φ   cos φ   0 ⎥ ⎢ êy ⎥    (4.8-11)
    ⎣ êz ⎦   ⎣    0       0     1 ⎦ ⎣ êz ⎦

as is shown in Example 4-4. Similarly, the transformation from base vectors of a rectangular coordinate system to base vectors of a spherical coordinate system can be obtained using equations (4.8-10) and (4.5-3):

    ⎡ êR ⎤   ⎡ sin θ cos φ   sin θ sin φ    cos θ ⎤ ⎡ êx ⎤
    ⎢ êθ ⎥ = ⎢ cos θ cos φ   cos θ sin φ   −sin θ ⎥ ⎢ êy ⎥    (4.8-12)
    ⎣ êφ ⎦   ⎣   −sin φ         cos φ         0   ⎦ ⎣ êz ⎦

We can also obtain the transformation from base vectors of a cylindrical coordinate system to base vectors of a rectangular coordinate system:

    ⎡ êx ⎤   ⎡ cos φ   −sin φ   0 ⎤ ⎡ êR ⎤
    ⎢ êy ⎥ = ⎢ sin φ    cos φ   0 ⎥ ⎢ êφ ⎥    (4.8-13)
    ⎣ êz ⎦   ⎣   0        0     1 ⎦ ⎣ êz ⎦

and the transformation from base vectors of a spherical coordinate system to base vectors of a rectangular coordinate system:

    ⎡ êx ⎤   ⎡ sin θ cos φ   cos θ cos φ   −sin φ ⎤ ⎡ êR ⎤
    ⎢ êy ⎥ = ⎢ sin θ sin φ   cos θ sin φ    cos φ ⎥ ⎢ êθ ⎥    (4.8-14)
    ⎣ êz ⎦   ⎣    cos θ        −sin θ         0   ⎦ ⎣ êφ ⎦

Equations (4.8-13) and (4.8-14) are obtained by inverting the matrix of coefficients in equations (4.8-11) and (4.8-12), respectively (see Example 4-5).

Example 4-4

Determine the transformation from base vectors of the rectangular coordinate system (x, y, z) to base vectors of the cylindrical coordinate system (R, φ, z).

Solution:

Using equations (4.8-10) and (4.5-2), we have:

    eR = (∂x/∂R) êx + (∂y/∂R) êy + (∂z/∂R) êz = cos φ êx + sin φ êy + (0) êz

    êR = eR/|eR| = (cos φ êx + sin φ êy)/√(cos²φ + sin²φ) = cos φ êx + sin φ êy

    eφ = (∂x/∂φ) êx + (∂y/∂φ) êy + (∂z/∂φ) êz = −R sin φ êx + R cos φ êy + (0) êz

    êφ = eφ/|eφ| = (−R sin φ êx + R cos φ êy)/√((R)² sin²φ + (R)² cos²φ) = −sin φ êx + cos φ êy

    ez = (∂x/∂z) êx + (∂y/∂z) êy + (∂z/∂z) êz = (0) êx + (0) êy + (1) êz = êz

    êz = ez/|ez| = êz

or in matrix form:

    ⎡ êR ⎤   ⎡  cos φ   sin φ   0 ⎤ ⎡ êx ⎤
    ⎢ êφ ⎥ = ⎢ −sin φ   cos φ   0 ⎥ ⎢ êy ⎥
    ⎣ êz ⎦   ⎣    0       0     1 ⎦ ⎣ êz ⎦

Example 4-5

Determine the transformation from base vectors of the cylindrical coordinate system to base vectors of the rectangular coordinate system.

Solution:

Inverting the matrix of coefficients obtained in Example 4-4:

    ⎡ êx ⎤   ⎡ cos φ   −sin φ   0 ⎤ ⎡ êR ⎤
    ⎢ êy ⎥ = ⎢ sin φ    cos φ   0 ⎥ ⎢ êφ ⎥
    ⎣ êz ⎦   ⎣   0        0     1 ⎦ ⎣ êz ⎦

and so:

    êx = cos φ êR − sin φ êφ

    êy = sin φ êR + cos φ êφ

    êz = êz
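The base vectors of Example 4-4 can also be obtained numerically. The sketch below (plain Python) forms eᵢ = ∂r/∂xⁱ from equation (4.8-5) by central differences of the mapping in equation (4.5-2), normalizes, and checks that the resulting unit base vectors are mutually orthogonal; the evaluation point is an arbitrary choice.

```python
# Numerical sketch of Example 4-4: cylindrical base vectors from e_i = dr/dx^i.
import math

def position(R, phi, z):
    # Equation (4.5-2): rectangular components of the position vector.
    return (R*math.cos(phi), R*math.sin(phi), z)

def base_vectors(R, phi, z, h=1e-6):
    def partial(k):
        qp = [R, phi, z]; qm = [R, phi, z]
        qp[k] += h; qm[k] -= h
        p, m = position(*qp), position(*qm)
        return [(a - b) / (2*h) for a, b in zip(p, m)]
    return [partial(k) for k in range(3)]     # e_R, e_phi, e_z

def unit(v):
    n = math.sqrt(sum(a*a for a in v))
    return [a / n for a in v]

eR, ephi, ez = (unit(v) for v in base_vectors(2.0, 0.9, 1.5))
dot = lambda u, v: sum(a*b for a, b in zip(u, v))
print(dot(eR, ephi), dot(eR, ez), dot(ephi, ez))   # each ≈ 0
```

Repeating this at different values of φ shows the base vectors changing direction from point to point, as the text emphasizes for curvilinear coordinate systems.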

Example 4-6

Show that the base vectors of the cylindrical coordinate system are orthogonal.

Solution:

From Example 4-4, we have:

    êR • êφ = (cos φ êx + sin φ êy) • (−sin φ êx + cos φ êy) = −cos φ sin φ + sin φ cos φ = 0

    êR • êz = (cos φ êx + sin φ êy) • êz = 0

    êφ • êz = (−sin φ êx + cos φ êy) • êz = 0

Therefore êR, êφ, and êz are orthogonal to each other.

4.9  TRANSFORMATIONS BETWEEN RECIPROCAL BASE VECTORS

We will now determine the transformations from one set of reciprocal base vectors (for a curvilinear coordinate system x′ᵏ) to another set of reciprocal base vectors (for a curvilinear coordinate system xⁱ). Let eᵢ be the base vectors and eʲ be their reciprocal base vectors for the coordinate system xⁱ, and let eₖ′ be the base vectors and e′ˡ be their reciprocal base vectors for the coordinate system x′ᵏ. We then have:

    eʲ = βⱼₗ e′ˡ    (4.9-1)

where the βⱼₗ coefficients must be determined. Using equations (4.7-5) and (4.8-9), we can write:

    eᵢ • eʲ = (∂x′ᵏ/∂xⁱ) eₖ′ • βⱼₗ e′ˡ = (∂x′ᵏ/∂xⁱ) βⱼₗ eₖ′ • e′ˡ = δᵢʲ    (4.9-2)

Since the base vectors eₖ′ and e′ˡ are reciprocal, we have from equation (4.7-5):

    eₖ′ • e′ˡ = δₖˡ    (4.9-3)

and so:

    (∂x′ᵏ/∂xⁱ) βⱼₗ δₖˡ = δᵢʲ    (4.9-4)

or

    βⱼₗ = ∂xʲ/∂x′ˡ    (4.9-5)

Using equation (4.9-1), the transformation equations for the reciprocal base vectors eʲ are:

    eʲ = (∂xʲ/∂x′ˡ) e′ˡ    (4.9-6)

    e′ˡ = (∂x′ˡ/∂xʲ) eʲ    (4.9-7)

4.10  SCALE FACTORS

For a curvilinear coordinate system, any covariant base vector eᵢ is always tangent to its respective xⁱ coordinate curve. To obtain the unit base vector êᵢ tangent to the xⁱ coordinate curve in the direction of increasing xⁱ, we can use equation (4.8-5) to write:

    êᵢ = eᵢ/|eᵢ| = (∂r/∂xⁱ)/|∂r/∂xⁱ| = (1/hᵢ)(∂r/∂xⁱ) = (1/hᵢ) eᵢ    (no sum)    (4.10-1)

where

    hᵢ = |∂r/∂xⁱ| = |eᵢ|    (4.10-2)

The quantities h₁, h₂, and h₃ are called scale factors. When a base vector of a curvilinear coordinate system is divided by its scale factor, a unit base vector is obtained as shown in equation (4.10-1). From equations (4.10-1) and (4.10-2), we have:

    eᵢ = ∂r/∂xⁱ = |eᵢ| êᵢ = hᵢ êᵢ    (no sum)    (4.10-3)

Using equation (4.10-3) we can rewrite equation (4.8-3) in the form:

    dr = (∂r/∂xⁱ) dxⁱ = h₁ dx¹ ê₁ + h₂ dx² ê₂ + h₃ dx³ ê₃    (4.10-4)

Equation (4.10-4) expresses the distance differential dr in terms of coordinate differentials and unit base vectors for a curvilinear coordinate system. The scale factors hᵢ are necessary for curvilinear coordinate systems since a differential change in a given coordinate is not necessarily equal to the change in the spatial element along the coordinate curve. The scale factors then act to convert a coordinate differential into a distance differential. The scale factors hᵢ provide the units of distance to the coordinate differentials.

We can also write using equations (4.10-3) and (4.8-5):

    hᵢ êᵢ • hᵢ êᵢ = hᵢ hᵢ êᵢ • êᵢ = hᵢ hᵢ = eᵢ • eᵢ = (∂r/∂xⁱ) • (∂r/∂xⁱ)    (no sum)    (4.10-5)

and so for a curvilinear coordinate system:

    hᵢ = √(eᵢ • eᵢ) = |eᵢ| = √((∂r/∂xⁱ) • (∂r/∂xⁱ)) = |∂r/∂xⁱ|    (no sum)    (4.10-6)
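Equation (4.10-6) can be evaluated numerically for any coordinate system whose mapping to rectangular coordinates is known. The sketch below (plain Python) does this for the spherical coordinates of equation (4.5-3) at an arbitrarily chosen point; the expected values hR = 1, hθ = R, hφ = R sin θ are those derived in Example 4-7.

```python
# Numerical sketch of equation (4.10-6): scale factors h_i as the norms of
# dr/dx^i, computed by central differences for spherical coordinates.
import math

def position(R, theta, phi):
    # Equation (4.5-3): spherical -> rectangular.
    return (R*math.sin(theta)*math.cos(phi),
            R*math.sin(theta)*math.sin(phi),
            R*math.cos(theta))

def scale_factors(R, theta, phi, step=1e-6):
    hs = []
    for k in range(3):
        qp = [R, theta, phi]; qm = [R, theta, phi]
        qp[k] += step; qm[k] -= step
        dp, dm = position(*qp), position(*qm)
        d = [(a - b) / (2*step) for a, b in zip(dp, dm)]   # dr/dx^k
        hs.append(math.sqrt(sum(c*c for c in d)))          # |dr/dx^k|
    return hs

R, theta, phi = 2.0, 0.8, 1.1
hR, htheta, hphi = scale_factors(R, theta, phi)
print(hR, htheta, hphi)   # ≈ 1, R, R*sin(theta)
```

The printed values convert coordinate differentials dR, dθ, dφ into arc-length elements, which is exactly the role of the hᵢ in equation (4.10-4).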

!

Solution:

If x i is a curvilinear coordinate system and xj′ is a

rectangular coordinate system, we then have from equation

For the cylindrical coordinate system we have from

(4.8-9):

equations (4.10-8) and (4.5-2):

!

! ∂r ∂ xj′ eˆj′ ! = ∂x i ∂x i

(4.10-7)

2

!

2

2

hR =

⎡ ∂x ⎤ ⎡ ∂y ⎤ ⎡ ∂z ⎤ ⎢⎣ ∂R ⎥⎦ + ⎢⎣ ∂R ⎥⎦ + ⎢⎣ ∂R ⎥⎦ =

hφ =

⎡ ∂x ⎤ ⎡ ∂y ⎤ ⎡ ∂z ⎤ ⎢ ∂φ ⎥ + ⎢ ∂φ ⎥ + ⎢ ∂φ ⎥ = ⎣ ⎦ ⎣ ⎦ ⎣ ⎦

hz =

⎡ ∂x ⎤ ⎡ ∂y ⎤ ⎡ ∂z ⎤ = + + ⎣⎢ ∂z ⎥⎦ ⎣⎢ ∂z ⎥⎦ ⎢⎣ ∂z ⎥⎦

cos 2 φ + sin 2 φ = 1

and so equation (4.10-6) becomes: 2

2

!

hi =

2

2

⎡ ∂ x1′ ⎤ ⎡ ∂ x2′ ⎤ ⎡ ∂ x3′ ⎤ ! ⎢⎣ ∂x i ⎥⎦ + ⎢⎣ ∂x i ⎥⎦ + ⎣⎢ ∂x i ⎥⎦

(no sum)! (4.10-8)

The scale factors for the rectangular, cylindrical, and spherical

!

2

!

coordinate systems are given in Table 4-2 (see Example 4-7).

2

2

2

( R )2 sin 2 φ + ( R )2 cos2 φ

=R

2

1 =1

Therefore, while the differential changes in R , φ , and z for the cylindrical coordinate system are dR , dφ , and dz , respectively, the differential changes in a spatial element along the coordinate curves are dR , R dφ , and dz , respectively. For the spherical coordinate system we have from equations (4.10-8) and (4.5-3):

Table 4-2! Scale factors for three coordinate systems.

2

Example 4-7 Determine the scale factors for the cylindrical coordinate system and for the spherical coordinate system.

!

hR =

!

=

2

⎡ ∂x ⎤ ⎡ ∂y ⎤ ⎡ ∂z ⎤ ⎢⎣ ∂R ⎥⎦ + ⎢⎣ ∂R ⎥⎦ + ⎢⎣ ∂R ⎥⎦

2

sin 2 θ cos 2 φ + sin 2 θ sin 2 φ + cos 2 θ = 1 176

2

!

hθ =

!

=

hφ =

!

=

2

!

2

2

2

2

⎡ ∂x ⎤ ⎡ ∂y ⎤ ⎡ ∂z ⎤ ⎢ ∂φ ⎥ + ⎢ ∂φ ⎥ + ⎢ ∂φ ⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦

! d r = dR eˆR + R dφ eˆφ + dz eˆz For the spherical coordinate system we have from equation

R cos θ cos φ + R cos θ sin φ + R sin θ = R 2

2

!

2

⎡ ∂x ⎤ ⎡ ∂y ⎤ ⎡ ∂z ⎤ ⎢⎣ ∂θ ⎥⎦ + ⎢⎣ ∂θ ⎥⎦ + ⎢⎣ ∂θ ⎥⎦

2

2

2

2

2

R 2 sin 2 θ sin 2 φ + R 2 sin 2 θ cos 2 φ = R sin θ

(4.10-4) and table (4-2): ! ! d r = dR eˆR + R dθ eˆθ + Rsin θ dφ eˆφ

4.11!

Therefore, while the differential changes in R , θ , and φ for

COMPONENTS OF VECTORS IN CURVILINEAR COORDINATE SYSTEMS

! For a point vector A in a curvilinear coordinate system,

the spherical coordinate system are dR , dθ , and dφ ,

!

respectively, the differential changes in a spatial element

we have from equation (4.7-4): ! ! ! ! A = a i ei = aj e j !

along the coordinate curves are dR , R dθ , and R sin θ dφ , respectively. Example 4-8 ! Express d r in terms of unit base vectors for the cylindrical coordinate system and for the spherical coordinate system. Solution: From equation (4.10-4) we have: ! d r = h1 dx1 eˆ1 + h2 dx 2 eˆ2 + h3 dx 3 eˆ3 ! For the cylindrical coordinate system we have from

(4.11-1)

The components a i and aj can be obtained using a similar procedure to that given in equation (1.16-22) for rectangular coordinate systems. Changing the dummy index in equation (4.11-1) and using equation (4.7-5), we can write: ! ! ! ! ! A • e i = a k ek • e i = a k δ ki = a i ! !

! ! ! ! A • ej = ak e k • ej = ak δ k j = aj !

Therefore we have: ! ! ! ! ! ! ! ! A = A • e i ei = A • ej e j !

(

)

(

)

(4.11-2) (4.11-3)

(4.11-4)

equation (4.10-4) and table (4-2): 177
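The scale factors found in Example 4-7 can be spot-checked numerically. The sketch below (plain Python; the coordinate maps of equations (4.5-2) and (4.5-3) are assumed in their standard forms) estimates h_i = |∂r/∂x^i| with central finite differences:

```python
import math

def scale_factor(to_xyz, coords, i, h=1e-6):
    """Estimate h_i = |dr/dx^i| with a central finite difference."""
    up, dn = list(coords), list(coords)
    up[i] += h
    dn[i] -= h
    return math.dist(to_xyz(*up), to_xyz(*dn)) / (2 * h)

def cylindrical(R, phi, z):
    return (R * math.cos(phi), R * math.sin(phi), z)

def spherical(R, theta, phi):
    return (R * math.sin(theta) * math.cos(phi),
            R * math.sin(theta) * math.sin(phi),
            R * math.cos(theta))

# Sample point (arbitrary illustrative values).
R, theta, phi, z = 2.0, 0.7, 1.1, 3.0
# Cylindrical: h_R = 1, h_phi = R, h_z = 1
print([scale_factor(cylindrical, (R, phi, z), i) for i in range(3)])
# Spherical: h_R = 1, h_theta = R, h_phi = R sin(theta)
print([scale_factor(spherical, (R, theta, phi), i) for i in range(3)])
```

The finite-difference estimates agree with the closed forms of Table 4-2 to roughly six digits at the sample point.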

4.11  COMPONENTS OF VECTORS IN CURVILINEAR COORDINATE SYSTEMS

For a point vector A in a curvilinear coordinate system, we have from equation (4.7-4):

  A = a^i e_i = a_j e^j   (4.11-1)

The components a^i and a_j can be obtained using a similar procedure to that given in equation (1.16-22) for rectangular coordinate systems. Changing the dummy index in equation (4.11-1) and using equation (4.7-5), we can write:

  A • e^i = a^k e_k • e^i = a^k δ_k^i = a^i   (4.11-2)

  A • e_j = a_k e^k • e_j = a_k δ^k_j = a_j   (4.11-3)

Therefore we have:

  A = (A • e^i) e_i = (A • e_j) e^j   (4.11-4)

which can be compared to equation (1.16-24).

In equations (4.11-2) and (4.11-3) the components a^i and a_j of the point vector A are contravariant and covariant components, respectively. We will now verify this by showing that they follow the same transformation laws as given in equations (1.15-2) and (1.15-5), respectively. From equation (4.11-1) we can write:

  A = a^i e_i = a'^j e'_j   (4.11-5)

where the same point vector A (which is independent of any coordinate system) is represented in two different curvilinear coordinate systems. From equations (4.11-5) and (4.8-9) we then have:

  a^i e_i = a^i (∂x'^j/∂x^i) e'_j = a'^j e'_j   (4.11-6)

or

  a'^j = (∂x'^j/∂x^i) a^i   (4.11-7)

Comparing equation (4.11-7) with equation (1.15-1), we see that the components a^i transform as, and therefore are, contravariant components. Using equation (4.11-1) again, we can write:

  A = a_j e^j = a'_i e'^i   (4.11-8)

From equations (4.11-8) and (4.9-6) we then have:

  a_j (∂x^j/∂x'^i) e'^i = a'_i e'^i   (4.11-9)

or

  a'_i = (∂x^j/∂x'^i) a_j   (4.11-10)

Comparing equation (4.11-10) with equation (1.15-5), we see that the components a_j transform as, and so are, covariant components.

The contravariant and covariant components of a vector A in a curvilinear coordinate system are given, therefore, by equations (4.11-2) and (4.11-3), respectively. Contravariant and covariant components of vectors generally are not equal in curvilinear coordinate systems. The differences between contravariant and covariant vector components can be seen geometrically by using a two-dimensional non-orthogonal coordinate system as shown in Figure 4-6. Contravariant components correspond to parallel projections on coordinate axes, while covariant components correspond to perpendicular projections on coordinate axes. For this reason, contravariant components and covariant components are sometimes called parallel projection components and perpendicular projection components, respectively.
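The geometric distinction between the two projection types can be made concrete with a small numeric sketch. The 2-D oblique basis and the vector below are arbitrary illustrative choices, not values taken from the text:

```python
import math

# Oblique 2-D basis: e2 sits 60 degrees from e1 (both unit length here).
e1 = (1.0, 0.0)
e2 = (math.cos(math.pi / 3), math.sin(math.pi / 3))
A = (2.0, 1.5)

# Contravariant components: solve A = a1*e1 + a2*e2 (parallel projections).
det = e1[0] * e2[1] - e1[1] * e2[0]
a1 = (A[0] * e2[1] - A[1] * e2[0]) / det
a2 = (e1[0] * A[1] - e1[1] * A[0]) / det

# Covariant components: dot products with the base vectors
# (the perpendicular-projection measure of equation (4.11-3)).
a_1 = A[0] * e1[0] + A[1] * e1[1]
a_2 = A[0] * e2[0] + A[1] * e2[1]

print((a1, a2), (a_1, a_2))  # the two sets differ in an oblique basis
```

The contravariant pair still reconstructs A exactly, while the covariant pair is numerically different, illustrating why the two component types coincide only in orthonormal bases.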

Figure 4-6  Contravariant and covariant components of a vector A in two dimensions.

4.11.1  UNIT BASE VECTORS

A point vector A in curvilinear coordinates as given by equation (4.11-1) can also be written in terms of unit base vectors:

  A = a¹ |e₁| ê₁ + a² |e₂| ê₂ + a³ |e₃| ê₃ = a₁ |e¹| ê¹ + a₂ |e²| ê² + a₃ |e³| ê³   (4.11-11)

Expressed in the form of equation (4.11-11), a vector A has contravariant components a^i |e_i| (no sum) or covariant components a_j |e^j| (no sum). These components can be obtained by using equation (4.11-11) to form the scalar products:

  A • ê_i = a^i |e_i|   (no sum)  (4.11-12)

  A • ê^j = a_j |e^j|   (no sum)  (4.11-13)

From Example 1-36 we know that orthogonal unit base vectors are their own reciprocal. For orthonormal bases, therefore, we can rewrite equation (4.11-4) as:

  A = (A • ê^i) ê_i = (A • ê_j) ê^j = (A • ê_i) ê_i = (A • ê_j) ê_j   (4.11-14)

4.11.2  POSITION VECTORS

Position vectors can be obtained for cylindrical coordinate systems (see Examples 4-9 and 4-10):

  r = R ê_R + z ê_z   (4.11-15)

and spherical coordinate systems (see Example 4-11):

  r = R ê_R   (4.11-16)

Note that the coordinates of a point are not the components of the corresponding position vector for cylindrical or spherical coordinate systems.

Example 4-9

Determine the position vector r of a point P(R, φ, z) in terms of covariant components for a cylindrical coordinate system. Compare with the coordinates (R, φ, z) of the point P.

Solution:

Using equation (4.11-14) the position vector r can be written in the form:

  r = r_R ê_R + r_φ ê_φ + r_z ê_z = (r • ê_R) ê_R + (r • ê_φ) ê_φ + (r • ê_z) ê_z

In rectangular coordinates the position vector r is given by:

  r = x ê^x + y ê^y + z ê^z = x ê_x + y ê_y + z ê_z

From equation (4.5-2) we then have:

  r = R cos φ ê_x + R sin φ ê_y + z ê_z

Using equation (4.8-11), we obtain:

  r_R = r • ê_R = (R cos φ ê_x + R sin φ ê_y + z ê_z) • (cos φ ê_x + sin φ ê_y) = R

  r_φ = r • ê_φ = (R cos φ ê_x + R sin φ ê_y + z ê_z) • (− sin φ ê_x + cos φ ê_y) = 0

  r_z = r • ê_z = (R cos φ ê_x + R sin φ ê_y + z ê_z) • ê_z = z

The position vector r in terms of covariant components with respect to a cylindrical coordinate system is given by:

  r = r_R ê_R + r_φ ê_φ + r_z ê_z = R ê_R + z ê_z

The components of the position vector are (R, 0, z) while the coordinates of the point P are (R, φ, z). In the cylindrical coordinate system r has no component in the ê_φ direction.

Example 4-10

Determine the position vector r of a point P in terms of the contravariant components with respect to a cylindrical coordinate system by transforming the components from a rectangular coordinate system.

Solution:

In rectangular coordinates, r is given in terms of contravariant components by:

  r = x ê_x + y ê_y + z ê_z

From the transformation law for contravariant components given in equation (1.15-1), we have:

  r = [(∂R/∂x) x + (∂R/∂y) y + (∂R/∂z) z] ê_R + [(∂φ/∂x) x + (∂φ/∂y) y + (∂φ/∂z) z] ê_φ + [(∂z/∂x) x + (∂z/∂y) y + (∂z/∂z) z] ê_z

Using the transformation relations for base vectors given in equations (4.8-9) and (4.8-13), we also have:

  [ ê_x ]   [ ∂R/∂x  ∂φ/∂x  ∂z/∂x ] [ ê_R ]   [ cos φ  − sin φ  0 ] [ ê_R ]
  [ ê_y ] = [ ∂R/∂y  ∂φ/∂y  ∂z/∂y ] [ ê_φ ] = [ sin φ    cos φ  0 ] [ ê_φ ]
  [ ê_z ]   [ ∂R/∂z  ∂φ/∂z  ∂z/∂z ] [ ê_z ]   [   0        0    1 ] [ ê_z ]

Therefore we must have:

  ∂R/∂x = cos φ    ∂φ/∂x = − sin φ    ∂z/∂x = 0

  ∂R/∂y = sin φ    ∂φ/∂y = cos φ      ∂z/∂y = 0

  ∂R/∂z = 0        ∂φ/∂z = 0          ∂z/∂z = 1

We then have:

  r = (x cos φ + y sin φ) ê_R + (− x sin φ + y cos φ) ê_φ + z ê_z

Using equation (4.5-2), we obtain:

  r = (R cos²φ + R sin²φ) ê_R + (− R sin φ cos φ + R sin φ cos φ) ê_φ + z ê_z

or

  r = R ê_R + z ê_z

Therefore the position vector in terms of contravariant components for a cylindrical coordinate system is:

  r = r^R ê_R + r^φ ê_φ + r^z ê_z = R ê_R + z ê_z

Example 4-11

Determine the position vector r of a point P in terms of covariant components for a spherical coordinate system. Compare with the coordinates (R, θ, φ) of the point P.

Solution:

Using equation (4.11-14) the position vector r can be written in the form:

  r = r_R ê_R + r_θ ê_θ + r_φ ê_φ = (r • ê_R) ê_R + (r • ê_θ) ê_θ + (r • ê_φ) ê_φ

In rectangular coordinates the position vector r is given by:

  r = x ê_x + y ê_y + z ê_z

From equation (4.5-3) we then have:

  r = R sin θ cos φ ê_x + R sin θ sin φ ê_y + R cos θ ê_z

Using equation (4.8-12), we obtain:

  r_R = r • ê_R = (R sin θ cos φ ê_x + R sin θ sin φ ê_y + R cos θ ê_z) • (sin θ cos φ ê_x + sin θ sin φ ê_y + cos θ ê_z)

  r_R = R sin²θ cos²φ + R sin²θ sin²φ + R cos²θ = R

  r_θ = r • ê_θ = (R sin θ cos φ ê_x + R sin θ sin φ ê_y + R cos θ ê_z) • (cos θ cos φ ê_x + cos θ sin φ ê_y − sin θ ê_z)

  r_θ = R sin θ cos θ cos²φ + R sin θ cos θ sin²φ − R sin θ cos θ = 0

  r_φ = r • ê_φ = (R sin θ cos φ ê_x + R sin θ sin φ ê_y + R cos θ ê_z) • (− sin φ ê_x + cos φ ê_y)

  r_φ = − R sin θ sin φ cos φ + R sin θ sin φ cos φ = 0

Therefore the position vector in terms of covariant components with respect to a spherical coordinate system is given by:

  r = r_R ê_R + r_θ ê_θ + r_φ ê_φ = R ê_R

The components of the position vector are (R, 0, 0) while the coordinates of the point P are (R, θ, φ). In the spherical coordinate system r has no components in the ê_θ and ê_φ directions.

4.12  PHYSICAL COMPONENTS OF VECTORS

For physical fields both scalars and vectors represent physical quantities, and so both scalars and vectors have scale units that are easily understood in terms of actual physical measurements. The single component (magnitude) of a scalar is independent of any coordinate system, and so no confusion can exist over the units of a scalar component. The components of point vectors are, however, dependent upon the coordinate system (even though point vectors themselves are not). In a curvilinear coordinate system the base vectors e_i and e^j are not always unit vectors. Therefore, the units of the components a^i and a_j of a point vector A as given in equation (4.11-1) can be different from each other and from those of the vector field A.

It is often useful to calculate a type of point vector component whose scale units correspond to the physical units of the vector field. Such components are known as physical components. The physical components of a point vector A are those vector components that are coefficients of unit base vectors tangent to the coordinate curves. Since the covariant unit base vectors ê_i are tangent to the coordinate curves, we can write:

  A = a_u ê₁ + a_v ê₂ + a_w ê₃   (4.12-1)

where a_u, a_v, and a_w are the physical components of a vector A. From equation (4.11-11), we see that:

  A = a¹ |e₁| ê₁ + a² |e₂| ê₂ + a³ |e₃| ê₃ = a_u ê₁ + a_v ê₂ + a_w ê₃   (4.12-2)

and so:

  a_u = |e₁| a¹    a_v = |e₂| a²    a_w = |e₃| a³   (4.12-3)

or, using equation (4.10-2):

  a_u = h₁ a¹    a_v = h₂ a²    a_w = h₃ a³   (4.12-4)

Therefore:

  A = h₁ a¹ ê₁ + h₂ a² ê₂ + h₃ a³ ê₃   (4.12-5)

The vector components in equations (4.12-3) and (4.12-4) are known as physical components or anholonomic components.

Example 4-12

Obtain (ds)² in terms of physical components for an orthogonal curvilinear coordinate system.

Solution:

From equation (4.10-4) we have:

  dr = h₁ dx¹ ê₁ + h₂ dx² ê₂ + h₃ dx³ ê₃

Comparing with equations (4.12-4) and (4.12-5), we see that the physical components of dr are h_i dx^i (no sum). The differential arc length along each coordinate curve x^i is then given by h_i dx^i (no sum). We also have:

  (ds)² = dr • dr = (h₁ dx¹)² + (h₂ dx²)² + (h₃ dx³)²

Example 4-13

Obtain (ds)² for the cylindrical and spherical coordinate systems.

Solution:

From Table 4-2 and Example 4-12, we have for the cylindrical coordinate system:

  h_R = 1    h_φ = R    h_z = 1

  (ds)² = (dR)² + R² (dφ)² + (dz)²

From Table 4-2 and Example 4-12, we have for the spherical coordinate system:

  h_R = 1    h_θ = R    h_φ = R sin θ

  (ds)² = (dR)² + R² (dθ)² + R² sin²θ (dφ)²

4.12.1  NOTATION FOR PHYSICAL COMPONENTS

Physical components are sometimes designated by placing the appropriate index in an angular bracket ⟨ ⟩ as a superscript. Using this notation, equation (4.12-4) becomes:

  a^⟨1⟩ ≡ a_u = h₁ a¹    a^⟨2⟩ ≡ a_v = h₂ a²    a^⟨3⟩ ≡ a_w = h₃ a³   (4.12-6)

or

  a^⟨i⟩ = h_i a^i   (no sum)  (4.12-7)

We can then write equation (4.12-1) as:

  A = a^⟨1⟩ ê₁ + a^⟨2⟩ ê₂ + a^⟨3⟩ ê₃ = a^⟨i⟩ ê_i   (4.12-8)

Example 4-14

Given the base vectors:

  e₁ = 2 î    e₂ = 3 ĵ    e₃ = k̂

express the vector A = a¹ e₁ + a² e₂ + a³ e₃ in terms of physical components.

Solution:

From equation (4.10-2):

  h₁ = |2 î| = 2    h₂ = |3 ĵ| = 3    h₃ = |k̂| = 1

From equation (4.12-6):

  a^⟨1⟩ = 2 a¹    a^⟨2⟩ = 3 a²    a^⟨3⟩ = a³

and so from equation (4.12-8):

  A = 2 a¹ ê₁ + 3 a² ê₂ + a³ ê₃

4.12.2  TRANSFORMATION LAW FOR PHYSICAL COMPONENTS

We will now determine the law for transforming physical components of a point vector A from one curvilinear coordinate system x^j to another curvilinear coordinate system x'^i. Using equation (4.12-7) we can write:

  a'^⟨i⟩ = h'_i a'^i   (no sum)  (4.12-9)

Transforming the contravariant component of A using equation (1.15-2):

  a'^⟨i⟩ = h'_i (∂x'^i/∂x^j) a^j   (no sum on i)  (4.12-10)

Using equation (4.12-7) to transform a^j, we obtain the transformation law for the physical components of point vectors:

  a'^⟨i⟩ = (h'_i / h_j) (∂x'^i/∂x^j) a^⟨j⟩   (no sum for h)  (4.12-11)

Note that the physical components of vectors do not transform as either covariant or contravariant point vector components.
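The cylindrical line element of Examples 4-12 and 4-13 can be checked against the straight-line distance between two nearby points. A finite-difference sketch, with arbitrary sample coordinates and step sizes:

```python
import math

R, phi, z = 2.0, 0.9, 1.5
dR, dphi, dz = 1e-5, 2e-5, -1e-5

p = (R * math.cos(phi), R * math.sin(phi), z)
q = ((R + dR) * math.cos(phi + dphi), (R + dR) * math.sin(phi + dphi), z + dz)

ds_direct = math.dist(p, q)                           # chord length in space
ds_metric = math.sqrt(dR**2 + (R * dphi)**2 + dz**2)  # (dR)^2 + R^2 (dphi)^2 + (dz)^2
print(ds_direct, ds_metric)  # agree to leading order in the step sizes
```

The R dφ term, rather than dφ alone, is exactly the physical component h_φ dφ of dr.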

Example 4-15

If a^i are contravariant components of a point vector A expressed in the cylindrical coordinate system, determine the corresponding physical components.

Solution:

The physical components can be obtained from equation (4.12-7):

  a^⟨i⟩ = h_i a^i   (no sum)

where from Example 4-7, we have:

  h_R = 1    h_φ = R    h_z = 1

and so:

  a^⟨1⟩ = a¹    a^⟨2⟩ = R a²    a^⟨3⟩ = a³

Example 4-16

Determine the position vector r of a point P in terms of physical components for a cylindrical coordinate system.

Solution:

From Example 4-10, the position vector r in terms of contravariant components for a cylindrical coordinate system is given by:

  r = R ê_R + z ê_z

From Example 4-15 we have:

  r^⟨1⟩ = r^⟨R⟩ = R    r^⟨2⟩ = r^⟨φ⟩ = 0    r^⟨3⟩ = r^⟨z⟩ = z

The position vector r in terms of physical components for a cylindrical coordinate system is then:

  r = r^⟨R⟩ ê_R + r^⟨φ⟩ ê_φ + r^⟨z⟩ ê_z = R ê_R + z ê_z

4.13  JACOBIANS OF TRANSFORMATIONS

In matrix form the transformation of the contravariant components of a vector A from a curvilinear coordinate system x^i to a curvilinear coordinate system x'^j as given in equation (4.11-7) is:

  [ a'¹ ]   [ ∂x'¹/∂x¹  ∂x'¹/∂x²  ∂x'¹/∂x³ ] [ a¹ ]
  [ a'² ] = [ ∂x'²/∂x¹  ∂x'²/∂x²  ∂x'²/∂x³ ] [ a² ]   (4.13-1)
  [ a'³ ]   [ ∂x'³/∂x¹  ∂x'³/∂x²  ∂x'³/∂x³ ] [ a³ ]

The determinant of the matrix formed by ∂x'^j/∂x^i is called the Jacobian J of the transformation:

  J = det{∂x'^j/∂x^i} = | ∂x'¹/∂x¹  ∂x'¹/∂x²  ∂x'¹/∂x³ |
                        | ∂x'²/∂x¹  ∂x'²/∂x²  ∂x'²/∂x³ |   (4.13-2)
                        | ∂x'³/∂x¹  ∂x'³/∂x²  ∂x'³/∂x³ |

Equation (4.13-2) is sometimes written using other notations such as:

  J = J[(x'¹, x'², x'³)/(x¹, x², x³)] = ∂(x'¹, x'², x'³)/∂(x¹, x², x³)   (4.13-3)

If two or more successive transformations are made, the Jacobians of these individual transformations combine like the "chain-rule" product of calculus to form the new Jacobian:

  ∂(x''¹, x''², x''³)/∂(x¹, x², x³) = [∂(x''¹, x''², x''³)/∂(x'¹, x'², x'³)] [∂(x'¹, x'², x'³)/∂(x¹, x², x³)]   (4.13-4)

or

  J₂ = J₁ J   (4.13-5)

The Jacobian of the product transformation is equal, therefore, to the product of the Jacobians of each transformation in the product.

The Jacobian J can also be expressed using the permutation symbol ε_ijk. From equations (1.18-20) and (1.18-22), we see that the determinant in equation (4.13-2) can be expanded in the form:

  J = ε'_ijk (∂x'^i/∂x¹)(∂x'^j/∂x²)(∂x'^k/∂x³)   (4.13-6)

If x'^j is a right-handed orthonormal coordinate system, and x^i is a right-handed curvilinear coordinate system, we have from equation (4.8-9):

  ∂r/∂x^i = (∂x'¹/∂x^i) ê'₁ + (∂x'²/∂x^i) ê'₂ + (∂x'³/∂x^i) ê'₃   (4.13-7)

The Jacobian for the transformation from an orthonormal coordinate system x'^j to a curvilinear coordinate system x^i can be written using equations (4.13-2), (4.13-7), (1.18-3), and (1.18-19):

  J = (∂r/∂x¹) • (∂r/∂x²) × (∂r/∂x³) = [∂r/∂x¹  ∂r/∂x²  ∂r/∂x³]   (4.13-8)

or from equation (4.10-3):

  J = e₁ • e₂ × e₃ = h₁ h₂ h₃ ê₁ • ê₂ × ê₃   (4.13-9)

The Jacobian for the transformation from an orthonormal coordinate system x'^j to a curvilinear coordinate system x^i is then:

  J = h₁ h₂ h₃   (4.13-10)

If x^i is also an orthonormal coordinate system, equation (4.13-10) becomes:

  J = 1   (4.13-11)

for the transformation between two orthonormal coordinate systems. If x^i is a left-handed coordinate system rather than a right-handed coordinate system, then from equations (1.21-8) and (4.13-9) we see that equation (4.13-10) becomes:

  J = − h₁ h₂ h₃   (4.13-12)

and equation (4.13-11) becomes:

  J = − 1   (4.13-13)

4.14  JACOBIANS OF INVERSE TRANSFORMATIONS

In order to transform from a curvilinear coordinate system x'^j to a curvilinear coordinate system x^i, the determinant of the inverse matrix ∂x^i/∂x'^j must be calculated. A necessary and sufficient condition that the determinant of this inverse matrix exist is that:

  det{∂x'^j/∂x^i} ≠ 0   (4.14-1)

or

  J ≠ 0   (4.14-2)

If the correspondence between the two coordinate systems x^i and x'^j is one-to-one, equation (4.14-2) will be true. The Jacobian may be zero at certain singular points (but not over any volume), and the inverse will still exist. Any transformation for which equation (4.14-2) is valid is referred to as a proper transformation. If J = 0, then from equation (4.13-9), we see that e₁, e₂, and e₃ are coplanar vectors (see Section 4.1), and so the transformation from the x'^j to the x^i coordinate system is not possible.

The Jacobian of the inverse transformation ∂x^i/∂x'^j is given by:

  J' = det{∂x^i/∂x'^j}   (4.14-3)

The relation between J and J' can be determined from:

  (∂x'^j/∂x^i)(∂x^k/∂x'^j) = ∂x^k/∂x^i = δ_i^k   (4.14-4)

  (∂x^i/∂x'^j)(∂x'^k/∂x^i) = ∂x'^k/∂x'^j = δ'_j^k   (4.14-5)

since the coordinates of any given coordinate system are mutually independent. In determinant form, equation (4.14-4) can be written:

  J J' = I   (4.14-6)

where we have used equations (4.14-2), (4.13-3), and (1.18-27), and where:

  I = det{δ_i^j} = 1   (4.14-7)

Therefore we have finally:

  J' = 1/J   (4.14-8)

Example 4-17

Compute the Jacobian for the transformation from spherical coordinates (R, θ, φ) to rectangular coordinates (x, y, z):

  J = ∂(x, y, z)/∂(R, θ, φ)

Solution:

The coordinate transformation equations are given by equation (4.5-3):

  x = R sin θ cos φ    y = R sin θ sin φ    z = R cos θ

The Jacobian is given by:

  J = | ∂x/∂R  ∂x/∂θ  ∂x/∂φ |   | sin θ cos φ   R cos θ cos φ   − R sin θ sin φ |
      | ∂y/∂R  ∂y/∂θ  ∂y/∂φ | = | sin θ sin φ   R cos θ sin φ     R sin θ cos φ |
      | ∂z/∂R  ∂z/∂θ  ∂z/∂φ |   |    cos θ        − R sin θ             0       |

Therefore

  J = R² cos²θ sin θ cos²φ + R² sin³θ sin²φ + R² cos²θ sin θ sin²φ + R² sin³θ cos²φ

or

  J = R² cos²θ sin θ (cos²φ + sin²φ) + R² sin³θ (sin²φ + cos²φ)

and so:

  J = R² sin θ (cos²θ + sin²θ) = R² sin θ
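Example 4-17's result is easy to confirm by evaluating the 3×3 determinant of partial derivatives at a sample point (a sketch with a hand-coded determinant; the sample values are arbitrary):

```python
import math

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

R, th, ph = 2.0, 0.7, 1.1
# The matrix of partials from Example 4-17, evaluated at (R, theta, phi).
partials = [
    [math.sin(th) * math.cos(ph), R * math.cos(th) * math.cos(ph), -R * math.sin(th) * math.sin(ph)],
    [math.sin(th) * math.sin(ph), R * math.cos(th) * math.sin(ph),  R * math.sin(th) * math.cos(ph)],
    [math.cos(th),                -R * math.sin(th),                 0.0],
]
print(det3(partials), R**2 * math.sin(th))  # both R^2 sin(theta)
```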

Example 4-18

Compute the Jacobian for the inverse transformation from rectangular coordinates (x, y, z) to spherical coordinates (R, θ, φ):

  J' = ∂(R, θ, φ)/∂(x, y, z)

Solution:

From equation (4.14-8) and Example 4-17, we have:

  J' = ∂(R, θ, φ)/∂(x, y, z) = 1/J = 1/[∂(x, y, z)/∂(R, θ, φ)] = 1/(R² sin θ)

4.15  DERIVATIVES OF JACOBIANS

Since the Jacobian J is defined to be a determinant, it can always be written in terms of cofactors. Letting the cofactor of ∂x'^j/∂x^k in the Jacobian J be F_j^k, we can rewrite equation (4.13-2) as:

  (∂x'^j/∂x^k) F_m^k = J δ_j m   (no sum)  (4.15-1)

and so we obtain:

  F_j^k = J (∂x^k/∂x'^j)   (4.15-2)

The partial derivative ∂J/∂x^i of the Jacobian can now be determined. From equation (4.13-2), we have:

  ∂J/∂x^i = ∂[ det{∂x'^j/∂x^i} ] / ∂x^i   (4.15-3)

or

  ∂J/∂x^i = ( ∂[ det{∂x'^j/∂x^i} ] / ∂[∂x'^j/∂x^k] ) ( ∂[∂x'^j/∂x^k] / ∂x^i )   (no sum)  (4.15-4)

or

  ∂J/∂x^i = ( ∂J / ∂[∂x'^j/∂x^k] ) ( ∂²x'^j / ∂x^i ∂x^k )   (no sum)  (4.15-5)

Since F_j^k is the cofactor of ∂x'^j/∂x^k, it does not contain the factor ∂x'^j/∂x^k. Therefore from equation (4.15-1) we have:

  ( ∂J / ∂[∂x'^j/∂x^k] ) δ_j m = F_m^k   (4.15-6)

and so equation (4.15-5) becomes:

  ∂J/∂x^i = F_j^k (∂²x'^j / ∂x^i ∂x^k)   (4.15-7)

Finally, using equation (4.15-2) we obtain:

  ∂J/∂x^i = (∂²x'^j / ∂x^i ∂x^k)(∂x^k / ∂x'^j) J   (4.15-8)
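Returning to Example 4-18: equation (4.14-8) can be checked numerically by building the inverse map, finite-differencing its partials, and confirming that J' is the reciprocal of R² sin θ. The inverse-map formulas below are the standard ones, assumed here rather than quoted from the text:

```python
import math

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def xyz_to_sph(x, y, z):
    R = math.sqrt(x * x + y * y + z * z)
    return (R, math.acos(z / R), math.atan2(y, x))

def fd_jacobian(f, p, h=1e-6):
    """Matrix of partials df_j/dp_i (rows j, columns i) by central differences."""
    cols = []
    for i in range(len(p)):
        up, dn = list(p), list(p)
        up[i] += h
        dn[i] -= h
        fu, fd = f(*up), f(*dn)
        cols.append([(a - b) / (2 * h) for a, b in zip(fu, fd)])
    return [[cols[i][j] for i in range(len(p))] for j in range(len(p))]

R, th, ph = 2.0, 0.7, 1.1
xyz = (R * math.sin(th) * math.cos(ph), R * math.sin(th) * math.sin(ph), R * math.cos(th))
J_prime = det3(fd_jacobian(xyz_to_sph, xyz))
print(J_prime, 1.0 / (R**2 * math.sin(th)))  # J' = 1/J
```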

Chapter 5

Tensor Algebraic Operations

  T'^{i₁ i₂ …}_{j₁ j₂ …} = (∂x'^{i₁}/∂x^{p₁}) (∂x'^{i₂}/∂x^{p₂}) ⋯ (∂x^{q₁}/∂x'^{j₁}) (∂x^{q₂}/∂x'^{j₂}) ⋯ T^{p₁ p₂ …}_{q₁ q₂ …}

In 3-dimensional space, a tensor of rank N has (3)^N components. A tensor of rank zero, therefore, has (3)⁰ = 1 component and is a scalar that can represent a field attribute with no direction associated with it. A tensor of rank one has (3)¹ = 3 components and is a point vector that can represent a field attribute having one direction associated with it. A tensor of rank two has (3)² = 9 components and can represent a field attribute having two directions associated with it. Similarly, a tensor of rank N has (3)^N components and can represent a field attribute having N directions associated with it. Tensors provide a very elegant conciseness of notation, particularly when representing field quantities having two or more directions associated with them, thus requiring many components for their specification.

5.1  TENSOR TRANSFORMATION LAWS

All tensors are independent of any coordinate system. In fact, this is the most important property of tensors. The components of tensors transform from one coordinate system to another in such a way so as to leave tensors invariant to coordinate system transformation. A tensor equation that is valid in one coordinate system remains valid, therefore, after transformation to another coordinate system. Tensors are ideal for representing physical field quantities since the properties of such quantities are also independent of any coordinate system.

Tensors are defined in terms of the law followed by their components in transforming from one coordinate system to another coordinate system. We have already seen in Chapter 1 that this is exactly how point vectors are defined (and point vectors are tensors of rank one).

Although tensors exist independently of any coordinate system, the components of all tensors (except scalars) must be defined in terms of some coordinate system. Once the components of a tensor are specified for a particular coordinate system, they are uniquely determined for any other coordinate system by using the transformation law for tensor components.

Like other tensors, tensors of rank zero (scalars) are invariant under coordinate transformation. Unlike other tensors, scalars have only one component. Therefore this one component must also be invariant under coordinate transformation. The law for coordinate transformation of a scalar β (and so of the single scalar component) is given by equation (1.13-25):

  β = β'   (5.1-1)

Higher rank tensors can be used to represent the linear relation between tensors of lower rank. For example, the stress in an elastic body can be represented by a rank two tensor that relates force (a point vector) to the orientation of an infinitesimally small plane area element (a point vector). Tensors of rank two then provide a method of representing physical quantities that have associated with them two directions. Similarly, elasticity can be represented by a rank four tensor that relates stress (a rank two tensor) to strain (a rank two tensor).

As noted previously, tensors of rank one (point vectors) are designated by a letter with an arrow over it (for example T). Tensors of rank two and higher will be similarly designated by a letter with two arrows over it.

A tensor T of rank one (point vector) can have two different types of components, contravariant and covariant (see Section 1.15). The coordinate transformation law for the contravariant components T^j of a point vector T is:

  T'^i = (∂x'^i/∂x^j) T^j   (5.1-2)

and for the covariant components T_j of a point vector T is:

  T'_i = (∂x^j/∂x'^i) T_j   (5.1-3)

For a tensor of rank two, the coordinate transformation law for contravariant components T^{pq} is:

  T'^{ij} = (∂x'^i/∂x^p)(∂x'^j/∂x^q) T^{pq}   (5.1-4)

and for covariant components T_{pq} is:

  T'_{ij} = (∂x^p/∂x'^i)(∂x^q/∂x'^j) T_{pq}   (5.1-5)

For a tensor T of rank N, the coordinate transformation law for contravariant components is:

  T'^{ijk…} = (∂x'^i/∂x^p)(∂x'^j/∂x^q)(∂x'^k/∂x^r) ⋯ T^{pqr…}   (N indices, N coefficients)  (5.1-6)

and for covariant components is:

  T'_{ijk…} = (∂x^p/∂x'^i)(∂x^q/∂x'^j)(∂x^r/∂x'^k) ⋯ T_{pqr…}   (N indices, N coefficients)  (5.1-7)

Example 5-1

Let T^{pq} = V^p W^q where V^p are contravariant components of a vector V, and W^q are contravariant components of a vector W. What is the transformation law for T^{pq}?

Solution:

From equation (5.1-2) we have:

  V'^i = (∂x'^i/∂x^p) V^p    W'^j = (∂x'^j/∂x^q) W^q

and so:

  T'^{ij} = V'^i W'^j = (∂x'^i/∂x^p) V^p (∂x'^j/∂x^q) W^q = (∂x'^i/∂x^p)(∂x'^j/∂x^q) V^p W^q

or

  T'^{ij} = (∂x'^i/∂x^p)(∂x'^j/∂x^q) T^{pq}

and so T^{pq} = V^p W^q transforms as, and therefore is, a rank two tensor as shown in equation (5.1-4).

The transformation laws given in equations (5.1-6) and (5.1-7) are sets of linear relations among tensor components; that is, each component of the tensor in one coordinate system is a linear combination of all the components of the same tensor in the other coordinate system. The transformation itself may not be linear, however, depending on the coefficients of the components.

For tensors of rank two and greater, it is possible to have some components that transform as contravariant components and some components that transform as covariant components. Such tensor components are designated as mixed components. The general law for transforming mixed components of a tensor T of rank N can be written as:

  T'^{i₁ i₂ …}_{j₁ j₂ …} = (∂x'^{i₁}/∂x^{p₁}) (∂x'^{i₂}/∂x^{p₂}) ⋯ (∂x^{q₁}/∂x'^{j₁}) (∂x^{q₂}/∂x'^{j₂}) ⋯ T^{p₁ p₂ …}_{q₁ q₂ …}   (5.1-8)

For example, the transformation law for mixed components of a rank two tensor T is:

  T'^i_j = (∂x'^i/∂x^p)(∂x^q/∂x'^j) T^p_q   (5.1-9)

For mixed components, the order of occurrence of the indices can be clarified, if necessary, by using a dot as a place indicator. As an example, for the mixed components T^i_{•j} the dot indicates that the first index is the contravariant index i while the second index is the covariant index j.

Two different sets of mixed components for a rank two tensor T are then given by:

  T'_j^{•i} = (∂x^q/∂x'^j)(∂x'^i/∂x^p) T_q^{•p}   (5.1-10)

  T'^i_{•j} = (∂x'^i/∂x^p)(∂x^q/∂x'^j) T^p_{•q}   (5.1-11)

All tensors (except scalars) are characterized by having components that are related to only a single coordinate system at a time (a scalar's single component is independent of any coordinate system). The transformation coefficient ∂x^p/∂x'^i relating tensor components in two different coordinate systems is, therefore, not a tensor component. This is true even though ∂x^p/∂x'^i and the components of rank two tensors both require two indices. Furthermore, no tensor component can be a function of the coordinates of more than the single point particle for which it is defined.
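Example 5-1 can be checked numerically: if T^{pq} = V^p W^q, transforming T with the rank-two law (5.1-4) gives the same result as transforming V and W first and then forming the outer product. The matrix L (playing the role of ∂x'^i/∂x^p for a linear change of coordinates) and the vectors are arbitrary illustrative choices:

```python
L = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 1.0],
     [1.0, 0.0, 1.0]]
V = [1.0, 2.0, 3.0]
W = [4.0, 0.0, -1.0]

matvec = lambda M, v: [sum(M[i][p] * v[p] for p in range(3)) for i in range(3)]
Vp, Wp = matvec(L, V), matvec(L, W)

# Transform T = V (outer) W componentwise with the rank-two law...
T = [[V[p] * W[q] for q in range(3)] for p in range(3)]
T_law = [[sum(L[i][p] * L[j][q] * T[p][q] for p in range(3) for q in range(3))
          for j in range(3)] for i in range(3)]

# ...and compare with the outer product of the transformed vectors.
T_outer = [[Vp[i] * Wp[j] for j in range(3)] for i in range(3)]
print(T_law == T_outer)  # True: V^p W^q transforms as a rank two tensor
```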

The components of tensors are classified based upon their type or variant: contravariant, covariant, or mixed. It is common practice for tensor components of any given type to be referred to as tensors of that type rather than as tensor components. This is particularly the case when working with tensors of rank two and higher. While we will follow this practice, it is important to keep in mind that such tensor quantities are, in fact, tensor components and not tensors.

Table 5-1  Examples of tensor classification.

All tensors are classified according to rank. The rank of a tensor is equal to the total number of its free indices, including both subscripts and superscripts (see Table 5-1). Any free index must occur once, and only once, in each and every term of a tensor equation.

Example 5-2

Let a_pq be the covariant components of a rank two tensor with respect to the rectangular coordinate system (x₁, x₂, x₃) where:

  {a_pq} = [ 1  0  2 ]
           [ 3  2  1 ]
           [ 0  2  2 ]

For a coordinate system (x'¹, x'², x'³) having base vectors defined by:

  e'₁ = î = ê₁    e'₂ = 2 ĵ = 2 ê₂    e'₃ = 2 ĵ + k̂ = 2 ê₂ + ê₃

determine the components a'_ij.

Solution:

We are given:

  [ e'₁ ]   [ 1  0  0 ] [ ê₁ ]   [ 1  0  0 ] [ î ]
  [ e'₂ ] = [ 0  2  0 ] [ ê₂ ] = [ 0  2  0 ] [ ĵ ]
  [ e'₃ ]   [ 0  2  1 ] [ ê₃ ]   [ 0  2  1 ] [ k̂ ]

Therefore from equation (4.8-10), we have:

  ∂x¹/∂x'¹ = 1    ∂x²/∂x'¹ = 0    ∂x³/∂x'¹ = 0

  ∂x¹/∂x'² = 0    ∂x²/∂x'² = 2    ∂x³/∂x'² = 0

  ∂x¹/∂x'³ = 0    ∂x²/∂x'³ = 2    ∂x³/∂x'³ = 1

Using the transformation law given in equation (5.1-5), we obtain:

  a'₁₁ = (∂x¹/∂x'¹)(∂x¹/∂x'¹) a₁₁ + (∂x¹/∂x'¹)(∂x²/∂x'¹) a₁₂ + (∂x¹/∂x'¹)(∂x³/∂x'¹) a₁₃
       + (∂x²/∂x'¹)(∂x¹/∂x'¹) a₂₁ + (∂x²/∂x'¹)(∂x²/∂x'¹) a₂₂ + (∂x²/∂x'¹)(∂x³/∂x'¹) a₂₃
       + (∂x³/∂x'¹)(∂x¹/∂x'¹) a₃₁ + (∂x³/∂x'¹)(∂x²/∂x'¹) a₃₂ + (∂x³/∂x'¹)(∂x³/∂x'¹) a₃₃

or

  a'₁₁ = (1)(1)(1) + (1)(0)(0) + (1)(0)(2) + (0)(1)(3) + (0)(0)(2) + (0)(0)(1) + (0)(1)(0) + (0)(0)(2) + (0)(0)(2)

and so:

  a'₁₁ = 1

Following a similar procedure for the other components, we have:

  {a'_ij} = [ 1   4  2 ]
            [ 6  28  6 ]
            [ 0   8  2 ]

Example 5-3

Write the transformation equation for the tensor T_ij^k.

Solution:

From equation (5.1-9) the transformation equation for T_ij^k is:

  T'_pq^r = (∂x^i/∂x'^p)(∂x^j/∂x'^q)(∂x'^r/∂x^k) T_ij^k

By examining the coordinate transformation law followed by an entity, it is possible to determine if the entity is a tensor component. Moreover, if the number and type of coefficients in the transformation law that is followed by the components of a tensor are known, then the rank of the tensor and the type of components can also be determined.

The directions associated with higher order tensors can be seen by writing the tensor using the tensor product or outer product notation ⊗:

  T = T^{i₁ i₂ …}_{j₁ j₂ …} e_{i₁} ⊗ e_{i₂} ⋯ ⊗ e^{j₁} ⊗ e^{j₂} ⋯   (5.1-12)

where:

  e_{i₁} ⊗ e_{i₂} ≠ e_{i₂} ⊗ e_{i₁}   (5.1-13)

The notation ⊗ does not indicate a cross product. The number of base vectors is equal to the rank of the tensor.
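A quick numeric illustration of (5.1-13): the outer product is order sensitive, and swapping the factors transposes the component array. The tiny helper below is a generic sketch, not notation from the text:

```python
e1 = [1.0, 0.0, 0.0]
e2 = [0.0, 1.0, 0.0]

# outer(u, v)[i][j] = u_i v_j, the component array of u (x) v.
outer = lambda u, v: [[ui * vj for vj in v] for ui in u]

print(outer(e1, e2) != outer(e2, e1))  # True: the two orderings differ
```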

5.2	TRANSFORMATION LAWS IN MATRIX FORM

Tensor component transformation laws can be written in matrix form. For example, we can write the transformation laws given in equations (5.1-4), (5.1-5), (5.1-10), and (5.1-11) as:

    \{T'^{ij}\} = \left\{\frac{\partial x'^i}{\partial x^p}\right\} \{T^{pq}\} \left\{\frac{\partial x'^j}{\partial x^q}\right\}^T		(5.2-1)

    \{T'_{ij}\} = \left\{\frac{\partial x^p}{\partial x'^i}\right\}^T \{T_{pq}\} \left\{\frac{\partial x^q}{\partial x'^j}\right\}		(5.2-2)

    \{T'_j{}^{\bullet i}\} = \left\{\frac{\partial x^q}{\partial x'^j}\right\}^T \{T_q{}^{\bullet p}\} \left\{\frac{\partial x'^i}{\partial x^p}\right\}^T		(5.2-3)

    \{T'^i{}_{\bullet j}\} = \left\{\frac{\partial x'^i}{\partial x^p}\right\} \{T^p{}_{\bullet q}\} \left\{\frac{\partial x^q}{\partial x'^j}\right\}		(5.2-4)

where the superscript T indicates the transpose of a matrix.

Example 5-4

Determine \{a'_{ij}\} in Example 5-2 using matrix methods.

Solution:

We are given:

    \{a_{pq}\} = \begin{bmatrix} 1 & 0 & 2 \\ 3 & 2 & 1 \\ 0 & 2 & 2 \end{bmatrix}		\left\{\frac{\partial x^p}{\partial x'^i}\right\} = \left\{\frac{\partial x^q}{\partial x'^j}\right\} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{bmatrix}

Using equation (5.2-2) we can write:

    \{a'_{ij}\} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 2 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 2 \\ 3 & 2 & 1 \\ 0 & 2 & 2 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{bmatrix}

or

    \{a'_{ij}\} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 2 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 4 & 2 \\ 3 & 6 & 1 \\ 0 & 8 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 4 & 2 \\ 6 & 28 & 6 \\ 0 & 8 & 2 \end{bmatrix}

which is identical to the \{a'_{ij}\} found in Example 5-2.
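Example 5-4's matrix method is a one-line computation in any linear-algebra package. A sketch assuming NumPy (not part of the text):

```python
import numpy as np

J = np.array([[1, 0, 0],      # {∂x^p/∂x'^i} from Example 5-2
              [0, 2, 0],
              [0, 2, 1]])
a = np.array([[1, 0, 2],      # {a_pq}
              [3, 2, 1],
              [0, 2, 2]])

# Equation (5.2-2): {a'_ij} = {∂x^p/∂x'^i}^T {a_pq} {∂x^q/∂x'^j}
a_prime = J.T @ a @ J
print(a_prime)
```

The printed array is the same matrix obtained term by term in Example 5-2, which is the point of the matrix formulation.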

5.3	CARTESIAN TENSORS

If x_j and x'_i are both rectangular coordinate systems, we have from equations (1.13-4), (1.13-7), and (1.13-10):

    \alpha_{ij} = \alpha'_{ji} = \frac{\partial x'_i}{\partial x_j} = \frac{\partial x_j}{\partial x'_i}		(5.3-1)

where α_{ij} are the direction cosines between the x_j and x'_i coordinate axes. For a rank N tensor \vec{T} in a rectangular coordinate system, we can then rewrite equations (5.1-6) and (5.1-7) as:

    T'^{ijk\cdots} = \alpha_{ip}\,\alpha_{jq}\,\alpha_{kr}\cdots\, T^{pqr\cdots} = \alpha'_{pi}\,\alpha'_{qj}\,\alpha'_{rk}\cdots\, T^{pqr\cdots}		(5.3-2)

    T'_{ijk\cdots} = \alpha'_{pi}\,\alpha'_{qj}\,\alpha'_{rk}\cdots\, T_{pqr\cdots} = \alpha_{ip}\,\alpha_{jq}\,\alpha_{kr}\cdots\, T_{pqr\cdots}		(5.3-3)

where each component carries N indices and each transformation involves N direction cosines. We see from equations (5.3-2) and (5.3-3) that covariant and contravariant tensor components follow the same transformation law in a rectangular coordinate system. Therefore, in this coordinate system, no difference exists between covariant and contravariant components for tensors of any rank. This was noted previously in Section 1.15 for tensors of rank one (point vectors).

All tensors with components defined in a rectangular coordinate system are generally referred to as Cartesian tensors, rather than as covariant, contravariant, or mixed tensors. Obviously, this terminology really refers to the tensor components and not the tensors themselves, since tensors are invariant under coordinate system transformation. The index notation of covariant tensors is commonly used for Cartesian tensors.

5.4	KRONECKER DELTA

By determining the transformation law that is followed by the Kronecker delta, it is possible to show that the Kronecker delta is a tensor and to find its rank and type. We will begin with the relation given in equation (4.14-5):

    \delta'^{\,k}_j = \frac{\partial x'^k}{\partial x'^j} = \frac{\partial x'^k}{\partial x^p}\frac{\partial x^p}{\partial x'^j}		(5.4-1)

This is equivalent to:

    \delta'^{\,k}_j = \frac{\partial x'^k}{\partial x^p}\frac{\partial x^q}{\partial x'^j}\,\delta_{pq}		(5.4-2)

Comparing equation (5.4-2) with equation (5.1-8), we see that the Kronecker delta transforms as a mixed tensor of rank two, and therefore is a mixed tensor of rank two. Equation (5.4-2) should then be written in the form:

    \delta'^{\,k}_j = \frac{\partial x'^k}{\partial x^p}\frac{\partial x^q}{\partial x'^j}\,\delta^{\,p}_q = \frac{\partial x'^k}{\partial x^p}\frac{\partial x^p}{\partial x'^j} = \frac{\partial x'^k}{\partial x'^j}		(5.4-3)

The value of the Kronecker delta is independent of the index notation used:

    \delta^{\,p}_q = \delta_q^{\,p} = \delta_{pq} = \delta^{pq} = \begin{cases} 0 & (p \neq q) \\ 1 & (p = q) \end{cases}		(5.4-4)

Nevertheless, the notation δ_{pq} and δ^{pq} should only be used for orthonormal coordinate systems where there is no difference between covariant and contravariant tensors. Equations (4.14-4) and (4.14-5) should also be written in terms of mixed tensors:

    \delta^{\,k}_j = \frac{\partial x^k}{\partial x^j}		(5.4-5)

    \delta'^{\,k}_j = \frac{\partial x'^k}{\partial x'^j}		(5.4-6)

As noted previously, tensor components are defined only with respect to a single coordinate system. The expressions given in equations (5.4-5) and (5.4-6) for the Kronecker delta are consistent with this requirement, relating coordinates within the same coordinate system.

The Kronecker delta has the same components in all coordinate systems:

    \{\delta'^{\,k}_j\} = \{\delta^{\,k}_j\} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}		(5.4-7)

Tensors having this property, whereby the components are unchanged by a transformation of the coordinate system, are termed isotropic tensors. The Kronecker delta is sometimes also referred to as the unit tensor or as the fundamental tensor of the second rank.

All scalars (tensors of rank zero) have components that are invariant to coordinate transformation, and so all scalars are isotropic tensors. The only isotropic tensor of rank one is the zero vector. As will be shown in Section 9.12, all isotropic tensors of rank two are multiples of δ^k_j:

    T^{\,i}_j = \lambda\, \delta^{\,i}_j		(5.4-8)

where λ is a scalar that can be zero.

Example 5-5

If T_{pq} follows the transformation law:

    T'_{ij} = \frac{\partial x^k}{\partial x^s}\frac{\partial x^p}{\partial x'^i}\frac{\partial x^s}{\partial x^j}\frac{\partial x^q}{\partial x'^k}\, T_{pq}

show that T_{pq} is a tensor of rank two.

Solution:

Eliminating x^s, we have:

    T'_{ij} = \frac{\partial x^k}{\partial x^s}\frac{\partial x^s}{\partial x^j}\frac{\partial x^p}{\partial x'^i}\frac{\partial x^q}{\partial x'^k}\, T_{pq} = \frac{\partial x^k}{\partial x^j}\frac{\partial x^p}{\partial x'^i}\frac{\partial x^q}{\partial x'^k}\, T_{pq}

Using equation (5.4-5), we have:

    T'_{ij} = \delta^{\,k}_j\,\frac{\partial x^p}{\partial x'^i}\frac{\partial x^q}{\partial x'^k}\, T_{pq} = \frac{\partial x^p}{\partial x'^i}\frac{\partial x^q}{\partial x'^j}\, T_{pq}

From equation (5.1-5), we see that T_{pq} transforms as a covariant tensor of rank two, and so it is a covariant tensor of rank two.
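Equation (5.4-3) says the Kronecker delta keeps the same components under any invertible coordinate transformation. A quick numerical check, assuming NumPy (the Jacobian values below are an arbitrary illustration, not taken from the text):

```python
import numpy as np

# An arbitrary invertible Jacobian {∂x'^k/∂x^p}
Jp = np.array([[2.0, 1.0, 0.0],
               [0.0, 1.0, 3.0],
               [1.0, 0.0, 1.0]])
Jinv = np.linalg.inv(Jp)   # {∂x^p/∂x'^j}

# Equation (5.4-3): δ'^k_j = (∂x'^k/∂x^p)(∂x^p/∂x'^j) — the identity matrix again
delta_prime = Jp @ Jinv
print(np.round(delta_prime))
```

Any other invertible Jacobian gives the same result, which is exactly the isotropy property described above.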

5.5	TENSOR ADDITION AND SUBTRACTION

Given two or more tensors of the same kind (having the same number of covariant indices and the same number of contravariant indices) and attached to the same point, the sum of their components will also be a tensor of the same kind. For example, if A_{ij}{}^k and B_{ij}{}^k are arbitrary mixed tensors attached to the same point, their sum C_{ij}{}^k obtained from:

    C_{ij}{}^k = A_{ij}{}^k + B_{ij}{}^k		(5.5-1)

will also be a mixed tensor. Addition of tensors of the same kind is commutative and associative. The difference of two tensors of the same kind is also a tensor of the same kind. Therefore we can write:

    D_{ij}{}^k = A_{ij}{}^k - B_{ij}{}^k		(5.5-2)

where D_{ij}{}^k is a mixed tensor. Tensors that are not of the same kind cannot be added to or subtracted from each other.

Example 5-6

Show that the sum of two mixed tensors A_{ij}{}^k and B_{ij}{}^k attached to the same point particle is another mixed tensor of the same kind.

Solution:

From equation (5.1-9) the transformation equations for A_{ij}{}^k and B_{ij}{}^k are:

    A'_{pq}{}^r = \frac{\partial x^i}{\partial x'^p}\frac{\partial x^j}{\partial x'^q}\frac{\partial x'^r}{\partial x^k}\, A_{ij}{}^k		B'_{pq}{}^r = \frac{\partial x^i}{\partial x'^p}\frac{\partial x^j}{\partial x'^q}\frac{\partial x'^r}{\partial x^k}\, B_{ij}{}^k

Adding these two equations:

    A'_{pq}{}^r + B'_{pq}{}^r = \frac{\partial x^i}{\partial x'^p}\frac{\partial x^j}{\partial x'^q}\frac{\partial x'^r}{\partial x^k}\left( A_{ij}{}^k + B_{ij}{}^k \right)

This equation shows that the sum A_{ij}{}^k + B_{ij}{}^k transforms as, and is therefore, a mixed tensor of the same kind as A_{ij}{}^k and B_{ij}{}^k. We can then write:

    C_{ij}{}^k = A_{ij}{}^k + B_{ij}{}^k

where C_{ij}{}^k is a tensor. Since A_{ij}{}^k and B_{ij}{}^k are tensors, their sum is valid in all coordinate systems.

5.6	SYMMETRIC AND ANTISYMMETRIC TENSORS

A tensor obtained by interchanging the indices of another tensor is called an isomer of the original tensor. A tensor is called symmetric with respect to any two contravariant or any two covariant indices when the interchange of these indices leaves all the components of the isomer unchanged from the

components of the original tensor. For example, if T^{ijk}{}_{pqr} is a tensor (of rank six) and if we have:

    T^{i\,j\,k}{}_{p\,q\,r} = T^{j\,i\,k}{}_{p\,q\,r}		(5.6-1)

for all values of the indices, then the tensor T^{ijk}{}_{pqr} is symmetric with respect to the indices i and j. If

    T^{i\,j\,k}{}_{p\,q\,r} = T^{i\,j\,k}{}_{q\,p\,r}		(5.6-2)

then the tensor T^{ijk}{}_{pqr} is symmetric with respect to the indices p and q.

A tensor is called antisymmetric or skew-symmetric with respect to any two contravariant or any two covariant indices when the interchange of these indices causes the algebraic sign of all components of the tensor to change, but otherwise leaves the components unchanged. Therefore if

    T^{i\,j\,k}{}_{p\,q\,r} = -T^{j\,i\,k}{}_{p\,q\,r}		(5.6-3)

then the tensor T^{ijk}{}_{pqr} is antisymmetric with respect to the indices i and j.

Tensors that are symmetric or antisymmetric in any two indices are considered to be symmetric or antisymmetric tensors, respectively. Tensor symmetry or antisymmetry can, in general, only occur between two indices of the same type of variant. Symmetry or antisymmetry in the same type of variant is an intrinsic property of a tensor and so is independent of any coordinate system. A tensor that is symmetric or antisymmetric in the same type of variant in one coordinate system will also be similarly symmetric or antisymmetric, respectively, in all other coordinate systems.

Example 5-7

Show that a rank two tensor S_{ij} that is symmetric in a curvilinear coordinate system (x^1, x^2, x^3) will also be symmetric in any other curvilinear coordinate system (x'^1, x'^2, x'^3).

Solution:

From the transformation law for covariant components of a rank two tensor given in equation (5.1-5), we have:

    S'_{pq} = \frac{\partial x^j}{\partial x'^p}\frac{\partial x^i}{\partial x'^q}\, S_{ji}

Changing the order of the coefficients in this equation, and using S_{ji} = S_{ij}, we can write:

    S'_{pq} = \frac{\partial x^i}{\partial x'^q}\frac{\partial x^j}{\partial x'^p}\, S_{ij}

or, from equation (5.1-5) again:

    S'_{pq} = \frac{\partial x^i}{\partial x'^q}\frac{\partial x^j}{\partial x'^p}\, S_{ij} = S'_{qp}

Therefore a rank two tensor S_{ij} that is symmetric in any curvilinear coordinate system is symmetric in all curvilinear coordinate systems.

Chance symmetry or antisymmetry that may exist between different types of variants of a mixed tensor in any one coordinate system is not an intrinsic property of the tensor, and will generally be lost in transforming to another coordinate system. An exception is the Kronecker delta, which is symmetric and isotropic:

    \{\delta^{\,j}_k\} = \{\delta^{\,k}_j\} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}		(5.6-4)

Symmetry in a tensor causes a reduction in the number of independent components. For example, a symmetric tensor of rank two has six independent components rather than nine. If S_{ij} is a symmetric rank two tensor, we have:

    \{S_{ij}\} = \{S_{ji}\} = \begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{12} & S_{22} & S_{23} \\ S_{13} & S_{23} & S_{33} \end{bmatrix}		(5.6-5)

Antisymmetry in a tensor also causes a reduction in the number of independent components. An antisymmetric tensor of rank two has only three independent components instead of nine. This follows from the fact that, for an antisymmetric tensor A_{ij}, the requirement that:

    A_{ij} = -A_{ji}		(5.6-6)

can only be satisfied if:

    A_{ii} = 0 \quad (\text{no sum})		(5.6-7)

and so we have:

    \{A_{ij}\} = \{-A_{ji}\} = \begin{bmatrix} 0 & A_{12} & A_{13} \\ -A_{12} & 0 & A_{23} \\ -A_{13} & -A_{23} & 0 \end{bmatrix}		(5.6-8)

Since an antisymmetric tensor of rank two has only three independent components (in 3-dimensional space), it can be represented by a pseudovector (see Section 8.12). We can then write the vector product \vec{A} \times \vec{B} of two point vectors \vec{A} and \vec{B} in matrix form by writing the second vector in the product as an antisymmetric tensor of rank two:

    \vec{A} \times \vec{B} = \begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix} \begin{bmatrix} 0 & -b_3 & b_2 \\ b_3 & 0 & -b_1 \\ -b_2 & b_1 & 0 \end{bmatrix} \begin{bmatrix} \hat{e}_1 \\ \hat{e}_2 \\ \hat{e}_3 \end{bmatrix}		(5.6-9)

Any contravariant or covariant tensor of rank two or higher can be expressed as the sum of a symmetric and an antisymmetric tensor of the same rank and type.
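The antisymmetric-tensor form of the cross product in equation (5.6-9) is easy to verify numerically. A sketch assuming NumPy (the vectors are arbitrary illustrations, not from the text):

```python
import numpy as np

def cross_via_antisymmetric(a, b):
    # Antisymmetric rank-two tensor built from b, as in equation (5.6-9)
    B = np.array([[0.0, -b[2], b[1]],
                  [b[2], 0.0, -b[0]],
                  [-b[1], b[0], 0.0]])
    return a @ B   # row vector of a-components times the antisymmetric matrix

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(cross_via_antisymmetric(a, b))   # agrees with np.cross(a, b)
```

The result equals the ordinary cross product for any pair of vectors, since the matrix multiplication reproduces the components a_2 b_3 − a_3 b_2, and so on.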

For example, let us consider T_{ij} to be an arbitrary covariant tensor of rank two. Let us now decompose T_{ij} into a symmetric tensor S_{ij} and an antisymmetric tensor A_{ij} by defining:

    S_{ij} \equiv \frac{1}{2}\left( T_{ij} + T_{ji} \right)		(5.6-10)

and

    A_{ij} \equiv \frac{1}{2}\left( T_{ij} - T_{ji} \right)		(5.6-11)

Equation (5.6-10) is called the symmetrization of T_{ij}, and equation (5.6-11) is called the antisymmetrization of T_{ij} since:

    S_{ij} = S_{ji}		(5.6-12)

    A_{ij} = -A_{ji}		(5.6-13)

    A_{ii} = 0 \quad (\text{no sum})		(5.6-14)

Adding equations (5.6-10) and (5.6-11), we have:

    S_{ij} + A_{ij} = \frac{1}{2}\left( T_{ij} + T_{ji} \right) + \frac{1}{2}\left( T_{ij} - T_{ji} \right) = T_{ij}		(5.6-15)

The definitions given in equations (5.6-10) and (5.6-11) are operational, both defining and giving a method of computing the symmetric rank two tensor S_{ij} and the antisymmetric rank two tensor A_{ij} from an arbitrary rank two tensor T_{ij}.

Example 5-8

If S^{ij} is a symmetric tensor and A_{ij} is an antisymmetric tensor, show that S^{ij} A_{ij} = 0.

Solution:

We have:

    S^{ij} = S^{ji}		A_{ij} = 0 \quad (i = j)		A_{ij} = -A_{ji} \quad (i \neq j)

Therefore:

    S^{ij} A_{ij} = S^{11}(0) + S^{12}A_{12} + S^{13}A_{13} + S^{21}A_{21} + S^{22}(0) + S^{23}A_{23} + S^{31}A_{31} + S^{32}A_{32} + S^{33}(0)

or

    S^{ij} A_{ij} = S^{12}A_{12} + S^{13}A_{13} - S^{12}A_{12} + S^{23}A_{23} - S^{13}A_{13} - S^{23}A_{23}

and so:

    S^{ij} A_{ij} = 0
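The symmetrization and antisymmetrization of equations (5.6-10) and (5.6-11), together with the results (5.6-15) and Example 5-8, can be checked on an arbitrary array. A sketch assuming NumPy (the components are random stand-ins, not data from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))   # arbitrary rank-two components T_ij

S = 0.5 * (T + T.T)   # symmetrization, equation (5.6-10)
A = 0.5 * (T - T.T)   # antisymmetrization, equation (5.6-11)

print(np.allclose(S + A, T))             # equation (5.6-15): the sum recovers T
print(np.isclose((S * A).sum(), 0.0))    # Example 5-8: S^{ij} A_{ij} = 0
```

Both checks print True for any starting array, mirroring the index argument given in Example 5-8.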

5.7	DYADIC TENSOR PRODUCTS

The dyadic tensor product of two vectors attached to the same point can be written using the tensor product notation ⊗:

    \vec{T} = \vec{V} \otimes \vec{W}		(5.7-1)

Equation (5.7-1) can represent any one of the four possibilities:

    \vec{T} = V_i\,\vec{e}^{\,i} \otimes W_j\,\vec{e}^{\,j} = V_i W_j\; \vec{e}^{\,i} \otimes \vec{e}^{\,j} = T_{ij}\; \vec{e}^{\,i} \otimes \vec{e}^{\,j}		(5.7-2)

    \vec{T} = V^i\,\vec{e}_i \otimes W^j\,\vec{e}_j = V^i W^j\; \vec{e}_i \otimes \vec{e}_j = T^{ij}\; \vec{e}_i \otimes \vec{e}_j		(5.7-3)

    \vec{T} = V^i\,\vec{e}_i \otimes W_j\,\vec{e}^{\,j} = V^i W_j\; \vec{e}_i \otimes \vec{e}^{\,j} = T^i{}_{\bullet j}\; \vec{e}_i \otimes \vec{e}^{\,j}		(5.7-4)

    \vec{T} = V_i\,\vec{e}^{\,i} \otimes W^j\,\vec{e}_j = V_i W^j\; \vec{e}^{\,i} \otimes \vec{e}_j = T_i{}^{\bullet j}\; \vec{e}^{\,i} \otimes \vec{e}_j		(5.7-5)

The product \vec{e}^{\,i} \otimes \vec{e}_i is called the unit dyad (see Example 5-12), and represents two coordinate directions. A linear combination of dyads is a rank two dyadic tensor. A rank two tensor can be expressed in terms of nine dyads (see Example 5-10).

We see that the dyadic tensor product is neither a scalar (dot) product nor a vector (cross) product. The dyadic tensor \vec{T} = \vec{V} \otimes \vec{W} has nine components.

The result of a scalar product of a vector with a dyad can be illustrated by considering:

    (\vec{A} \otimes \vec{B}) \cdot \vec{C} = \vec{A}\,(\vec{B} \cdot \vec{C})		(5.7-6)

    \vec{C} \cdot (\vec{A} \otimes \vec{B}) = (\vec{C} \cdot \vec{A})\,\vec{B}		(5.7-7)

We have then:

    (\vec{A} \otimes \vec{B}) \cdot \vec{C} \neq \vec{C} \cdot (\vec{A} \otimes \vec{B})		(5.7-8)

The scalar product of a vector with a dyad is not commutative. The scalar product of any vector \vec{C} with a dyad will generally produce a new vector having a magnitude and direction different from \vec{C}.

The dyadic tensor product is associative and distributive:

    \vec{A} \otimes (\vec{B} \otimes \vec{C}) = (\vec{A} \otimes \vec{B}) \otimes \vec{C}		(5.7-9)

    \vec{A} \otimes (\vec{B} + \vec{C}) = \vec{A} \otimes \vec{B} + \vec{A} \otimes \vec{C}		(5.7-10)

    (\vec{A} + \vec{B}) \otimes \vec{C} = \vec{A} \otimes \vec{C} + \vec{B} \otimes \vec{C}		(5.7-11)

The dyadic tensor product is generally not commutative:

    \vec{A} \otimes \vec{B} \neq \vec{B} \otimes \vec{A}		(5.7-12)

The scalar product of two dyads is not commutative (see Example 5-9). The scalar product of two dyads is a dyad. Only when a dyad operates on a vector does it become meaningful (see Example 5-11).

Example 5-9

Determine:

    (\vec{A} \otimes \vec{B}) \cdot (\vec{C} \otimes \vec{D})		(\vec{C} \otimes \vec{D}) \cdot (\vec{A} \otimes \vec{B})

Solution:

    (\vec{A} \otimes \vec{B}) \cdot (\vec{C} \otimes \vec{D}) = (\vec{B} \cdot \vec{C})(\vec{A} \otimes \vec{D})

    (\vec{C} \otimes \vec{D}) \cdot (\vec{A} \otimes \vec{B}) = (\vec{D} \cdot \vec{A})(\vec{C} \otimes \vec{B})

Example 5-10

Given \vec{T} = V_i W^j\, \vec{e}^{\,i} \otimes \vec{e}_j, let T_i{}^j = V_i W^j and expand \vec{T}.

Solution:

    \vec{T} = T_1{}^1\, \vec{e}^{\,1} \otimes \vec{e}_1 + T_1{}^2\, \vec{e}^{\,1} \otimes \vec{e}_2 + T_1{}^3\, \vec{e}^{\,1} \otimes \vec{e}_3
            + T_2{}^1\, \vec{e}^{\,2} \otimes \vec{e}_1 + T_2{}^2\, \vec{e}^{\,2} \otimes \vec{e}_2 + T_2{}^3\, \vec{e}^{\,2} \otimes \vec{e}_3
            + T_3{}^1\, \vec{e}^{\,3} \otimes \vec{e}_1 + T_3{}^2\, \vec{e}^{\,3} \otimes \vec{e}_2 + T_3{}^3\, \vec{e}^{\,3} \otimes \vec{e}_3

There are two directions associated with each of the nine components of the rank two tensor \vec{T}.

Example 5-11

If \vec{T} = T_{ij}\, \vec{e}^{\,i} \otimes \vec{e}^{\,j}, determine \vec{T} \cdot \vec{e}_k.

Solution:

    \vec{T} \cdot \vec{e}_k = T_{ij}\, (\vec{e}^{\,i} \otimes \vec{e}^{\,j}) \cdot \vec{e}_k = T_{ij}\, \vec{e}^{\,i}\, (\vec{e}^{\,j} \cdot \vec{e}_k) = T_{ij}\, \vec{e}^{\,i}\, \delta^{\,j}_k = T_{ik}\, \vec{e}^{\,i}

Example 5-12

Show that if \vec{A} = a^1\, \vec{e}_1 + a^2\, \vec{e}_2 + a^3\, \vec{e}_3 = a^i\, \vec{e}_i then:

    \vec{A} \cdot (\vec{e}^{\,i} \otimes \vec{e}_i) = \vec{A}

Solution:

We have:

    \vec{A} \cdot (\vec{e}^{\,i} \otimes \vec{e}_i) = (\vec{A} \cdot \vec{e}^{\,i})\, \vec{e}_i = (\vec{A} \cdot \vec{e}^{\,1})\, \vec{e}_1 + (\vec{A} \cdot \vec{e}^{\,2})\, \vec{e}_2 + (\vec{A} \cdot \vec{e}^{\,3})\, \vec{e}_3

Therefore from equation (4.11-4) we have:

    \vec{A} \cdot (\vec{e}^{\,i} \otimes \vec{e}_i) = a^1\, \vec{e}_1 + a^2\, \vec{e}_2 + a^3\, \vec{e}_3 = a^i\, \vec{e}_i = \vec{A}

and we see why \vec{e}^{\,i} \otimes \vec{e}_i is considered the unit dyad.

5.7.1	OUTER PRODUCT

From equation (5.7-2) the ij component of \vec{T} = \vec{V} \otimes \vec{W} is given by:

    T_{ij} = (\vec{V} \otimes \vec{W})_{ij} = V_i W_j		(5.7-13)

The components T_{ij} of \vec{T} are the product of the components V_i of \vec{V} and W_j of \vec{W}. Products of tensor components such as the one given in equation (5.7-13) are called outer products. The outer product will have a rank that is the sum of the ranks of the factor tensors. For example, in equation (5.7-13) V_i and W_j are rank one tensors and their outer product is a rank two tensor T_{ij}. A component of a dyad of rank two is a rank two tensor.

Tensors of arbitrary rank and type defined at the same point can be factors in an outer product. The type of tensor resulting from the outer product of two tensors will be covariant, contravariant, or mixed depending on whether the factor tensors in the outer product are covariant, contravariant, or one covariant and the other contravariant.

If A^{ij}{}_k and B^p{}_q are any two arbitrary tensors of the indicated rank, their outer product is given by:

    C^{ij}{}_k{}^p{}_q = A^{ij}{}_k\, B^p{}_q		(5.7-14)

Equation (5.7-14) can, of course, represent a number of different tensor equations, depending on the relative position of the indices. As, for example, these tensors:

    C^{ij\,\bullet\bullet\,p}{}_{\bullet\bullet\,kq\,\bullet} = A^{ij}{}_{\bullet\bullet k}\, B_q{}^{\bullet p}		(5.7-15)

    C^{ij\,\bullet\,p}{}_{\bullet\bullet\,k\,\bullet q} = A^{ij}{}_{\bullet\bullet k}\, B^p{}_{\bullet q}		(5.7-16)

    C_k{}^{\bullet\,ij}{}_q{}^{\bullet\,p} = A_k{}^{\bullet\,ij}\, B_q{}^{\bullet\,p}		(5.7-17)

Multiplication of a tensor by a scalar will produce a tensor of the same rank, since the rank of a scalar is zero. If β is a scalar and \vec{A} is a tensor of any rank, the product β\vec{A} will then result in a tensor of the same rank as \vec{A}, having components each of which is β times the corresponding component of \vec{A}.

Tensor multiplication is not commutative. The order of tensor factors in a product cannot be changed without changing the resulting tensor. For example:

    C_{ijkl} = A_{ij} B_{kl} \neq B_{kl} A_{ij} = C_{klij}		(5.7-18)

The doubly contracted product of a symmetric rank two tensor and an antisymmetric rank two tensor is always zero, as was shown in Example 5-8.

Example 5-13

If the covariant components of vectors \vec{V} and \vec{W} with respect to a given coordinate system are:

    \vec{V} = (0,\; 1,\; 3)		\vec{W} = (1,\; 2,\; 3)

find the covariant components of \vec{T} = \vec{V} \otimes \vec{W} with respect to this same coordinate system.

Solution:

    \{T_{ij}\} = \{V_i W_j\} = \begin{bmatrix} 0 \\ 1 \\ 3 \end{bmatrix} \begin{bmatrix} 1 & 2 & 3 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 2 & 3 \\ 3 & 6 & 9 \end{bmatrix}
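The dyad identities (5.7-6) through (5.7-8), the dyad–dyad product of Example 5-9, and the outer product of Example 5-13 can all be spot-checked numerically. A sketch assuming NumPy (the extra vectors C and D are arbitrary illustrations, not from the text):

```python
import numpy as np

V, W = np.array([0, 1, 3]), np.array([1, 2, 3])
C, D = np.array([2.0, 2.0, 1.0]), np.array([1.0, 0.0, 2.0])

# Example 5-13: T_ij = V_i W_j, equation (5.7-13)
T = np.outer(V, W)
print(T)   # rows: [0 0 0], [1 2 3], [3 6 9]

# (V ⊗ W)·C = V (W·C) and C·(V ⊗ W) = (C·V) W, equations (5.7-6)/(5.7-7)
print(np.allclose(T @ C, V * (W @ C)), np.allclose(C @ T, (C @ V) * W))

# Example 5-9: (V ⊗ W)·(C ⊗ D) = (W·C)(V ⊗ D)
print(np.allclose(T @ np.outer(C, D), (W @ C) * np.outer(V, D)))
```

The two vector results in the middle check are generally different, which is the non-commutativity stated in equation (5.7-8).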

Example 5-14

If A^{ij}{}_k and B^p{}_q are tensors, show that their outer product C^{ij}{}_k{}^p{}_q is a tensor, where:

    C^{ij}{}_k{}^p{}_q = A^{ij}{}_k\, B^p{}_q

Solution:

From equation (5.1-9) the transformation equations for A^{ij}{}_k and B^p{}_q are:

    A'^{lm}{}_n = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^k}{\partial x'^n}\, A^{ij}{}_k		B'^r{}_s = \frac{\partial x'^r}{\partial x^p}\frac{\partial x^q}{\partial x'^s}\, B^p{}_q

Multiplying:

    A'^{lm}{}_n\, B'^r{}_s = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^k}{\partial x'^n}\frac{\partial x'^r}{\partial x^p}\frac{\partial x^q}{\partial x'^s}\, A^{ij}{}_k\, B^p{}_q

or

    C'^{lm}{}_n{}^r{}_s = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^k}{\partial x'^n}\frac{\partial x'^r}{\partial x^p}\frac{\partial x^q}{\partial x'^s}\, C^{ij}{}_k{}^p{}_q

We see from equation (5.1-9) that C^{ij}{}_k{}^p{}_q transforms as a mixed tensor of rank five, and so it is a mixed tensor of rank five.

5.7.2	CONTRACTION

For tensor components having a rank of two or higher, if we equate one covariant index and one contravariant index, Einstein summation over that common index will produce a tensor that is of rank two less than the original tensor. For example, if the index q of the tensor A^{ij}{}_{pq} is set equal to the index j, the two indices j and q become dummy indices, and we have:

    A^{ij}{}_{pj} = A^{i1}{}_{p1} + A^{i2}{}_{p2} + A^{i3}{}_{p3} = B^i{}_p		(5.7-19)

The rank four tensor A^{ij}{}_{pq} is then reduced to B^i{}_p, a tensor of rank two. This reduction in the rank of a tensor is known as contraction. If no free indices remain after contraction, the resulting tensor is a scalar. Repeated contraction of a tensor of rank N will produce a scalar if N is even and a point vector if N is odd.

For curvilinear coordinates, contraction is limited to the case where one index is a superscript and the other is a subscript. This limitation is necessary to assure that the result of the contraction is a tensor (see Example 5-16). Only for Cartesian tensors can contraction occur over two subscripts or two superscripts. Contraction of a rank two Cartesian tensor A_{ij} results in:

    A_{ii} = A_{11} + A_{22} + A_{33} = \beta		(5.7-20)

where β is a scalar that is known as the trace of the tensor A_{ij}.

Example 5-15

Show that the contraction of the mixed tensor A^i{}_j of rank two produces a scalar.

Solution:

From equation (5.1-9) the transformation equation for A^i{}_j is:

    A'^p{}_q = \frac{\partial x'^p}{\partial x^i}\frac{\partial x^j}{\partial x'^q}\, A^i{}_j

Setting q = p and using equation (5.4-5):

    A'^p{}_p = \frac{\partial x'^p}{\partial x^i}\frac{\partial x^j}{\partial x'^p}\, A^i{}_j = \frac{\partial x^j}{\partial x^i}\, A^i{}_j = \delta^{\,j}_i\, A^i{}_j = A^i{}_i = A^p{}_p

And so A^p{}_p is invariant and is therefore a scalar.

Example 5-16

Show that, in a curvilinear coordinate system, contracting the tensor A^{ij}{}_k in the i and j indices does not result in a tensor. Also show that contracting this tensor in the i and k indices does result in a tensor.

Solution:

From equation (5.1-9) the transformation law for A^{ij}{}_k is:

    A'^{lm}{}_n = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^k}{\partial x'^n}\, A^{ij}{}_k

Contracting the tensor A^{ij}{}_k in the i and j indices, we have:

    A^{ii}{}_k = B_k

and the transformation equation becomes:

    A'^{lm}{}_n = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^i}\frac{\partial x^k}{\partial x'^n}\, B_k

This is not the correct transformation law for any kind of vector components, and so contracting the tensor A^{ij}{}_k in the i and j indices does not yield a tensor.

Contracting the tensor A^{ij}{}_k in the i and k indices, we have:

    A^{ij}{}_i = B^j

and the transformation equation becomes:

    A'^{lm}{}_n = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^i}{\partial x'^n}\, B^j = \frac{\partial x'^l}{\partial x'^n}\frac{\partial x'^m}{\partial x^j}\, B^j = \delta'^{\,l}_n\, \frac{\partial x'^m}{\partial x^j}\, B^j

where we have used equation (5.4-6). We now have:

    A'^{nm}{}_n = B'^m = \frac{\partial x'^m}{\partial x^j}\, B^j

This is the correct transformation law for contravariant components of vectors, and so contracting the tensor A^{ij}{}_k in the i and k indices does yield a tensor.
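Contraction is a single call to an index-summation routine. A sketch assuming NumPy (random stand-in components, chosen only to illustrate equations (5.7-19) and (5.7-20)):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3, 3, 3))   # stand-in components A^{ij}_{pq}, axes i, j, p, q

# Equation (5.7-19): set q = j and sum, reducing rank four to rank two
B = np.einsum('ijpj->ip', A)
print(B.shape)   # (3, 3)

# Equation (5.7-20): full contraction of a rank-two Cartesian tensor is its trace
M = rng.standard_normal((3, 3))
print(np.isclose(np.einsum('ii->', M), np.trace(M)))
```

The einsum subscript string makes the dummy-index bookkeeping explicit: a repeated letter is summed, and the surviving letters are the free indices of the result.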

Contraction of a tensor is equivalent to multiplication by the Kronecker delta. For example:

    A^{ij}{}_{pq}\, \delta^{\,q}_j = A^{ij}{}_{pj} = B^i{}_p		(5.7-21)

5.7.3	INNER PRODUCT

An inner product of two tensor components is obtained whenever a covariant index of one component and a contravariant index of the other component in an outer product are equal. For example:

    C^i{}_j = A^{ik}\, B_{jk}		(5.7-22)

is an inner product. If a contraction involving two components follows an outer product, we obtain an inner product. For example, if we have the outer product:

    C^{ij}{}_k{}^p{}_q = A^{ij}{}_k\, B^p{}_q		(5.7-23)

and we set p = k, the outer product becomes the inner product:

    D^{ij}{}_q = A^{ij}{}_k\, B^k{}_q		(5.7-24)

If, in addition, we set q = j, we have the inner product:

    E^i = A^{ij}{}_k\, B^k{}_j		(5.7-25)

From the previous discussions of outer product and contraction, we see that the result of an inner product of tensors will be a tensor. The inner product of a contravariant vector and a covariant vector is known as a scalar product since the result is a scalar.

Example 5-17

Show that the inner product of a contravariant vector A^i and a covariant vector B_i is a scalar.

Solution:

We have:

    A^i = \frac{\partial x^i}{\partial x'^j}\, A'^j		B_i = \frac{\partial x'^k}{\partial x^i}\, B'_k

Therefore:

    A^i B_i = \frac{\partial x^i}{\partial x'^j}\frac{\partial x'^k}{\partial x^i}\, A'^j B'_k = \frac{\partial x'^k}{\partial x'^j}\, A'^j B'_k = \delta'^{\,k}_j\, A'^j B'_k = A'^j B'_j

and so the inner product of a contravariant vector A^i and a covariant vector B_i is independent of the coordinate system and is a scalar since there are no free indices.
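The invariance of A^i B_i shown in Example 5-17 can be confirmed numerically. A sketch assuming NumPy (the Jacobian and components are arbitrary illustrations, not from the text):

```python
import numpy as np

Jp = np.array([[1.0, 2.0, 0.0],   # {∂x'^j/∂x^i}, an arbitrary invertible Jacobian
               [0.0, 1.0, 1.0],
               [1.0, 0.0, 2.0]])
Jinv = np.linalg.inv(Jp)          # {∂x^i/∂x'^j}

A = np.array([1.0, -2.0, 3.0])    # contravariant components A^i
B = np.array([0.5, 4.0, 1.0])     # covariant components B_i

A_prime = Jp @ A        # A'^j = (∂x'^j/∂x^i) A^i
B_prime = Jinv.T @ B    # B'_j = (∂x^i/∂x'^j) B_i

print(np.isclose(A @ B, A_prime @ B_prime))   # the scalar A^i B_i is unchanged
```

Because the contravariant components transform with the Jacobian and the covariant components with its inverse transpose, the contraction cancels the coordinate change exactly.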

Example 5-18

Show that the inner product of the tensors A^{ij}{}_k and B^k{}_q is a tensor of rank three.

Solution:

From equation (5.1-9) the transformation equations for A^{ij}{}_k and B^k{}_q are:

    A'^{lm}{}_n = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^k}{\partial x'^n}\, A^{ij}{}_k		B'^r{}_s = \frac{\partial x'^r}{\partial x^k}\frac{\partial x^q}{\partial x'^s}\, B^k{}_q

Multiplying:

    A'^{lm}{}_n\, B'^r{}_s = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^k}{\partial x'^n}\frac{\partial x'^r}{\partial x^k}\frac{\partial x^q}{\partial x'^s}\, A^{ij}{}_k\, B^k{}_q

or

    A'^{lm}{}_n\, B'^r{}_s = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x'^r}{\partial x'^n}\frac{\partial x^q}{\partial x'^s}\, A^{ij}{}_k\, B^k{}_q = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\, \delta'^{\,r}_n\, \frac{\partial x^q}{\partial x'^s}\, A^{ij}{}_k\, B^k{}_q

Therefore:

    A'^{lm}{}_r\, B'^r{}_s = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^q}{\partial x'^s}\, A^{ij}{}_k\, B^k{}_q

and so the inner product of the tensors A^{ij}{}_k and B^k{}_q is a tensor of rank three.

5.8	QUOTIENT LAW

We have just seen that the tensor operations of outer product and inner product result in a tensor. We now consider the inner product:

    C^{ij}{}_q = A^{ij}{}_k\, B^k{}_q		(5.8-1)

in which one of the factors, B^k{}_q, is known to be an arbitrary tensor, and the product, C^{ij}{}_q, is known to be a tensor. Since tensor operations are involved, we can state that the other factor, A^{ij}{}_k, must then also be a tensor. More generally, for an inner or outer product consisting of two factors, if one factor of the product is known to be an arbitrary tensor of any rank and type (that is independent of the other factor), and if the result of the product is known to be a tensor, then the other factor in the product can be assumed correctly to be a tensor. This is known as the quotient law. The quotient law can be used to determine not only whether a factor in a product is a tensor, but also its rank and

type. The quotient law cannot be used if one of the factors of the inner product is zero.

Example 5-19

Show the validity of the quotient law for C^{ij}{}_q = A^{ij}{}_k\, B^k{}_q if B^k{}_q is known to be an arbitrary tensor and C^{ij}{}_q is known to be a tensor.

Solution:

To show that the A^{ij}{}_k in this equation is indeed a tensor, we begin by transforming the tensor C^{ij}{}_q:

    C'^{lm}{}_r = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^q}{\partial x'^r}\, C^{ij}{}_q

or since C^{ij}{}_q = A^{ij}{}_k\, B^k{}_q:

    C'^{lm}{}_r = A'^{lm}{}_n\, B'^n{}_r = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^q}{\partial x'^r}\, A^{ij}{}_k\, B^k{}_q

Using the transformation law for B^k{}_q, we obtain:

    B^k{}_q = \frac{\partial x^k}{\partial x'^n}\frac{\partial x'^s}{\partial x^q}\, B'^n{}_s

and so we have:

    A'^{lm}{}_n\, B'^n{}_r = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^q}{\partial x'^r}\frac{\partial x^k}{\partial x'^n}\frac{\partial x'^s}{\partial x^q}\, A^{ij}{}_k\, B'^n{}_s

Two of the coefficients can be rewritten as:

    \frac{\partial x^q}{\partial x'^r}\frac{\partial x'^s}{\partial x^q} = \frac{\partial x'^s}{\partial x'^r} = \delta'^{\,s}_r

where we have used equation (5.4-6). Therefore we have:

    A'^{lm}{}_n\, B'^n{}_r = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^k}{\partial x'^n}\, \delta'^{\,s}_r\, A^{ij}{}_k\, B'^n{}_s = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^k}{\partial x'^n}\, A^{ij}{}_k\, B'^n{}_r

and so:

    \left[ A'^{lm}{}_n - \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^k}{\partial x'^n}\, A^{ij}{}_k \right] B'^n{}_r = 0

Since B'^n{}_r is known to be an arbitrary nonzero tensor that is independent of A^{ij}{}_k, this equation can be true only if:

    A'^{lm}{}_n = \frac{\partial x'^l}{\partial x^i}\frac{\partial x'^m}{\partial x^j}\frac{\partial x^k}{\partial x'^n}\, A^{ij}{}_k

From equation (5.1-9), we see then that A^{ij}{}_k is a mixed tensor of rank three. The quotient law is valid, therefore, for C^{ij}{}_q = A^{ij}{}_k\, B^k{}_q when B^k{}_q is known to be an arbitrary tensor and C^{ij}{}_q is known to be a tensor.
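The conclusion of Example 5-18, that the inner product A^{ij}{}_k B^k{}_q transforms as a rank-three tensor, can be checked numerically. A sketch assuming NumPy (all component values and the Jacobian are random or arbitrary stand-ins, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3, 3))       # stand-in components A^{ij}_k (axes i, j, k)
B = rng.standard_normal((3, 3))          # stand-in components B^k_q (axes k, q)
Jp = np.array([[1.0, 2.0, 0.0],          # {∂x'^l/∂x^i}, arbitrary invertible
               [0.0, 1.0, 1.0],
               [1.0, 0.0, 2.0]])
Ji = np.linalg.inv(Jp)                   # {∂x^k/∂x'^n}

# Transform each factor by (5.1-9), then take the inner product:
A_p = np.einsum('li,mj,kn,ijk->lmn', Jp, Jp, Ji, A)
B_p = np.einsum('rk,qs,kq->rs', Jp, Ji, B)
lhs = np.einsum('lmr,rs->lms', A_p, B_p)

# Transform the inner product C^{ij}_q = A^{ij}_k B^k_q directly:
C = np.einsum('ijk,kq->ijq', A, B)
rhs = np.einsum('li,mj,qs,ijq->lms', Jp, Jp, Ji, C)

print(np.allclose(lhs, rhs))   # transforming then contracting = contracting then transforming
```

The agreement holds because the dummy-index pair contributes a factor (∂x^k/∂x'^r)(∂x'^r/∂x^{k'}) = δ^k_{k'}, exactly the cancellation used in the worked solution above.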

Chapter 6

Metric Tensors

    (ds)^2 = g_{ij}\, dx^i\, dx^j

The metric tensor is a very important tensor that provides the information necessary for converting coordinate system differentials into invariant distance differentials. This information is provided by the components of the metric tensor (since the tensor itself is independent of any coordinate system).

6.1	METRIC COEFFICIENTS AND METRIC TENSOR

The distance differential ds is a physical quantity that is independent of any coordinate system used to measure it. The square of ds can be obtained from equation (2.4-3):

    (ds)^2 = \left| d\vec{r}\, \right|^2 = d\vec{r} \cdot d\vec{r}		(6.1-1)

Equation (6.1-1) is valid for all coordinate systems. From equation (4.8-1), we have for a curvilinear coordinate system:

    (ds)^2 = d\vec{r} \cdot d\vec{r} = dx^i\, \vec{e}_i \cdot dx^j\, \vec{e}_j		(6.1-2)

or, rearranging terms:

    (ds)^2 = \vec{e}_i \cdot \vec{e}_j\; dx^i\, dx^j		(6.1-3)

Defining:

    g_{ij} = \vec{e}_i \cdot \vec{e}_j		(6.1-4)

we can write:

    (ds)^2 = g_{ij}\, dx^i\, dx^j		(6.1-5)

where g_{ij} are called metric coefficients because they determine the distance measure or metric for the coordinate system being used.

Since the physical distance between two points in space is invariant, coordinate system differentials must be changed into invariant distance differentials. For this reason the quadratic differential equation (6.1-5) is known as the first fundamental form, fundamental metric form, or metric form of the coordinate system to which it pertains. Because the scalar product of any two vectors is commutative, we have from equation (6.1-4):

    g_{ij} = g_{ji}		(6.1-6)

and so the metric coefficients g_{ij} are symmetric.

If for a given coordinate system, we have:

    g_{ij} = 0 \quad (i \neq j)		(6.1-7)

then, from equations (6.1-4) and (4.6-2), we can see that the coordinate system is orthogonal.

If for a given coordinate system, we have:

    g_{ij} = \delta_{ij}		(6.1-8)

then, from equations (6.1-4) and (4.6-3), we can see that the coordinate system is orthonormal and that the \vec{e}_i base vectors are unit base vectors \hat{e}_i. Equation (6.1-8) therefore applies for the rectangular coordinate system. Using equation (6.1-8), we can rewrite equation (6.1-5) as:

    (ds)^2 = \delta_{ij}\, dx^i\, dx^j		(6.1-9)

for an orthonormal coordinate system. We can compare equation (6.1-9) with the expression given in equation (1.14-11) for the square of ds with respect to the rectangular coordinate system:

    (ds)^2 = \delta_{jk}\, dx_j\, dx_k		(6.1-10)

The tensor character of the metric coefficients g_{ij} can be examined by using equations (6.1-4) and (4.8-9) to write:

    g_{ij} = \vec{e}_i \cdot \vec{e}_j = \frac{\partial x'^p}{\partial x^i}\, \vec{e}\,'_p \cdot \frac{\partial x'^q}{\partial x^j}\, \vec{e}\,'_q		(6.1-11)

or, rearranging terms:

    g_{ij} = \frac{\partial x'^p}{\partial x^i}\frac{\partial x'^q}{\partial x^j}\; \vec{e}\,'_p \cdot \vec{e}\,'_q		(6.1-12)

Using equation (6.1-4) in the primed coordinate system, we can rewrite equation (6.1-12) to obtain the transformation law for metric coefficients:

    g_{ij} = \frac{\partial x'^p}{\partial x^i}\frac{\partial x'^q}{\partial x^j}\, g'_{pq}		(6.1-13)

Comparing equations (6.1-13) and (5.1-5), we see that the metric coefficients g_{ij} transform as, and so are, covariant tensor components of rank two. The covariant tensor g_{ij} is known as the metric tensor or fundamental tensor. From equation (6.1-6), we see that the metric tensor is symmetric.

Equation (6.1-5) can be written in matrix form:

    (ds)^2 = \begin{bmatrix} dx^1 & dx^2 & dx^3 \end{bmatrix} \begin{bmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{bmatrix} \begin{bmatrix} dx^1 \\ dx^2 \\ dx^3 \end{bmatrix}		(6.1-14)

The metric tensor provides the necessary translation of measurements from one coordinate system to another. It does this by relating an invariant differential distance ds to differentials dx^i along the coordinate system curves of each coordinate system.

Example 6-1

Determine the transformation equations for the metric tensor from the rectangular coordinate system (x'_1, x'_2, x'_3) to a curvilinear coordinate system (x^1, x^2, x^3).

Solution:

From equations (6.1-13) and (6.1-8), we have:

    g_{ij} = \frac{\partial x'^p}{\partial x^i}\frac{\partial x'^q}{\partial x^j}\, g'_{pq} = \frac{\partial x'^p}{\partial x^i}\frac{\partial x'^q}{\partial x^j}\, \delta'_{pq} = \frac{\partial x'^p}{\partial x^i}\frac{\partial x'^p}{\partial x^j}

and so:

    g_{ij} = \frac{\partial x'_1}{\partial x^i}\frac{\partial x'_1}{\partial x^j} + \frac{\partial x'_2}{\partial x^i}\frac{\partial x'_2}{\partial x^j} + \frac{\partial x'_3}{\partial x^i}\frac{\partial x'_3}{\partial x^j}

The metric tensor can be used to define coordinate space. For example, a Riemannian coordinate space (R^N) is any coordinate space that has N dimensions and that is specified by the existence of the fundamental metric form given in equation (6.1-5):

    (ds)^2 = g_{ij}\, dx^i\, dx^j		(6.1-15)

Most physical problems that can be solved using tensors relate to objects that can be described in Riemannian coordinate space. Euclidean coordinate space is that part of Riemannian coordinate space in which the Cartesian coordinate system is an allowable coordinate system. Certain constraints exist on those metric tensors that are metrics of Euclidean coordinate space (see Section 9.8). The metric tensor is a positive definite symmetric tensor in Euclidean coordinate space. Therefore, in Euclidean coordinate space we must have:

    g_{ij}\, dx^i\, dx^j > 0		(6.1-16)

We can use equation (6.1-5) to calculate arc length s on a space curve C in Riemannian coordinate space. From equation (2.10-1) we have:

    s = \int ds = \int \frac{ds}{dt}\, dt		(6.1-17)

Using equation (6.1-5) we can write:

    s = \int \sqrt{g_{ij}\, dx^i\, dx^j} = \int \sqrt{g_{ij}\, \frac{dx^i}{dt}\frac{dx^j}{dt}}\; dt		(6.1-18)

6.2	RECIPROCAL METRIC TENSOR

We will now define a new entity g^{ij} that is very important in tensor analysis. We begin by noting that since the metric tensor g_{ij} has a rank of two, it can be represented by a matrix of nine components as given in equation (6.1-14). We will define G to be the determinant of this matrix:

    G = \det\{g_{ij}\} = \det\{\vec{e}_i \cdot \vec{e}_j\}		(6.2-1)

or

    G = \begin{vmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{vmatrix} = \begin{vmatrix} \vec{e}_1 \cdot \vec{e}_1 & \vec{e}_1 \cdot \vec{e}_2 & \vec{e}_1 \cdot \vec{e}_3 \\ \vec{e}_2 \cdot \vec{e}_1 & \vec{e}_2 \cdot \vec{e}_2 & \vec{e}_2 \cdot \vec{e}_3 \\ \vec{e}_3 \cdot \vec{e}_1 & \vec{e}_3 \cdot \vec{e}_2 & \vec{e}_3 \cdot \vec{e}_3 \end{vmatrix}		(6.2-2)

This determinant is in the form of a Gram determinant as given in equation (1.18-50). Therefore we have using box notation:

! ! ! 2 G = [ e1 e2 e3 ] !

! !

{g } ij

!

(6.2-3)

If we let G i j represent the cofactor of the gi j element in G ,

we can write: G = gi j G !

(no sum on i )!

(6.2-4)

Example 6-2

Expand $G = \det\left[ g_{ij} \right]$ in terms of cofactors for $i = 1$.

Solution:

$$G^{11} = \begin{vmatrix} g_{22} & g_{23} \\ g_{32} & g_{33} \end{vmatrix} \qquad G^{12} = -\begin{vmatrix} g_{21} & g_{23} \\ g_{31} & g_{33} \end{vmatrix} \qquad G^{13} = \begin{vmatrix} g_{21} & g_{22} \\ g_{31} & g_{32} \end{vmatrix}$$

and so:

$$G = g_{11}\, G^{11} + g_{12}\, G^{12} + g_{13}\, G^{13}$$

We will now define $g^{ij}$ as:

$$g^{ij} = \frac{G^{ij}}{G} \tag{6.2-5}$$

where we are using contravariant indicial notation for $g^{ij}$ in anticipation of showing that it is a contravariant tensor of rank two. From matrix theory we see that the $g^{ij}$ are defined such that they are elements of the inverse matrix $\left\{ g_{ij} \right\}^{-1}$. Therefore we can write:

$$g_{ij}\, g^{pj} = \delta^p_i \tag{6.2-6}$$

We also have:

$$g_{ij}\, g^{ij} = 3 \tag{6.2-7}$$

as is shown in Example 6-3.

Example 6-3

Show that $g_{ij}\, g^{ij} = 3$.

Solution:

From equation (6.2-6) we have:

$$g_{ij}\, g^{pj} = \delta^p_i$$

Letting $p = i$ we have:

$$g_{ij}\, g^{ij} = \delta^i_i = 1 + 1 + 1 = 3$$

Since $g_{ij}$ is symmetric, the cofactor $G^{ij}$ is also symmetric, and so from equation (6.2-4) we can write:

$$G^{ij} = G^{ji} \tag{6.2-8}$$

From equations (6.2-8) and (6.2-5), we then have:

$$g^{pj} = g^{jp} \tag{6.2-9}$$

and so $g^{pj}$ is symmetric. We can rewrite equation (6.2-6) as:

$$g_{ij}\, g^{pj} = g_{ji}\, g^{pj} = g_{ji}\, g^{jp} = g_{ij}\, g^{jp} = \delta^p_i \tag{6.2-10}$$

Writing equation (6.2-10) in determinant form, we have:

$$\det\left\{ g_{ij}\, g^{pj} \right\} = \det\left\{ \delta^p_i \right\} \tag{6.2-11}$$

or from equations (4.14-7) and (5.4-4):

$$\det\left\{ g_{ij} \right\} \det\left\{ g^{pj} \right\} = 1 \tag{6.2-12}$$

Using the definition of $G$ given in equation (6.2-1), we have finally:

$$\det\left\{ g^{pj} \right\} = \frac{1}{G} \tag{6.2-13}$$

or

$$\frac{1}{G} = \begin{vmatrix} g^{11} & g^{12} & g^{13} \\ g^{21} & g^{22} & g^{23} \\ g^{31} & g^{32} & g^{33} \end{vmatrix} \tag{6.2-14}$$

We cannot use equation (6.2-10) to determine if $g^{pj}$ is a tensor since $g_{ij}$ is not an arbitrary tensor. Instead we will now let $a_j$ be an arbitrary covariant vector where:

$$a_j = g_{qj}\, a^q \tag{6.2-15}$$

From the quotient law we know that $a^q$ will then be a contravariant vector since the metric tensor $g_{qj}$ is a covariant tensor of rank two. Multiplying equation (6.2-15) by $g^{pj}$ and using equation (6.2-10), we have:

$$g^{pj}\, a_j = g^{pj}\, g_{qj}\, a^q = \delta^p_q\, a^q \tag{6.2-16}$$

or

$$a^p = g^{pj}\, a_j \tag{6.2-17}$$

Since $a_j$ is an arbitrary covariant vector and $a^p$ is a contravariant vector, we know from the quotient law that $g^{pj}$ is a contravariant tensor of rank two. The contravariant tensor $g^{pj}$ is symmetric and is called the reciprocal metric tensor or conjugate metric tensor. Equation (6.2-10) can be considered to be the definition of the reciprocal metric tensor.

6.3 ASSOCIATED TENSORS

For tensors of rank one or higher, one type of component can be changed into another type of component by using either the metric tensor or its reciprocal. We have seen examples of this conversion for tensors of rank one in equations (6.2-15) and (6.2-17). As examples of this conversion for tensors of rank two, we now consider an arbitrary covariant tensor $a_{ij}$ and an arbitrary contravariant tensor $b^{ij}$.
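Because $\left\{ g^{ij} \right\} = \left\{ g_{ij} \right\}^{-1}$, the reciprocal metric tensor can be obtained by a single matrix inversion. The following numerical sketch is an illustration only, assuming NumPy; the metric matrix used is the $\left\{ g'_{rs} \right\}$ that appears in Example 6-5.

```python
import numpy as np

# Metric tensor components {g'_rs} of Example 6-5 (a non-orthogonal system).
g = np.array([[1.0, 0.0, 0.0],
              [0.0, 4.0, 4.0],
              [0.0, 4.0, 5.0]])

G = np.linalg.det(g)          # G = det{g_ij}, equation (6.2-1)
g_recip = np.linalg.inv(g)    # {g^ij} = {g_ij}^(-1)

# g_ij g^pj = delta_i^p, equation (6.2-6)
kronecker = g @ g_recip.T
```

The determinant identity det{g^pj} = 1/G of equation (6.2-13) follows immediately from the inverse relationship.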

We can write:

$$g^{pi}\, a_{ij} = a^{p}{}_{j} \tag{6.3-1}$$

$$g^{qj}\, a_{ij} = a_{i}{}^{q} \tag{6.3-2}$$

$$g^{pi}\, g^{qj}\, a_{ij} = a^{pq} \tag{6.3-3}$$

$$g_{pi}\, g_{qj}\, b^{ij} = b_{pq} \tag{6.3-4}$$

where $a^{p}{}_{j}$, $a_{i}{}^{q}$, $a^{pq}$, and $b_{pq}$ are all tensors, and where the relations given in equations (6.3-1) through (6.3-4) are all inner products.

Tensors obtained through the inner product process using either the metric tensor or its reciprocal are called associated tensors to the original tensor. Because of the resulting tensor component conversion, the inner product process of converting a contravariant index into a covariant index using the metric tensor is known as lowering an index, while the inner product process of converting a covariant index into a contravariant index using the reciprocal metric tensor is known as raising an index.

If we have two associated tensors $a^{j}{}_{i}$ and $a_{i}{}^{j}$ such that:

$$a^{j}{}_{i} = g_{ip}\, a^{jp} \tag{6.3-5}$$

$$a_{i}{}^{j} = g_{ip}\, a^{pj} \tag{6.3-6}$$

then these two associated tensor components will not be identical unless the tensor components $a^{jp}$ are symmetrical.

Using metric tensors, a dummy index of any term in a tensor equation can be lowered from a superscript position or raised from a subscript position. Also, a free index in a tensor equation can be lowered or raised in all terms (see Example 6-4).

Example 6-4

If we have $a_{ij} = b_{ijp}{}^{q}\, c^{p}{}_{q}$, show that:

a. $a_{ij} = b_{ij}{}^{pq}\, c_{pq}$

b. $a_{ij} = b_{ijpq}\, c^{pq}$

c. $a_{ij} = b_{ij}{}^{p}{}_{q}\, c_{p}{}^{q}$

d. $a^{i}{}_{j} = b^{i}{}_{jp}{}^{q}\, c^{p}{}_{q}$

e. $a^{ij} = b^{ij}{}_{p}{}^{q}\, c^{p}{}_{q}$

Solution:

Using the metric tensors to lower and raise indices, we insert $\delta^r_s = g^{rq}\, g_{qs}$ between the contracted pairs as needed.

a. Raising $p$ on $b$ and lowering $p$ on $c$:
$$a_{ij} = b_{ijp}{}^{q}\, c^{p}{}_{q} = b_{ijp}{}^{q}\, g^{ps}\, g_{sr}\, c^{r}{}_{q} = b_{ij}{}^{sq}\, c_{sq} = b_{ij}{}^{pq}\, c_{pq}$$

b. Lowering $q$ on $b$ and raising $q$ on $c$:
$$a_{ij} = b_{ijp}{}^{q}\, g_{qs}\, g^{sr}\, c^{p}{}_{r} = b_{ijps}\, c^{ps} = b_{ijpq}\, c^{pq}$$

c. Combining the interchanges of parts a and b:
$$a_{ij} = b_{ij}{}^{p}{}_{q}\, c_{p}{}^{q}$$

d. Multiplying the given equation by $g^{ri}$ raises the free index $i$:
$$g^{ri}\, a_{ij} = g^{ri}\, b_{ijp}{}^{q}\, c^{p}{}_{q} \qquad\text{or}\qquad a^{r}{}_{j} = b^{r}{}_{jp}{}^{q}\, c^{p}{}_{q}$$
Changing the free index $r$ to $i$:
$$a^{i}{}_{j} = b^{i}{}_{jp}{}^{q}\, c^{p}{}_{q}$$

e. Multiplying the result of part d by $g^{sj}$ and changing the free index $s$ to $j$:
$$a^{ij} = b^{ij}{}_{p}{}^{q}\, c^{p}{}_{q}$$
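Raising and lowering indices as in equations (6.3-3) and (6.3-4) are plain index contractions, so they can be carried out numerically. The sketch below is an illustration only, assuming NumPy; the metric and the tensor $a_{ij}$ are randomly generated placeholders, and it checks that raising both indices and then lowering them again recovers the original components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: a symmetric positive definite metric and an arbitrary a_ij.
M = rng.normal(size=(3, 3))
g = M @ M.T + 3.0 * np.eye(3)          # g_ij
g_recip = np.linalg.inv(g)             # g^ij
a_low = rng.normal(size=(3, 3))        # a_ij

# Raising both indices: a^pq = g^pi g^qj a_ij, equation (6.3-3)
a_up = np.einsum('pi,qj,ij->pq', g_recip, g_recip, a_low)

# Lowering them again: g_pi g_qj b^ij, equation (6.3-4)
a_round_trip = np.einsum('pi,qj,ij->pq', g, g, a_up)
```

The round trip is exact (up to floating-point error) because the two operations are inverse inner products.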

Example 6-5

From the tensor $a'_{ij}$ determined in Example 5-2, find $a'^{pq}$.

Solution:

From equation (6.1-4) we have $g'_{rs} = \vec{e}\,'_r \cdot \vec{e}\,'_s$, where from Example 5-2:

$$\begin{bmatrix} \vec{e}\,'_1 \\ \vec{e}\,'_2 \\ \vec{e}\,'_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{bmatrix} \begin{bmatrix} \hat i \\ \hat j \\ \hat k \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{bmatrix} \begin{bmatrix} \hat e_1 \\ \hat e_2 \\ \hat e_3 \end{bmatrix}$$

Therefore:

$$\left\{ g'_{rs} \right\} = \begin{bmatrix} \vec{e}\,'_1\cdot\vec{e}\,'_1 & \vec{e}\,'_1\cdot\vec{e}\,'_2 & \vec{e}\,'_1\cdot\vec{e}\,'_3 \\ \vec{e}\,'_2\cdot\vec{e}\,'_1 & \vec{e}\,'_2\cdot\vec{e}\,'_2 & \vec{e}\,'_2\cdot\vec{e}\,'_3 \\ \vec{e}\,'_3\cdot\vec{e}\,'_1 & \vec{e}\,'_3\cdot\vec{e}\,'_2 & \vec{e}\,'_3\cdot\vec{e}\,'_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 4 & 4 \\ 0 & 4 & 5 \end{bmatrix}$$

Since from Section 6.2 we know that $\left\{ g'^{rs} \right\} = \left\{ g'_{rs} \right\}^{-1}$, the matrix of metric tensor components can be inverted to obtain the components of the reciprocal metric tensor $g'^{rs}$. We have:

$$\det\left\{ g'_{rs} \right\} = G' = 4$$

$$\left\{ g'^{rs} \right\} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \frac{5}{4} & -1 \\ 0 & -1 & 1 \end{bmatrix}$$

Using equation (6.3-3):

$$a'^{pq} = g'^{pi}\, g'^{qj}\, a'_{ij}$$

where from Example 5-2:

$$\left\{ a'_{ij} \right\} = \begin{bmatrix} 1 & 4 & 2 \\ 6 & 28 & 6 \\ 0 & 8 & 2 \end{bmatrix}$$

We can then calculate $a'^{pq}$ term by term. For example:

$$a'^{23} = g'^{21} g'^{31} a'_{11} + g'^{21} g'^{32} a'_{12} + g'^{21} g'^{33} a'_{13} + g'^{22} g'^{31} a'_{21} + g'^{22} g'^{32} a'_{22} + g'^{22} g'^{33} a'_{23} + g'^{23} g'^{31} a'_{31} + g'^{23} g'^{32} a'_{32} + g'^{23} g'^{33} a'_{33}$$

or

$$a'^{23} = (0)(0)(1) + (0)(-1)(4) + (0)(1)(2) + \left(\tfrac{5}{4}\right)(0)(6) + \left(\tfrac{5}{4}\right)(-1)(28) + \left(\tfrac{5}{4}\right)(1)(6) + (-1)(0)(0) + (-1)(-1)(8) + (-1)(1)(2)$$

Therefore:

$$a'^{23} = -\frac{43}{2}$$

The remaining terms can be derived following similar procedures. $a'^{pq}$ can also be determined using equation (6.3-3) in matrix form:

$$\left[ a'^{pq} \right] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \frac{5}{4} & -1 \\ 0 & -1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 4 & 2 \\ 6 & 28 & 6 \\ 0 & 8 & 2 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \frac{5}{4} & -1 \\ 0 & -1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 3 & -2 \\ \frac{15}{2} & \frac{113}{4} & -\frac{43}{2} \\ -6 & -21 & 16 \end{bmatrix}$$

6.4 RECIPROCAL BASE VECTORS

The metric tensor can be used to lower the index of a reciprocal base vector. Similarly, the reciprocal metric tensor can be used to raise the index of a base vector. If $\vec{e}_p$ is a covariant base vector and $\vec{e}^{\,q}$ is the reciprocal base vector for a coordinate system, we can use equations (4.7-5) and (5.4-4) to write:

$$\vec{e}^{\,q} \cdot \vec{e}_p = \delta^q_p \tag{6.4-1}$$

This equation is true whether or not the coordinate system is orthogonal (see Example 6-7). Multiplying equation (6.4-1) by $g_{rq}$ and using equation (6.1-4):

$$g_{rq}\, \vec{e}^{\,q} \cdot \vec{e}_p = g_{rq}\, \delta^q_p = g_{rp} = \vec{e}_r \cdot \vec{e}_p \tag{6.4-2}$$

or

$$\left( g_{rq}\, \vec{e}^{\,q} - \vec{e}_r \right) \cdot \vec{e}_p = 0 \tag{6.4-3}$$
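The matrix-form computation of Example 6-5 can be checked mechanically. The sketch below is an illustration only, assuming NumPy; it reproduces $a'^{23} = -43/2$ and the full $\left[ a'^{pq} \right]$ matrix.

```python
import numpy as np

g = np.array([[1.0, 0.0, 0.0],        # {g'_rs} of Example 6-5
              [0.0, 4.0, 4.0],
              [0.0, 4.0, 5.0]])
a_low = np.array([[1.0, 4.0, 2.0],    # {a'_ij} from Example 5-2
                  [6.0, 28.0, 6.0],
                  [0.0, 8.0, 2.0]])

g_recip = np.linalg.inv(g)            # {g'^rs}

# a'^pq = g'^pi g'^qj a'_ij, equation (6.3-3), written as a matrix product
a_up = g_recip @ a_low @ g_recip.T
```

Because $\left\{ g'^{rs} \right\}$ is symmetric, the transpose in the last factor is optional here; it is kept to match equation (6.3-3) index for index.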

For equation (6.4-3) to be true for all values of the indices, we must have:

$$\vec{e}_r = g_{rq}\, \vec{e}^{\,q} \tag{6.4-4}$$

and so the metric tensor $g_{rq}$ lowers the index of the reciprocal base vector $\vec{e}^{\,q}$. Multiplying equation (6.4-4) by $g^{rp}$ and using equation (6.2-10), we have:

$$g^{rp}\, \vec{e}_r = g^{rp}\, g_{rq}\, \vec{e}^{\,q} = \delta^p_q\, \vec{e}^{\,q} \tag{6.4-5}$$

or

$$\vec{e}^{\,p} = g^{rp}\, \vec{e}_r \tag{6.4-6}$$

and so the reciprocal metric tensor $g^{rp}$ can be used to raise the index of the base vector $\vec{e}_r$.

It is possible to derive a relation for reciprocal base vectors using the reciprocal metric tensor. We begin by writing:

$$\vec{e}^{\,i} \cdot \vec{e}^{\,j} = g^{ip}\, \vec{e}_p \cdot g^{jq}\, \vec{e}_q \tag{6.4-7}$$

or

$$\vec{e}^{\,i} \cdot \vec{e}^{\,j} = g^{ip}\, g^{jq} \left( \vec{e}_p \cdot \vec{e}_q \right) \tag{6.4-8}$$

From the definition of the metric tensor given in equation (6.1-4), we then have:

$$\vec{e}^{\,i} \cdot \vec{e}^{\,j} = g^{ip}\, g^{jq}\, g_{pq} \tag{6.4-9}$$

and using equation (6.2-10):

$$\vec{e}^{\,i} \cdot \vec{e}^{\,j} = g^{ip}\, \delta^j_p \tag{6.4-10}$$

or

$$g^{ij} = \vec{e}^{\,i} \cdot \vec{e}^{\,j} \tag{6.4-11}$$

Equation (6.4-11) is a relation for reciprocal base vectors that is equivalent to that given in equation (6.1-4) for base vectors. Equation (6.2-14) can now be written:

$$\frac{1}{G} = \begin{vmatrix} g^{11} & g^{12} & g^{13} \\ g^{21} & g^{22} & g^{23} \\ g^{31} & g^{32} & g^{33} \end{vmatrix} = \begin{vmatrix} \vec{e}^{\,1}\cdot\vec{e}^{\,1} & \vec{e}^{\,1}\cdot\vec{e}^{\,2} & \vec{e}^{\,1}\cdot\vec{e}^{\,3} \\ \vec{e}^{\,2}\cdot\vec{e}^{\,1} & \vec{e}^{\,2}\cdot\vec{e}^{\,2} & \vec{e}^{\,2}\cdot\vec{e}^{\,3} \\ \vec{e}^{\,3}\cdot\vec{e}^{\,1} & \vec{e}^{\,3}\cdot\vec{e}^{\,2} & \vec{e}^{\,3}\cdot\vec{e}^{\,3} \end{vmatrix} \tag{6.4-12}$$

This determinant is in the form of a Gram determinant as given in equation (1.18-50). Therefore, using box notation, we have:

$$\frac{1}{G} = \left[\, \vec{e}^{\,1}\; \vec{e}^{\,2}\; \vec{e}^{\,3} \,\right]^2 \tag{6.4-13}$$

From equation (6.2-10), we can write:

$$g_{pi}\, g^{iq} = \delta^q_p \tag{6.4-14}$$

Since metric tensors are tensors, their indices can be raised or lowered using other metric tensors (see Section 6.3). Using $g_{pi}$ to lower one index of $g^{iq}$, we have:

$$g_{pi}\, g^{iq} = g^{q}{}_{p} = \delta^q_p \tag{6.4-15}$$
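The index-raising relation (6.4-6) and the duality relation (6.4-1) can be demonstrated numerically for a concrete basis. The sketch below is an illustration only, assuming NumPy; the skewed basis `E` is a hypothetical example, with each row holding the rectangular components of one covariant base vector.

```python
import numpy as np

# Rows of E are the covariant base vectors e_1, e_2, e_3 of a hypothetical
# skewed system, expressed in rectangular components.
E = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

g = E @ E.T                    # g_ij = e_i . e_j, equation (6.1-4)
g_recip = np.linalg.inv(g)     # g^ij

# Raising the index: e^p = g^rp e_r, equation (6.4-6);
# rows of E_recip are the reciprocal base vectors e^1, e^2, e^3.
E_recip = g_recip @ E

# Duality e^q . e_p = delta^q_p, equation (6.4-1)
duality = E_recip @ E.T
```

The same arrays also satisfy equation (6.4-11), since the Gram matrix of the reciprocal basis equals the reciprocal metric.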

Therefore $g^{q}{}_{p}$ is a rank two isotropic tensor and is known as the mixed metric tensor. From equation (6.4-1) we then obtain:

$$g^{q}{}_{p} = \vec{e}^{\,q} \cdot \vec{e}_p = \delta^q_p \tag{6.4-16}$$

The tensors $g_{ij}$, $g^{ij}$, and $g^{i}{}_{j}$ are all referred to simply as metric tensors.

Example 6-6

Show that the metric tensor, reciprocal metric tensor, and mixed metric tensor of a given coordinate system are all components of the same tensor.

Solution:

Raising both subscripts of the metric tensor using equations (6.1-4), (6.4-4), and (6.4-11), we have:

$$g_{ij} = \vec{e}_i \cdot \vec{e}_j = g_{ip}\, \vec{e}^{\,p} \cdot g_{jq}\, \vec{e}^{\,q} = g_{ip}\, g_{jq}\, \vec{e}^{\,p} \cdot \vec{e}^{\,q} = g_{ip}\, g_{jq}\, g^{pq}$$

Raising one subscript of the metric tensor using equations (6.1-4), (6.4-4), (6.4-1), and (6.4-16), we have:

$$g_{ij} = \vec{e}_i \cdot \vec{e}_j = g_{ip}\, \vec{e}^{\,p} \cdot \vec{e}_j = g_{ip}\, g^{p}{}_{j}$$

Therefore the metric tensor, reciprocal metric tensor, and mixed metric tensor of a given coordinate system are all components of the same tensor.

Example 6-7

Show that $\vec{e}_i \cdot \vec{e}^{\,j} = \delta^j_i$ whether or not the coordinate system is orthogonal.

Solution:

From equation (6.4-6) we have:

$$\vec{e}_i \cdot \vec{e}^{\,j} = \vec{e}_i \cdot g^{jk}\, \vec{e}_k = g^{jk}\, \vec{e}_i \cdot \vec{e}_k$$

From equation (6.1-4) we have for a curvilinear coordinate system $g_{ik} = \vec{e}_i \cdot \vec{e}_k$, and so using equation (6.2-6):

$$\vec{e}_i \cdot \vec{e}^{\,j} = g^{jk}\, g_{ik} = \delta^j_i$$

whether or not the coordinate system is orthogonal.

Example 6-8

Show that the equation for transforming the mixed metric tensor from a curvilinear coordinate system $x'^i$ to a curvilinear coordinate system $x^j$ is consistent with the metric equation $g^{q}{}_{p} = \delta^q_p$.

Solution:

The transformation equation for a mixed rank two tensor is given by:

$$g_{p}{}^{q} = \frac{\partial x'^r}{\partial x^p} \frac{\partial x^q}{\partial x'^s}\, g'_{r}{}^{s}$$

From equation (6.4-16) we have:

$$g'_{r}{}^{s} = \vec{e}^{\,\prime\, s} \cdot \vec{e}\,'_r = \delta'^s_r$$

and so:

$$g_{p}{}^{q} = \frac{\partial x'^r}{\partial x^p} \frac{\partial x^q}{\partial x'^s}\, \delta'^s_r = \frac{\partial x'^r}{\partial x^p} \frac{\partial x^q}{\partial x'^r} = \frac{\partial x^q}{\partial x^p} = \delta^q_p$$

Therefore the equation for transforming the mixed metric tensor from a curvilinear coordinate system $x'^i$ to a curvilinear coordinate system $x^j$ is consistent with the metric equation $g^{q}{}_{p} = \delta^q_p$.

Changing the indices of equation (6.4-14), we can write:

$$g_{pq}\, g^{pq} = \delta^p_p \tag{6.4-17}$$

Since $\delta^p_p = 3$ is constant, taking the derivative of equation (6.4-17) gives:

$$\frac{\partial g_{pq}}{\partial x^k}\, g^{pq} + g_{pq}\, \frac{\partial g^{pq}}{\partial x^k} = 0 \tag{6.4-18}$$

or

$$g_{pq}\, \frac{\partial g^{pq}}{\partial x^k} = -\, g^{pq}\, \frac{\partial g_{pq}}{\partial x^k} \tag{6.4-19}$$

6.5 RELATION BETWEEN COVARIANT AND CONTRAVARIANT COMPONENTS OF A VECTOR

We can use the metric tensor to determine the relation between contravariant and covariant components of a vector in a curvilinear coordinate system. Let us consider a point vector $\vec A$ as given by:

$$\vec A = a^i\, \vec{e}_i = a_j\, \vec{e}^{\,j} \tag{6.5-1}$$

in a curvilinear coordinate system. From equations (4.11-3), (6.5-1), and (6.1-4) we have:

$$\vec A \cdot \vec{e}_j = a_j = a^i\, \vec{e}_i \cdot \vec{e}_j = a^i\, g_{ij} \tag{6.5-2}$$

and from equations (4.11-2), (6.5-1), and (6.4-11) we have:

$$\vec A \cdot \vec{e}^{\,i} = a^i = a_j\, \vec{e}^{\,j} \cdot \vec{e}^{\,i} = a_j\, g^{ji} \tag{6.5-3}$$

Therefore the contravariant and covariant components of $\vec A$ are associated through the metric tensor:

$$a_j = g_{ij}\, a^i = g_{ji}\, a^i \tag{6.5-4}$$

$$a^i = g^{ji}\, a_j = g^{ij}\, a_j \tag{6.5-5}$$
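Equations (6.5-4) and (6.5-5) say that the two sets of components are two encodings of one vector, interconvertible through the metric. A numerical sketch, as an illustration only assuming NumPy; the metric values below are hypothetical:

```python
import numpy as np

# Metric of a hypothetical non-orthogonal coordinate system (illustrative values).
g = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
g_recip = np.linalg.inv(g)

a_contra = np.array([1.0, -2.0, 0.5])   # contravariant components a^i
a_co = g @ a_contra                      # a_j = g_ji a^i, equation (6.5-4)
a_back = g_recip @ a_co                  # a^i = g^ij a_j, equation (6.5-5)
```

Given either set of components, the other is recovered exactly, as the text states.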

The contravariant components $a^i$ and the associated covariant components $a_j$ of a vector $\vec A$ are then not independent. Given either set of components, the other can be obtained using the metric tensor.

Any point vector can be written in terms of either contravariant or covariant components. Since the contravariant and covariant components obviously are not equal, they must correspond to different bases. As we have seen in Sections 4.6 and 4.7, contravariant components $a^i$ correspond to covariant bases $\vec{e}_i$ and covariant components $a_j$ correspond to contravariant bases $\vec{e}^{\,j}$. We will now show that this must be so.

Using equations (6.5-4) and (6.5-5), we can rewrite equation (6.5-1) as either:

$$\vec A = a^i\, \vec{e}_i = g^{ij}\, a_j\, \vec{e}_i = a_j\, g^{ji}\, \vec{e}_i = a_j\, \vec{e}^{\,j} \tag{6.5-6}$$

or

$$\vec A = a_j\, \vec{e}^{\,j} = g_{ji}\, a^i\, \vec{e}^{\,j} = a^i\, g_{ij}\, \vec{e}^{\,j} = a^i\, \vec{e}_i \tag{6.5-7}$$

As expected, a change in vector component type is accompanied by a change in base vector type.

Example 6-9

Using the metric tensor, show that the covariant and contravariant components of any point vector $\vec A$ are equal in a rectangular coordinate system.

Solution:

The point vector $\vec A$ in a curvilinear coordinate system is given by:

$$\vec A = a^i\, \vec{e}_i = a_j\, \vec{e}^{\,j}$$

where from equation (6.5-4) we have:

$$a_j = g_{ij}\, a^i$$

In a rectangular coordinate system we obtain from equation (6.1-8):

$$g_{ij} = \delta_{ij}$$

Therefore:

$$a_j = \delta_{ij}\, a^i = a^j$$

for the rectangular coordinate system.

6.6 METRIC TENSORS IN ORTHOGONAL CURVILINEAR COORDINATE SYSTEMS

For orthogonal curvilinear coordinate systems, we can use equations (6.1-5) and (6.1-7) to write the square of the differential distance $ds$ as:

$$(ds)^2 = g_{11} \left( dx^1 \right)^2 + g_{22} \left( dx^2 \right)^2 + g_{33} \left( dx^3 \right)^2 \tag{6.6-1}$$

From equation (6.1-7), we can also write:

$$\left\{ g_{ij} \right\} = \begin{bmatrix} g_{11} & 0 & 0 \\ 0 & g_{22} & 0 \\ 0 & 0 & g_{33} \end{bmatrix} \tag{6.6-2}$$

and from equation (6.2-2):

$$G = g_{11}\, g_{22}\, g_{33} \tag{6.6-3}$$

Equation (6.2-5) then gives:

$$\left\{ g^{ij} \right\} = \begin{bmatrix} \dfrac{1}{g_{11}} & 0 & 0 \\ 0 & \dfrac{1}{g_{22}} & 0 \\ 0 & 0 & \dfrac{1}{g_{33}} \end{bmatrix} \tag{6.6-4}$$

and so:

$$g^{11} = \frac{1}{g_{11}} \qquad g^{22} = \frac{1}{g_{22}} \qquad g^{33} = \frac{1}{g_{33}} \tag{6.6-5}$$

or

$$g^{ii} = \frac{1}{g_{ii}} \qquad \text{(no sum)} \tag{6.6-6}$$

$$g^{ij} = 0 \qquad (i \ne j) \tag{6.6-7}$$

For orthogonal curvilinear coordinate systems, therefore, the components of both the metric tensor and the reciprocal metric tensor can easily be determined by simply knowing the expression for $(ds)^2$.

Example 6-10

Determine the components of the metric and reciprocal metric tensors and $G$ for the cylindrical and spherical coordinate systems using the expression for $(ds)^2$.

Solution:

From Example 4-13 we have $(ds)^2$ with respect to cylindrical coordinates:

$$(ds)^2 = (dR)^2 + (R)^2 (d\phi)^2 + (dz)^2$$

Comparing this equation with equation (6.6-1) and using equation (6.6-5), we have:

$$g_{11} = 1 \qquad g_{22} = (R)^2 \qquad g_{33} = 1$$

$$g^{11} = 1 \qquad g^{22} = \frac{1}{(R)^2} \qquad g^{33} = 1$$

and using equations (6.1-7) and (6.6-7):

$$g_{ij} = 0 \qquad g^{ij} = 0 \qquad (i \ne j)$$

We then have from equation (6.6-3):

$$G = g_{11}\, g_{22}\, g_{33} = (R)^2$$

From Example 4-13 we also have $(ds)^2$ with respect to spherical coordinates:

$$(ds)^2 = (dR)^2 + (R)^2 (d\theta)^2 + (R)^2 \sin^2\theta\, (d\phi)^2$$

Comparing this equation with equation (6.6-1) and using equation (6.6-5), we have:

$$g_{11} = 1 \qquad g_{22} = (R)^2 \qquad g_{33} = (R)^2 \sin^2\theta$$

$$g^{11} = 1 \qquad g^{22} = \frac{1}{(R)^2} \qquad g^{33} = \frac{1}{(R)^2 \sin^2\theta}$$

and using equations (6.1-7) and (6.6-7):

$$g_{ij} = 0 \qquad g^{ij} = 0 \qquad (i \ne j)$$

We then have from equation (6.6-3):

$$G = g_{11}\, g_{22}\, g_{33} = (R)^4 \sin^2\theta$$

Example 6-11

If the covariant components of a point vector $\vec A$ are $\left( a_1, a_2, a_3 \right)$ with respect to a cylindrical coordinate system, what are the contravariant components with respect to this same coordinate system?

Solution:

From equation (6.5-5), we have:

$$a^i = g^{ij}\, a_j$$

and from Example 6-10, we have:

$$g^{11} = 1 \qquad g^{22} = \frac{1}{(R)^2} \qquad g^{33} = 1 \qquad g^{ij} = 0 \quad (i \ne j)$$

Therefore:

$$a^1 = g^{11}\, a_1 + g^{12}\, a_2 + g^{13}\, a_3 = a_1$$

$$a^2 = g^{21}\, a_1 + g^{22}\, a_2 + g^{23}\, a_3 = \frac{1}{(R)^2}\, a_2$$

$$a^3 = g^{31}\, a_1 + g^{32}\, a_2 + g^{33}\, a_3 = a_3$$

and so the contravariant components are $\left( a_1,\ \dfrac{a_2}{(R)^2},\ a_3 \right)$.

Example 6-12

Determine the position vector $\vec r$ of a point P in terms of contravariant components for a spherical coordinate system.

Solution:

From equation (6.5-5), we have:

$$a^i = g^{ij}\, a_j$$

and from Example 6-10, we have:

$$g^{11} = 1 \qquad g^{22} = \frac{1}{(R)^2} \qquad g^{33} = \frac{1}{(R)^2 \sin^2\theta} \qquad g^{ij} = 0 \quad (i \ne j)$$

From Example 4-11 we have $\vec r = R\, \hat e_R$, and so the covariant components of $\vec r$ for a spherical coordinate system are $(R, 0, 0)$. Therefore the contravariant components of $\vec r$ for a spherical coordinate system are also $(R, 0, 0)$, and we have:

$$\vec r = R\, \hat e_R$$

6.7 TRANSFORMING METRIC TENSORS FROM ORTHONORMAL COORDINATE SYSTEMS

Equations for transforming metric tensors from orthonormal coordinate systems to curvilinear coordinate systems can be derived. If we consider a curvilinear coordinate system $\left( x^1, x^2, x^3 \right)$ and an orthonormal coordinate system $\left( x'^1, x'^2, x'^3 \right)$, then equation (6.1-13) can be rewritten using equation (6.1-8):

$$g_{ij} = \frac{\partial x'^p}{\partial x^i} \frac{\partial x'^q}{\partial x^j}\, \delta'_{pq} \tag{6.7-1}$$

and so we have:

$$g_{ij} = \frac{\partial x'^p}{\partial x^i} \frac{\partial x'^p}{\partial x^j} \tag{6.7-2}$$

We can obtain a similar expression for the reciprocal metric tensor. The transformation law for the reciprocal metric tensor is:

$$g^{ij} = \frac{\partial x^i}{\partial x'^p} \frac{\partial x^j}{\partial x'^q}\, g'^{pq} \tag{6.7-3}$$

For an orthonormal coordinate system we have from equations (6.1-8) and (6.6-4):

$$g'^{pq} = \delta'^{pq} \tag{6.7-4}$$

Using equation (6.7-4) we can rewrite equation (6.7-3):

$$g^{ij} = \frac{\partial x^i}{\partial x'^p} \frac{\partial x^j}{\partial x'^q}\, \delta'^{pq} \tag{6.7-5}$$

and so we have:

$$g^{ij} = \frac{\partial x^i}{\partial x'^p} \frac{\partial x^j}{\partial x'^p} \tag{6.7-6}$$
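Equation (6.7-2) gives $g_{ij}$ directly from the Jacobian of the rectangular coordinates with respect to the curvilinear ones. The sketch below is an illustration only, assuming NumPy; the helper name `cylindrical_metric` is hypothetical. For the cylindrical coordinate system it reproduces the metric $\mathrm{diag}(1,\ R^2,\ 1)$ found in Example 6-10.

```python
import numpy as np

def cylindrical_metric(R, phi, z):
    # Jacobian of x' = (R cos(phi), R sin(phi), z) with respect to (R, phi, z);
    # column i holds d x'^p / d x^i.
    J = np.array([[np.cos(phi), -R * np.sin(phi), 0.0],
                  [np.sin(phi),  R * np.cos(phi), 0.0],
                  [0.0,          0.0,             1.0]])
    # g_ij = (d x'^p / d x^i)(d x'^p / d x^j), equation (6.7-2)
    return J.T @ J

g = cylindrical_metric(3.0, 0.7, -1.0)
```

The determinant of the result is $G = (R)^2$, matching Example 6-10.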

We can derive a similar expression for the mixed metric tensor (as was shown in Example 6-8). The transformation law for the mixed metric tensor is:

$$g_{i}{}^{j} = \frac{\partial x'^p}{\partial x^i} \frac{\partial x^j}{\partial x'^q}\, g'_{p}{}^{q} \tag{6.7-7}$$

Using equation (6.4-15), we can write:

$$g_{i}{}^{j} = \frac{\partial x'^p}{\partial x^i} \frac{\partial x^j}{\partial x'^q}\, \delta'^q_p \tag{6.7-8}$$

and so, as expected, we have:

$$g_{i}{}^{j} = \frac{\partial x'^p}{\partial x^i} \frac{\partial x^j}{\partial x'^p} = \frac{\partial x^j}{\partial x^i} = \delta^j_i \tag{6.7-9}$$

6.8 GENERALIZED SCALAR PRODUCT

Using the concepts of associated vectors and reciprocal base vectors, a general definition of the scalar product valid for all coordinate systems can be obtained. If $\vec A$ and $\vec B$ are point vectors, we can write:

$$\vec A \cdot \vec B = a^i\, \vec{e}_i \cdot b^j\, \vec{e}_j = a^i\, b^j \left( \vec{e}_i \cdot \vec{e}_j \right) \tag{6.8-1}$$

From equation (6.1-4), we have then:

$$\vec A \cdot \vec B = g_{ij}\, a^i\, b^j \tag{6.8-2}$$

Equation (6.8-2) expresses the scalar product in terms of contravariant components.

Expressing the vector $\vec B$ in terms of covariant components, we can also write:

$$\vec A \cdot \vec B = a^i\, \vec{e}_i \cdot b_j\, \vec{e}^{\,j} = a^i\, b_j \left( \vec{e}_i \cdot \vec{e}^{\,j} \right) \tag{6.8-3}$$

From equation (6.4-16), we have then:

$$\vec A \cdot \vec B = a^i\, b_j\, \delta^j_i = a^i\, b_i \tag{6.8-4}$$

Of course we can also obtain:

$$\vec A \cdot \vec B = a_j\, b^j \tag{6.8-5}$$

Substituting $a^i = g^{ij}\, a_j$ in equation (6.8-4), we have:

$$\vec A \cdot \vec B = g^{ij}\, a_j\, b_i \tag{6.8-6}$$

and so we have finally:

$$\vec A \cdot \vec B = g_{ij}\, a^i\, b^j = a^i\, b_i = a_j\, b^j = g^{ij}\, a_j\, b_i \tag{6.8-7}$$

Equation (6.8-7) is often referred to as the generalized scalar product. Since all the indices involved in the generalized scalar product are dummy indices, the generalized scalar product of two point vectors is an invariant scalar.

6.8.1 VECTOR MAGNITUDE

The generalized scalar product can be used to obtain the magnitude of a vector in any curvilinear coordinate system. For a vector $\vec A$, the magnitude in a curvilinear coordinate system is given by:

$$\left| \vec A \right| = \sqrt{\vec A \cdot \vec A} \tag{6.8-8}$$

or, using equation (6.8-7):

$$\left| \vec A \right| = \sqrt{g_{ij}\, a^i\, a^j} = \sqrt{a^i\, a_i} = \sqrt{a_j\, a^j} = \sqrt{g^{ij}\, a_j\, a_i} \tag{6.8-9}$$

6.8.2 ANGLE BETWEEN VECTORS

The generalized scalar product can also be used to obtain the cosine of the angle $\theta$ between the directions of any two vectors $\vec A$ and $\vec B$. From equation (1.16-27), we have:

$$\cos\theta = \frac{\vec A \cdot \vec B}{\left| \vec A \right| \left| \vec B \right|} = \frac{\vec A \cdot \vec B}{\sqrt{\vec A \cdot \vec A}\, \sqrt{\vec B \cdot \vec B}} \tag{6.8-10}$$

Using equation (6.8-7), we can write:

$$\cos\theta = \frac{g_{ij}\, a^i\, b^j}{\sqrt{g_{ij}\, a^i\, a^j}\, \sqrt{g_{ij}\, b^i\, b^j}} = \frac{a_j\, b^j}{\sqrt{a_j\, a^j}\, \sqrt{b_j\, b^j}} = \frac{a^i\, b_i}{\sqrt{a^i\, a_i}\, \sqrt{b^i\, b_i}} = \frac{g^{ij}\, a_j\, b_i}{\sqrt{g^{ij}\, a_j\, a_i}\, \sqrt{g^{ij}\, b_j\, b_i}} \tag{6.8-11}$$

If the two vectors $\vec A$ and $\vec B$ are orthogonal, we must have:

$$g_{ij}\, a^i\, b^j = a^i\, b_i = a_j\, b^j = g^{ij}\, a_j\, b_i = 0 \tag{6.8-12}$$

6.8.3 ANGLE BETWEEN COORDINATE CURVES

The generalized scalar product can be used to obtain the angle between any two coordinate curves. The cosine of the angle between curves whose base vectors are $\vec{e}_i$ and $\vec{e}_j$ is given by:

$$\cos\theta_{ij} = \frac{\vec{e}_i \cdot \vec{e}_j}{\left| \vec{e}_i \right| \left| \vec{e}_j \right|} = \frac{\vec{e}_i \cdot \vec{e}_j}{\sqrt{\vec{e}_i \cdot \vec{e}_i}\, \sqrt{\vec{e}_j \cdot \vec{e}_j}} \qquad \text{(no sum)} \tag{6.8-13}$$

or, using equation (6.1-4):

$$\cos\theta_{ij} = \frac{g_{ij}}{\sqrt{g_{ii}}\, \sqrt{g_{jj}}} \qquad \text{(no sum)} \tag{6.8-14}$$

Example 6-13

Show that the magnitude squared of a vector $\vec A$ is invariant to any coordinate transformation.

Solution:

From equation (6.8-7) for an $x'^p$ coordinate system, we have:

$$\left( A \right)^2 = \vec A \cdot \vec A = a'_p\, a'^p$$

From the transformation laws for $a'_p$ and $a'^p$, we can write:

$$a'_p = \frac{\partial x^k}{\partial x'^p}\, a_k \qquad a'^p = \frac{\partial x'^p}{\partial x^j}\, a^j$$

Multiplying:

$$a'_p\, a'^p = \frac{\partial x^k}{\partial x'^p} \frac{\partial x'^p}{\partial x^j}\, a_k\, a^j = \frac{\partial x^k}{\partial x^j}\, a_k\, a^j = \delta^k_j\, a_k\, a^j = a_j\, a^j = a_p\, a^p$$

$\left( A \right)^2$ is then invariant to coordinate transformation and so is a scalar. This example can be compared with Example 1-6.
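The several equal forms in equation (6.8-7), and the invariance shown in Example 6-13, can be confirmed for a concrete skewed basis. The sketch below is an illustration only, assuming NumPy; the basis `E` is a hypothetical example whose rows hold rectangular components of the base vectors.

```python
import numpy as np

# Rows of E are base vectors e_i of a hypothetical skewed system.
E = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
g = E @ E.T                        # g_ij = e_i . e_j, equation (6.1-4)

a = np.array([1.0, 2.0, -1.0])     # contravariant components a^i
b = np.array([0.5, -1.0, 2.0])     # contravariant components b^j

dot_generalized = np.einsum('ij,i,j->', g, a, b)   # g_ij a^i b^j, equation (6.8-2)
dot_rectangular = np.dot(a @ E, b @ E)             # same vectors dotted in rectangular form

b_low = g @ b                                      # b_i = g_ij b^j
dot_mixed = a @ b_low                              # a^i b_i, equation (6.8-4)
```

All three computations give the same number, since the scalar product is an invariant of the two vectors, not of the coordinate system used to describe them.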

6.9 SCALE FACTORS

From the definition of scale factors given in equation (4.10-6), we have:

$$h_i = \left| \vec{e}_i \right| = \left| \frac{\partial \vec r}{\partial x^i} \right| = \sqrt{\frac{\partial \vec r}{\partial x^i} \cdot \frac{\partial \vec r}{\partial x^i}} \qquad \text{(no sum)} \tag{6.9-1}$$

and from the definition of the metric tensor given in equation (6.1-4), we have for a curvilinear coordinate system:

$$h_i = \sqrt{\vec{e}_i \cdot \vec{e}_i} = \sqrt{g_{ii}} \qquad \text{(no sum)} \tag{6.9-2}$$

From equation (4.10-3) we then have:

$$\hat e_i = \frac{1}{h_i} \frac{\partial \vec r}{\partial x^i} = \frac{1}{\sqrt{g_{ii}}} \frac{\partial \vec r}{\partial x^i} \qquad \text{(no sum)} \tag{6.9-3}$$

6.10 PHYSICAL COMPONENTS OF TENSORS

Any tensor of rank one (point vector) can be expressed in terms of physical components for a curvilinear coordinate system. Such an expression is given in equation (4.12-5) for a vector $\vec A$:

$$\vec A = h_1\, a^1\, \hat e_1 + h_2\, a^2\, \hat e_2 + h_3\, a^3\, \hat e_3 \tag{6.10-1}$$

or with equation (6.9-2):

$$\vec A = \sqrt{g_{11}}\, a^1\, \hat e_1 + \sqrt{g_{22}}\, a^2\, \hat e_2 + \sqrt{g_{33}}\, a^3\, \hat e_3 \tag{6.10-2}$$

Using equations (4.12-7) and (6.9-2), the physical components of vectors in a curvilinear coordinate system are:

$$a^{(1)} = h_1\, a^1 = \sqrt{g_{11}}\, a^1 \tag{6.10-3}$$

$$a^{(2)} = h_2\, a^2 = \sqrt{g_{22}}\, a^2 \tag{6.10-4}$$

$$a^{(3)} = h_3\, a^3 = \sqrt{g_{33}}\, a^3 \tag{6.10-5}$$

and so for any curvilinear coordinate system, the physical components of vectors in terms of contravariant components are:

$$a^{(i)} = h_i\, a^i = \sqrt{g_{ii}}\, a^i \qquad \text{(no sum)} \tag{6.10-6}$$

Using the reciprocal metric tensor and equation (6.10-6), the physical components of vectors in terms of covariant components for a curvilinear coordinate system are:

$$a^{(i)} = g^{ij}\, \sqrt{g_{ii}}\, a_j \qquad \text{(no sum on } i\text{)} \tag{6.10-7}$$

This expression cannot be simplified in general.
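Equations (6.10-6) and (6.10-7) can be exercised numerically for the spherical coordinate system of Example 6-10. The sketch below is an illustration only, assuming NumPy; the component values are hypothetical, and the diagonal metric makes equation (6.10-7) collapse to a term-by-term product.

```python
import numpy as np

# Spherical metric from Example 6-10: g = diag(1, R^2, R^2 sin^2(theta)).
R, theta = 2.0, np.pi / 3
g_diag = np.array([1.0, R**2, (R * np.sin(theta))**2])

a_contra = np.array([0.5, 0.25, 1.5])        # hypothetical a^i
a_phys = np.sqrt(g_diag) * a_contra          # a^(i) = sqrt(g_ii) a^i, equation (6.10-6)

# Starting instead from covariant components a_j = g_jj a^j and applying
# equation (6.10-7) with the diagonal reciprocal metric g^ii = 1/g_ii:
a_co = g_diag * a_contra
a_phys_from_co = (1.0 / g_diag) * np.sqrt(g_diag) * a_co
```

Both routes yield the same physical components, as the two equations require.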

Physical components of higher rank tensors are obtained by operating with these tensors on lower rank tensors for which the physical components are known. We will now consider an arbitrary mixed tensor $T^{i}{}_{j}$ of rank two forming an inner product with a vector $v^j$ to produce a vector $u^i$:

$$u^i = T^{i}{}_{j}\, v^j \tag{6.10-8}$$

Using equation (6.10-6) we can convert the contravariant vector components in equation (6.10-8) to physical components:

$$\frac{u^{(i)}}{\sqrt{g_{ii}}} = T^{i}{}_{j}\, \frac{v^{(j)}}{\sqrt{g_{jj}}} \qquad \text{(no sum for } g_{ii} \text{ or } g_{jj}\text{)} \tag{6.10-9}$$

or

$$u^{(i)} = \sqrt{\frac{g_{ii}}{g_{jj}}}\; T^{i}{}_{j}\, v^{(j)} \qquad \text{(no sum for } g_{ii} \text{ or } g_{jj}\text{)} \tag{6.10-10}$$

Using the quotient law and comparing equations (6.10-8) and (6.10-10), we see the physical components for mixed tensors of rank two in curvilinear coordinate systems are:

$$T^{(ij)} = \sqrt{\frac{g_{ii}}{g_{jj}}}\; T^{i}{}_{j} \qquad \text{(no sum)} \tag{6.10-11}$$

The physical components of covariant and contravariant rank two tensors in curvilinear coordinate systems can be obtained from this equation by raising or lowering an index of the mixed tensor:

$$T^{(ij)} = \sqrt{\frac{g_{ii}}{g_{jj}}}\; g^{ik}\, T_{kj} = \sqrt{\frac{g_{ii}}{g_{jj}}}\; g_{jk}\, T^{ik} \qquad \text{(no sum on } i \text{ or } j\text{)} \tag{6.10-12}$$

In nonorthogonal coordinate systems, the physical components for the mixed tensors $T^{i}{}_{j}$ and $T_{j}{}^{i}$ cannot be assumed to be equal. In such coordinate systems the right physical components $T^{(ij)}$ are defined by:

$$T^{(ij)} = \sqrt{\frac{g_{ii}}{g_{jj}}}\; T^{i}{}_{j} \qquad \text{(no sum)} \tag{6.10-13}$$

and the left physical components $^{(ji)}T$ are defined by:

$$^{(ji)}T = \sqrt{\frac{g_{ii}}{g_{jj}}}\; T_{j}{}^{i} \qquad \text{(no sum)} \tag{6.10-14}$$

The relation between right and left physical components can be obtained using:

$$T_{j}{}^{i} = g_{jp}\, T^{pi} = g_{jp}\, g^{iq}\, T^{p}{}_{q} \tag{6.10-15}$$

We then have from equations (6.10-14) and (6.10-13):

$$^{(ji)}T = \sqrt{\frac{g_{ii}\, g_{qq}}{g_{jj}\, g_{pp}}}\; g_{jp}\, g^{iq}\, T^{(pq)} \qquad \text{(no sum in square root)} \tag{6.10-16}$$

6.11 PHYSICAL COMPONENTS OF TENSORS IN ORTHOGONAL COORDINATE SYSTEMS

For orthogonal coordinate systems we have $g_{ij} = 0$ and $g^{ij} = 0$ for $i \ne j$, and otherwise $g^{ii} = 1/g_{ii}$ (no sum). Therefore, the physical components of vectors in terms of contravariant components given in equation (6.10-6) are:

$$a^{(i)} = h_i\, a^i = \sqrt{g_{ii}}\; a^i \qquad \text{(no sum)} \tag{6.11-1}$$

and the physical components of vectors in terms of covariant components given in equation (6.10-7) reduce to:

$$a^{(i)} = \frac{a_i}{\sqrt{g_{ii}}} = \frac{a_i}{h_i} \qquad \text{(no sum)} \tag{6.11-2}$$

We then also have:

$$a_i = h_i\, a^{(i)} = \sqrt{g_{ii}}\; a^{(i)} \qquad \text{(no sum)} \tag{6.11-3}$$

$$a^i = \frac{a^{(i)}}{h_i} = \frac{a^{(i)}}{\sqrt{g_{ii}}} \qquad \text{(no sum)} \tag{6.11-4}$$

From equations (6.10-11) and (6.10-12), the physical components of rank two tensors become:

$$T^{(ij)} = \frac{h_i}{h_j}\, T^{i}{}_{j} = \sqrt{\frac{g_{ii}}{g_{jj}}}\; T^{i}{}_{j} \qquad \text{(no sum)} \tag{6.11-5}$$

$$T^{(ij)} = \frac{T_{ij}}{h_i\, h_j} = \frac{T_{ij}}{\sqrt{g_{ii}\, g_{jj}}} \qquad \text{(no sum)} \tag{6.11-6}$$

$$T^{(ij)} = h_i\, h_j\, T^{ij} = \sqrt{g_{ii}\, g_{jj}}\; T^{ij} \qquad \text{(no sum)} \tag{6.11-7}$$

A similar process can be used to extend these results to tensors of higher rank in orthogonal coordinate systems. The general definitions are then:

$$T^{(i_1 i_2 \cdots j_1 j_2 \cdots)} = \left[ \frac{g_{i_1 i_1}\, g_{i_2 i_2} \cdots}{g_{j_1 j_1}\, g_{j_2 j_2} \cdots} \right]^{1/2} T^{i_1 i_2 \cdots}{}_{j_1 j_2 \cdots} \qquad \text{(no sum)} \tag{6.11-8}$$

$$T^{(i_1 i_2 \cdots j_1 j_2 \cdots)} = \left[ g_{i_1 i_1}\, g_{i_2 i_2} \cdots g_{j_1 j_1}\, g_{j_2 j_2} \cdots \right]^{1/2} T^{i_1 i_2 \cdots j_1 j_2 \cdots} \qquad \text{(no sum)} \tag{6.11-9}$$

or

$$T^{(i_1 i_2 \cdots j_1 j_2 \cdots)} = \frac{h_{i_1} h_{i_2} \cdots}{h_{j_1} h_{j_2} \cdots}\, T^{i_1 i_2 \cdots}{}_{j_1 j_2 \cdots} \qquad \text{(no sum)} \tag{6.11-10}$$

$$T^{(i_1 i_2 \cdots j_1 j_2 \cdots)} = \left[ h_{i_1} h_{i_2} \cdots h_{j_1} h_{j_2} \cdots \right]^{-1} T_{i_1 i_2 \cdots j_1 j_2 \cdots} \qquad \text{(no sum)} \tag{6.11-11}$$

$$T^{(i_1 i_2 \cdots j_1 j_2 \cdots)} = h_{i_1} h_{i_2} \cdots h_{j_1} h_{j_2} \cdots\; T^{i_1 i_2 \cdots j_1 j_2 \cdots} \qquad \text{(no sum)} \tag{6.11-12}$$

Example 6-14

Show that for an orthogonal coordinate system the left and right physical components of a rank two tensor are equal: $^{(ji)}T = T^{(ji)}$.

Solution:

For an orthogonal coordinate system, we have:

$$g_{jp} = 0 \quad (j \ne p) \qquad g^{iq} = 0 \quad (i \ne q) \qquad g_{ii} = \frac{1}{g^{ii}} \quad \text{(no sum)}$$

Using equation (6.6-6), only the terms with $p = j$ and $q = i$ survive in equation (6.10-16), which becomes:

$$^{(ji)}T = \sqrt{\frac{g_{ii}\, g_{ii}}{g_{jj}\, g_{jj}}}\; g_{jj}\, g^{ii}\, T^{(ji)} = \frac{g_{ii}}{g_{jj}} \cdot g_{jj} \cdot \frac{1}{g_{ii}}\; T^{(ji)} = T^{(ji)}$$

6.12 TRANSFORMATION LAW FOR PHYSICAL COMPONENTS OF VECTORS

The law for transforming physical components of a point vector $\vec A$ from one curvilinear coordinate system $x^j$ to another curvilinear coordinate system $x'^i$ can be obtained from equation (4.12-11):

$$a'^{(i)} = \frac{h'_i}{h_j} \frac{\partial x'^i}{\partial x^j}\, a^{(j)} = \sqrt{\frac{g'_{ii}}{g_{jj}}}\; \frac{\partial x'^i}{\partial x^j}\, a^{(j)} \qquad \text{(no sum for } h \text{ or } g\text{)} \tag{6.12-1}$$

6.13 TENSOR EQUATIONS

Let $F = 0$ be a tensor equation in the curvilinear coordinate system $\left( x^1, x^2, x^3 \right)$ such that:

$$F\left( A^{pq\cdots}_{ij\cdots},\; B^{pq\cdots}_{ij\cdots},\; \ldots \right) = 0 \tag{6.13-1}$$

where $A^{pq\cdots}_{ij\cdots}$, $B^{pq\cdots}_{ij\cdots}$, $\ldots$ are all tensors. If we transform this tensor equation to some $\left( x'^1, x'^2, x'^3 \right)$ coordinate system, we will obtain:

$$F\left( A'^{pq\cdots}_{ij\cdots},\; B'^{pq\cdots}_{ij\cdots},\; \ldots \right) = 0 \tag{6.13-2}$$

Therefore the tensor equation will still be valid. In fact, any tensor equation that is true for one coordinate system will be true for all other coordinate systems in the same dimensional space. We can also conclude then that, if all components of a tensor are zero for a given coordinate system, all components of the tensor will be zero in any other coordinate system.

For a tensor equation to be valid, all terms in the tensor equation must have the same tensor characteristics; otherwise the equation is meaningless. Characteristics that must be the same for each tensor term include the four listed in Table 6-1.

Table 6-1. Tensor characteristics that must be the same for each term of a valid tensor equation. Absolute and relative tensors are discussed in Chapter 8.

Example 6-15

Show that if the tensor equation:

$$T^{i}{}_{jk} = 2\, T^{i}{}_{kj}$$

is valid in the $\left( x^1, x^2, x^3 \right)$ coordinate system, then the same equation is valid for any $\left( x'^1, x'^2, x'^3 \right)$ coordinate system of the given coordinate space.

Solution:

If the tensor equation is to be valid in the $\left( x'^1, x'^2, x'^3 \right)$ coordinate system we should have:

$$T'^{p}{}_{qr} = 2\, T'^{p}{}_{rq}$$

To determine if this equation is true we will transform the tensors in this equation back to the $\left( x^1, x^2, x^3 \right)$ coordinate system:

$$\frac{\partial x'^p}{\partial x^i} \frac{\partial x^j}{\partial x'^q} \frac{\partial x^k}{\partial x'^r}\, T^{i}{}_{jk} = 2\, \frac{\partial x'^p}{\partial x^i} \frac{\partial x^j}{\partial x'^q} \frac{\partial x^k}{\partial x'^r}\, T^{i}{}_{kj}$$

or

$$\frac{\partial x'^p}{\partial x^i} \frac{\partial x^j}{\partial x'^q} \frac{\partial x^k}{\partial x'^r} \left( T^{i}{}_{jk} - 2\, T^{i}{}_{kj} \right) = 0$$

For this equation to be generally true we must have:

$$T^{i}{}_{jk} = 2\, T^{i}{}_{kj}$$

which is the equation given as valid in the $\left( x^1, x^2, x^3 \right)$ coordinate system; the transformed equation therefore holds as well.

Chapter 7 Differentiation of Tensor Functions

a,i k

∂a i = k + Γ ij k a j ∂x

235

In this chapter we will consider the differentiation of tensor functions in general curvilinear coordinate systems.

7.1 DERIVATIVE OF A VECTOR

The procedure for calculating the derivative of a scalar (tensor of rank zero) is independent of any coordinate system. This follows from the fact that the single component of a scalar is independent of any coordinate system. The derivative of a scalar is simply the ordinary derivative. If $\varphi$ is a scalar function, the differential $d\varphi$ is given by:

$d\varphi = \frac{\partial \varphi}{\partial x^k}\,dx^k$    (7.1-1)

The partial derivative $\partial \varphi / \partial x^k$ is a covariant vector (see Section 1.15).

The procedure for calculating the derivative of a tensor of rank one or higher is dependent upon the coordinate system. The derivative of such a tensor may not simply be the derivative of the components of the tensor, as we will now show. We will begin by calculating the differential of a point vector (tensor of rank one). For an arbitrary point vector $\vec{A}$ in a curvilinear coordinate system $(x^1, x^2, x^3)$, we can write:

$\vec{A} = a^i\,\vec{e}_i = a_j\,\vec{e}^{\,j}$    (7.1-2)

and so the differential $d\vec{A}$ is given by:

$d\vec{A} = d\left(a^i\,\vec{e}_i\right) = d\left(a_j\,\vec{e}^{\,j}\right)$    (7.1-3)

or

$d\vec{A} = da^i\,\vec{e}_i + a^i\,d\vec{e}_i = da_j\,\vec{e}^{\,j} + a_j\,d\vec{e}^{\,j}$    (7.1-4)

where the differentials can be expressed as:

$da^i = \frac{\partial a^i}{\partial x^k}\,dx^k, \qquad da_j = \frac{\partial a_j}{\partial x^k}\,dx^k$    (7.1-5)

$d\vec{e}_i = \frac{\partial \vec{e}_i}{\partial x^k}\,dx^k, \qquad d\vec{e}^{\,j} = \frac{\partial \vec{e}^{\,j}}{\partial x^k}\,dx^k$    (7.1-6)

$d\vec{A} = \frac{\partial \vec{A}}{\partial x^k}\,dx^k$    (7.1-7)

7.1.1 RECTILINEAR COORDINATE SYSTEMS

If $\vec{e}_i$ and $\vec{e}^{\,j}$ are base vectors for a rectilinear coordinate system, these base vectors will not vary from point to point in the coordinate system, and so we have:

$d\vec{e}_i = d\vec{e}^{\,j} = \vec{0}$    (7.1-8)

Therefore, in the case of rectilinear coordinate systems such as the rectangular coordinate system, the differential $d\vec{A}$ given in equation (7.1-4) becomes:

$d\vec{A} = da^i\,\vec{e}_i = da_j\,\vec{e}^{\,j}$    (7.1-9)

Using equations (7.1-7) and (7.1-5), we can write:

$d\vec{A} = \frac{\partial \vec{A}}{\partial x^k}\,dx^k = \frac{\partial a^i}{\partial x^k}\,dx^k\,\vec{e}_i = \frac{\partial a_j}{\partial x^k}\,dx^k\,\vec{e}^{\,j}$    (7.1-10)

and so:

$\frac{\partial \vec{A}}{\partial x^k} = \frac{\partial a^i}{\partial x^k}\,\vec{e}_i = \frac{\partial a_j}{\partial x^k}\,\vec{e}^{\,j}$    (7.1-11)

Taking the derivative of a vector in a rectilinear coordinate system reduces, therefore, to that of ordinary differentiation of the components of the vector (as was noted in Section 2.1). This is termed the ordinary derivative of a vector.

To determine the tensor rank of the ordinary derivative of a point vector in a rectilinear coordinate system, we can consider the transformation of the contravariant components $a^i$ of a point vector $\vec{A}$ from one rectilinear coordinate system $(x^1, x^2, x^3)$ to another $(x'^1, x'^2, x'^3)$:

$a'^r = \frac{\partial x'^r}{\partial x^i}\,a^i$    (7.1-12)

Calculating the derivative of this equation, we have:

$\frac{\partial a'^r}{\partial x^k} = \frac{\partial}{\partial x^k}\left[\frac{\partial x'^r}{\partial x^i}\,a^i\right] = a^i\,\frac{\partial}{\partial x^k}\left[\frac{\partial x'^r}{\partial x^i}\right] + \frac{\partial x'^r}{\partial x^i}\,\frac{\partial a^i}{\partial x^k}$    (7.1-13)

Since the rectilinear axes are constant in direction, we must have:

$\frac{\partial}{\partial x^k}\left[\frac{\partial x'^r}{\partial x^i}\right] = 0$    (7.1-14)

and so:

$\frac{\partial a'^r}{\partial x^k} = \frac{\partial x'^r}{\partial x^i}\,\frac{\partial a^i}{\partial x^k}$    (7.1-15)

From equations (7.1-15) and (1.15-2), we see that the components of the ordinary derivative of a point vector in a rectilinear coordinate system transform as, and therefore are, contravariant components of a point vector. The ordinary derivative of a point vector in a rectilinear coordinate system is then a tensor of rank one (point vector).

7.1.2 CURVILINEAR COORDINATE SYSTEMS

If $\vec{e}_i$ and $\vec{e}^{\,j}$ are base vectors for a curvilinear coordinate system, these base vectors will generally be a function of position in the coordinate system. In other words, for a curvilinear coordinate system the base vectors corresponding to points separated by even a very small distance along a coordinate curve can be distinctly different. Therefore we will generally have:

$d\vec{e}_i \ne \vec{0}, \qquad d\vec{e}^{\,j} \ne \vec{0}$    (7.1-16)

and, as a result, the differential $d\vec{A}$ will be as given in equation (7.1-4). From equations (7.1-7) and (7.1-4), we can write:

$d\vec{A} = \frac{\partial \vec{A}}{\partial x^k}\,dx^k = da^i\,\vec{e}_i + a^i\,d\vec{e}_i$    (7.1-17)

$d\vec{A} = \frac{\partial \vec{A}}{\partial x^k}\,dx^k = da_j\,\vec{e}^{\,j} + a_j\,d\vec{e}^{\,j}$    (7.1-18)

Using equations (7.1-5) and (7.1-6), we then have:

$d\vec{A} = \frac{\partial \vec{A}}{\partial x^k}\,dx^k = \frac{\partial a^i}{\partial x^k}\,dx^k\,\vec{e}_i + a^i\,\frac{\partial \vec{e}_i}{\partial x^k}\,dx^k$    (7.1-19)

$d\vec{A} = \frac{\partial \vec{A}}{\partial x^k}\,dx^k = \frac{\partial a_j}{\partial x^k}\,dx^k\,\vec{e}^{\,j} + a_j\,\frac{\partial \vec{e}^{\,j}}{\partial x^k}\,dx^k$    (7.1-20)

Therefore

$\frac{\partial \vec{A}}{\partial x^k} = \frac{\partial a^i}{\partial x^k}\,\vec{e}_i + a^i\,\frac{\partial \vec{e}_i}{\partial x^k} = \frac{\partial a_j}{\partial x^k}\,\vec{e}^{\,j} + a_j\,\frac{\partial \vec{e}^{\,j}}{\partial x^k}$    (7.1-21)

To obtain an expression for the differential $d\vec{A}$ in curvilinear coordinate systems that is analogous to equation (7.1-9) for $d\vec{A}$ in rectilinear coordinate systems, we can write:

$d\vec{A} = d\Lambda^i\,\vec{e}_i = d\Lambda_j\,\vec{e}^{\,j}$    (7.1-22)

or

$d\vec{A} = \frac{\partial \Lambda^i}{\partial x^k}\,dx^k\,\vec{e}_i = \frac{\partial \Lambda_j}{\partial x^k}\,dx^k\,\vec{e}^{\,j}$    (7.1-23)

where $\Lambda^i$ and $\Lambda_j$ are not components of the vector $\vec{A}$ (in fact $a^i$ and $a_j$ are the components of $\vec{A}$). Instead the quantities $d\Lambda^i$ and $d\Lambda_j$ are the contravariant and covariant components, respectively, of the differential $d\vec{A}$ in curvilinear coordinates (including the effects due to variation in the base vectors). From equations (7.1-7) and (7.1-23) we then have for a curvilinear coordinate system:

$d\vec{A} = \frac{\partial \vec{A}}{\partial x^k}\,dx^k = \frac{\partial \Lambda^i}{\partial x^k}\,dx^k\,\vec{e}_i = \frac{\partial \Lambda_j}{\partial x^k}\,dx^k\,\vec{e}^{\,j}$    (7.1-24)

and so we can write:

$\frac{\partial \vec{A}}{\partial x^k} = \frac{\partial \Lambda^i}{\partial x^k}\,\vec{e}_i = \frac{\partial \Lambda_j}{\partial x^k}\,\vec{e}^{\,j}$    (7.1-25)

To determine $\partial \Lambda^i/\partial x^k$ and $\partial \Lambda_j/\partial x^k$, we can change dummy indices in equation (7.1-25) and form the scalar products as in equations (4.11-2) and (4.11-3):

$\frac{\partial \vec{A}}{\partial x^k} \bullet \vec{e}^{\,i} = \frac{\partial \Lambda^r}{\partial x^k}\,\vec{e}_r \bullet \vec{e}^{\,i} = \frac{\partial \Lambda^r}{\partial x^k}\,\delta^i_r = \frac{\partial \Lambda^i}{\partial x^k} \equiv a^i_{,k}$    (7.1-26)

$\frac{\partial \vec{A}}{\partial x^k} \bullet \vec{e}_j = \frac{\partial \Lambda_s}{\partial x^k}\,\vec{e}^{\,s} \bullet \vec{e}_j = \frac{\partial \Lambda_s}{\partial x^k}\,\delta^s_j = \frac{\partial \Lambda_j}{\partial x^k} \equiv a_{j,k}$    (7.1-27)

where we have used equation (6.4-1), and where we have made the following definitions:

$a^i_{,k} \equiv \frac{\partial \Lambda^i}{\partial x^k}$    (7.1-28)

$a_{j,k} \equiv \frac{\partial \Lambda_j}{\partial x^k}$    (7.1-29)

The derivatives $a^i_{,k}$ and $a_{j,k}$ are both known as covariant derivatives. Unfortunately, this usage of the term 'covariant' is different from the usage defining tensor components as either 'covariant' or 'contravariant'. A comma in the subscript of a tensor is a notation for covariant differentiation. From equation (7.1-25) the covariant derivatives $a^i_{,k}$ and $a_{j,k}$ can be seen to be components of $\partial \vec{A}/\partial x^k$.

Rewriting equation (7.1-25) using the definitions given in equations (7.1-28) and (7.1-29), we have:

$\frac{\partial \vec{A}}{\partial x^k} = a^i_{,k}\,\vec{e}_i = a_{j,k}\,\vec{e}^{\,j}$    (7.1-30)

Using equation (7.1-21) after changing dummy indices, we can write equations (7.1-26) and (7.1-27) as:

$a^i_{,k} = \frac{\partial \vec{A}}{\partial x^k} \bullet \vec{e}^{\,i} = \frac{\partial a^j}{\partial x^k}\,\vec{e}_j \bullet \vec{e}^{\,i} + a^j\,\frac{\partial \vec{e}_j}{\partial x^k} \bullet \vec{e}^{\,i}$    (7.1-31)

$a_{j,k} = \frac{\partial \vec{A}}{\partial x^k} \bullet \vec{e}_j = \frac{\partial a_i}{\partial x^k}\,\vec{e}^{\,i} \bullet \vec{e}_j + a_i\,\frac{\partial \vec{e}^{\,i}}{\partial x^k} \bullet \vec{e}_j$    (7.1-32)

or, from equation (6.4-1):

$a^i_{,k} = \frac{\partial a^j}{\partial x^k}\,\delta^i_j + a^j\,\frac{\partial \vec{e}_j}{\partial x^k} \bullet \vec{e}^{\,i}$    (7.1-33)

$a_{j,k} = \frac{\partial a_i}{\partial x^k}\,\delta^i_j + a_i\,\frac{\partial \vec{e}^{\,i}}{\partial x^k} \bullet \vec{e}_j$    (7.1-34)

and so the covariant derivatives of components of point vectors in curvilinear coordinate systems are given by:

$a^i_{,k} = \frac{\partial a^i}{\partial x^k} + a^j\,\frac{\partial \vec{e}_j}{\partial x^k} \bullet \vec{e}^{\,i}$    (7.1-35)

$a_{j,k} = \frac{\partial a_j}{\partial x^k} + a_i\,\frac{\partial \vec{e}^{\,i}}{\partial x^k} \bullet \vec{e}_j$    (7.1-36)

The covariant derivatives given in equations (7.1-35) and (7.1-36) provide a way of calculating the derivative of a point vector in a curvilinear coordinate system.

From equations (7.1-7) and (7.1-30), we can write:

$d\vec{A} = \frac{\partial \vec{A}}{\partial x^k}\,dx^k = a^i_{,k}\,dx^k\,\vec{e}_i = a_{j,k}\,dx^k\,\vec{e}^{\,j}$    (7.1-37)

Using equations (7.1-35) and (7.1-36), we obtain:

$d\vec{A} = \left[\frac{\partial a^i}{\partial x^k} + a^j\,\frac{\partial \vec{e}_j}{\partial x^k} \bullet \vec{e}^{\,i}\right] dx^k\,\vec{e}_i$    (7.1-38)

$d\vec{A} = \left[\frac{\partial a_j}{\partial x^k} + a_i\,\frac{\partial \vec{e}^{\,i}}{\partial x^k} \bullet \vec{e}_j\right] dx^k\,\vec{e}^{\,j}$    (7.1-39)

To determine the tensor character of the covariant derivatives, we can use equations (7.1-22) and (7.1-37) to write:

$d\vec{A} = d\Lambda^i\,\vec{e}_i = a^i_{,k}\,dx^k\,\vec{e}_i$    (7.1-40)

$d\vec{A} = d\Lambda_j\,\vec{e}^{\,j} = a_{j,k}\,dx^k\,\vec{e}^{\,j}$    (7.1-41)

From equations (7.1-40) and (7.1-41), we have:

$d\Lambda^i = a^i_{,k}\,dx^k$    (7.1-42)

$d\Lambda_j = a_{j,k}\,dx^k$    (7.1-43)

Since $d\Lambda^i$ and $d\Lambda_j$ are contravariant and covariant vector components, respectively, of $d\vec{A}$ as can be seen from equation (7.1-22), and since $dx^k$ is an arbitrary contravariant vector component in equations (7.1-42) and (7.1-43), we know from the quotient law that the covariant derivative $a^i_{,k}$ is a mixed tensor of rank two and the covariant derivative $a_{j,k}$ is a covariant tensor of rank two. Covariant derivatives are tensors and so the rate of change in a physical quantity that they express is independent of any coordinate system, as is expected.

7.1.3 POSITION VECTORS IN CURVILINEAR COORDINATE SYSTEMS

The position vector differential $d\vec{r}$ is a point vector that can be written for a curvilinear coordinate system using equation (7.1-37):

$d\vec{r} = \frac{\partial \vec{r}}{\partial x^k}\,dx^k = r^i_{,k}\,dx^k\,\vec{e}_i$    (7.1-44)

In a rectangular coordinate system, we have from equation (2.4-2):

$d\vec{r} = \frac{\partial \vec{r}}{\partial x^k}\,dx^k = dx^i\,\vec{e}_i$    (7.1-45)

Using equations (7.1-44) and (7.1-45), we can write for a rectangular coordinate system:

$r^i_{,k}\,dx^k\,\vec{e}_i = dx^i\,\vec{e}_i$    (7.1-46)

Therefore we must have:

$r^i_{,k} = \delta^i_k$    (7.1-47)

Equation (7.1-47) is a tensor equation and so is valid for all coordinate systems. We may conclude then that equation (7.1-45) is valid not only for all rectangular coordinate systems, but also for any curvilinear coordinate system (as was assumed in Section 4.8).

7.2 CHRISTOFFEL SYMBOLS

The covariant derivatives given in equations (7.1-35) and (7.1-36) are generally written in another form using Christoffel symbols. The definition of Christoffel symbols can be obtained by differentiating equation (6.4-1) to obtain:

$\frac{\partial}{\partial x^k}\left(\vec{e}^{\,i} \bullet \vec{e}_j\right) = \frac{\partial}{\partial x^k}\left(\delta^i_j\right) = 0$    (7.2-1)

or

$\vec{e}^{\,i} \bullet \frac{\partial \vec{e}_j}{\partial x^k} = -\,\frac{\partial \vec{e}^{\,i}}{\partial x^k} \bullet \vec{e}_j$    (7.2-2)

We can define:

$\Gamma^i_{jk} \equiv \vec{e}^{\,i} \bullet \frac{\partial \vec{e}_j}{\partial x^k}$    (7.2-3)

where $\Gamma^i_{jk}$ is called the Christoffel symbol of the second kind. We can see from equation (7.2-3) that the Christoffel symbols are a measure of the curvature of the coordinate axes.

Using equations (7.2-3) and (7.2-2), we can rewrite the expressions for covariant derivatives of vectors given in equations (7.1-35) and (7.1-36) in terms of Christoffel symbols:

$a^i_{,k} = \frac{\partial a^i}{\partial x^k} + \Gamma^i_{jk}\,a^j$    (7.2-4)

$a_{j,k} = \frac{\partial a_j}{\partial x^k} - \Gamma^i_{jk}\,a_i$    (7.2-5)

From equation (7.2-3), we see that if the base vectors do not vary with position, we have:

$\Gamma^i_{jk} = 0$    (7.2-6)

If the coordinate system is rectilinear, therefore, the Christoffel symbols are all zero. In this case, equations (7.2-4) and (7.2-5) reduce to the ordinary differentiation of the components of the vector. The terms containing $\Gamma^i_{jk}$ in equations (7.2-4) and (7.2-5) provide a correction to the ordinary derivative that is required due to the variation in base vectors along coordinate axes that can occur in a curvilinear coordinate system.

The Christoffel symbols can be expressed in terms of the metric tensor. Using the relation:

$\vec{e}^{\,i} = g^{ip}\,\vec{e}_p$    (7.2-7)

we can write equation (7.2-3) as:

$\Gamma^i_{jk} = g^{ip}\left[\vec{e}_p \bullet \frac{\partial \vec{e}_j}{\partial x^k}\right]$    (7.2-8)

From equation (4.8-5) we have:

$\vec{e}_j = \frac{\partial \vec{r}}{\partial x^j}$    (7.2-9)

Taking the derivative, we obtain:

$\frac{\partial \vec{e}_j}{\partial x^k} = \frac{\partial^2 \vec{r}}{\partial x^k\,\partial x^j} = \frac{\partial^2 \vec{r}}{\partial x^j\,\partial x^k} = \frac{\partial \vec{e}_k}{\partial x^j}$    (7.2-10)

Therefore equation (7.2-8) can be rewritten as:

$\Gamma^i_{jk} = \frac{1}{2}\,g^{ip}\left[\vec{e}_p \bullet \frac{\partial \vec{e}_j}{\partial x^k} + \vec{e}_p \bullet \frac{\partial \vec{e}_k}{\partial x^j}\right]$    (7.2-11)

or

$\Gamma^i_{jk} = \frac{1}{2}\,g^{ip}\left[\frac{\partial \left(\vec{e}_p \bullet \vec{e}_j\right)}{\partial x^k} + \frac{\partial \left(\vec{e}_p \bullet \vec{e}_k\right)}{\partial x^j} - \vec{e}_j \bullet \frac{\partial \vec{e}_p}{\partial x^k} - \vec{e}_k \bullet \frac{\partial \vec{e}_p}{\partial x^j}\right]$    (7.2-12)

where we have added and subtracted the last two terms of equation (7.2-12). Using equation (7.2-10) again for the last two terms of equation (7.2-12), we obtain:

$\Gamma^i_{jk} = \frac{1}{2}\,g^{ip}\left[\frac{\partial \left(\vec{e}_p \bullet \vec{e}_j\right)}{\partial x^k} + \frac{\partial \left(\vec{e}_p \bullet \vec{e}_k\right)}{\partial x^j} - \vec{e}_j \bullet \frac{\partial \vec{e}_k}{\partial x^p} - \vec{e}_k \bullet \frac{\partial \vec{e}_j}{\partial x^p}\right]$    (7.2-13)

or

$\Gamma^i_{jk} = \frac{1}{2}\,g^{ip}\left[\frac{\partial \left(\vec{e}_j \bullet \vec{e}_p\right)}{\partial x^k} + \frac{\partial \left(\vec{e}_p \bullet \vec{e}_k\right)}{\partial x^j} - \frac{\partial \left(\vec{e}_j \bullet \vec{e}_k\right)}{\partial x^p}\right]$    (7.2-14)

From equation (6.1-4) we have finally:

$\Gamma^i_{jk} = g^{ip}\left\{\frac{1}{2}\left[\frac{\partial g_{jp}}{\partial x^k} + \frac{\partial g_{kp}}{\partial x^j} - \frac{\partial g_{jk}}{\partial x^p}\right]\right\}$    (7.2-15)

Equation (7.2-15) gives the Christoffel symbol of the second kind in terms of the metric tensor. This equation can also be written:

$\Gamma^i_{jk} = g^{ip}\,\Gamma_{jkp}$    (7.2-16)

where

$\Gamma_{jkp} \equiv \frac{1}{2}\left[\frac{\partial g_{jp}}{\partial x^k} + \frac{\partial g_{kp}}{\partial x^j} - \frac{\partial g_{jk}}{\partial x^p}\right]$    (7.2-17)

is called the Christoffel symbol of the first kind. From this equation we see that if the metric tensor is constant, then the Christoffel symbols will be zero.

From equations (7.2-16) and (7.2-8) we can write the Christoffel symbol of the first kind in terms of base vectors:

$\Gamma_{jkp} = \vec{e}_p \bullet \frac{\partial \vec{e}_j}{\partial x^k}$    (7.2-18)

Equation (7.2-18) can be compared with equation (7.2-3). Using equation (7.2-9), we can also write equation (7.2-18) in the form:

$\Gamma_{jkp} = \frac{\partial^2 \vec{r}}{\partial x^j\,\partial x^k} \bullet \frac{\partial \vec{r}}{\partial x^p}$    (7.2-19)

Various notations for Christoffel symbols can be found in different books. One fairly common notation is:

$\begin{Bmatrix} i \\ j\,k \end{Bmatrix} = \Gamma^i_{jk}$    (7.2-20)

$\left[\,j\,k,\,p\,\right] = \Gamma_{jkp}$    (7.2-21)
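As a concrete illustration of equations (7.2-16) and (7.2-17), the following is a minimal numerical sketch (not from the text) that computes Christoffel symbols of the second kind directly from a metric by central finite differences. The metric used is the cylindrical one, $g = \mathrm{diag}(1, R^2, 1)$ in $(R, \phi, z)$; the exact nonzero symbols are $\Gamma^R_{\phi\phi} = -R$ and $\Gamma^\phi_{R\phi} = 1/R$. The function names are illustrative only.

```python
def metric(x):
    # cylindrical metric tensor g_jk at the point x = (R, phi, z)
    R, phi, z = x
    return [[1.0, 0.0, 0.0],
            [0.0, R * R, 0.0],
            [0.0, 0.0, 1.0]]

def inverse_metric(x):
    # reciprocal metric tensor g^ip (diagonal, so trivially inverted)
    R, phi, z = x
    return [[1.0, 0.0, 0.0],
            [0.0, 1.0 / (R * R), 0.0],
            [0.0, 0.0, 1.0]]

def dg(x, j, p, k, h=1e-6):
    # central difference approximation of dg_jp / dx^k
    xp = list(x); xm = list(x)
    xp[k] += h; xm[k] -= h
    return (metric(xp)[j][p] - metric(xm)[j][p]) / (2.0 * h)

def christoffel_first(x, j, k, p):
    # equation (7.2-17): Gamma_jkp = (dg_jp/dx^k + dg_kp/dx^j - dg_jk/dx^p) / 2
    return 0.5 * (dg(x, j, p, k) + dg(x, k, p, j) - dg(x, j, k, p))

def christoffel_second(x, i, j, k):
    # equation (7.2-16): Gamma^i_jk = g^ip Gamma_jkp (sum over p)
    ginv = inverse_metric(x)
    return sum(ginv[i][p] * christoffel_first(x, j, k, p) for p in range(3))

x0 = (2.0, 0.7, 0.0)                           # an arbitrary point with R = 2
G_R_phiphi = christoffel_second(x0, 0, 1, 1)   # should approximate -R = -2
G_phi_Rphi = christoffel_second(x0, 1, 0, 1)   # should approximate 1/R = 0.5
```

The same routine works for any metric supplied through `metric` and `inverse_metric`, since equation (7.2-17) involves nothing but first derivatives of $g_{jk}$.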

The Christoffel symbols of the first and second kind do not transform as tensors and therefore are not tensors. To show this we will first transform the base vectors in equation (7.2-3) using equations (4.9-6) and (4.8-9):

$\Gamma^i_{jk} = \left[\frac{\partial x^i}{\partial x'^p}\,\vec{e}^{\,\prime p}\right] \bullet \frac{\partial}{\partial x^k}\left[\frac{\partial x'^q}{\partial x^j}\,\vec{e}^{\,\prime}_q\right]$    (7.2-22)

Taking the derivative in the second term:

$\Gamma^i_{jk} = \left[\frac{\partial x^i}{\partial x'^p}\,\vec{e}^{\,\prime p}\right] \bullet \left[\frac{\partial x'^q}{\partial x^j}\,\frac{\partial \vec{e}^{\,\prime}_q}{\partial x'^r}\,\frac{\partial x'^r}{\partial x^k} + \frac{\partial^2 x'^q}{\partial x^k\,\partial x^j}\,\vec{e}^{\,\prime}_q\right]$    (7.2-23)

or

$\Gamma^i_{jk} = \frac{\partial x^i}{\partial x'^p}\,\frac{\partial x'^q}{\partial x^j}\,\frac{\partial x'^r}{\partial x^k}\left[\vec{e}^{\,\prime p} \bullet \frac{\partial \vec{e}^{\,\prime}_q}{\partial x'^r}\right] + \frac{\partial x^i}{\partial x'^p}\,\frac{\partial^2 x'^q}{\partial x^k\,\partial x^j}\left[\vec{e}^{\,\prime p} \bullet \vec{e}^{\,\prime}_q\right]$    (7.2-24)

Using equations (7.2-3) and (6.4-1), we have finally:

$\Gamma^i_{jk} = \frac{\partial x^i}{\partial x'^p}\,\frac{\partial x'^q}{\partial x^j}\,\frac{\partial x'^r}{\partial x^k}\,\Gamma'^{\,p}_{qr} + \frac{\partial x^i}{\partial x'^q}\,\frac{\partial^2 x'^q}{\partial x^k\,\partial x^j}$    (7.2-25)

We see then that the term:

$\frac{\partial x^i}{\partial x'^q}\,\frac{\partial^2 x'^q}{\partial x^k\,\partial x^j}$    (7.2-26)

in equation (7.2-25) prevents Christoffel symbols from transforming as tensors. Only in special cases when the coordinate transformation is affine will this term be zero so that Christoffel symbols can be treated as tensors. Quantities that satisfy the transformation given in equation (7.2-25) are sometimes referred to as components of an affine connection.

Example 7-1

Show that:

$\frac{\partial^2 x'^s}{\partial x^k\,\partial x^j} = \frac{\partial x'^s}{\partial x^i}\,\Gamma^i_{jk} - \frac{\partial x'^q}{\partial x^j}\,\frac{\partial x'^r}{\partial x^k}\,\Gamma'^{\,s}_{qr}$

Solution:

From equation (7.2-25) we have:

$\Gamma^i_{jk} = \frac{\partial x^i}{\partial x'^p}\,\frac{\partial x'^q}{\partial x^j}\,\frac{\partial x'^r}{\partial x^k}\,\Gamma'^{\,p}_{qr} + \frac{\partial x^i}{\partial x'^q}\,\frac{\partial^2 x'^q}{\partial x^k\,\partial x^j}$

Multiplying by $\partial x'^s/\partial x^i$, we obtain:

$\frac{\partial x'^s}{\partial x^i}\,\Gamma^i_{jk} = \frac{\partial x'^s}{\partial x^i}\,\frac{\partial x^i}{\partial x'^p}\,\frac{\partial x'^q}{\partial x^j}\,\frac{\partial x'^r}{\partial x^k}\,\Gamma'^{\,p}_{qr} + \frac{\partial x'^s}{\partial x^i}\,\frac{\partial x^i}{\partial x'^q}\,\frac{\partial^2 x'^q}{\partial x^k\,\partial x^j}$

and so:

$\frac{\partial x'^s}{\partial x^i}\,\Gamma^i_{jk} = \delta^s_p\,\frac{\partial x'^q}{\partial x^j}\,\frac{\partial x'^r}{\partial x^k}\,\Gamma'^{\,p}_{qr} + \delta^s_q\,\frac{\partial^2 x'^q}{\partial x^k\,\partial x^j}$

Therefore

$\frac{\partial^2 x'^s}{\partial x^k\,\partial x^j} = \frac{\partial x'^s}{\partial x^i}\,\Gamma^i_{jk} - \frac{\partial x'^q}{\partial x^j}\,\frac{\partial x'^r}{\partial x^k}\,\Gamma'^{\,s}_{qr}$

7.3 RELATIONS FOR CHRISTOFFEL SYMBOLS

We will now derive a number of useful relations for Christoffel symbols. Using the symmetry of the metric tensor given in equation (6.1-6), we see that the Christoffel symbols given in equations (7.2-17) and (7.2-16) are symmetric with respect to the indices $j$ and $k$:

$\Gamma_{jkp} = \Gamma_{kjp}, \qquad \Gamma^i_{jk} = \Gamma^i_{kj}$    (7.3-1)

Although the Christoffel symbols are not tensors, the metric tensor can still be used to "lower" or "raise" Christoffel symbol indices. This can be shown using equations (7.2-16) and (6.2-10):

$g_{iq}\,\Gamma^i_{jk} = g_{iq}\,g^{ip}\,\Gamma_{jkp} = \delta^p_q\,\Gamma_{jkp} = \Gamma_{jkq}$    (7.3-2)

From the definition of $\Gamma_{jkp}$ given in equation (7.2-17), we can write:

$\Gamma_{jkp} + \Gamma_{pkj} = \frac{1}{2}\left[\frac{\partial g_{jp}}{\partial x^k} + \frac{\partial g_{kp}}{\partial x^j} - \frac{\partial g_{jk}}{\partial x^p}\right] + \frac{1}{2}\left[\frac{\partial g_{pj}}{\partial x^k} + \frac{\partial g_{kj}}{\partial x^p} - \frac{\partial g_{pk}}{\partial x^j}\right]$    (7.3-3)

or since the metric tensor is symmetric:

$\frac{\partial g_{jp}}{\partial x^k} = \Gamma_{jkp} + \Gamma_{pkj}$    (7.3-4)

Using equation (7.3-2) we can also write:

$\frac{\partial g_{jp}}{\partial x^k} = g_{ip}\,\Gamma^i_{jk} + g_{ij}\,\Gamma^i_{pk}$    (7.3-5)

Similar expressions can be derived for the reciprocal metric tensor:

$\frac{\partial g^{ri}}{\partial x^k} = -\,g^{jr}\,g^{pi}\left(\Gamma_{jkp} + \Gamma_{pkj}\right)$    (7.3-6)

$\frac{\partial g^{ri}}{\partial x^k} = -\,g^{jr}\,\Gamma^i_{jk} - g^{pi}\,\Gamma^r_{pk}$    (7.3-7)

as shown in Example 7-2.

Example 7-2

Show that:

$\frac{\partial g^{ri}}{\partial x^k} = -\,g^{jr}\,g^{pi}\left(\Gamma_{jkp} + \Gamma_{pkj}\right), \qquad \frac{\partial g^{ri}}{\partial x^k} = -\,g^{jr}\,\Gamma^i_{jk} - g^{pi}\,\Gamma^r_{pk}$

Solution:

From equation (6.2-10) we have:

$g_{jp}\,g^{pi} = \delta^i_j$

Differentiating with respect to $x^k$:

$\frac{\partial g_{jp}}{\partial x^k}\,g^{pi} + g_{jp}\,\frac{\partial g^{pi}}{\partial x^k} = 0$

or

$g_{jp}\,\frac{\partial g^{pi}}{\partial x^k} = -\,g^{pi}\,\frac{\partial g_{jp}}{\partial x^k}$

Multiplying by $g^{jr}$:

$g^{jr}\,g_{jp}\,\frac{\partial g^{pi}}{\partial x^k} = \delta^r_p\,\frac{\partial g^{pi}}{\partial x^k} = \frac{\partial g^{ri}}{\partial x^k} = -\,g^{jr}\,g^{pi}\,\frac{\partial g_{jp}}{\partial x^k}$

Using equation (7.3-4), we have equation (7.3-6):

$\frac{\partial g^{ri}}{\partial x^k} = -\,g^{jr}\,g^{pi}\left(\Gamma_{jkp} + \Gamma_{pkj}\right)$

From equation (7.2-16) and the symmetry of the reciprocal metric tensor, we then have:

$\frac{\partial g^{ri}}{\partial x^k} = -\,g^{jr}\,\Gamma^i_{jk} - g^{pi}\,\Gamma^r_{pk}$

It is possible to derive a useful relation between the Christoffel symbols and the determinant $G$ of the metric tensor. From equation (6.2-4) we have:

$G = g_{kp}\,G^{kp}$    (no sum on $k$)    (7.3-8)

Taking the derivative of equation (7.3-8) with respect to $g_{kp}$, we obtain:

$\frac{\partial G}{\partial g_{kp}} = G^{kp}$    (7.3-9)

since $G^{kp}$ does not contain $g_{kp}$ (by definition of a cofactor). From the definition of the reciprocal metric tensor given in equation (6.2-5), we have:

$g^{kp} = \frac{G^{kp}}{G}$    (7.3-10)

Using equation (7.3-9) we can write equation (7.3-10) as:

$g^{kp} = \frac{1}{G}\,\frac{\partial G}{\partial g_{kp}}$    (7.3-11)

or, by the chain rule:

$g^{kp}\,\frac{\partial g_{kp}}{\partial x^j} = \frac{1}{G}\,\frac{\partial G}{\partial x^j}$    (7.3-12)

We can rewrite equation (7.3-12) using equation (7.3-4):

$g^{kp}\left(\Gamma_{kjp} + \Gamma_{pjk}\right) = \frac{1}{G}\,\frac{\partial G}{\partial x^j}$    (7.3-13)

From equation (7.2-16) and the symmetry of the reciprocal metric tensor, we then obtain:

$\Gamma^k_{kj} + \Gamma^p_{pj} = \frac{1}{G}\,\frac{\partial G}{\partial x^j}$    (7.3-14)

Replacing the dummy index $k$ with $p$ in the first term of equation (7.3-14) and using the symmetry of the Christoffel symbols given in equation (7.3-1), we have finally:

$\Gamma^p_{pj} = \Gamma^p_{jp} = \frac{1}{2}\,\frac{1}{G}\,\frac{\partial G}{\partial x^j} = \frac{\partial \left(\ln \sqrt{G}\right)}{\partial x^j} = \frac{1}{\sqrt{G}}\,\frac{\partial \sqrt{G}}{\partial x^j}$    (7.3-15)

∂ ln G 1 1 ∂G 1 ∂ G ! (7.3-15) = = j 2 G ∂x j ∂x j G ∂x

! !

equations (6.1-4) and (6.1-7), we have: ! ! ! ( i ≠ j )! gi j = ei • ej = 0 !

For (7.4-1)

Therefore, using equation (7.2-17), the Christoffel symbols of the first kind are given by: ( j ≠ k ≠ p )!

(7.4-4)

Γj k p = Γj j p !

( j = k ≠ p , no sum)!

(7.4-5)

! ! we have, since gj p = ej • ep = 0 when j ≠ p :

For orthogonal curvilinear coordinate systems, the

Γj k p = 0 !

(no sum)!

For

Christoffel symbols assume a somewhat simplified form. From

!

∂h j 1 ⎡ ∂ gj j ∂ gj j ∂ gj j ⎤ 1 ∂ gj j + − = = h j 2 ⎢⎣ ∂x j ∂x j ∂x j ⎥⎦ 2 ∂x j ∂x j

!

!

CHRISTOFFEL SYMBOLS IN ORTHOGONAL CURVILINEAR COORDINATE SYSTEMS

Γj j j =

(7.4-3)

(7.4-2)

!

Γj j p =

1 2

∂h j ⎡ ∂ gj p ∂ gj p ∂ gj j ⎤ 1 ∂ gj j + − = − = − h j ⎢ ∂x j ∂x j ∂x p ⎥ 2 ∂x p ∂x p ⎣ ⎦ ( j ≠ p , no sum)! (7.4-6)

Γj k p = Γj k j !

( j = p ≠ k , no sum)!

(7.4-7)

! ! we have, since gj k = ej • ek = 0 when j ≠ k : ! !

Γj k j = Γk j j =

1 2

∂h j ⎡ ∂ gj j ∂ gk j ∂ gj k ⎤ 1 ∂ g j j + − = = h j ⎢ ∂x k ∂x j ∂x j ⎥ 2 ∂x k ∂x k ⎣ ⎦ ( j ≠ k , no sum)!

(7.4-8) 246

!

Expressions for the Christoffel symbols of the second kind

Example 7-3

in an orthogonal coordinate system can be obtained using

Determine the Christoffel symbols of the first and second

equations (7.2-16), (6.6-7), (6.6-6), and the expressions given

kind for the cylindrical coordinate system.

above: !

!

Γjpk = g p i Γj k i = 0 !

Γj jj = g j j Γj j j

(

Γjpj = g p p Γj j p =

2 g22 = gφφ = ( R ) !

hj ∂h j 1 1 ∂ gj j Γj j p = − = − p 2 gp p 2 gp p ∂x p hp ∂x

!

g11 = g RR = 1 !

g 22 = gφφ =

( j ≠ p , no sum)!

!

gi j = 0 !

gi j = 0 !

(7.4-10)

( )

(

1 1 ∂ gj j 1 ∂ lngj j Γj k j = k = gjj 2 g j j ∂x 2 ∂x k ( j ≠ k , no sum)!

(7.4-11)

)=

1 ∂h j hj ∂x k

(7.4-12)

We can also write equation (7.4-12) in another form using

equations (6.6-6) and (6.4-19): ! !

=

From Example 6-10 we have:

g11 = gRR = 1 !

!

Γkjj

)

!

! Γjjk = Γkjj = g j j Γj k j =

Γjjk

(

The cylindrical coordinates are: x1, x 2 , x 3 = ( R, φ , z ) .

)

(no sum)!

!

!

Solution:

(7.4-9)

1 1 ∂gj j 1 ∂ lngj j 1 ∂h j = Γj j j = = j = j gj j 2 gj j ∂x 2 ∂x hj ∂x j

! !

( j ≠ k ≠ p )!

∂g j j 1 j j ∂ gj j 1 = g Γj k j = g = − gjj 2 2 ∂x k ∂x k

(7.4-13)

( R)

2

!

g 33 = g zz = 1

(i ≠ j )

From equations (7.4-2) through (7.4-8), we then have: !

Γj k p = 0 !

!

Γj j j =

!

Γ111 = ΓRRR =

1 ∂ g RR =0 2 ∂R

!

Γ222 = Γφφφ =

1 ∂ gφφ =0 2 ∂φ

( j ≠ k ≠ p)

1 ∂ gj j ! 2 ∂x j

jj

( j ≠ k , no sum)!

1

g33 = gzz = 1

(no sum)

247

1 ∂ gzz =0 2 ∂z

!

Γ212 = Γ122 = Γφ Rφ = ΓRφφ =

1 ∂ gφφ =R 2 ∂R

!

Γ232 = Γ322 = Γφ zφ = Γzφφ =

1 ∂ gφφ =0 2 ∂z

1 ∂ gRR =0 2 ∂φ

!

Γ313 = Γ133 = Γz R z = ΓR z z =

1 ∂ gz z =0 2 ∂R

Γ113 = ΓRRz = −

1 ∂ gRR =0 2 ∂z

!

Γ323 = Γ233 = Γzφ z = Γφ z z =

1 ∂ gz z =0 2 ∂φ

!

Γ221 = Γφφ R = −

1 ∂ gφφ = −R 2 ∂R

Therefore all Γj k p = 0 except Γ221 , Γ212 , and Γ122 . From

!

Γ223 = Γφφ z = −

1 ∂ gφφ =0 2 ∂z

nonzero Christoffel symbols of the second kind for the

!

1 ∂ gz z Γ331 = ΓzzR = − =0 2 ∂R

!

Γ333 = Γzzz =

!

Γj j p = −

!

Γ112 = ΓRRφ = −

!

!

1 ∂ gj j ! 2 ∂x p

Γ332 = Γz zφ = −

( j ≠ p , no sum)

equations (7.4-9) through (7.4-12), we then have as the only cylindrical coordinate system:

1 ∂ gz z =0 2 ∂φ

1 ∂ gj j = ! 2 ∂x k

( j ≠ k , no sum)

!

Γj k j = Γk j j

!

Γ121 = Γ211 = ΓRφ R = Γφ R R =

1 ∂ gRR =0 2 ∂φ

!

Γ131 = Γ311 = ΓRz R = ΓzRR =

1 ∂ gRR =0 2 ∂z

( j ≠ p , no sum)

!

Γjpj = g p p Γj j p !

!

Γ221 = ΓφφR = g11 Γ221 = −R

!

Γj jk = Γkjj = g j j Γj k j !

( j ≠ k , no sum)

!

Γ212 = Γ122 = ΓφφR = ΓRφφ = g 22 Γ212 =

1

( R)

2

( R) =

1 R
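A useful way to see equation (7.2-4) at work with these cylindrical symbols is to take a field that is constant in Cartesian terms, so that its covariant derivative must vanish even though its cylindrical components vary with $\phi$. The constant vector $\hat{e}_x$ has cylindrical contravariant components $a^R = \cos\phi$, $a^\phi = -\sin\phi/R$, $a^z = 0$ (this choice of field is an illustration, not from the text); the sketch below forms $a^i_{,k}$ numerically and confirms it is zero.

```python
import math

def a(x):
    # contravariant cylindrical components of the constant Cartesian vector x-hat
    R, phi, z = x
    return [math.cos(phi), -math.sin(phi) / R, 0.0]

def gamma2(x, i, j, k):
    # nonzero cylindrical Christoffel symbols, (R, phi, z) -> (0, 1, 2)
    R, phi, z = x
    if (i, j, k) == (0, 1, 1):
        return -R                      # Gamma^R_phiphi
    if (i, j, k) in ((1, 0, 1), (1, 1, 0)):
        return 1.0 / R                 # Gamma^phi_Rphi = Gamma^phi_phiR
    return 0.0

def cov_deriv(x, i, k, h=1e-6):
    # equation (7.2-4): a^i_,k = da^i/dx^k + Gamma^i_jk a^j
    xp = list(x); xm = list(x)
    xp[k] += h; xm[k] -= h
    partial = (a(xp)[i] - a(xm)[i]) / (2.0 * h)
    return partial + sum(gamma2(x, i, j, k) * a(x)[j] for j in range(3))

x0 = (2.0, 0.7, 0.0)
max_component = max(abs(cov_deriv(x0, i, k))
                    for i in range(3) for k in range(3))
```

The ordinary partial derivatives of the components are far from zero here; it is the $\Gamma^i_{jk} a^j$ correction terms that cancel them exactly.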

Example 7-4

If $\vec{r}$ is a position vector in the cylindrical coordinate system, show that $r^i_{,k} = \delta^i_k$.

Solution:

From Example 4-10 we have:

$\vec{r} = r^R\,\hat{e}_R + r^\phi\,\hat{e}_\phi + r^z\,\hat{e}_z = R\,\hat{e}_R + z\,\hat{e}_z$

$r^R = R, \qquad r^\phi = 0, \qquad r^z = z$

From Example 7-3, the only nonzero Christoffel symbols of the second kind are:

$\Gamma^R_{\phi\phi} = -R, \qquad \Gamma^\phi_{R\phi} = \Gamma^\phi_{\phi R} = \frac{1}{R}$

Using equation (7.2-4) we can write:

$r^R_{,R} = \frac{\partial r^R}{\partial R} + \Gamma^R_{RR}\,r^R + \Gamma^R_{\phi R}\,r^\phi + \Gamma^R_{zR}\,r^z = 1 + 0 + 0 + 0 = 1$

$r^R_{,\phi} = \frac{\partial r^R}{\partial \phi} + \Gamma^R_{R\phi}\,r^R + \Gamma^R_{\phi\phi}\,r^\phi + \Gamma^R_{z\phi}\,r^z = 0 + 0 + 0 + 0 = 0$

$r^R_{,z} = \frac{\partial r^R}{\partial z} + \Gamma^R_{Rz}\,r^R + \Gamma^R_{\phi z}\,r^\phi + \Gamma^R_{zz}\,r^z = 0 + 0 + 0 + 0 = 0$

$r^\phi_{,R} = \frac{\partial r^\phi}{\partial R} + \Gamma^\phi_{RR}\,r^R + \Gamma^\phi_{\phi R}\,r^\phi + \Gamma^\phi_{zR}\,r^z = 0 + 0 + 0 + 0 = 0$

$r^\phi_{,\phi} = \frac{\partial r^\phi}{\partial \phi} + \Gamma^\phi_{R\phi}\,r^R + \Gamma^\phi_{\phi\phi}\,r^\phi + \Gamma^\phi_{z\phi}\,r^z = 0 + \frac{1}{R}\,R + 0 + 0 = 1$

$r^\phi_{,z} = \frac{\partial r^\phi}{\partial z} + \Gamma^\phi_{Rz}\,r^R + \Gamma^\phi_{\phi z}\,r^\phi + \Gamma^\phi_{zz}\,r^z = 0 + 0 + 0 + 0 = 0$

$r^z_{,R} = \frac{\partial r^z}{\partial R} + \Gamma^z_{RR}\,r^R + \Gamma^z_{\phi R}\,r^\phi + \Gamma^z_{zR}\,r^z = 0 + 0 + 0 + 0 = 0$

$r^z_{,\phi} = \frac{\partial r^z}{\partial \phi} + \Gamma^z_{R\phi}\,r^R + \Gamma^z_{\phi\phi}\,r^\phi + \Gamma^z_{z\phi}\,r^z = 0 + 0 + 0 + 0 = 0$

$r^z_{,z} = \frac{\partial r^z}{\partial z} + \Gamma^z_{Rz}\,r^R + \Gamma^z_{\phi z}\,r^\phi + \Gamma^z_{zz}\,r^z = 1 + 0 + 0 + 0 = 1$

Therefore we have:

$r^i_{,k} = \delta^i_k$

7.5 COVARIANT DERIVATIVE OF A TENSOR

The expressions for covariant derivatives of vectors given in equations (7.2-4) and (7.2-5) can be extended so as to provide for the covariant derivatives of higher rank tensors. We have then for the covariant derivative of a general tensor:

$T^{p_1 p_2 \cdots}_{q_1 q_2 \cdots , k} = \frac{\partial T^{p_1 p_2 \cdots}_{q_1 q_2 \cdots}}{\partial x^k} + \Gamma^{p_1}_{rk}\,T^{r p_2 \cdots}_{q_1 q_2 \cdots} + \Gamma^{p_2}_{rk}\,T^{p_1 r \cdots}_{q_1 q_2 \cdots} + \cdots - \Gamma^{r}_{q_1 k}\,T^{p_1 p_2 \cdots}_{r q_2 \cdots} - \Gamma^{r}_{q_2 k}\,T^{p_1 p_2 \cdots}_{q_1 r \cdots} - \cdots$    (7.5-1)
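Equation (7.5-1) specializes mechanically to any index structure: one added $\Gamma$ term per contravariant index and one subtracted $\Gamma$ term per covariant index. The following sketch (illustrative names, not from the text) implements the rule for a mixed tensor $T^i_j$ in cylindrical coordinates and applies it to the Kronecker delta, for which the two $\Gamma$ terms cancel identically.

```python
R = 2.0

def gamma2(i, j, k):
    # cylindrical Christoffel symbols of the second kind, (R, phi, z) -> (0, 1, 2)
    if (i, j, k) == (0, 1, 1):
        return -R                      # Gamma^R_phiphi
    if (i, j, k) in ((1, 0, 1), (1, 1, 0)):
        return 1.0 / R                 # Gamma^phi_Rphi = Gamma^phi_phiR
    return 0.0

def cov_deriv_mixed(T, dT, i, j, k):
    # equation (7.5-1) for T^i_j:
    #   T^i_j,k = dT^i_j/dx^k + Gamma^i_rk T^r_j - Gamma^r_jk T^i_r
    return (dT[i][j][k]
            + sum(gamma2(i, r, k) * T[r][j] for r in range(3))
            - sum(gamma2(r, j, k) * T[i][r] for r in range(3)))

# the Kronecker delta has constant components, so its ordinary gradient is zero
delta = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
zero_grad = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]

residual = max(abs(cov_deriv_mixed(delta, zero_grad, i, j, k))
               for i in range(3) for j in range(3) for k in range(3))
```

Since $\Gamma^i_{rk}\,\delta^r_j - \Gamma^r_{jk}\,\delta^i_r = \Gamma^i_{jk} - \Gamma^i_{jk} = 0$, the residual is exactly zero, which is the content of equation (7.5-4) below.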

Covariant differentiation of any tensor results in a new tensor for which the covariant rank is one greater than that of the original tensor. Covariant differentiation of tensors obeys the following sum and product rules:

$\left(A^{p_1 p_2 \cdots}_{q_1 q_2 \cdots} + B^{p_1 p_2 \cdots}_{q_1 q_2 \cdots}\right)_{,k} = A^{p_1 p_2 \cdots}_{q_1 q_2 \cdots , k} + B^{p_1 p_2 \cdots}_{q_1 q_2 \cdots , k}$    (7.5-2)

$\left(A^{p_1 p_2 \cdots}_{q_1 q_2 \cdots}\,B^{r_1 r_2 \cdots}_{s_1 s_2 \cdots}\right)_{,k} = A^{p_1 p_2 \cdots}_{q_1 q_2 \cdots , k}\,B^{r_1 r_2 \cdots}_{s_1 s_2 \cdots} + A^{p_1 p_2 \cdots}_{q_1 q_2 \cdots}\,B^{r_1 r_2 \cdots}_{s_1 s_2 \cdots , k}$    (7.5-3)

The Kronecker delta can be considered a constant under covariant differentiation (see Example 7-6). Therefore

$\delta^j_{i,k} = 0$    (7.5-4)

From Section 5.7.2 we know that contraction is equivalent to multiplication by the Kronecker delta. From equation (7.5-4) we see that contraction can occur before or after covariant differentiation.

Example 7-5

Determine the covariant derivative with respect to $x^p$ of the following tensors:

a. $a^{ij}_{kl}$
b. $a^{ij}$
c. $a^i$
d. $a_{kl}$

Solution:

From equation (7.5-1) we have:

a. $a^{ij}_{kl,p} = \frac{\partial a^{ij}_{kl}}{\partial x^p} + \Gamma^i_{rp}\,a^{rj}_{kl} + \Gamma^j_{rp}\,a^{ir}_{kl} - \Gamma^r_{kp}\,a^{ij}_{rl} - \Gamma^r_{lp}\,a^{ij}_{kr}$

b. $a^{ij}_{,p} = \frac{\partial a^{ij}}{\partial x^p} + \Gamma^i_{rp}\,a^{rj} + \Gamma^j_{rp}\,a^{ir}$

c. $a^i_{,p} = \frac{\partial a^i}{\partial x^p} + \Gamma^i_{rp}\,a^r$

d. $a_{kl,p} = \frac{\partial a_{kl}}{\partial x^p} - \Gamma^r_{kp}\,a_{rl} - \Gamma^r_{lp}\,a_{kr}$

Example 7-6

Show that the Kronecker delta $\delta^j_i$ can be considered a constant under covariant differentiation.

Solution:

From equation (7.5-1), the covariant derivative $\delta^j_{i,k}$ is given by:

$\delta^j_{i,k} = \frac{\partial \delta^j_i}{\partial x^k} + \Gamma^j_{rk}\,\delta^r_i - \Gamma^r_{ik}\,\delta^j_r = 0 + \Gamma^j_{ik} - \Gamma^j_{ik} = 0$

Therefore the Kronecker delta $\delta^j_i$ can be considered a constant under covariant differentiation.

Example 7-7

Show that the covariant derivative of a scalar $\beta$ is simply the ordinary derivative:

$\beta_{,j} = \frac{\partial \beta}{\partial x^j}$

Solution:

We will represent the scalar $\beta$ as the scalar product of two point vectors $\vec{A}$ and $\vec{B}$:

$\beta = a^i\,b_i$

Then we can write:

$\beta_{,j} = a^i_{,j}\,b_i + a^i\,b_{i,j}$

or, using equations (7.2-4) and (7.2-5):

$\beta_{,j} = \left[\frac{\partial a^i}{\partial x^j} + \Gamma^i_{kj}\,a^k\right] b_i + a^i \left[\frac{\partial b_i}{\partial x^j} - \Gamma^k_{ij}\,b_k\right]$

or

$\beta_{,j} = \frac{\partial}{\partial x^j}\left(a^i\,b_i\right) + \Gamma^i_{kj}\,a^k\,b_i - \Gamma^k_{ij}\,a^i\,b_k$

Using $\beta = a^i\,b_i$ and exchanging the role of the dummy indices $i$ and $k$ in the last term, we have:

$\beta_{,j} = \frac{\partial \beta}{\partial x^j} + \Gamma^i_{kj}\,a^k\,b_i - \Gamma^i_{kj}\,a^k\,b_i = \frac{\partial \beta}{\partial x^j}$

Example 7-8

Show that the covariant derivative of the product $a^i_j\,b_k$ follows the same product rule as the ordinary derivative.

Solution:

Let $T^i_{jk} = a^i_j\,b_k$. From equation (7.5-1) we then have:

$T^i_{jk,l} = \frac{\partial T^i_{jk}}{\partial x^l} + \Gamma^i_{rl}\,T^r_{jk} - \Gamma^r_{jl}\,T^i_{rk} - \Gamma^r_{kl}\,T^i_{jr}$

or

$\left(a^i_j\,b_k\right)_{,l} = \frac{\partial}{\partial x^l}\left(a^i_j\,b_k\right) + \Gamma^i_{rl}\,a^r_j\,b_k - \Gamma^r_{jl}\,a^i_r\,b_k - \Gamma^r_{kl}\,a^i_j\,b_r$

We can now write:

$\left(a^i_j\,b_k\right)_{,l} = \frac{\partial a^i_j}{\partial x^l}\,b_k + \frac{\partial b_k}{\partial x^l}\,a^i_j + \Gamma^i_{rl}\,a^r_j\,b_k - \Gamma^r_{jl}\,a^i_r\,b_k - \Gamma^r_{kl}\,a^i_j\,b_r$

Therefore:

$\left(a^i_j\,b_k\right)_{,l} = \left[\frac{\partial a^i_j}{\partial x^l} + \Gamma^i_{rl}\,a^r_j - \Gamma^r_{jl}\,a^i_r\right] b_k + \left[\frac{\partial b_k}{\partial x^l} - \Gamma^r_{kl}\,b_r\right] a^i_j = a^i_{j,l}\,b_k + a^i_j\,b_{k,l}$

7.6 RICCI'S THEOREM

The covariant derivative of the metric tensor can be calculated using equation (7.5-1):

$g_{pq,k} = \frac{\partial g_{pq}}{\partial x^k} - \Gamma^r_{pk}\,g_{rq} - \Gamma^r_{qk}\,g_{pr}$    (7.6-1)

Since the metric tensor is symmetric, we have:

$g_{pq,k} = \frac{\partial g_{pq}}{\partial x^k} - g_{rq}\,\Gamma^r_{pk} - g_{rp}\,\Gamma^r_{qk}$    (7.6-2)

Using equation (7.3-5), we then obtain:

$g_{pq,k} = \frac{\partial g_{pq}}{\partial x^k} - \frac{\partial g_{pq}}{\partial x^k} = 0$    (7.6-3)

From equations (6.4-16) and (7.5-4), we also have:

$g^i_{j,k} = \delta^i_{j,k} = 0$    (7.6-4)

Taking the covariant derivative of the product $g^{ip}\,g_{pj} = \delta^i_j$ given in equation (6.4-14) yields:

$g^{ip}_{,k}\,g_{pj} + g^{ip}\,g_{pj,k} = \delta^i_{j,k}$    (7.6-5)

Using equations (7.6-3) and (7.6-4), we must have:

$g^{ip}_{,k} = 0$    (7.6-6)

The fact that the covariant derivative of the metric tensor is zero is known as Ricci's theorem or Ricci's lemma. The metric tensor can be considered, therefore, to be a constant under all covariant differentiation. We can then write:

$g_{ij,k} = g^{ij}_{,k} = g^i_{j,k} = 0$    (7.6-7)

For an arbitrary tensor $T_{pj}$ we then have:

$T^i_{j,k} = \left(g^{ip}\,T_{pj}\right)_{,k} = g^{ip}\,T_{pj,k}$    (7.6-8)

and so the metric tensor can be used to raise or lower the indices of the covariant derivative of a tensor. Moreover, the operation of raising or lowering the indices can be performed either before or after covariant differentiation.

Example 7-9

For a point vector $\vec{A} = a^i\,\vec{e}_i = a_j\,\vec{e}^{\,j}$ show that:

$\left|\vec{A}\right|_{,k} = \frac{1}{\left|\vec{A}\right|}\,a_j\,a^j_{,k}$

Solution:

From equation (6.8-9) we have:

$\left|\vec{A}\right| = \sqrt{g_{ij}\,a^i\,a^j}$

and so:

$\left|\vec{A}\right|_{,k} = \frac{1}{2\sqrt{g_{ij}\,a^i\,a^j}}\left(g_{ij,k}\,a^i\,a^j + g_{ij}\,a^i_{,k}\,a^j + g_{ij}\,a^i\,a^j_{,k}\right)$

From Ricci's theorem we have $g_{ij,k} = 0$. Lowering indices:

$\left|\vec{A}\right|_{,k} = \frac{1}{2\left|\vec{A}\right|}\left(a^j_{,k}\,a_j + a^i_{,k}\,a_i\right)$

Changing the dummy index $i$ to $j$ in the last term, we then have:

$\left|\vec{A}\right|_{,k} = \frac{1}{\left|\vec{A}\right|}\,a_j\,a^j_{,k}$

! A

7.7!

! ! If A is a vector function defined on a space curve C , then ! the derivative of A along the curve C is given by: ! ! dA ∂ A dx k ! ! = (7.7-1) dt ∂x k dt where t is a parameter of C . Using equation (7.1-30), we have: ! dA dx k ! dx k ! j ! = a,i k ei = aj, k e ! (7.7-2) dt dt dt

δa dx ! = a,i k δt dt

and

!

k δ a i ∂a i dx k i j dx ! = + Γj k a δ t ∂x k dt dt

(7.7-6)

!

δ aj ∂aj dx k dx k = k − Γjik ai ! δ t ∂x dt dt

(7.7-7)

!

δ a i da i dx k ! = + Γjik a j δt dt dt

(7.7-8)

!

δ aj da j dx k = − Γjik ai ! δt dt dt

(7.7-9)

or

From equation (7.7-3) we see that δ a i δ t are contravariant components and δ aj δ t are covariant components of the vector ! dA dt . In coordinate systems where the base vectors do not !

(7.7-3)

where !

(7.7-5)

δ aj δ t are called the absolute derivatives or the intrinsic derivatives of a i and aj , respectively. Both equations (7.7-4) and (7.7-5) are inner products. Using equations (7.2-4) and (7.2-5), we can also write:

ABSOLUTE DERIVATIVE OF A TENSOR

i

δ aj dx k = aj, k ! δt dt

The terms δ a i δ t

1 = ! aj, k a j ,k A

We can write equation (7.7-2) in the form: ! i dA δ a ! δ aj ! j ! = e = e ! dt δt i δt

!

vary with position, Γj ik will be zero. In such cases we see from

k

(7.7-4)

equations (7.7-8) and (7.7-9) that the absolute derivative will equal the ordinary derivative. Absolute differentiation of tensor products follows the rules: 253

(

δ a i bk

!

(

δ aj b k

! !

δt

δt

) = δa

i

δt

) = δa

j

δt

bk + a i

δ bk ! δt

b k + aj

δ bk ! δt

c.! (7.7-10)

(7.7-11)

one can be obtained using equations similar to equations (7.7-4) and (7.7-5): !

δt

=

p p ! a q11q22!, k

dx k ! dt

d.! β where β is a scalar Solution: Using equations (7.7-12), (7.7-8), and (7.7-9), and the results

Absolute differentiation of tensors of rank greater than

δ a qp11qp22!!

(7.7-12)

of Example 7-5, we can write: p δ a ki jl daklij i j dx ⎤ a.! = + ⎡⎣ Γrip aklrj + Γrjp aklir − Γ krp arlij − Γ lpr akr ⎦ dt δt dt

b.!

δ a i j da i j dx p = + ⎡⎣ Γrip a r j + Γr jp a i r ⎤⎦ δt dt dt

c.!

δ a k l da k l dx p = − ⎡⎣ Γ krp arl + Γ lpr akr ⎤⎦ δt dt dt

Absolute derivatives of tensors are tensors of the same kind (rank and variant). From equations (7.7-12) and (7.6-7), we see that absolute derivatives of the metric tensors gi j , g i j , and gji are zero since covariant derivatives of these tensors are zero:

δ gi j

δ g i j δ gji = = = 0! δt δt δt

!

ak j

δβ δβ dx p dβ d.! = = δ t δ xp dt dt

(7.7-13)

Example 7-10 Determine the absolute derivative of each of the following tensors: a.! a ki jj b.! a i j 254

Chapter 8 Tensors in Curvilinear Coordinates

! ! 1 ! ij k A × B = ε ai bj ek G

255

In this chapter we will consider the derivatives of base

Note that since the base vectors eˆq′ of the rectangular coordinate

vectors of curvilinear coordinate systems. We will introduce the

system do not vary with position, their derivatives are zero.

very useful permutation tensor and its application to the vector

From equation (4.8-10), we have: ! ∂ei ∂ 2x ′ q ∂x k ! ∂ 2x ′ q ∂x k ! = e = ek ! ! k ∂x j ∂x i ∂x j ∂ x ′ q ∂x j ∂x i ∂ x ′ q

!

product in curvilinear coordinate systems. We also will discuss relative tensors and pseudotensors. Finally, we will present the gradient operation in curvilinear coordinate systems.

8.1! !

Changing indices of the Christoffel symbol in equation (7.2-25),

DERIVATIVES OF BASE VECTORS OF CURVILINEAR COORDINATE SYSTEMS We will now derive several useful expressions for the

we can write: !

Γi kj

∂x k ∂ x ′ q ∂ x ′ r p ∂x k ∂ 2x ′ q ! = Γ′ + ∂ x ′ p ∂x i ∂x j q r ∂ x ′ q ∂x j ∂x i

from equation (7.2-6):

We will let x1, x 2 , x 3 be a curvilinear coordinate system and

!

( x1′, x2′ , x3′ )

)

be a rectangular coordinate system. The covariant

base vectors of the curvilinear coordinate system can be expressed in terms of the unit base vectors of the rectangular q ! ∂ x′ ˆ ei = eq′ ! ∂x i

Taking the derivative, we have: ! ∂ei ∂ 2x ′ q ∂ 2x ′ q ˆ ! = e = eˆq′ ! ′ q ∂x j ∂x i ∂x j ∂x j ∂x i

(8.1-1)

Γ ′qpr = 0 !

(8.1-5)

and so equation (8.1-4) becomes: !

coordinate system using equation (4.8-9): !

(8.1-4)

Since the primed coordinate system is rectangular, we have

derivatives of base vectors of curvilinear coordinate systems.

(

(8.1-3)

Γi kj

∂x k

∂ 2x ′ q ! = ∂ x ′ q ∂x j ∂x i

Therefore equation (8.1-3) can be rewritten as: ! ∂ei k ! ! ! j = Γi j ek ∂x

(8.1-6)

(8.1-7)

or from equation (7.2-9) (8.1-2) !

∂ 2r k ! ! j i = Γi j ek ∂x ∂x

(8.1-8) 256

!

Equations (8.1-6), (8.1-7), and (8.1-8) are tensor equations

Using equations (8.1-13) and (7.2-16) we then have the

that are valid for all curvilinear coordinate systems that exist in

derivatives of contravariant base vectors of a curvilinear

the same space as the rectangular coordinate system (Euclidean

coordinate system in Euclidean space: ! ∂e i !k i !k ip ! ! j = − Γj k e = − g Γj k p e ∂x

coordinate space). Using equations (7.2-10), (8.1-7), and (7.2-16), we have the derivatives of covariant base vectors of a curvilinear coordinate system in Euclidean coordinate space: ! ! ∂ei ∂ej ! ! k ! kp Γi j p ek = Γi j p e p ! ! (8.1-9) j = i = Γi j ek = g ∂x ∂x !

Example 8-1

( x′ , x′ , x′ ) is an orthonormal curvilinear coordinate system and ( x , x , x ) is a curvilinear coordinate system, If

To obtain derivatives of contravariant base vectors of

(8.1-10)

3

2

3

Γi j s =

∂ x ′ r ∂ 2x ′ r ∂x s ∂x j ∂x i

Solution: From equation (7.3-2) we obtain: !

gk s Γi jk = Γi j s

From equation (8.1-6) we can then write:

From equations (8.1-11) and (6.4-1), we can write:

! ⎡ ∂e i ⎤ ! ! ! ! ⎢ j ⎥ • ek = − Γkij = − Γs ij δ ks = − Γj is δ ks = ⎡⎣ − Γj is e s ⎤⎦ • ek ! (8.1-12) ⎣ ∂x ⎦

!

gk s Γi kj = Γi j s =

∂x k ∂ 2x ′ q gk s ∂ x ′ q ∂x j ∂x i

From equation (6.7-2), we have:

or !

2

show that: !

Using equations (8.1-10) and (8.1-7) we have: ! ∂e i ! !i s ! s i i i ! ! j • ek = − e • Γk j es = − Γk j δ s = − Γk j = − Γj k (8.1-11) ∂x

1

1

curvilinear coordinate systems in Euclidean coordinate space, we can use equation (7.2-2) to write: ! ! ∂e i ! ! i ∂ek ! • ek = − e • j ! ∂x j ∂x

(8.1-14)

! ∂e i i !s ! j = − Γj s e ∂x

(8.1-13)

!

gk s =

∂ x′ r ∂ x′ r ∂x k ∂x s 257

and so: !

Γi j s

!

∂x k ∂ 2x ′ q ∂ x ′ r ∂ x ′ r ∂ x ′ r ∂ 2x ′ q ∂ x ′ r = = ∂ x ′ q ∂x j ∂x i ∂x k ∂x s ∂ x ′ q ∂x j ∂x i ∂x s = δ q′ r

!

Γi j s =

∂x

q

! = Γprq er !

(8.2-1)

For the case of p = q with p = q = i , we obtain: ! ∂ei i ! j ! k ! ! ! (i ≠ j ≠ k , no sum)! (8.2-2) i = Γi i ei + Γ i i ej + Γi i ek ∂x ! Using ei = hi eˆi (no sum) from equation (4.10-3), we can write: !

∂ 2x ′ q ∂ x ′ r ∂x j ∂x i ∂x s

Therefore !

! ∂ep

∂ x ′ r ∂ 2x ′ r ∂x s ∂x j ∂x i

∂( hi eˆi )

!

This equation can be compared with equation (8.1-6).

∂x

i

= Γi ii hi eˆi + Γ iji h j eˆj + Γiki hk eˆk

(i ≠ j ≠ k , no sum)!

!

(8.2-3)

With the expressions for Christoffel symbols given in equations

8.2!

!

DERIVATIVES OF UNIT BASE VECTORS OF ORTHOGONAL CURVILINEAR COORDINATE SYSTEMS Since

unit

base

vectors

of

orthogonal

(7.4-10) and (7.4-11), equation (8.2-3) becomes: !

curvilinear

coordinate systems can vary in direction from point to point along coordinate curves, these base vectors must generally be considered as variables under the operation of differentiation. The rate at which the direction of a unit vector changes with coordinate position can be obtained by calculating the derivative of the unit vector with respect to the coordinate. In Euclidean coordinate space, we have from equation (8.1-9):

hi

∂eˆi ∂x

i

+

!

∂hi

h i ∂h i hi ∂hi 1 ∂hi eˆi = ˆ ˆ ˆ h e − j j i hi ei − j k hk ek 2 2 hi ∂x ∂x ∂x ∂x ( hk ) hj

( )

i

!

(i ≠ j ≠ k , no sum)!

(8.2-4)

1 ∂hi 1 ∂hi ˆ e − eˆk ! (i ≠ j ≠ k , no sum)! j hj ∂x j hk ∂x k

(8.2-5)

or ! !

∂eˆi ∂x i

=−

For the case of p ≠ q with p = j and q = i in equation

(8.2-1), we have: ! ∂ej i ! j ! k ! ! ! i = Γj i ei + Γj i ej + Γj i ek ∂x

( i ≠ j ≠ k , no sum)! (8.2-6) 258

Using $\vec{e}_i = h_i\, \hat{e}_i$ (no sum) from equation (4.10-3), we can write:

$$\frac{\partial ( h_j\, \hat{e}_j )}{\partial x^i} = \Gamma^{\,i}_{j\,i}\, h_i\, \hat{e}_i + \Gamma^{\,j}_{j\,i}\, h_j\, \hat{e}_j + \Gamma^{\,k}_{j\,i}\, h_k\, \hat{e}_k \qquad ( i \ne j \ne k,\ \text{no sum} ) \tag{8.2-7}$$

With equations (7.4-12) and (7.4-9), equation (8.2-7) becomes:

$$h_j\, \frac{\partial \hat{e}_j}{\partial x^i} + \frac{\partial h_j}{\partial x^i}\, \hat{e}_j = \frac{1}{h_i}\, \frac{\partial h_i}{\partial x^j}\, h_i\, \hat{e}_i + \frac{1}{h_j}\, \frac{\partial h_j}{\partial x^i}\, h_j\, \hat{e}_j \qquad ( i \ne j,\ \text{no sum} ) \tag{8.2-8}$$

or

$$\frac{\partial \hat{e}_j}{\partial x^i} = \frac{1}{h_j}\, \frac{\partial h_i}{\partial x^j}\, \hat{e}_i \qquad ( i \ne j,\ \text{no sum} ) \tag{8.2-9}$$

From equations (8.2-5) and (8.2-9), we can obtain the derivatives of unit base vectors for an orthogonal curvilinear coordinate system (see Table 8-1).

Table 8-1. Derivatives of unit base vectors for an orthogonal curvilinear coordinate system.

Two useful relations for the derivatives of unit base vectors in orthogonal curvilinear coordinate systems are:

$$\frac{\partial ( h_2\, h_3\, \hat{e}_1 )}{\partial x^1} + \frac{\partial ( h_1\, h_3\, \hat{e}_2 )}{\partial x^2} + \frac{\partial ( h_1\, h_2\, \hat{e}_3 )}{\partial x^3} = 0 \tag{8.2-10}$$

$$\frac{\hat{e}_i}{h_i} \times \frac{\partial}{\partial x^i} \left( \frac{\hat{e}_j}{h_j} \right) = 0 \qquad ( \text{no sum on } j ) \tag{8.2-11}$$

It is possible to verify equation (8.2-10) using equation (8.2-5), and to verify equation (8.2-11) using equations (8.2-9) and (8.2-5).

Example 8-2

Determine the derivatives of the unit base vectors for the cylindrical and spherical coordinate systems.

Solution:

From Table 4-2 the scale factors for a cylindrical coordinate system are:

$$h_R = 1 \qquad\qquad h_\phi = R \qquad\qquad h_z = 1$$

Therefore, using the results in Table 8-1, we have for the cylindrical coordinate system:

$$\frac{\partial \hat{e}_R}{\partial R} = 0 \qquad \frac{\partial \hat{e}_R}{\partial \phi} = \hat{e}_\phi \qquad \frac{\partial \hat{e}_R}{\partial z} = 0$$

$$\frac{\partial \hat{e}_\phi}{\partial R} = 0 \qquad \frac{\partial \hat{e}_\phi}{\partial \phi} = -\, \hat{e}_R \qquad \frac{\partial \hat{e}_\phi}{\partial z} = 0$$

$$\frac{\partial \hat{e}_z}{\partial R} = 0 \qquad \frac{\partial \hat{e}_z}{\partial \phi} = 0 \qquad \frac{\partial \hat{e}_z}{\partial z} = 0$$

From Table 4-2 the scale factors for a spherical coordinate system are:

$$h_R = 1 \qquad\qquad h_\theta = R \qquad\qquad h_\phi = R \sin\theta$$

Therefore, using the results in Table 8-1, we have for the spherical coordinate system:

$$\frac{\partial \hat{e}_R}{\partial R} = 0 \qquad \frac{\partial \hat{e}_R}{\partial \theta} = \hat{e}_\theta \qquad \frac{\partial \hat{e}_R}{\partial \phi} = \sin\theta\, \hat{e}_\phi$$

$$\frac{\partial \hat{e}_\theta}{\partial R} = 0 \qquad \frac{\partial \hat{e}_\theta}{\partial \theta} = -\, \hat{e}_R \qquad \frac{\partial \hat{e}_\theta}{\partial \phi} = \cos\theta\, \hat{e}_\phi$$

$$\frac{\partial \hat{e}_\phi}{\partial R} = 0 \qquad \frac{\partial \hat{e}_\phi}{\partial \theta} = 0 \qquad \frac{\partial \hat{e}_\phi}{\partial \phi} = -\, \sin\theta\, \hat{e}_R - \cos\theta\, \hat{e}_\theta$$

Example 8-3

Determine the differential position vector $d\vec{r}$ in terms of physical, contravariant, and covariant components for a cylindrical coordinate system.

Solution:

From Example 4-16, the position vector $\vec{r}$ in terms of physical components for a cylindrical coordinate system is:

$$\vec{r} = r^R\, \hat{e}_R + r^\phi\, \hat{e}_\phi + r^z\, \hat{e}_z = R\, \hat{e}_R + z\, \hat{e}_z$$

The differential $d\vec{r}$ of the position vector $\vec{r}$ in terms of physical components for a cylindrical coordinate system is then given by:

$$d\vec{r} = \frac{\partial \vec{r}}{\partial R}\, dR + \frac{\partial \vec{r}}{\partial \phi}\, d\phi + \frac{\partial \vec{r}}{\partial z}\, dz$$

and so:

$$d\vec{r} = \frac{\partial ( R\, \hat{e}_R + z\, \hat{e}_z )}{\partial R}\, dR + \frac{\partial ( R\, \hat{e}_R + z\, \hat{e}_z )}{\partial \phi}\, d\phi + \frac{\partial ( R\, \hat{e}_R + z\, \hat{e}_z )}{\partial z}\, dz$$

or

$$d\vec{r} = dR\, \hat{e}_R + R\, dR\, \frac{\partial \hat{e}_R}{\partial R} + z\, dR\, \frac{\partial \hat{e}_z}{\partial R} + R\, d\phi\, \frac{\partial \hat{e}_R}{\partial \phi} + z\, d\phi\, \frac{\partial \hat{e}_z}{\partial \phi} + R\, dz\, \frac{\partial \hat{e}_R}{\partial z} + dz\, \hat{e}_z + z\, dz\, \frac{\partial \hat{e}_z}{\partial z}$$

Using the derivatives of unit base vectors from Example 8-2, we have $d\vec{r}$ in terms of physical components for a cylindrical coordinate system:

$$d\vec{r} = dR\, \hat{e}_R + R\, d\phi\, \hat{e}_\phi + dz\, \hat{e}_z$$

From equation (6.11-4) and the scale factors from Table 4-2, we have $d\vec{r}$ in terms of contravariant components for a cylindrical coordinate system:

$$d\vec{r} = dR\, \vec{e}_R + d\phi\, \vec{e}_\phi + dz\, \vec{e}_z$$

and from equation (6.11-3) and the scale factors from Table 4-2, we have $d\vec{r}$ in terms of covariant components for a cylindrical coordinate system:

$$d\vec{r} = dR\, \vec{e}^{\,R} + ( R )^2\, d\phi\, \vec{e}^{\,\phi} + dz\, \vec{e}^{\,z}$$

Example 8-4

Determine the differential position vector $d\vec{r}$ in terms of physical, contravariant, and covariant components for a spherical coordinate system.

Solution:

From Example 6-12, the position vector $\vec{r}$ in terms of physical components for a spherical coordinate system is:

$$\vec{r} = R\, \hat{e}_R$$

From Example 4-15, we have the physical components of $\vec{r}$:

$$r^1 = r^R = R \qquad\qquad r^2 = r^\theta = 0 \qquad\qquad r^3 = r^\phi = 0$$

The differential $d\vec{r}$ of the position vector $\vec{r}$ in terms of physical components for a spherical coordinate system is given by:

$$d\vec{r} = \frac{\partial \vec{r}}{\partial R}\, dR + \frac{\partial \vec{r}}{\partial \theta}\, d\theta + \frac{\partial \vec{r}}{\partial \phi}\, d\phi$$

and so:

$$d\vec{r} = \frac{\partial ( R\, \hat{e}_R )}{\partial R}\, dR + \frac{\partial ( R\, \hat{e}_R )}{\partial \theta}\, d\theta + \frac{\partial ( R\, \hat{e}_R )}{\partial \phi}\, d\phi$$

or

$$d\vec{r} = dR\, \hat{e}_R + R\, dR\, \frac{\partial \hat{e}_R}{\partial R} + R\, d\theta\, \frac{\partial \hat{e}_R}{\partial \theta} + R\, d\phi\, \frac{\partial \hat{e}_R}{\partial \phi}$$

Using the derivatives of unit base vectors from Example 8-2, we have $d\vec{r}$ in terms of physical components for a spherical coordinate system:

$$d\vec{r} = dR\, \hat{e}_R + R\, d\theta\, \hat{e}_\theta + R \sin\theta\, d\phi\, \hat{e}_\phi$$

From equation (6.11-4) and the scale factors from Table 4-2, we have $d\vec{r}$ in terms of contravariant components for a spherical coordinate system:

$$d\vec{r} = dR\, \vec{e}_R + d\theta\, \vec{e}_\theta + d\phi\, \vec{e}_\phi$$

and from equation (6.11-3) and the scale factors from Table 4-2, we have $d\vec{r}$ in terms of covariant components for a spherical coordinate system:

$$d\vec{r} = dR\, \vec{e}^{\,R} + ( R )^2\, d\theta\, \vec{e}^{\,\theta} + ( R )^2 \sin^2\theta\, d\phi\, \vec{e}^{\,\phi}$$

8.3 PERMUTATION TENSOR

Although the permutation symbol is not a tensor, it is possible to define a new quantity containing the permutation symbol that is a tensor. We begin by writing equation (6.1-13) as a determinant equation using equations (6.2-1) and (4.13-2). We then have:

$$J = \frac{\partial x'^i}{\partial x^1}\, \frac{\partial x'^j}{\partial x^2}\, \frac{\partial x'^k}{\partial x^3}\, \varepsilon'_{i\,j\,k} \tag{8.3-1}$$

Using equations (4.13-2), (1.18-22), and (1.18-24):

$$J\, \varepsilon_{p\,q\,r} = \frac{\partial x'^i}{\partial x^p}\, \frac{\partial x'^j}{\partial x^q}\, \frac{\partial x'^k}{\partial x^r}\, \varepsilon'_{i\,j\,k} \tag{8.3-2}$$

To determine if the permutation symbol $\varepsilon_{p\,q\,r}$ is a tensor, we can examine how it transforms from a curvilinear coordinate system $( x'^1, x'^2, x'^3 )$ to a curvilinear coordinate system $( x^1, x^2, x^3 )$. Due to the presence of the $J$ factor in equation (8.3-2), we see that the permutation symbol $\varepsilon_{p\,q\,r}$ does not transform as, and therefore is not, a tensor. In a later section, we will show that $\varepsilon_{p\,q\,r}$ is a relative tensor.

From the expression for a Jacobian given in equation (4.13-6), we have:

$$G = ( J )^2\, G' \tag{8.3-3}$$

or, using equation (4.14-8):

$$J = \frac{\sqrt{G}}{\sqrt{G'}} = \frac{1}{J'} \tag{8.3-4}$$

Replacing $J$ in equation (8.3-2) with the expression given in equation (8.3-4), we have:

$$\sqrt{G}\, \varepsilon_{p\,q\,r} = \frac{\partial x'^i}{\partial x^p}\, \frac{\partial x'^j}{\partial x^q}\, \frac{\partial x'^k}{\partial x^r}\, \sqrt{G'}\, \varepsilon'_{i\,j\,k} \tag{8.3-5}$$

Defining:

$$\tilde{\varepsilon}_{p\,q\,r} \equiv \sqrt{G}\, \varepsilon_{p\,q\,r} \tag{8.3-6}$$

we have finally:

$$\tilde{\varepsilon}_{p\,q\,r} = \frac{\partial x'^i}{\partial x^p}\, \frac{\partial x'^j}{\partial x^q}\, \frac{\partial x'^k}{\partial x^r}\, \tilde{\varepsilon}'_{i\,j\,k} \tag{8.3-7}$$

We see from equation (8.3-7) that $\tilde{\varepsilon}_{p\,q\,r}$ transforms as, and therefore is, a covariant tensor of rank three. The tensor $\tilde{\varepsilon}_{p\,q\,r}$ is known as the permutation tensor or the alternating tensor.
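Because the transformation law (8.3-7) involves only first derivatives of the coordinates, it can be spot-checked with a constant (affine) coordinate change, for which the partial derivatives are just matrix entries. A minimal numpy sketch (an illustration added here, assuming a right-handed transformation so that $J > 0$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Constant coordinate change, with A[i, p] = dx'^i / dx^p.
# Keep it right-handed so that the Jacobian is positive.
A = rng.normal(size=(3, 3))
while np.linalg.det(A) < 0.1:
    A = rng.normal(size=(3, 3))

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Take the primed system orthonormal (G' = 1); the unprimed metric is then
# g_pq = (dx'^i/dx^p)(dx'^i/dx^q), with determinant G.
g = A.T @ A
G = np.linalg.det(g)

eps_t_unprimed = np.sqrt(G) * eps   # eq. (8.3-6) in the unprimed system
eps_t_primed = eps                  # eq. (8.3-6) with G' = 1

# Covariant rank-three transformation law, eq. (8.3-7)
pulled_back = np.einsum('ip,jq,kr,ijk->pqr', A, A, A, eps_t_primed)

assert np.allclose(eps_t_unprimed, pulled_back)
```

The bare symbol $\varepsilon_{p\,q\,r}$ would fail this test by the factor $J$, which is exactly the observation made below equation (8.3-2).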

8.4 RECIPROCAL PERMUTATION TENSOR

Using the metric tensor to raise the indices of the tensor $\tilde{\varepsilon}_{p\,q\,r}$, we have:

$$\tilde{\varepsilon}^{\,i\,j\,k} = g^{\,i\,p}\, g^{\,j\,q}\, g^{\,k\,r}\, \tilde{\varepsilon}_{p\,q\,r} \tag{8.4-1}$$

The tensor $\tilde{\varepsilon}^{\,i\,j\,k}$ is known as the reciprocal permutation tensor. Both $\tilde{\varepsilon}_{p\,q\,r}$ and $\tilde{\varepsilon}^{\,i\,j\,k}$ are isotropic tensors (see Section 5.4). From equation (8.3-6), we can write:

$$\tilde{\varepsilon}^{\,i\,j\,k} = g^{\,i\,p}\, g^{\,j\,q}\, g^{\,k\,r}\, \sqrt{G}\, \varepsilon_{p\,q\,r} \tag{8.4-2}$$

From equation (1.17-36) we have $\varepsilon_{p\,q\,r} = \varepsilon^{\,p\,q\,r}$, and so equation (8.4-2) can be written:

$$\tilde{\varepsilon}^{\,i\,j\,k} = \left( g^{\,i\,p}\, g^{\,j\,q}\, g^{\,k\,r}\, \varepsilon_{p\,q\,r} \right) \sqrt{G} \tag{8.4-3}$$

The determinant $1/G$ given in equation (6.2-14) can be expanded in terms of the permutation symbol $\varepsilon_{p\,q\,r}$ as in equations (1.18-20) and (1.18-21):

$$\frac{1}{G} = \begin{vmatrix} g^{1\,1} & g^{1\,2} & g^{1\,3} \\ g^{2\,1} & g^{2\,2} & g^{2\,3} \\ g^{3\,1} & g^{3\,2} & g^{3\,3} \end{vmatrix} = g^{1\,p}\, g^{2\,q}\, g^{3\,r}\, \varepsilon_{p\,q\,r} \tag{8.4-4}$$

Using equations (8.4-4), (1.18-22), and (1.18-24), we can then write:

$$g^{\,i\,p}\, g^{\,j\,q}\, g^{\,k\,r}\, \varepsilon_{p\,q\,r} = \varepsilon^{\,i\,j\,k}\, \frac{1}{G} \tag{8.4-5}$$

With equation (8.4-5) we can rewrite equation (8.4-3) as:

$$\tilde{\varepsilon}^{\,i\,j\,k} = \left[ \varepsilon^{\,i\,j\,k}\, \frac{1}{G} \right] \sqrt{G} \tag{8.4-6}$$

and so the reciprocal permutation tensor can be defined by:

$$\tilde{\varepsilon}^{\,i\,j\,k} = \frac{1}{\sqrt{G}}\, \varepsilon^{\,i\,j\,k} \tag{8.4-7}$$

It is possible to derive a number of equations for permutation and metric tensors, such as:

$$\tilde{\varepsilon}^{\,i\,j\,k}\, \tilde{\varepsilon}^{\,l\,m\,n}\, g_{i\,l}\, g_{j\,m}\, g_{k\,n} = 6 \tag{8.4-8}$$

$$\tilde{\varepsilon}_{i\,j\,k}\, \tilde{\varepsilon}_{l\,m\,n}\, g^{\,i\,l}\, g^{\,j\,m}\, g^{\,k\,n} = 6 \tag{8.4-9}$$

$$\tilde{\varepsilon}^{\,i\,j\,k}\, \tilde{\varepsilon}^{\,l\,m\,n}\, g_{j\,m}\, g_{k\,n} = 2\, g^{\,i\,l} \tag{8.4-10}$$

$$\tilde{\varepsilon}_{i\,j\,k}\, \tilde{\varepsilon}_{l\,m\,n}\, g^{\,j\,m}\, g^{\,k\,n} = 2\, g_{i\,l} \tag{8.4-11}$$

and, corresponding to equation (8.4-1):

$$\tilde{\varepsilon}_{i\,j\,k} = g_{i\,p}\, g_{j\,q}\, g_{k\,r}\, \tilde{\varepsilon}^{\,p\,q\,r} \tag{8.4-12}$$

Example 8-5

Show that:

$$\tilde{\varepsilon}^{\,i\,j\,k}\, \tilde{\varepsilon}^{\,l\,m\,n}\, g_{i\,l}\, g_{j\,m}\, g_{k\,n} = 6$$

Solution:

Using the metric tensors to lower indices, we have:

$$\tilde{\varepsilon}^{\,i\,j\,k}\, \tilde{\varepsilon}^{\,l\,m\,n}\, g_{i\,l}\, g_{j\,m}\, g_{k\,n} = \tilde{\varepsilon}^{\,i\,j\,k}\, \tilde{\varepsilon}_{p\,j\,k}\, \delta^{\,p}_i$$

With equations (8.4-7) and (8.3-6), we can write:

$$\tilde{\varepsilon}^{\,i\,j\,k}\, \tilde{\varepsilon}^{\,l\,m\,n}\, g_{i\,l}\, g_{j\,m}\, g_{k\,n} = \frac{1}{\sqrt{G}}\, \varepsilon^{\,i\,j\,k}\, \sqrt{G}\, \varepsilon_{p\,j\,k}\, \delta^{\,p}_i = \varepsilon^{\,i\,j\,k}\, \varepsilon_{p\,j\,k}\, \delta^{\,p}_i$$

Using equations (1.18-37) and (5.4-4), we then have:

$$\tilde{\varepsilon}^{\,i\,j\,k}\, \tilde{\varepsilon}^{\,l\,m\,n}\, g_{i\,l}\, g_{j\,m}\, g_{k\,n} = 2\, \delta^{\,i}_p\, \delta^{\,p}_i = 2\, \delta^{\,i}_i = 6$$

8.5 PERMUTATION TENSOR RELATIONS

Several useful relations involving permutation tensors will now be derived. We will let $( x'^1, x'^2, x'^3 )$ be an orthonormal curvilinear coordinate system and $( x^1, x^2, x^3 )$ be a curvilinear coordinate system. From equations (8.3-7) and (8.3-6), we can write:

$$\tilde{\varepsilon}_{p\,q\,r} = \frac{\partial x'^i}{\partial x^p}\, \frac{\partial x'^j}{\partial x^q}\, \frac{\partial x'^k}{\partial x^r}\, \sqrt{G'}\, \varepsilon'_{i\,j\,k} \tag{8.5-1}$$

Using the definition of $\varepsilon'_{i\,j\,k}$ as given by equation (1.17-42) for an orthonormal coordinate system, we have:

$$\tilde{\varepsilon}_{p\,q\,r} = \frac{\partial x'^i}{\partial x^p}\, \frac{\partial x'^j}{\partial x^q}\, \frac{\partial x'^k}{\partial x^r}\, \sqrt{G'}\, \left( \hat{e}'_i \times \hat{e}'_j \right) \cdot \hat{e}'_k \tag{8.5-2}$$

Since $( x'^1, x'^2, x'^3 )$ is an orthonormal curvilinear coordinate system, we also have from equations (6.1-8) and (6.2-1):

$$\sqrt{G'} = 1 \tag{8.5-3}$$

Therefore, rewriting equation (8.5-2):

$$\tilde{\varepsilon}_{p\,q\,r} = \frac{\partial x'^i}{\partial x^p}\, \hat{e}'_i \times \frac{\partial x'^j}{\partial x^q}\, \hat{e}'_j \cdot \frac{\partial x'^k}{\partial x^r}\, \hat{e}'_k \tag{8.5-4}$$

or, from equation (4.8-9):

$$\tilde{\varepsilon}_{p\,q\,r} = \vec{e}_p \times \vec{e}_q \cdot \vec{e}_r \tag{8.5-5}$$

Equation (8.5-5) is valid for all curvilinear coordinate systems. This equation can be considered to be a generalization of equation (1.17-42), and it is often used as a definition of permutation tensors.

Another permutation tensor relation can be derived by using equations (8.3-6), (8.3-4), and (8.5-3) to write:

$$\tilde{\varepsilon}_{p\,q\,r} = \sqrt{G}\, \varepsilon_{p\,q\,r} = J\, \sqrt{G'}\, \varepsilon_{p\,q\,r} = J\, \varepsilon_{p\,q\,r} \tag{8.5-6}$$

Using equation (8.5-5) we then have:

$$\tilde{\varepsilon}_{p\,q\,r} = \sqrt{G}\, \varepsilon_{p\,q\,r} = J\, \varepsilon_{p\,q\,r} = \vec{e}_p \times \vec{e}_q \cdot \vec{e}_r \tag{8.5-7}$$

Equation (8.5-5) can be expressed in terms of reciprocal bases and the reciprocal permutation tensor:

$$g^{\,i\,p}\, g^{\,j\,q}\, g^{\,k\,r}\, \tilde{\varepsilon}_{p\,q\,r} = \left( g^{\,i\,p}\, \vec{e}_p \right) \times \left( g^{\,j\,q}\, \vec{e}_q \right) \cdot \left( g^{\,k\,r}\, \vec{e}_r \right) \tag{8.5-8}$$

or

$$\tilde{\varepsilon}^{\,i\,j\,k} = \vec{e}^{\,i} \times \vec{e}^{\,j} \cdot \vec{e}^{\,k} \tag{8.5-9}$$

This equation can be considered a definition of reciprocal permutation tensors. From equations (8.4-7), (8.3-4), and (8.5-3), we can write:

$$\tilde{\varepsilon}^{\,i\,j\,k} = \frac{1}{\sqrt{G}}\, \varepsilon^{\,i\,j\,k} = \frac{1}{J\, \sqrt{G'}}\, \varepsilon^{\,i\,j\,k} = \frac{1}{J}\, \varepsilon^{\,i\,j\,k} \tag{8.5-10}$$

Using equation (8.5-9) we have for all curvilinear coordinate systems:

$$\tilde{\varepsilon}^{\,i\,j\,k} = \frac{1}{\sqrt{G}}\, \varepsilon^{\,i\,j\,k} = \frac{1}{J}\, \varepsilon^{\,i\,j\,k} = \vec{e}^{\,i} \times \vec{e}^{\,j} \cdot \vec{e}^{\,k} \tag{8.5-11}$$

We can now derive a relation between the reciprocal permutation tensor and reciprocal coordinate bases. This relation will be used to develop expressions in curvilinear coordinates for the vector product of two vectors (see Section 8.6) and for the curl of a vector (see Section 9.2). From equations (8.5-11) and (1.18-8), we have:

$$\tilde{\varepsilon}^{\,i\,j\,k} = \left( \vec{e}^{\,j} \times \vec{e}^{\,k} \right) \cdot \vec{e}^{\,i} = \frac{1}{\sqrt{G}}\, \varepsilon^{\,i\,j\,k} \tag{8.5-12}$$

Equation (8.5-12) is satisfied if:

$$\vec{e}^{\,j} \times \vec{e}^{\,k} = \frac{1}{\sqrt{G}}\, \varepsilon^{\,m\,j\,k}\, \vec{e}_m \tag{8.5-13}$$

since, using equation (6.4-1) we then have:

$$\tilde{\varepsilon}^{\,i\,j\,k} = \frac{1}{\sqrt{G}}\, \varepsilon^{\,m\,j\,k}\, \vec{e}_m \cdot \vec{e}^{\,i} = \frac{1}{\sqrt{G}}\, \varepsilon^{\,m\,j\,k}\, \delta^{\,i}_m = \frac{1}{\sqrt{G}}\, \varepsilon^{\,i\,j\,k} \tag{8.5-14}$$

Therefore, changing the dummy index in equation (8.5-13) and using equation (8.5-11), we can write:

$$\vec{e}^{\,j} \times \vec{e}^{\,k} = \frac{1}{\sqrt{G}}\, \varepsilon^{\,i\,j\,k}\, \vec{e}_i = \tilde{\varepsilon}^{\,i\,j\,k}\, \vec{e}_i \tag{8.5-15}$$

With equations (8.5-7) and (8.3-6) we can similarly derive:

$$\vec{e}_j \times \vec{e}_k = \sqrt{G}\, \varepsilon_{i\,j\,k}\, \vec{e}^{\,i} = \tilde{\varepsilon}_{i\,j\,k}\, \vec{e}^{\,i} \tag{8.5-16}$$

A relation between the permutation and metric tensors can be obtained using the definition of the permutation tensor given in equation (8.5-5):

$$\tilde{\varepsilon}_{i\,j\,k}\, \tilde{\varepsilon}_{p\,q\,r} = \left( \vec{e}_i \times \vec{e}_j \cdot \vec{e}_k \right) \left( \vec{e}_p \times \vec{e}_q \cdot \vec{e}_r \right) \tag{8.5-17}$$

and using equation (1.18-49):

$$\tilde{\varepsilon}_{i\,j\,k}\, \tilde{\varepsilon}_{p\,q\,r} = \begin{vmatrix} \vec{e}_i \cdot \vec{e}_p & \vec{e}_i \cdot \vec{e}_q & \vec{e}_i \cdot \vec{e}_r \\ \vec{e}_j \cdot \vec{e}_p & \vec{e}_j \cdot \vec{e}_q & \vec{e}_j \cdot \vec{e}_r \\ \vec{e}_k \cdot \vec{e}_p & \vec{e}_k \cdot \vec{e}_q & \vec{e}_k \cdot \vec{e}_r \end{vmatrix} \tag{8.5-18}$$

From the metric tensor definition given in equation (6.1-4), we can rewrite equation (8.5-18) as:

$$\tilde{\varepsilon}_{i\,j\,k}\, \tilde{\varepsilon}_{p\,q\,r} = \begin{vmatrix} g_{i\,p} & g_{i\,q} & g_{i\,r} \\ g_{j\,p} & g_{j\,q} & g_{j\,r} \\ g_{k\,p} & g_{k\,q} & g_{k\,r} \end{vmatrix} \tag{8.5-19}$$

As shown in Example 8-6, we also have:

$$\tilde{\varepsilon}^{\,i\,j\,k}\, \tilde{\varepsilon}_{p\,q\,r} = \begin{vmatrix} \delta^{\,i}_p & \delta^{\,i}_q & \delta^{\,i}_r \\ \delta^{\,j}_p & \delta^{\,j}_q & \delta^{\,j}_r \\ \delta^{\,k}_p & \delta^{\,k}_q & \delta^{\,k}_r \end{vmatrix} \tag{8.5-20}$$

Example 8-6

Show that:

$$\tilde{\varepsilon}^{\,i\,j\,k}\, \tilde{\varepsilon}_{p\,q\,r} = \begin{vmatrix} \delta^{\,i}_p & \delta^{\,i}_q & \delta^{\,i}_r \\ \delta^{\,j}_p & \delta^{\,j}_q & \delta^{\,j}_r \\ \delta^{\,k}_p & \delta^{\,k}_q & \delta^{\,k}_r \end{vmatrix} = \varepsilon^{\,i\,j\,k}\, \varepsilon_{p\,q\,r}$$

Solution:

Using equations (8.5-11) and (8.5-5), we have:

$$\tilde{\varepsilon}^{\,i\,j\,k}\, \tilde{\varepsilon}_{p\,q\,r} = \left( \vec{e}^{\,i} \times \vec{e}^{\,j} \cdot \vec{e}^{\,k} \right) \left( \vec{e}_p \times \vec{e}_q \cdot \vec{e}_r \right)$$

From equation (1.18-49):

$$\tilde{\varepsilon}^{\,i\,j\,k}\, \tilde{\varepsilon}_{p\,q\,r} = \begin{vmatrix} \vec{e}^{\,i} \cdot \vec{e}_p & \vec{e}^{\,i} \cdot \vec{e}_q & \vec{e}^{\,i} \cdot \vec{e}_r \\ \vec{e}^{\,j} \cdot \vec{e}_p & \vec{e}^{\,j} \cdot \vec{e}_q & \vec{e}^{\,j} \cdot \vec{e}_r \\ \vec{e}^{\,k} \cdot \vec{e}_p & \vec{e}^{\,k} \cdot \vec{e}_q & \vec{e}^{\,k} \cdot \vec{e}_r \end{vmatrix}$$

From equation (6.4-1), $\vec{e}^{\,i} \cdot \vec{e}_p = \delta^{\,i}_p$, and so:

$$\tilde{\varepsilon}^{\,i\,j\,k}\, \tilde{\varepsilon}_{p\,q\,r} = \begin{vmatrix} \delta^{\,i}_p & \delta^{\,i}_q & \delta^{\,i}_r \\ \delta^{\,j}_p & \delta^{\,j}_q & \delta^{\,j}_r \\ \delta^{\,k}_p & \delta^{\,k}_q & \delta^{\,k}_r \end{vmatrix}$$

From equations (8.4-7) and (8.3-6):

$$\tilde{\varepsilon}^{\,i\,j\,k}\, \tilde{\varepsilon}_{p\,q\,r} = \frac{1}{\sqrt{G}}\, \varepsilon^{\,i\,j\,k}\, \sqrt{G}\, \varepsilon_{p\,q\,r} = \varepsilon^{\,i\,j\,k}\, \varepsilon_{p\,q\,r}$$

Since the Kronecker delta is a mixed tensor of rank two, using equation (1.17-36) the $\varepsilon$-$\delta$ identity given in equation (1.18-35) can be written in the form:

$$\varepsilon_{i\,j\,k}\, \varepsilon^{\,k\,l\,m} = \delta^{\,l}_i\, \delta^{\,m}_j - \delta^{\,m}_i\, \delta^{\,l}_j \tag{8.5-21}$$

From equations (8.3-6) and (8.4-7), we then have:

$$\frac{\tilde{\varepsilon}_{i\,j\,k}}{\sqrt{G}}\, \sqrt{G}\, \tilde{\varepsilon}^{\,k\,l\,m} = \delta^{\,l}_i\, \delta^{\,m}_j - \delta^{\,m}_i\, \delta^{\,l}_j \tag{8.5-22}$$

and so, using equation (6.4-16), the $\varepsilon$-$\delta$ identity can be written:

$$\tilde{\varepsilon}_{i\,j\,k}\, \tilde{\varepsilon}^{\,k\,l\,m} = \delta^{\,l}_i\, \delta^{\,m}_j - \delta^{\,m}_i\, \delta^{\,l}_j = g^{\,l}_i\, g^{\,m}_j - g^{\,m}_i\, g^{\,l}_j = \varepsilon_{i\,j\,k}\, \varepsilon^{\,k\,l\,m} \tag{8.5-23}$$

We can also write:

$$g_{r\,k}\, \tilde{\varepsilon}^{\,p\,q\,r}\, \tilde{\varepsilon}^{\,k\,l\,m} = g^{\,l\,p}\, g^{\,m\,q} - g^{\,m\,p}\, g^{\,l\,q} \tag{8.5-24}$$

$$g^{\,r\,k}\, \tilde{\varepsilon}_{p\,q\,r}\, \tilde{\varepsilon}_{k\,l\,m} = g_{l\,p}\, g_{m\,q} - g_{m\,p}\, g_{l\,q} \tag{8.5-25}$$

Example 8-7

Show that:

$$g_{r\,k}\, \tilde{\varepsilon}^{\,p\,q\,r}\, \tilde{\varepsilon}^{\,k\,l\,m} = g^{\,l\,p}\, g^{\,m\,q} - g^{\,m\,p}\, g^{\,l\,q}$$

Solution:

From equation (8.5-23), we have:

$$\tilde{\varepsilon}_{i\,j\,k}\, \tilde{\varepsilon}^{\,k\,l\,m} = g^{\,l}_i\, g^{\,m}_j - g^{\,m}_i\, g^{\,l}_j$$

Raising indices:

$$g^{\,p\,i}\, g^{\,q\,j}\, \tilde{\varepsilon}_{i\,j\,k}\, \tilde{\varepsilon}^{\,k\,l\,m} = g^{\,p\,i}\, g^{\,l}_i\, g^{\,q\,j}\, g^{\,m}_j - g^{\,p\,i}\, g^{\,m}_i\, g^{\,q\,j}\, g^{\,l}_j = g^{\,l\,p}\, g^{\,m\,q} - g^{\,m\,p}\, g^{\,l\,q}$$

and so, since $g^{\,p\,i}\, g^{\,q\,j}\, \tilde{\varepsilon}_{i\,j\,k} = g_{r\,k}\, \tilde{\varepsilon}^{\,p\,q\,r}$:

$$g_{r\,k}\, \tilde{\varepsilon}^{\,p\,q\,r}\, \tilde{\varepsilon}^{\,k\,l\,m} = g^{\,l\,p}\, g^{\,m\,q} - g^{\,m\,p}\, g^{\,l\,q}$$

Additional relations between base vectors and the permutation tensors can be obtained from equations (8.5-7) and (8.5-11):

$$\vec{e}_p \times \vec{e}_q \cdot \vec{e}_r = \tilde{\varepsilon}_{p\,q\,r} = \sqrt{G}\, \varepsilon_{p\,q\,r} \tag{8.5-26}$$

$$\vec{e}^{\,i} \times \vec{e}^{\,j} \cdot \vec{e}^{\,k} = \tilde{\varepsilon}^{\,i\,j\,k} = \frac{1}{\sqrt{G}}\, \varepsilon^{\,i\,j\,k} \tag{8.5-27}$$

Using box notation, we have:

$$\left[ \vec{e}_p\, \vec{e}_q\, \vec{e}_r \right] = \left[ \vec{e}_r\, \vec{e}_p\, \vec{e}_q \right] = \left[ \vec{e}_q\, \vec{e}_r\, \vec{e}_p \right] = \tilde{\varepsilon}_{p\,q\,r} = \sqrt{G}\, \varepsilon_{p\,q\,r} \tag{8.5-28}$$

$$\left[ \vec{e}^{\,i}\, \vec{e}^{\,j}\, \vec{e}^{\,k} \right] = \left[ \vec{e}^{\,k}\, \vec{e}^{\,i}\, \vec{e}^{\,j} \right] = \left[ \vec{e}^{\,j}\, \vec{e}^{\,k}\, \vec{e}^{\,i} \right] = \tilde{\varepsilon}^{\,i\,j\,k} = \frac{1}{\sqrt{G}}\, \varepsilon^{\,i\,j\,k} \tag{8.5-29}$$

Taking the absolute value of equations (8.5-26) and (8.5-27):

$$\left| \vec{e}_p \times \vec{e}_q \cdot \vec{e}_r \right| = \left| \tilde{\varepsilon}_{p\,q\,r} \right| = \left| \sqrt{G}\, \varepsilon_{p\,q\,r} \right| = \sqrt{G}\, \left| \varepsilon_{p\,q\,r} \right| \tag{8.5-30}$$

$$\left| \vec{e}^{\,i} \times \vec{e}^{\,j} \cdot \vec{e}^{\,k} \right| = \left| \tilde{\varepsilon}^{\,i\,j\,k} \right| = \frac{1}{\sqrt{G}}\, \left| \varepsilon^{\,i\,j\,k} \right| \tag{8.5-31}$$

If $p \ne q \ne r$ and $i \ne j \ne k$, from equations (1.17-34) and (1.17-35) and Table 1-3, we have $\left| \varepsilon_{p\,q\,r} \right| = 1$ and $\left| \varepsilon^{\,i\,j\,k} \right| = 1$. Therefore we can write equations (8.5-30) and (8.5-31) as:

$$\left| \vec{e}_p \times \vec{e}_q \cdot \vec{e}_r \right| = \left| \tilde{\varepsilon}_{p\,q\,r} \right| = \sqrt{G} \qquad ( p \ne q \ne r ) \tag{8.5-32}$$
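The generalized $\varepsilon$-$\delta$ identities (8.5-23) and (8.5-24) hold for an arbitrary metric. A numpy spot-check with a randomly generated positive-definite metric (a sketch added here, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random symmetric positive-definite metric g_ij and its inverse g^ij
A = rng.normal(size=(3, 3))
g_lo = A @ A.T + 3 * np.eye(3)
g_hi = np.linalg.inv(g_lo)
G = np.linalg.det(g_lo)

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

eps_lo = np.sqrt(G) * eps       # permutation tensor, eq. (8.3-6)
eps_hi = eps / np.sqrt(G)       # reciprocal permutation tensor, eq. (8.4-7)

delta = np.eye(3)

# Eq. (8.5-23): eps~_ijk eps~^klm = d^l_i d^m_j - d^m_i d^l_j
lhs = np.einsum('ijk,klm->ijlm', eps_lo, eps_hi)
rhs = (np.einsum('il,jm->ijlm', delta, delta)
       - np.einsum('im,jl->ijlm', delta, delta))
assert np.allclose(lhs, rhs)

# Eq. (8.5-24): g_rk eps~^pqr eps~^klm = g^lp g^mq - g^mp g^lq
lhs2 = np.einsum('rk,pqr,klm->pqlm', g_lo, eps_hi, eps_hi)
rhs2 = (np.einsum('lp,mq->pqlm', g_hi, g_hi)
        - np.einsum('mp,lq->pqlm', g_hi, g_hi))
assert np.allclose(lhs2, rhs2)
```

Note how the $\sqrt{G}$ factors cancel in (8.5-23), which is why the identity keeps the same form as its Cartesian counterpart (1.18-35).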

$$\left| \vec{e}^{\,i} \times \vec{e}^{\,j} \cdot \vec{e}^{\,k} \right| = \left| \tilde{\varepsilon}^{\,i\,j\,k} \right| = \frac{1}{\sqrt{G}} \qquad ( i \ne j \ne k ) \tag{8.5-33}$$

We then have:

$$\left[ \vec{e}_p\, \vec{e}_q\, \vec{e}_r \right]^2 = G \qquad ( p \ne q \ne r ) \tag{8.5-34}$$

$$\left[ \vec{e}^{\,i}\, \vec{e}^{\,j}\, \vec{e}^{\,k} \right]^2 = \frac{1}{G} \qquad ( i \ne j \ne k ) \tag{8.5-35}$$

as was given in equations (6.2-3) and (6.4-13). From equations (8.5-34) and (4.8-5), we also have:

$$G = \left[ \vec{e}_p\, \vec{e}_q\, \vec{e}_r \right]^2 = \left[ \frac{\partial \vec{r}}{\partial x^p}\; \frac{\partial \vec{r}}{\partial x^q}\; \frac{\partial \vec{r}}{\partial x^r} \right]^2 \qquad ( p \ne q \ne r ) \tag{8.5-36}$$

The covariant derivative of the permutation tensor and the covariant derivative of the reciprocal permutation tensor can be determined by considering these tensors in a rectangular coordinate system and using equations (4.6-9) and (4.6-10). Equations (8.5-5) and (8.5-9) then become:

$$\tilde{\varepsilon}_{i\,j\,k} = \hat{e}_i \times \hat{e}_j \cdot \hat{e}_k = \pm 1 \tag{8.5-37}$$

$$\tilde{\varepsilon}^{\,i\,j\,k} = \hat{e}^{\,i} \times \hat{e}^{\,j} \cdot \hat{e}^{\,k} = \pm 1 \tag{8.5-38}$$

The plus sign in these equations corresponds to a right-handed coordinate system and the minus sign corresponds to a left-handed coordinate system. Taking the covariant derivative, we have:

$$\tilde{\varepsilon}_{i\,j\,k,\,l} = 0 \tag{8.5-39}$$

$$\tilde{\varepsilon}^{\,i\,j\,k}_{\;\;,\,l} = 0 \tag{8.5-40}$$

Since equations (8.5-39) and (8.5-40) are tensor equations, they are valid for all coordinate systems.

Example 8-8

Show that $\tilde{\varepsilon}_{i\,j\,k,\,l} = 0$.

Solution:

From equation (7.5-1) we have:

$$\tilde{\varepsilon}_{i\,j\,k,\,l} = \frac{\partial \tilde{\varepsilon}_{i\,j\,k}}{\partial x^l} - \Gamma^{\,r}_{i\,l}\, \tilde{\varepsilon}_{r\,j\,k} - \Gamma^{\,r}_{j\,l}\, \tilde{\varepsilon}_{i\,r\,k} - \Gamma^{\,r}_{k\,l}\, \tilde{\varepsilon}_{i\,j\,r}$$

We see that we will have $\tilde{\varepsilon}_{i\,j\,k,\,l} = 0$ if any two of the indices $i$, $j$, or $k$ are equal. Using equation (8.3-6) we can write:

$$\tilde{\varepsilon}_{i\,j\,k,\,l} = \frac{\partial \left( \sqrt{G}\, \varepsilon_{i\,j\,k} \right)}{\partial x^l} - \Gamma^{\,r}_{i\,l}\, \sqrt{G}\, \varepsilon_{r\,j\,k} - \Gamma^{\,r}_{j\,l}\, \sqrt{G}\, \varepsilon_{i\,r\,k} - \Gamma^{\,r}_{k\,l}\, \sqrt{G}\, \varepsilon_{i\,j\,r}$$

Letting $i = 1$, $j = 2$, and $k = 3$:

$$\tilde{\varepsilon}_{1\,2\,3,\,l} = \frac{\partial \left( \sqrt{G}\, \varepsilon_{1\,2\,3} \right)}{\partial x^l} - \Gamma^{\,r}_{1\,l}\, \sqrt{G}\, \varepsilon_{r\,2\,3} - \Gamma^{\,r}_{2\,l}\, \sqrt{G}\, \varepsilon_{1\,r\,3} - \Gamma^{\,r}_{3\,l}\, \sqrt{G}\, \varepsilon_{1\,2\,r}$$

Therefore the only nonzero terms are:

$$\tilde{\varepsilon}_{1\,2\,3,\,l} = \frac{\partial \left( \sqrt{G}\, \varepsilon_{1\,2\,3} \right)}{\partial x^l} - \Gamma^{\,1}_{1\,l}\, \sqrt{G}\, \varepsilon_{1\,2\,3} - \Gamma^{\,2}_{2\,l}\, \sqrt{G}\, \varepsilon_{1\,2\,3} - \Gamma^{\,3}_{3\,l}\, \sqrt{G}\, \varepsilon_{1\,2\,3}$$

or

$$\tilde{\varepsilon}_{1\,2\,3,\,l} = \frac{\partial \sqrt{G}}{\partial x^l} - \Gamma^{\,1}_{1\,l}\, \sqrt{G} - \Gamma^{\,2}_{2\,l}\, \sqrt{G} - \Gamma^{\,3}_{3\,l}\, \sqrt{G} = \frac{\partial \sqrt{G}}{\partial x^l} - \Gamma^{\,p}_{p\,l}\, \sqrt{G}$$

From equation (7.3-15) we then have:

$$\tilde{\varepsilon}_{1\,2\,3,\,l} = \frac{\partial \sqrt{G}}{\partial x^l} - \frac{1}{\sqrt{G}}\, \frac{\partial \sqrt{G}}{\partial x^l}\, \sqrt{G} = 0$$

8.6 VECTOR PRODUCT IN CURVILINEAR COORDINATE SYSTEMS

The vector product of any two point vectors $\vec{A}$ and $\vec{B}$ in a curvilinear coordinate system can be written as:

$$\vec{A} \times \vec{B} = a^i\, \vec{e}_i \times b^j\, \vec{e}_j = a^i\, b^j\, \vec{e}_i \times \vec{e}_j \tag{8.6-1}$$

Using equation (8.5-16) we then have:

$$\vec{A} \times \vec{B} = a^i\, b^j\, \tilde{\varepsilon}_{i\,j\,k}\, \vec{e}^{\,k} = \varepsilon_{i\,j\,k}\, a^i\, b^j\, \sqrt{G}\, \vec{e}^{\,k} \tag{8.6-2}$$

If we express the vector product of the two point vectors $\vec{A}$ and $\vec{B}$ in a curvilinear coordinate system as:

$$\vec{A} \times \vec{B} = a_i\, \vec{e}^{\,i} \times b_j\, \vec{e}^{\,j} = a_i\, b_j\, \vec{e}^{\,i} \times \vec{e}^{\,j} \tag{8.6-3}$$

then we can use equation (8.5-15) to obtain:

$$\vec{A} \times \vec{B} = a_i\, b_j\, \tilde{\varepsilon}^{\,i\,j\,k}\, \vec{e}_k = \frac{1}{\sqrt{G}}\, \varepsilon^{\,i\,j\,k}\, a_i\, b_j\, \vec{e}_k \tag{8.6-4}$$

Example 8-9

If $\vec{A}$, $\vec{B}$, and $\vec{C}$ are point vectors, express the scalar triple product $\vec{A} \times \vec{B} \cdot \vec{C}$ in terms of contravariant and covariant components for a curvilinear coordinate system.

Solution:

From equations (8.6-2) and (8.6-4) we have:

$$\vec{A} \times \vec{B} = a^i\, b^j\, \tilde{\varepsilon}_{i\,j\,k}\, \vec{e}^{\,k} = a_i\, b_j\, \tilde{\varepsilon}^{\,i\,j\,k}\, \vec{e}_k$$

and so:

$$\vec{A} \times \vec{B} \cdot \vec{C} = a^i\, b^j\, \tilde{\varepsilon}_{i\,j\,k}\, \vec{e}^{\,k} \cdot c^l\, \vec{e}_l = a_i\, b_j\, \tilde{\varepsilon}^{\,i\,j\,k}\, \vec{e}_k \cdot c_l\, \vec{e}^{\,l}$$

or

$$\vec{A} \times \vec{B} \cdot \vec{C} = a^i\, b^j\, c^l\, \tilde{\varepsilon}_{i\,j\,k}\, \delta^{\,k}_l = a_i\, b_j\, c_l\, \tilde{\varepsilon}^{\,i\,j\,k}\, \delta^{\,l}_k$$

and so finally:

$$\vec{A} \times \vec{B} \cdot \vec{C} = a^i\, b^j\, c^k\, \tilde{\varepsilon}_{i\,j\,k} = a_i\, b_j\, c_k\, \tilde{\varepsilon}^{\,i\,j\,k}$$
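Equation (8.6-2) can be exercised numerically by building an oblique basis, forming the metric and the reciprocal basis from it, and comparing against an ordinary Cartesian cross product. A numpy sketch (the basis here is random and taken right-handed; it is an illustration, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(2)

# A right-handed oblique basis: rows of E are the covariant base vectors e_i
E = rng.normal(size=(3, 3))
if np.linalg.det(E) < 0:
    E[2] = -E[2]                  # flip one row to make the basis right-handed

g = E @ E.T                       # metric g_ij = e_i . e_j
G = np.linalg.det(g)
E_recip = np.linalg.inv(E).T      # rows are the reciprocal base vectors e^k

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Two vectors given by contravariant components a^i and b^j
a, b = rng.normal(size=3), rng.normal(size=3)
A = a @ E                         # A = a^i e_i as a Cartesian array
B = b @ E

# Equation (8.6-2): A x B = sqrt(G) eps_ijk a^i b^j e^k
cross_curvi = np.sqrt(G) * np.einsum('ijk,i,j,kx->x', eps, a, b, E_recip)

assert np.allclose(cross_curvi, np.cross(A, B))
```

The result lands on the reciprocal basis because the cross product of two contravariant vectors naturally carries covariant components, as equation (8.6-2) shows.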

8.7 AREA AND VOLUME ELEMENTS IN CURVILINEAR COORDINATE SYSTEMS

We will now consider how area and volume elements can be represented in a curvilinear coordinate system $( x^1, x^2, x^3 )$. We will begin by recalling that the point vector components whose scale dimensions correspond to physical measurements of length are physical components (see Sections 4.10 and 4.12). These components are associated with unit base vectors that are tangent to the coordinate curves. The differential elements of length along the three coordinate curves $x^1$, $x^2$, and $x^3$ are then given by $h_1\, dx^1$, $h_2\, dx^2$, and $h_3\, dx^3$, respectively, as is shown in Example 4-12 and Figure C-1 of Appendix C.

8.7.1 AREA ELEMENTS

A differential element of area within a coordinate surface for which $x^1$ is a constant is written $dS_1$. It has the shape of a parallelogram, and it is given by the magnitude of the vector product of its differential edges:

$$dS_1 = \left| h_2\, dx^2\, \hat{e}_2 \times h_3\, dx^3\, \hat{e}_3 \right| = \left| h_2\, \hat{e}_2 \times h_3\, \hat{e}_3 \right| dx^2\, dx^3 \tag{8.7-1}$$

We can rewrite this expression using equation (4.10-3):

$$dS_1 = \left| \vec{e}_2 \times \vec{e}_3 \right| dx^2\, dx^3 \tag{8.7-2}$$

or

$$dS_1 = \sqrt{\left( \vec{e}_2 \times \vec{e}_3 \right) \cdot \left( \vec{e}_2 \times \vec{e}_3 \right)}\; dx^2\, dx^3 \tag{8.7-3}$$

From the vector relation given in equation (1.19-7), we can write:

$$dS_1 = \sqrt{\left( \vec{e}_2 \cdot \vec{e}_2 \right) \left( \vec{e}_3 \cdot \vec{e}_3 \right) - \left( \vec{e}_2 \cdot \vec{e}_3 \right) \left( \vec{e}_3 \cdot \vec{e}_2 \right)}\; dx^2\, dx^3 \tag{8.7-4}$$

From equations (6.1-4) and (6.1-6), we have:

$$dS_1 = \sqrt{g_{2\,2}\, g_{3\,3} - g_{2\,3}\, g_{2\,3}}\; dx^2\, dx^3 \tag{8.7-5}$$

A differential area element in a curvilinear coordinate system is then given by:

$$dS_i = \sqrt{g_{j\,j}\, g_{k\,k} - g_{j\,k}\, g_{j\,k}}\; dx^j\, dx^k \qquad ( i \ne j \ne k,\ \text{no sum} ) \tag{8.7-6}$$

A slightly different form for the area differential $dS_1$ can be developed by using equations (8.7-2) and (8.5-16) to obtain:

$$dS_1 = \left| \sqrt{G}\, \varepsilon_{1\,2\,3}\, \vec{e}^{\,1} \right| dx^2\, dx^3 \tag{8.7-7}$$

Since $\varepsilon_{1\,2\,3} = 1$, we have:

$$dS_1 = \sqrt{G}\, \left| \vec{e}^{\,1} \right| dx^2\, dx^3 = \sqrt{G}\, \sqrt{\vec{e}^{\,1} \cdot \vec{e}^{\,1}}\; dx^2\, dx^3 \tag{8.7-8}$$

or, using equation (6.4-11):

$$dS_1 = \sqrt{G}\, \sqrt{g^{1\,1}}\; dx^2\, dx^3 \tag{8.7-9}$$

and so:

$$dS_i = \sqrt{G}\, \sqrt{g^{\,i\,i}}\; dx^j\, dx^k \qquad ( i \ne j \ne k,\ \text{no sum} ) \tag{8.7-10}$$

is the differential area element in a curvilinear coordinate system.
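The two forms (8.7-6) and (8.7-10) of the area element must agree. This amounts to the cofactor identity $G\, g^{1\,1} = g_{2\,2}\, g_{3\,3} - ( g_{2\,3} )^2$, which can be spot-checked with numpy (a sketch added here, using a random positive-definite metric):

```python
import numpy as np

rng = np.random.default_rng(3)

# Random symmetric positive-definite metric g_ij
A = rng.normal(size=(3, 3))
g = A @ A.T + 3 * np.eye(3)

G = np.linalg.det(g)
g_inv = np.linalg.inv(g)

# Equations (8.7-5) and (8.7-9) give the same coefficient of dx^2 dx^3:
# sqrt(g_22 g_33 - g_23 g_23) == sqrt(G) sqrt(g^11)
lhs = np.sqrt(g[1, 1] * g[2, 2] - g[1, 2] * g[2, 1])
rhs = np.sqrt(G) * np.sqrt(g_inv[0, 0])

assert np.isclose(lhs, rhs)
```

The same check works for $dS_2$ and $dS_3$ by permuting the indices.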

Example 8-10

Determine the differential area element $dS$ for a surface $S$ described by the position vector $\vec{r}\,( u, v )$.

Solution:

The parameters $u$ and $v$ serve as coordinates on the surface $S$ (see Section 2.14). From equation (4.8-5) we have:

$$\vec{e}_u = \frac{\partial \vec{r}}{\partial u} \qquad\qquad \vec{e}_v = \frac{\partial \vec{r}}{\partial v}$$

From equation (8.7-2) we can then write:

$$dS = \left| \vec{e}_u \times \vec{e}_v \right| du\, dv$$

Therefore:

$$dS = \left| \frac{\partial \vec{r}}{\partial u} \times \frac{\partial \vec{r}}{\partial v} \right| du\, dv$$

as was given in equation (2.14-3).

8.7.2 VOLUME ELEMENTS

A differential element of volume $dV$ in a curvilinear coordinate system is given by:

$$dV = \left| h_i\, dx^i\, \hat{e}_i \times h_j\, dx^j\, \hat{e}_j \cdot h_k\, dx^k\, \hat{e}_k \right| \qquad ( i \ne j \ne k,\ \text{no sum} ) \tag{8.7-11}$$

We can rewrite this expression using equation (4.10-3):

$$dV = \left| dx^i\, \vec{e}_i \times dx^j\, \vec{e}_j \cdot dx^k\, \vec{e}_k \right| \qquad ( i \ne j \ne k,\ \text{no sum} ) \tag{8.7-12}$$

or

$$dV = \left| \vec{e}_i \times \vec{e}_j \cdot \vec{e}_k \right| dx^i\, dx^j\, dx^k \qquad ( i \ne j \ne k,\ \text{no sum} ) \tag{8.7-13}$$

From equation (8.5-32) we then have:

$$dV = \left| \tilde{\varepsilon}_{i\,j\,k} \right| dx^i\, dx^j\, dx^k \qquad ( i \ne j \ne k,\ \text{no sum} ) \tag{8.7-14}$$

or

$$dV = \sqrt{G}\, dx^1\, dx^2\, dx^3 \tag{8.7-15}$$

in a curvilinear coordinate system. Using equations (8.3-4), (4.13-1), and (4.13-2), we can write equation (8.7-15) in the form:

$$dV = J\, \sqrt{G'}\, dx^1\, dx^2\, dx^3 = \sqrt{G'}\, dx'^1\, dx'^2\, dx'^3 = dV' \tag{8.7-16}$$

and so the differential volume element $dV$ is a scalar. This result is also evident from equation (8.7-13).

8.8 AREA AND VOLUME ELEMENTS IN ORTHOGONAL CURVILINEAR COORDINATE SYSTEMS

For an orthogonal curvilinear coordinate system, the differential area element $dS_i$ can be determined using equation (8.7-6):

$$dS_i = \sqrt{g_{j\,j}\, g_{k\,k}}\; dx^j\, dx^k \qquad ( i \ne j \ne k,\ \text{no sum} ) \tag{8.8-1}$$

where we have used equation (6.1-7). In terms of the scale factors given in equation (6.9-2), the area element in an orthogonal curvilinear coordinate system can be written:

$$dS_i = h_j\, h_k\, dx^j\, dx^k \qquad ( i \ne j \ne k,\ \text{no sum} ) \tag{8.8-2}$$

For an orthogonal curvilinear coordinate system, the volume element $dV$ can be determined using equation (8.7-15), where $G$ is given by equation (6.6-3):

$$dV = \sqrt{g_{1\,1}\, g_{2\,2}\, g_{3\,3}}\; dx^1\, dx^2\, dx^3 \tag{8.8-3}$$

In terms of scale factors, we have:

$$dV = h_1\, h_2\, h_3\, dx^1\, dx^2\, dx^3 \tag{8.8-4}$$

For an orthonormal coordinate system, we then have:

$$dV = dx^1\, dx^2\, dx^3 \tag{8.8-5}$$

Example 8-11

Determine the differential volume element $dV$ in both the cylindrical and spherical coordinate systems.

Solution:

From Table 4-2, the scale factors in a cylindrical coordinate system are:

$$h_R = 1 \qquad\qquad h_\phi = R \qquad\qquad h_z = 1$$

From equation (8.8-4) we then have the differential volume element $dV$ in a cylindrical coordinate system:

$$dV = R\, dR\, d\phi\, dz$$

From Table 4-2, the scale factors in a spherical coordinate system are:

$$h_R = 1 \qquad\qquad h_\theta = R \qquad\qquad h_\phi = R \sin\theta$$

From equation (8.8-4) we then have the differential volume element $dV$ in a spherical coordinate system:

$$dV = ( R )^2 \sin\theta\, dR\, d\theta\, d\phi$$

8.9 RELATIVE TENSORS

A tensor $T^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots}$ is termed a relative tensor of weight $w$ if the coordinate system transformation law that it follows is:

$$T'^{\,i_1 i_2 \cdots}_{j_1 j_2 \cdots} = \left[ \det \left\{ \frac{\partial x^j}{\partial x'^i} \right\} \right]^w \frac{\partial x'^{i_1}}{\partial x^{p_1}}\, \frac{\partial x'^{i_2}}{\partial x^{p_2}} \cdots \frac{\partial x^{q_1}}{\partial x'^{j_1}}\, \frac{\partial x^{q_2}}{\partial x'^{j_2}} \cdots\; T^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots} \tag{8.9-1}$$

Equation (8.9-1) can be compared to equation (5.1-9). From equation (4.14-3), we have:

$$T'^{\,i_1 i_2 \cdots}_{j_1 j_2 \cdots} = \left[ J' \right]^w\, \frac{\partial x'^{i_1}}{\partial x^{p_1}}\, \frac{\partial x'^{i_2}}{\partial x^{p_2}} \cdots \frac{\partial x^{q_1}}{\partial x'^{j_1}}\, \frac{\partial x^{q_2}}{\partial x'^{j_2}} \cdots\; T^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots} \tag{8.9-2}$$

and from equation (4.14-8), we have:

$$T'^{\,i_1 i_2 \cdots}_{j_1 j_2 \cdots} = \frac{1}{\left[ J \right]^w}\, \frac{\partial x'^{i_1}}{\partial x^{p_1}}\, \frac{\partial x'^{i_2}}{\partial x^{p_2}} \cdots \frac{\partial x^{q_1}}{\partial x'^{j_1}}\, \frac{\partial x^{q_2}}{\partial x'^{j_2}} \cdots\; T^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots} \tag{8.9-3}$$

or

$$T'^{\,i_1 i_2 \cdots}_{j_1 j_2 \cdots} = \left[ J \right]^{-w}\, \frac{\partial x'^{i_1}}{\partial x^{p_1}}\, \frac{\partial x'^{i_2}}{\partial x^{p_2}} \cdots \frac{\partial x^{q_1}}{\partial x'^{j_1}}\, \frac{\partial x^{q_2}}{\partial x'^{j_2}} \cdots\; T^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots} \tag{8.9-4}$$

where $J$ and $J'$ are the Jacobians given in equations (4.13-2) and (4.14-3), respectively. The weight $w$ is always an integer. Tensors for which $w = 0$ are referred to simply as tensors or as absolute tensors. A relative tensor of weight $w = +1$ is called a tensor density, and a relative tensor of weight $w = -1$ is called a tensor capacity. A relative tensor of rank one is a relative vector, and a relative tensor of rank zero is a relative scalar.

Relative tensors of the same kind (rank and variance) and weight may be added and subtracted, with the result being a relative tensor of the same kind and weight. Relative tensors may also be multiplied (inner or outer products), with the weight of the product being the sum of the weights of the relative tensors that are factors. Contraction of a relative tensor results in a relative tensor having the same weight as the original relative tensor.

Example 8-12

Show that the following are relative tensors and determine their weight:

a. $G$

b. $\sqrt{G}$

c. $\varepsilon_{i\,j\,k}$

d. $\varepsilon^{\,i\,j\,k}$

Solution:

a. From equation (8.3-3) we have the transformation law for $G$:

$$G' = \frac{1}{( J )^2}\, G = ( J' )^2\, G$$

Therefore $G$ is a relative scalar of weight $w = +2$.

b. From equation (8.3-3) we have the transformation law for $\sqrt{G}$:

$$\sqrt{G'} = \frac{1}{J}\, \sqrt{G} = J'\, \sqrt{G}$$

Therefore $\sqrt{G}$ is a relative scalar of weight $w = +1$.

c. From equation (8.3-2) we have:

$$J'\, \varepsilon'_{p\,q\,r} = \frac{\partial x^i}{\partial x'^p}\, \frac{\partial x^j}{\partial x'^q}\, \frac{\partial x^k}{\partial x'^r}\, \varepsilon_{i\,j\,k}$$

or

$$\varepsilon'_{p\,q\,r} = \left[ J' \right]^{-1}\, \frac{\partial x^i}{\partial x'^p}\, \frac{\partial x^j}{\partial x'^q}\, \frac{\partial x^k}{\partial x'^r}\, \varepsilon_{i\,j\,k}$$

Therefore $\varepsilon_{i\,j\,k}$ is a relative rank three tensor of weight $w = -1$. Note that the tensor $\tilde{\varepsilon}_{i\,j\,k}$ is an absolute tensor.

d. Since $\tilde{\varepsilon}^{\,i\,j\,k}$ is a rank three absolute tensor, we can write:

$$\tilde{\varepsilon}'^{\,p\,q\,r} = \frac{\partial x'^p}{\partial x^i}\, \frac{\partial x'^q}{\partial x^j}\, \frac{\partial x'^r}{\partial x^k}\, \tilde{\varepsilon}^{\,i\,j\,k}$$

From equation (8.4-7) we have:

$$\frac{1}{\sqrt{G'}}\, \varepsilon'^{\,p\,q\,r} = \frac{\partial x'^p}{\partial x^i}\, \frac{\partial x'^q}{\partial x^j}\, \frac{\partial x'^r}{\partial x^k}\, \frac{1}{\sqrt{G}}\, \varepsilon^{\,i\,j\,k}$$

Using equation (8.3-4), we have the transformation law:

$$\varepsilon'^{\,p\,q\,r} = J'\, \frac{\partial x'^p}{\partial x^i}\, \frac{\partial x'^q}{\partial x^j}\, \frac{\partial x'^r}{\partial x^k}\, \varepsilon^{\,i\,j\,k}$$

Therefore $\varepsilon^{\,i\,j\,k}$ is a relative rank three tensor of weight $w = +1$.

Example 8-13

If $a^{i\,j}$ and $b_{k\,l}$ are relative tensors of weights $w_1$ and $w_2$, respectively, show that the outer product $a^{i\,j}\, b_{k\,l}$ is a relative tensor of weight $w_1 + w_2$.

Solution:

Using equation (8.9-2), we have the transformation laws:

$$a'^{\,p\,q} = \left[ J' \right]^{w_1}\, \frac{\partial x'^p}{\partial x^i}\, \frac{\partial x'^q}{\partial x^j}\, a^{i\,j}$$

$$b'_{r\,s} = \left[ J' \right]^{w_2}\, \frac{\partial x^k}{\partial x'^r}\, \frac{\partial x^l}{\partial x'^s}\, b_{k\,l}$$

and so:

$$a'^{\,p\,q}\, b'_{r\,s} = \left[ J' \right]^{w_1} \left[ J' \right]^{w_2}\, \frac{\partial x'^p}{\partial x^i}\, \frac{\partial x'^q}{\partial x^j}\, \frac{\partial x^k}{\partial x'^r}\, \frac{\partial x^l}{\partial x'^s}\, a^{i\,j}\, b_{k\,l}$$

or

$$a'^{\,p\,q}\, b'_{r\,s} = \left[ J' \right]^{w_1 + w_2}\, \frac{\partial x'^p}{\partial x^i}\, \frac{\partial x'^q}{\partial x^j}\, \frac{\partial x^k}{\partial x'^r}\, \frac{\partial x^l}{\partial x'^s}\, a^{i\,j}\, b_{k\,l}$$

Therefore $a^{i\,j}\, b_{k\,l}$ transforms as, and so is, a relative tensor of weight $w_1 + w_2$.
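The weight-two transformation of $G$ in Example 8-12a is easy to confirm numerically for a constant coordinate change, where $J'$ is simply the determinant of the matrix of partial derivatives. A numpy sketch (all matrices random, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(4)

# Constant coordinate change with M[p, i] = dx^p / dx'^i, so that J' = det(M)
M = rng.normal(size=(3, 3))

# A metric in the unprimed system, pulled back to the primed system:
# g'_ij = (dx^p/dx'^i)(dx^q/dx'^j) g_pq
A = rng.normal(size=(3, 3))
g = A @ A.T + 3 * np.eye(3)
g_primed = M.T @ g @ M

Jp = np.linalg.det(M)

# Example 8-12a: G' = (J')^2 G, so G is a relative scalar of weight +2
assert np.isclose(np.linalg.det(g_primed), Jp**2 * np.linalg.det(g))
```

The same setup with square roots of the determinants illustrates part b, provided $J' > 0$.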

8.10 COVARIANT DERIVATIVE OF A RELATIVE TENSOR

Compared to covariant derivatives of absolute tensors, covariant derivatives of relative tensors have an additional term with the weight $w$ as a factor. We will now show that this is the case for a relative scalar $\sigma$. The coordinate transformation law for $\sigma$ is given by equation (8.9-2):

$$\sigma' = ( J' )^w\, \sigma \tag{8.10-1}$$

Since the covariant derivative of a scalar is a covariant point vector, the transformation law for the covariant derivative of a relative scalar must have the form:

$$\sigma'_{,\,m} = ( J' )^w\, \frac{\partial x^k}{\partial x'^m}\, \sigma_{,\,k} \tag{8.10-2}$$

To determine $\sigma'_{,\,m}$ we begin by taking the derivative of equation (8.10-1) with respect to $x'^m$:

$$\frac{\partial \sigma'}{\partial x'^m} = ( J' )^w\, \frac{\partial x^k}{\partial x'^m}\, \frac{\partial \sigma}{\partial x^k} + w\, ( J' )^{w-1}\, \frac{\partial J'}{\partial x'^m}\, \sigma \tag{8.10-3}$$

Using equation (4.15-8) we can write:

$$\frac{\partial J'}{\partial x'^m} = \frac{\partial^2 x^k}{\partial x'^m\, \partial x'^r}\, \frac{\partial x'^r}{\partial x^k}\, J' \tag{8.10-4}$$

and so equation (8.10-3) becomes:

$$\frac{\partial \sigma'}{\partial x'^m} = ( J' )^w\, \frac{\partial x^k}{\partial x'^m}\, \frac{\partial \sigma}{\partial x^k} + w\, ( J' )^w\, \frac{\partial^2 x^k}{\partial x'^m\, \partial x'^r}\, \frac{\partial x'^r}{\partial x^k}\, \sigma \tag{8.10-5}$$

We can rewrite equation (7.2-25) as:

$$\frac{\partial^2 x^k}{\partial x'^m\, \partial x'^r}\, \frac{\partial x'^r}{\partial x^k} = \Gamma'^{\,r}_{r\,m} - \frac{\partial x'^p}{\partial x^r}\, \frac{\partial x^q}{\partial x'^p}\, \frac{\partial x^k}{\partial x'^m}\, \Gamma^{\,r}_{q\,k} \tag{8.10-6}$$

or

$$\frac{\partial^2 x^k}{\partial x'^m\, \partial x'^r}\, \frac{\partial x'^r}{\partial x^k} = \Gamma'^{\,r}_{r\,m} - \delta^{\,q}_r\, \frac{\partial x^k}{\partial x'^m}\, \Gamma^{\,r}_{q\,k} \tag{8.10-7}$$

and so we have:

$$\frac{\partial^2 x^k}{\partial x'^m\, \partial x'^r}\, \frac{\partial x'^r}{\partial x^k} = \Gamma'^{\,r}_{r\,m} - \frac{\partial x^k}{\partial x'^m}\, \Gamma^{\,r}_{r\,k} \tag{8.10-8}$$

Equation (8.10-5) can then be written:

$$\frac{\partial \sigma'}{\partial x'^m} = ( J' )^w\, \frac{\partial x^k}{\partial x'^m}\, \frac{\partial \sigma}{\partial x^k} + w\, ( J' )^w\, \Gamma'^{\,r}_{r\,m}\, \sigma - w\, ( J' )^w\, \frac{\partial x^k}{\partial x'^m}\, \Gamma^{\,r}_{r\,k}\, \sigma \tag{8.10-9}$$

or

$$\frac{\partial \sigma'}{\partial x'^m} - w\, ( J' )^w\, \Gamma'^{\,r}_{r\,m}\, \sigma = ( J' )^w\, \frac{\partial x^k}{\partial x'^m} \left[ \frac{\partial \sigma}{\partial x^k} - w\, \Gamma^{\,r}_{r\,k}\, \sigma \right] \tag{8.10-10}$$

Using equation (8.10-1) we have:

$$\frac{\partial \sigma'}{\partial x'^m} - w\, \Gamma'^{\,r}_{r\,m}\, \sigma' = ( J' )^w\, \frac{\partial x^k}{\partial x'^m} \left[ \frac{\partial \sigma}{\partial x^k} - w\, \Gamma^{\,r}_{r\,k}\, \sigma \right] \tag{8.10-11}$$

From equation (8.10-2) we see that equation (8.10-11) has the correct form for the transformation law of a covariant derivative of a relative scalar $\sigma$ provided the covariant derivative of $\sigma$ is given by the equation:

$$\sigma_{,\,k} = \frac{\partial \sigma}{\partial x^k} - w\, \Gamma^{\,r}_{r\,k}\, \sigma \tag{8.10-12}$$

Similarly it can be shown that the covariant derivatives of a relative vector $V^j$ and of a relative tensor of rank two $T_i^{\;j}$ are given by:

$$V^{j}_{\;,\,k} = \frac{\partial V^j}{\partial x^k} + \Gamma^{\,j}_{r\,k}\, V^r - w\, \Gamma^{\,r}_{r\,k}\, V^j \tag{8.10-13}$$

$$T_i^{\;j}{}_{,\,k} = \frac{\partial T_i^{\;j}}{\partial x^k} + \Gamma^{\,j}_{r\,k}\, T_i^{\;r} - \Gamma^{\,r}_{i\,k}\, T_r^{\;j} - w\, \Gamma^{\,r}_{r\,k}\, T_i^{\;j} \tag{8.10-14}$$

The covariant derivative of a general relative tensor is then:

$$T^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots,\,k} = \frac{\partial T^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots}}{\partial x^k} + \Gamma^{\,p_1}_{r\,k}\, T^{\,r\, p_2 \cdots}_{q_1 q_2 \cdots} + \Gamma^{\,p_2}_{r\,k}\, T^{\,p_1 r \cdots}_{q_1 q_2 \cdots} + \cdots - \Gamma^{\,r}_{q_1 k}\, T^{\,p_1 p_2 \cdots}_{r\, q_2 \cdots} - \Gamma^{\,r}_{q_2 k}\, T^{\,p_1 p_2 \cdots}_{q_1 r \cdots} - \cdots - w\, \Gamma^{\,r}_{r\,k}\, T^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots} \tag{8.10-15}$$

A covariant derivative of a relative tensor results in another relative tensor having the same number of contravariant indices and the same weight as the original tensor, but having one additional covariant index. If $A$ and $B$ are relative tensors, the covariant derivatives of these tensors obey the following rules:

$$\left( A^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots} + B^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots} \right)_{,\,k} = A^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots,\,k} + B^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots,\,k} \tag{8.10-16}$$

$$\left( A^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots}\, B^{\,r_1 r_2 \cdots}_{s_1 s_2 \cdots} \right)_{,\,k} = A^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots,\,k}\, B^{\,r_1 r_2 \cdots}_{s_1 s_2 \cdots} + A^{\,p_1 p_2 \cdots}_{q_1 q_2 \cdots}\, B^{\,r_1 r_2 \cdots}_{s_1 s_2 \cdots,\,k} \tag{8.10-17}$$

Using the expressions for covariant derivatives of relative tensors, we can write:

$$G_{,\,k} = \left( G \right)^{-1}_{\;,\,k} = \left( G \right)^{1/2}_{\;,\,k} = \left( G \right)^{-1/2}_{\;,\,k} = 0 \tag{8.10-18}$$

$$\varepsilon_{i\,j\,k,\,m} = \varepsilon^{\,i\,j\,k}_{\;\;,\,m} = 0 \tag{8.10-19}$$

as shown in Examples 8-14 and 8-15.

Example 8-14

Show that:

a. $G_{,\,k} = 0$

b. $\left( G \right)^{-1}_{\;,\,k} = 0$

c. $\sqrt{G}_{,\,k} = 0$

Solution:

a. From Example 8-12, we know that $G$ is a relative scalar of weight $w = 2$. Using equation (8.10-12), we have:

$G_{,k} = \dfrac{\partial G}{\partial x^k} - w\,\Gamma^{r}_{rk}\,G = \dfrac{\partial G}{\partial x^k} - 2\,\Gamma^{r}_{rk}\,G$

From equation (7.3-15), we can write:

$\dfrac{\partial G}{\partial x^k} = 2\,\Gamma^{r}_{rk}\,G$

Therefore:

$G_{,k} = 2\,\Gamma^{r}_{rk}\,G - 2\,\Gamma^{r}_{rk}\,G = 0$

b. From equation (8.3-3), we have:

$\dfrac{1}{G'} = (J)^2\,\dfrac{1}{G}$

Therefore $G^{-1}$ is a relative scalar of weight $w = -2$, and so we have from equation (8.10-12):

$\left(G^{-1}\right)_{,k} = \dfrac{\partial}{\partial x^k}\!\left(\dfrac{1}{G}\right) + 2\,\Gamma^{r}_{rk}\,\dfrac{1}{G} = -\dfrac{1}{(G)^2}\dfrac{\partial G}{\partial x^k} + 2\,\Gamma^{r}_{rk}\,\dfrac{1}{G}$

or

$\left(G^{-1}\right)_{,k} = -\dfrac{1}{(G)^2}\,2\,\Gamma^{r}_{rk}\,G + 2\,\Gamma^{r}_{rk}\,\dfrac{1}{G} = 0$

c. From Example 8-12 we know that $\sqrt{G}$ is a relative scalar of weight $w = 1$, and so we can use equation (8.10-12) to write:

$\left(\sqrt{G}\right)_{,k} = \dfrac{\partial \sqrt{G}}{\partial x^k} - \Gamma^{r}_{rk}\,\sqrt{G} = \dfrac{1}{2\sqrt{G}}\dfrac{\partial G}{\partial x^k} - \Gamma^{r}_{rk}\,\sqrt{G}$

or

$\left(\sqrt{G}\right)_{,k} = \dfrac{2\,\Gamma^{r}_{rk}\,G}{2\sqrt{G}} - \Gamma^{r}_{rk}\,\sqrt{G} = \Gamma^{r}_{rk}\,\sqrt{G} - \Gamma^{r}_{rk}\,\sqrt{G} = 0$

Example 8-15

Show that:

$\varepsilon_{ijk,\,m} = \varepsilon^{ijk}_{\;\;,m} = 0$

Solution:

From equations (6.2-1) and (1.18-21) we can write:

$G = \det\{g_{ln}\} = \varepsilon^{ijk}\,g_{i1}\,g_{j2}\,g_{k3}$

and so:

$G_{,m} = \varepsilon^{ijk}_{\;\;,m}\,g_{i1}\,g_{j2}\,g_{k3} + \varepsilon^{ijk}\,g_{i1,m}\,g_{j2}\,g_{k3} + \varepsilon^{ijk}\,g_{i1}\,g_{j2,m}\,g_{k3} + \varepsilon^{ijk}\,g_{i1}\,g_{j2}\,g_{k3,m}$

or, using equation (8.10-18) and Ricci's theorem given in equation (7.6-7):

$0 = \varepsilon^{ijk}_{\;\;,m}\,g_{i1}\,g_{j2}\,g_{k3}$

Since

$G = \det\{g_{ln}\} = \varepsilon^{ijk}\,g_{i1}\,g_{j2}\,g_{k3} \neq 0$

we must have:

$\varepsilon^{ijk}_{\;\;,m} = 0$

Lowering the indices of $\varepsilon^{ijk}$ with the metric tensor and again using Ricci's theorem, it follows that $\varepsilon_{ijk,\,m} = 0$ as well.
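The determinant relation $\partial G/\partial x^k = 2\,\Gamma^{r}_{rk}\,G$ used in Example 8-14 can be checked symbolically for a concrete metric. The sketch below is not from the book; it assumes Python with the sympy library and uses the polar-coordinate metric of the plane as a test case:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # polar-coordinate metric g_ij (assumed test case)
ginv = g.inv()
G = g.det()                          # G = r**2

# Christoffel symbols of the second kind, Gamma^i_{jk}
def gamma(i, j, k):
    return sp.Rational(1, 2) * sum(
        ginv[i, m] * (sp.diff(g[m, j], x[k]) + sp.diff(g[m, k], x[j])
                      - sp.diff(g[j, k], x[m]))
        for m in range(2))

# verify dG/dx^k = 2 Gamma^r_{rk} G for every coordinate direction k
for k in range(2):
    trace = sum(gamma(m, m, k) for m in range(2))
    assert sp.simplify(sp.diff(G, x[k]) - 2 * trace * G) == 0
print("dG/dx^k = 2 Gamma^r_rk G holds for the polar metric")
```

Since the identity holds, the terms in $G_{,k} = \partial G/\partial x^k - 2\,\Gamma^{r}_{rk}\,G$ cancel exactly, as in part a of Example 8-14.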

8.11  PSEUDOTENSORS

Pseudoscalars and pseudovectors (see Section 1.21) are pseudotensors of rank zero and rank one, respectively. Pseudotensors of rank two and higher also exist. All pseudotensors depend on the "handedness" of the coordinate system, while true tensors do not. If the "handedness" of a coordinate system changes under a transformation, then a sign change will occur for a true tensor so that no change in orientation of the tensor results from the coordinate system inversion. For a pseudotensor, however, no sign change will occur with coordinate system inversion, and so a pseudotensor changes orientation under such a coordinate system inversion. The algebra of pseudotensors is given in Table 8-2 (remember that an inner product can be considered to be an outer product followed by a contraction).

Table 8-2  Algebra of pseudotensors.

A vector product of two polar vectors $\vec A$ and $\vec B$ in a curvilinear coordinate system can be written using equations (8.6-2) and (8.6-4):

$\vec C = \vec A \times \vec B = a^i b^j\,\tilde\varepsilon_{ijk}\,\vec e^{\,k} = a_i b_j\,\tilde\varepsilon^{\,ijk}\,\vec e_k = c_k\,\vec e^{\,k} = c^k\,\vec e_k$   (8.11-1)

We then have:

$c_k = a^i b^j\,\tilde\varepsilon_{ijk}$   (8.11-2)

$c^k = a_i b_j\,\tilde\varepsilon^{\,ijk}$   (8.11-3)

From Section 1.21 we know that $\vec C = \vec A \times \vec B$ is a pseudovector, and so we know that $c_k$ and $c^k$ are rank one pseudotensors. Since $\vec A$ and $\vec B$ are polar vectors, using equations (8.11-2) and (8.11-3) and Table 8-2 we see that $\tilde\varepsilon_{ijk}$ and $\tilde\varepsilon^{\,ijk}$ are pseudotensors of rank three.

8.12  ANTISYMMETRIC RANK TWO TENSOR AS A PSEUDOVECTOR

An antisymmetric tensor of rank two can always be represented as a pseudovector. To show this, we will let $a_{ij}$ be an antisymmetric tensor, so that from equation (5.6-8) we have:

$\{a_{ij}\} = \{-a_{ji}\} = \begin{bmatrix} 0 & a_{12} & -a_{31} \\ -a_{12} & 0 & a_{23} \\ a_{31} & -a_{23} & 0 \end{bmatrix}$   (8.12-1)

We will now determine a pseudovector $\vec V$:

$\vec V = V^1\,\vec e_1 + V^2\,\vec e_2 + V^3\,\vec e_3 = V^k\,\vec e_k$   (8.12-2)

such that the components $V^k$ correspond to the three independent components of the tensor $a_{ij}$.

To obtain the components $V^k$, we will assume that they can be defined by:

$V^k = \dfrac{1}{2}\,\tilde\varepsilon^{\,ijk}\,a_{ij}$   (8.12-3)

and we will then show that the components $V^k$ so defined correspond to the three independent components of $a_{ij}$ appearing in equation (8.12-1). Using equations (8.4-7) and (1.17-36), we can write equation (8.12-3) as:

$V^k = \dfrac{1}{2}\,\varepsilon^{ijk}\,a_{ij} = \dfrac{1}{2}\,\varepsilon_{ijk}\,a_{ij}$   (8.12-4)

We then have:

$V^1 = \dfrac{1}{2}\left(\varepsilon^{231}a_{23} + \varepsilon^{321}a_{32}\right) = \dfrac{1}{2}\left(a_{23} - a_{32}\right)$   (8.12-5)

$V^2 = \dfrac{1}{2}\left(\varepsilon^{312}a_{31} + \varepsilon^{132}a_{13}\right) = \dfrac{1}{2}\left(a_{31} - a_{13}\right)$   (8.12-6)

$V^3 = \dfrac{1}{2}\left(\varepsilon^{123}a_{12} + \varepsilon^{213}a_{21}\right) = \dfrac{1}{2}\left(a_{12} - a_{21}\right)$   (8.12-7)

Using the antisymmetry that is expressed in equation (8.12-1), we obtain the components:

$V^1 = a_{23}\,, \qquad V^2 = a_{31}\,, \qquad V^3 = a_{12}$   (8.12-8)

and so:

$\vec V = V^1\,\vec e_1 + V^2\,\vec e_2 + V^3\,\vec e_3 = a_{23}\,\vec e_1 + a_{31}\,\vec e_2 + a_{12}\,\vec e_3$   (8.12-9)

Since $\tilde\varepsilon^{\,ijk}$ is a pseudotensor of rank three, we can see from equation (8.12-3) and Table 8-2 that the $V^k$ are components of a pseudovector. Any antisymmetric tensor of rank two can, therefore, be considered to be a pseudovector or an axial vector. Moreover, this pseudovector can be obtained by forming the inner product of the permutation tensor with the antisymmetric tensor using equation (8.12-3).

We can also consider any pseudovector as a rank two antisymmetric tensor. Only in 3 dimensions, however, is it possible to represent a rank two antisymmetric tensor as a pseudovector or to represent a pseudovector as a rank two antisymmetric tensor.
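In a rectangular coordinate system, the passage from the antisymmetric tensor to the pseudovector is a single contraction with the permutation symbol, as in equation (8.12-4). A minimal numerical sketch (not from the book, assuming Python with numpy):

```python
import numpy as np

# permutation symbol eps_ijk (rectangular coordinates)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
for i, j, k in [(2, 1, 0), (0, 2, 1), (1, 0, 2)]:
    eps[i, j, k] = -1.0

# antisymmetric rank-two tensor built from three independent components
a23, a31, a12 = 1.0, 2.0, 3.0
a = np.array([[0.0,   a12, -a31],
              [-a12,  0.0,  a23],
              [a31,  -a23,  0.0]])

# V^k = (1/2) eps^{ijk} a_{ij}
V = 0.5 * np.einsum('ijk,ij->k', eps, a)
print(V)   # -> [1. 2. 3.], i.e. (a23, a31, a12)
```

The contraction recovers exactly the three independent components $(a_{23}, a_{31}, a_{12})$ of equation (8.12-8).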

8.13  GRADIENT IN CURVILINEAR COORDINATE SYSTEMS

Using the covariant derivative, it is possible to obtain an expression for the gradient of a scalar field in a curvilinear coordinate system. If $\sigma$ is a scalar field, the differential of $\sigma$ is given by equations (2.12-5) and (2.12-7):

$d\sigma = \dfrac{\partial \sigma}{\partial x^i}\,dx^i = \vec\nabla\sigma \cdot d\vec r$   (8.13-1)

From equation (4.8-1) we have in a curvilinear coordinate system:

$d\vec r = dx^i\,\vec e_i$   (8.13-2)

and so equation (8.13-1) can be written:

$d\sigma = \vec\nabla\sigma \cdot \vec e_i\,dx^i$   (8.13-3)

Rewriting equation (8.13-1) using the covariant derivative (see Example 7-7), we have $d\sigma$ in a curvilinear coordinate system:

$d\sigma = \sigma_{,i}\,dx^i$   (8.13-4)

For equations (8.13-3) and (8.13-4) to be equivalent, we must have:

$\vec\nabla\sigma \cdot \vec e_i = \sigma_{,i}$   (8.13-5)

The gradient of $\sigma$ must then be given by:

$\vec\nabla\sigma = \sigma_{,j}\,\vec e^{\,j}$   (8.13-6)

so that:

$\vec\nabla\sigma \cdot \vec e_i = \sigma_{,j}\,\vec e^{\,j} \cdot \vec e_i = \sigma_{,j}\,\delta^{\,j}_i = \sigma_{,i}$   (8.13-7)

The gradient of a scalar $\sigma$ given in equation (8.13-6) can then be written as:

$\mathrm{grad}\,\sigma = \vec\nabla\sigma = \sigma_{,j}\,\vec e^{\,j} = \dfrac{\partial \sigma}{\partial x^j}\,\vec e^{\,j}$   (8.13-8)

Therefore, in a curvilinear coordinate system, the gradient operator $\vec\nabla$ is given by:

$\vec\nabla = \vec e^{\,j}\,\dfrac{\partial}{\partial x^j}$   (8.13-9)

The components $\partial\sigma/\partial x^j$ of $\vec\nabla\sigma$ in equation (8.13-8) are covariant components of a vector (see Section 1.15). We can also obtain the directional derivative of $\sigma$ in the direction of a unit vector $\hat c$ by using equation (2.12-14) to write:

$\dfrac{d\sigma}{dc} = \vec\nabla\sigma \cdot \hat c = \dfrac{\partial \sigma}{\partial x^j}\,\vec e^{\,j} \cdot \hat c$   (8.13-10)

Example 8-16

If $\sigma$ is a scalar, determine $\vec\nabla\sigma$ in terms of covariant components and physical components for a cylindrical coordinate system.

Solution:

From equation (8.13-8), we have:

$\vec\nabla\sigma = \dfrac{\partial \sigma}{\partial x^j}\,\vec e^{\,j}$

Therefore $\vec\nabla\sigma$ in terms of covariant components for a cylindrical coordinate system is given by:

$\vec\nabla\sigma = \dfrac{\partial \sigma}{\partial R}\,\vec e^{\,R} + \dfrac{\partial \sigma}{\partial \phi}\,\vec e^{\,\phi} + \dfrac{\partial \sigma}{\partial z}\,\vec e^{\,z}$

From equation (6.11-1), $\vec\nabla\sigma$ in terms of physical components for a cylindrical coordinate system is then:

$\vec\nabla\sigma = \dfrac{1}{h_R}\dfrac{\partial \sigma}{\partial R}\,\hat e_R + \dfrac{1}{h_\phi}\dfrac{\partial \sigma}{\partial \phi}\,\hat e_\phi + \dfrac{1}{h_z}\dfrac{\partial \sigma}{\partial z}\,\hat e_z$

or using the scale factors for a cylindrical coordinate system obtained from Table 4-2:

$\vec\nabla\sigma = \dfrac{\partial \sigma}{\partial R}\,\hat e_R + \dfrac{1}{R}\dfrac{\partial \sigma}{\partial \phi}\,\hat e_\phi + \dfrac{\partial \sigma}{\partial z}\,\hat e_z$
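The cylindrical-coordinate gradient just obtained can be checked against the Cartesian gradient, since $|\vec\nabla\sigma|^2$ is a scalar and must come out the same in both systems. A sketch assuming Python with sympy; the test field $\sigma$ is an arbitrary choice, not from the book:

```python
import sympy as sp

x, y, z, R, phi = sp.symbols('x y z R phi', real=True, positive=True)
to_cyl = {x: R * sp.cos(phi), y: R * sp.sin(phi)}

# arbitrary smooth scalar field for the check
sigma_cart = x**2 * y + z * y
sigma_cyl = sigma_cart.subs(to_cyl)

# |grad sigma|^2 computed in Cartesian coordinates, then converted
cart = sum(sp.diff(sigma_cart, v)**2 for v in (x, y, z)).subs(to_cyl)

# |grad sigma|^2 from the physical cylindrical components:
# grad sigma = (d sigma/dR) e_R + (1/R)(d sigma/d phi) e_phi + (d sigma/dz) e_z
cyl = (sp.diff(sigma_cyl, R)**2
       + (sp.diff(sigma_cyl, phi) / R)**2
       + sp.diff(sigma_cyl, z)**2)

assert sp.simplify(cart - cyl) == 0
print("cylindrical gradient components agree with the Cartesian gradient")
```

The $1/R$ factor on the $\hat e_\phi$ component is exactly the scale-factor correction of Example 8-16.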

Chapter 9

Tensor Field Methods

$\mathrm{div}\,\vec A = \dfrac{1}{\sqrt{G}}\,\dfrac{\partial\left(a^i\sqrt{G}\right)}{\partial x^i}$

In this chapter we will show that vector methods presented in earlier chapters for rectilinear coordinate systems can also be used in curvilinear coordinate systems. We will also show that some of these vector methods (rank one tensor methods) can be extended to apply to higher rank tensors. The Riemann, Ricci, and Einstein tensors will all be described.

9.1  DIVERGENCE IN CURVILINEAR COORDINATE SYSTEMS

The divergence of a vector function $\vec A$ in a curvilinear coordinate system can be obtained from the scalar product of the gradient operator given in equation (8.13-9) and the point vector $\vec A$:

$\vec\nabla \cdot \vec A = \mathrm{div}\,\vec A = \vec e^{\,j}\,\dfrac{\partial}{\partial x^j} \cdot \vec A = \dfrac{\partial}{\partial x^j}\,\vec A \cdot \vec e^{\,j} = \dfrac{\partial \vec A}{\partial x^j} \cdot \vec e^{\,j}$   (9.1-1)

From equation (7.1-30), we then have:

$\mathrm{div}\,\vec A = a^i_{\;,j}\,\vec e_i \cdot \vec e^{\,j} = a^i_{\;,j}\,\delta^{\,j}_i$   (9.1-2)

or

$\mathrm{div}\,\vec A = a^i_{\;,i}$   (9.1-3)

The divergence is invariant to coordinate transformation.

Using equations (9.1-3) and (7.2-4) we can express the divergence as:

$\mathrm{div}\,\vec A = \dfrac{\partial a^i}{\partial x^i} + \Gamma^{i}_{ki}\,a^k$   (9.1-4)

where $a^i$ and $a^k$ are contravariant components of the vector $\vec A$. This equation can be rewritten in terms of $G$. From equation (7.3-15), we have:

$\Gamma^{i}_{ki} = \dfrac{\partial\left(\ln\sqrt{G}\right)}{\partial x^k} = \dfrac{1}{\sqrt{G}}\,\dfrac{\partial\sqrt{G}}{\partial x^k}$   (9.1-5)

Substituting equation (9.1-5) into (9.1-4) and changing the dummy index $k$ to $i$, we can write:

$\mathrm{div}\,\vec A = \dfrac{\partial a^i}{\partial x^i} + \dfrac{\partial\left(\ln\sqrt{G}\right)}{\partial x^i}\,a^i$   (9.1-6)

or

$\mathrm{div}\,\vec A = \dfrac{\partial a^i}{\partial x^i} + \dfrac{1}{\sqrt{G}}\,\dfrac{\partial\sqrt{G}}{\partial x^i}\,a^i$   (9.1-7)

The divergence of a vector $\vec A$ in a curvilinear coordinate system can then be written as:

$\mathrm{div}\,\vec A = a^i_{\;,i} = \dfrac{1}{\sqrt{G}}\,\dfrac{\partial\left(a^i\sqrt{G}\right)}{\partial x^i}$   (9.1-8)

where $a^i$ are contravariant components of $\vec A$. To write the divergence of $\vec A$ in terms of covariant components in a curvilinear coordinate system, we rewrite equation (9.1-8) in the form:

$\mathrm{div}\,\vec A = a^i_{\;,i} = \dfrac{1}{\sqrt{G}}\,\dfrac{\partial\left(g^{ik}a_k\sqrt{G}\right)}{\partial x^i}$   (9.1-9)

Example 9-1

Determine $\mathrm{div}\,\vec A$ in terms of covariant components and in terms of physical components for a cylindrical coordinate system.

Solution:

From equation (9.1-9) we have:

$\mathrm{div}\,\vec A = \dfrac{1}{\sqrt{G}}\,\dfrac{\partial\left(g^{ik}a_k\sqrt{G}\right)}{\partial x^i}$

and using the results of Example 6-10 we have:

$g_{RR} = 1\,, \qquad g_{\phi\phi} = (R)^2\,, \qquad g_{zz} = 1$

$g^{RR} = 1\,, \qquad g^{\phi\phi} = \dfrac{1}{(R)^2}\,, \qquad g^{zz} = 1$

$G = \det\{g_{ij}\} = \begin{vmatrix} 1 & 0 & 0 \\ 0 & (R)^2 & 0 \\ 0 & 0 & 1 \end{vmatrix} = (R)^2$

and so:

$\mathrm{div}\,\vec A = \dfrac{1}{R}\left[\dfrac{\partial\left(R\,a_R\right)}{\partial R} + \dfrac{\partial}{\partial \phi}\!\left(\dfrac{a_\phi}{R}\right) + \dfrac{\partial\left(R\,a_z\right)}{\partial z}\right]$

or

$\mathrm{div}\,\vec A = \dfrac{1}{R}\,\dfrac{\partial\left(R\,a_R\right)}{\partial R} + \dfrac{1}{(R)^2}\,\dfrac{\partial a_\phi}{\partial \phi} + \dfrac{\partial a_z}{\partial z}$

In terms of physical components, we have from equation (6.11-1):

$\bar a_R = \dfrac{a_R}{\sqrt{g_{RR}}} = a_R\,, \qquad \bar a_\phi = \dfrac{a_\phi}{\sqrt{g_{\phi\phi}}} = \dfrac{a_\phi}{R}\,, \qquad \bar a_z = \dfrac{a_z}{\sqrt{g_{zz}}} = a_z$

and so:

$\mathrm{div}\,\vec A = \dfrac{1}{R}\left[\dfrac{\partial\left(R\,\bar a_R\right)}{\partial R} + \dfrac{\partial \bar a_\phi}{\partial \phi} + \dfrac{\partial\left(R\,\bar a_z\right)}{\partial z}\right]$

or

$\mathrm{div}\,\vec A = \dfrac{1}{R}\,\dfrac{\partial\left(R\,\bar a_R\right)}{\partial R} + \dfrac{1}{R}\,\dfrac{\partial \bar a_\phi}{\partial \phi} + \dfrac{\partial \bar a_z}{\partial z}$
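The physical-component divergence of Example 9-1 can be verified against the Cartesian divergence of the same field. A sketch (not from the book) assuming Python with sympy; the vector field is an arbitrary test choice:

```python
import sympy as sp

x, y, z, R, phi = sp.symbols('x y z R phi', real=True, positive=True)
to_cyl = {x: R * sp.cos(phi), y: R * sp.sin(phi)}

# arbitrary smooth vector field given by its Cartesian components
ax, ay, az = x * y, y * z, x * z
div_cart = (sp.diff(ax, x) + sp.diff(ay, y) + sp.diff(az, z)).subs(to_cyl)

# physical cylindrical components of the same field
aR  = (ax * sp.cos(phi) + ay * sp.sin(phi)).subs(to_cyl)
aph = (-ax * sp.sin(phi) + ay * sp.cos(phi)).subs(to_cyl)
azc = az.subs(to_cyl)

# div A = (1/R) d(R aR)/dR + (1/R) d(aphi)/dphi + d(az)/dz
div_cyl = sp.diff(R * aR, R) / R + sp.diff(aph, phi) / R + sp.diff(azc, z)

assert sp.simplify(div_cart - div_cyl) == 0
print("curvilinear divergence formula agrees with the Cartesian result")
```

The agreement reflects the coordinate invariance of the divergence noted after equation (9.1-3).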

The divergence of a vector field $\vec B$ is defined in equation (3.1-5) as:

$\vec\nabla \cdot \vec B = \lim_{\Delta V \to 0}\,\dfrac{1}{\Delta V}\iint_S \vec B \cdot \hat n\,dS$   (9.1-10)

The divergence of a tensor field of rank two $T_j^{\;i}$ can be similarly defined as:

$\left(\mathrm{div}\,T\right)_j = \lim_{\Delta V \to 0}\,\dfrac{1}{\Delta V}\iint_S T_j^{\;i}\,n_i\,dS$   (9.1-11)

or

$\left(\mathrm{div}\,T\right)_j = T_{j\;,i}^{\;i}$   (9.1-12)

The divergence of a tensor field of rank two is a vector, while the divergence of a vector field is a scalar. The divergence of a tensor field is formed by taking the covariant derivative of a contravariant or mixed component of the tensor field and then contracting any superscript with the index of differentiation. For a tensor $T_k^{\;ij}$, we can form two divergences: $T_{k\;,i}^{\;ij}$ and $T_{k\;,j}^{\;ij}$.

9.2  CURL IN CURVILINEAR COORDINATE SYSTEMS

The curl of a vector $\vec A$ in a curvilinear coordinate system can be obtained with the gradient operator given in equation (8.13-9):

$\mathrm{curl}\,\vec A = \vec\nabla \times \vec A = \vec e^{\,j}\,\dfrac{\partial}{\partial x^j} \times \vec A = -\dfrac{\partial \vec A}{\partial x^j} \times \vec e^{\,j} = \vec e^{\,j} \times \dfrac{\partial \vec A}{\partial x^j}$   (9.2-1)

Using equation (7.1-30) we have:

$\mathrm{curl}\,\vec A = \vec e^{\,j} \times \vec e^{\,k}\,a_{k,j}$   (9.2-2)

From equations (9.2-2) and (8.5-15), we then have:

$\mathrm{curl}\,\vec A = \dfrac{\varepsilon^{ijk}}{\sqrt{G}}\,a_{k,j}\,\vec e_i$   (9.2-3)

or

$\mathrm{curl}\,\vec A = \tilde\varepsilon^{\,ijk}\,a_{k,j}\,\vec e_i$   (9.2-4)

The $i$th component of $\mathrm{curl}\,\vec A$ obtained from equation (9.2-3) is:

$\left(\mathrm{curl}\,\vec A\right)^i = \dfrac{1}{\sqrt{G}}\left(a_{k,j} - a_{j,k}\right) \qquad (i \neq j \neq k)$   (9.2-5)

or using equation (7.2-5):

$\left(\mathrm{curl}\,\vec A\right)^i = \dfrac{1}{\sqrt{G}}\left\{\left[\dfrac{\partial a_k}{\partial x^j} - \Gamma^{p}_{kj}\,a_p\right] - \left[\dfrac{\partial a_j}{\partial x^k} - \Gamma^{p}_{jk}\,a_p\right]\right\} \qquad (i \neq j \neq k)$   (9.2-6)

With the symmetry of the Christoffel symbols, equation (9.2-6) becomes:

$\left(\mathrm{curl}\,\vec A\right)^i = \dfrac{1}{\sqrt{G}}\left\{\dfrac{\partial a_k}{\partial x^j} - \dfrac{\partial a_j}{\partial x^k}\right\} \qquad (i \neq j \neq k)$   (9.2-7)

Using the three components of $\mathrm{curl}\,\vec A$, we have for $\mathrm{curl}\,\vec A$ in a curvilinear coordinate system:

$\mathrm{curl}\,\vec A = \dfrac{\varepsilon^{ijk}}{\sqrt{G}}\left\{\dfrac{\partial a_k}{\partial x^j} - \dfrac{\partial a_j}{\partial x^k}\right\}\vec e_i$   (9.2-8)

where $a_k$ and $a_j$ are covariant components of the vector $\vec A$. To obtain $\mathrm{curl}\,\vec A$ in terms of contravariant components in a curvilinear coordinate system, we can write equation (9.2-8) in the form:

$\mathrm{curl}\,\vec A = \dfrac{\varepsilon^{ijk}}{\sqrt{G}}\left\{\dfrac{\partial\left(g_{kp}\,a^p\right)}{\partial x^j} - \dfrac{\partial\left(g_{jp}\,a^p\right)}{\partial x^k}\right\}\vec e_i$   (9.2-9)

There is no operation equivalent to the curl for tensors of any rank other than one. Expressions for the curl of a vector in terms of physical components are given in Appendix C.

9.3  LAPLACIAN IN CURVILINEAR COORDINATE SYSTEMS

The Laplacian of a scalar function $\sigma$ in a curvilinear coordinate system is given by:

$\nabla^2\sigma = \vec\nabla \cdot \vec\nabla\sigma = \mathrm{div}\,\mathrm{grad}\,\sigma$   (9.3-1)

From equations (9.1-9) and (8.13-9), we have:

$\nabla^2\sigma = \dfrac{1}{\sqrt{G}}\,\dfrac{\partial}{\partial x^i}\!\left[g^{ik}\,\dfrac{\partial \sigma}{\partial x^k}\,\sqrt{G}\right]$   (9.3-2)

since $\partial\sigma/\partial x^k$ are covariant components of $\vec\nabla\sigma$.

The Laplacian of a vector field $\vec A$ in a curvilinear coordinate system is calculated using the relation given in equation (3.4-15):

$\nabla^2\vec A = \vec\nabla\left(\vec\nabla \cdot \vec A\right) - \vec\nabla \times \left(\vec\nabla \times \vec A\right)$   (9.3-3)

together with equations (8.13-9), (9.1-8), and (9.2-8).

Example 9-2

Determine $\nabla^2\sigma$ in terms of covariant components for a cylindrical coordinate system.

Solution:

From equation (9.3-2) we have:

$\nabla^2\sigma = \dfrac{1}{\sqrt{G}}\,\dfrac{\partial}{\partial x^i}\!\left[g^{ik}\,\dfrac{\partial \sigma}{\partial x^k}\,\sqrt{G}\right]$

and obtaining $G$ and $g^{ik}$ from Example 9-1:

$\nabla^2\sigma = \dfrac{1}{R}\,\dfrac{\partial}{\partial R}\!\left[R\,\dfrac{\partial \sigma}{\partial R}\right] + \dfrac{1}{R}\,\dfrac{\partial}{\partial \phi}\!\left[\dfrac{1}{R}\,\dfrac{\partial \sigma}{\partial \phi}\right] + \dfrac{1}{R}\,\dfrac{\partial}{\partial z}\!\left[R\,\dfrac{\partial \sigma}{\partial z}\right]$

or

$\nabla^2\sigma = \dfrac{\partial^2\sigma}{\partial R^2} + \dfrac{1}{R}\,\dfrac{\partial \sigma}{\partial R} + \dfrac{1}{(R)^2}\,\dfrac{\partial^2\sigma}{\partial \phi^2} + \dfrac{\partial^2\sigma}{\partial z^2}$
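The cylindrical Laplacian of Example 9-2 can be confirmed against the Cartesian Laplacian on a sample field. A sketch assuming Python with sympy; the test scalar is an arbitrary choice, not from the book:

```python
import sympy as sp

x, y, z, R, phi = sp.symbols('x y z R phi', real=True, positive=True)
to_cyl = {x: R * sp.cos(phi), y: R * sp.sin(phi)}

sigma = x**3 * y - y * z**2                      # arbitrary test scalar
lap_cart = sum(sp.diff(sigma, v, 2) for v in (x, y, z)).subs(to_cyl)

s = sigma.subs(to_cyl)
lap_cyl = (sp.diff(R * sp.diff(s, R), R) / R     # (1/R) d/dR (R ds/dR)
           + sp.diff(s, phi, 2) / R**2           # (1/R^2) d^2 s / dphi^2
           + sp.diff(s, z, 2))                   # d^2 s / dz^2

assert sp.simplify(lap_cart - lap_cyl) == 0
print("cylindrical Laplacian agrees with the Cartesian Laplacian")
```

Each term of the code mirrors one term of the final formula in Example 9-2.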

9.4  INTEGRAL THEOREMS IN CURVILINEAR COORDINATES

The integral theorems presented in Chapter 3 provide relations for vector fields on enclosing surfaces or within volumes. These relations are given for rectangular coordinate systems. Since there is no difference between ordinary derivatives and covariant derivatives for orthonormal coordinate systems, we can rewrite these integral theorems in terms of covariant derivatives. These integral theorems will then be in the form of general tensor equations. As was noted in Section 6.13, if a tensor equation is true when expressed in one coordinate system, it is true in all coordinate systems in the same coordinate space.

Using the expression in equation (9.1-3) for the divergence of a vector, Gauss's theorem given in equation (3.5-2) becomes:

$\iiint_V b^i_{\;,i}\,dV = \iint_S b^i\,n_i\,dS$   (9.4-1)

where $b^i$ is a contravariant component of $\vec B$. From equations (9.1-8) and (8.7-15) for a curvilinear coordinate system, we can also write Gauss's theorem in the form:

$\iiint_V \dfrac{1}{\sqrt{G}}\,\dfrac{\partial\left(b^i\sqrt{G}\right)}{\partial x^i}\,\sqrt{G}\,dV = \iint_S b^i\,n_i\,dS$   (9.4-2)

or

$\iiint_V \dfrac{\partial\left(b^i\sqrt{G}\right)}{\partial x^i}\,dV = \iint_S b^i\,n_i\,dS$   (9.4-3)

In terms of covariant components $b_k$, we can use equation (9.1-9) to write Gauss's theorem as:

$\iiint_V \dfrac{\partial\left(g^{ik}b_k\sqrt{G}\right)}{\partial x^i}\,dV = \iint_S g^{ik}b_k\,n_i\,dS = \iint_S b^k\,n_k\,dS$   (9.4-4)

For a rank two tensor Gauss's theorem becomes:

$\iiint_V T^{ij}_{\;\;,j}\,\sqrt{G}\,dV = \iint_S T^{ij}\,n_j\,dS$   (9.4-5)

Stokes's theorem given in equation (3.7-2) can be written in terms of covariant derivatives by using equation (9.2-4):

$\iint_S \tilde\varepsilon^{\,ijk}\,b_{k,j}\,n_i\,dS = \oint_C b_k\,dx^k$   (9.4-6)

Finally, Green's second theorem given in equation (3.8-19) can be written:

$\iiint_V \left(p\,\nabla^2 q - q\,\nabla^2 p\right)dV = \iint_S \left(p\,q_{,i} - q\,p_{,i}\right)n^i\,dS$   (9.4-7)

where we have used equations (3.8-16) and (8.13-6) and where $\nabla^2 q$ and $\nabla^2 p$ are obtained from equation (9.3-2).

9.5  EIGENVECTORS AND EIGENVALUES

We will now consider a symmetrical rank two tensor $\vec B$ and a point vector $\vec A$. The inner product of $\vec B$ and $\vec A$ will produce a new point vector $\vec C$ such that:

$b_{ij}\,a^j = c_i$   (9.5-1)

In general, the inner product of a tensor $\vec B$ with a vector $\vec A$ will produce a vector $\vec C$ that is rotated to a different direction than $\vec A$ and has a different magnitude than $\vec A$. There may be a number of vectors $\vec A$, however, that when multiplied by $\vec B$ will produce a vector $\vec C$ that retains the same direction as $\vec A$. For each such $\vec A$ vector we will then have:

$b_{ij}\,a^j = \lambda\,a_i = c_i$   (9.5-2)

where $\lambda$ is a scalar that changes the magnitude of $\vec A$. If such $\vec A$ vectors do exist, they are called eigenvectors or characteristic vectors of the tensor $\vec B$. The directions of the eigenvectors $\vec A$ are called characteristic directions or principal directions of the tensor $\vec B$. The $\lambda$ values are called eigenvalues, characteristic values, or principal values for the tensor $\vec B$.

Equation (9.5-2) can be rewritten as:

$b_{ij}\,a^j - \lambda\,a_i = b_{ij}\,a^j - \lambda\,g_{ij}\,a^j = 0$   (9.5-3)

or

$\left(b_{ij} - \lambda\,g_{ij}\right)a^j = 0$   (9.5-4)

The eigenvalues $\lambda$ can be determined from equation (9.5-4). Only if $\lambda = 1$ will we have the magnitude of $\vec C$ equal to that of $\vec A$ for a given eigenvector.

The trivial solution to equation (9.5-4) is that:

$a^j = 0$   (9.5-5)

The nontrivial solution of equation (9.5-4) requires that the determinant $\left|b_{ij} - \lambda\,g_{ij}\right|$ equal zero:

$\left|b_{ij} - \lambda\,g_{ij}\right| = 0$   (9.5-6)

Example 9-3

Show that equation (9.5-6) is invariant to coordinate transformation.

Solution:

Transforming equation (9.5-6), we have:

$\left|b'_{ij} - \lambda\,g'_{ij}\right|\left|\dfrac{\partial x'^i}{\partial x^p}\right|\left|\dfrac{\partial x'^j}{\partial x^q}\right| = 0$

or

$\left|b'_{ij} - \lambda\,g'_{ij}\right|\left|\dfrac{\partial x'^i}{\partial x^p}\right|^2 = 0$

Since

$\left|\dfrac{\partial x'^i}{\partial x^p}\right|^2 > 0$

we must have:

$\left|b'_{ij} - \lambda\,g'_{ij}\right| = 0$

In a rectangular coordinate system, we have from equation (6.1-8):

$g_{ij} = \delta_{ij}$   (9.5-7)

and so:

$\left|b_{ij} - \lambda\,\delta_{ij}\right| = 0$   (9.5-8)

Expanding equation (9.5-8), we have:

$\begin{vmatrix} b_{11}-\lambda & b_{12} & b_{13} \\ b_{21} & b_{22}-\lambda & b_{23} \\ b_{31} & b_{32} & b_{33}-\lambda \end{vmatrix} = 0$   (9.5-9)

Equation (9.5-9) is a cubic whose three roots are the eigenvalues $\lambda^{I}$, $\lambda^{II}$, and $\lambda^{III}$.

Example 9-4

In a rectangular coordinate system, find the eigenvalues for

$\{b_{ij}\} = \begin{bmatrix} 1 & 4 & 0 \\ 4 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$

Solution:

From equation (9.5-2), we see that the three component equations are:

$a^1 + 4\,a^2 = \lambda\,a^1\,, \qquad 4\,a^1 + a^2 = \lambda\,a^2\,, \qquad 0 = \lambda\,a^3$

From equation (9.5-9), we then obtain:

$\begin{vmatrix} 1-\lambda & 4 & 0 \\ 4 & 1-\lambda & 0 \\ 0 & 0 & -\lambda \end{vmatrix} = 0$

or

$\lambda\left[\left(1-\lambda\right)^2 - 16\right] = \lambda\left[\left(\lambda\right)^2 - 2\,\lambda - 15\right] = 0$

and so the eigenvalues are:

$\lambda^{I} = 5\,, \qquad \lambda^{II} = -3\,, \qquad \lambda^{III} = 0$

Equation (9.5-9) can be written in the form:

$\left(\lambda\right)^3 - I_1\left(\lambda\right)^2 + I_2\,\lambda - I_3 = 0$   (9.5-10)

which is known as the characteristic equation or secular equation for a tensor $b_{ij}$. From matrix theory, we know that a real symmetric matrix has only real eigenvalues. Since the tensor $\vec B$ is given as a symmetric rank two tensor, the matrix in equation (9.5-9) is real symmetric. Therefore the eigenvalues $\lambda$ are real, and so are the roots of equation (9.5-10).

The coefficient $I_1$ of equation (9.5-10) is given by:

$I_1 = b_{11} + b_{22} + b_{33} = b_{ii} = \mathrm{tr}\{b_{ij}\}$   (9.5-11)

where tr is the trace or spur and consists of the sum of the principal diagonal terms.

The coefficient $I_2$ of equation (9.5-10) is given by:

$I_2 = \begin{vmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{vmatrix} + \begin{vmatrix} b_{11} & b_{13} \\ b_{31} & b_{33} \end{vmatrix} + \begin{vmatrix} b_{22} & b_{23} \\ b_{32} & b_{33} \end{vmatrix}$   (9.5-12)

or

$I_2 = \dfrac{1}{2}\left(b_{ii}\,b_{kk} - b_{ik}\,b_{ki}\right)$   (9.5-13)

Since $\vec B$ is symmetrical, we also have:

$I_2 = b_{22}\,b_{33} + b_{33}\,b_{11} + b_{11}\,b_{22} - \left(b_{31}\right)^2 - \left(b_{12}\right)^2 - \left(b_{23}\right)^2$   (9.5-14)

The coefficient $I_3$ of equation (9.5-10) is given by:

$I_3 = \det\{b_{ij}\} = \begin{vmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{vmatrix}$   (9.5-15)

or, for the symmetric tensor $\vec B$:

$I_3 = \begin{vmatrix} b_{11} & b_{12} & b_{31} \\ b_{12} & b_{22} & b_{23} \\ b_{31} & b_{23} & b_{33} \end{vmatrix}$   (9.5-16)

or

$I_3 = b_{11}\,b_{22}\,b_{33} + 2\,b_{12}\,b_{23}\,b_{31} - b_{11}\left(b_{23}\right)^2 - b_{22}\left(b_{31}\right)^2 - b_{33}\left(b_{12}\right)^2$   (9.5-17)

and so:

$I_3 = \dfrac{1}{6}\,\varepsilon_{ijk}\,\varepsilon_{pqr}\,b_{ip}\,b_{jq}\,b_{kr}$   (9.5-18)

or

$I_3 = \dfrac{1}{6}\left(b_{ii}\,b_{jj}\,b_{kk} + 2\,b_{ij}\,b_{jk}\,b_{ki} - 3\,b_{ij}\,b_{ji}\,b_{kk}\right)$   (9.5-19)

From equation (9.5-10) we see that, since the eigenvalues are scalars, the coefficients $I_1$, $I_2$, and $I_3$ must also be invariants.

Example 9-5

Solve the problem given in Example 9-4 using the invariants $I_1$, $I_2$, and $I_3$.

Solution:

$\{b_{ij}\} = \begin{bmatrix} 1 & 4 & 0 \\ 4 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$

$I_1 = 1 + 1 = 2\,, \qquad I_2 = 1 - 16 = -15\,, \qquad I_3 = 0$

Therefore, from equation (9.5-10):

$\left(\lambda\right)^3 - 2\left(\lambda\right)^2 - 15\,\lambda = 0$

or

$\lambda\left[\left(\lambda\right)^2 - 2\,\lambda - 15\right] = 0$

and so, as in Example 9-4, the eigenvalues are:

$\lambda^{I} = 5\,, \qquad \lambda^{II} = -3\,, \qquad \lambda^{III} = 0$

Example 9-6

If $\vec A$ and $\vec B$ are orthogonal vectors, determine the invariants $I_1$, $I_2$, and $I_3$ of $T_{ij}$ if $\vec T = \vec A \otimes \vec B$ in a rectangular coordinate system.

Solution:

Since $\vec T = \vec A \otimes \vec B$, we have:

$T_{ij} = a_i\,b_j$

Since $\vec A$ and $\vec B$ are orthogonal, we have:

$\vec A \cdot \vec B = 0$

and so:

$T_{ii} = a_i\,b_i = 0$

Therefore:

$I_1 = T_{ii} = a_i\,b_i = 0$

$I_2 = \dfrac{1}{2}\left(T_{ii}\,T_{kk} - T_{ik}\,T_{ki}\right) = \dfrac{1}{2}\left(-\,a_i\,b_k\,a_k\,b_i\right) = \dfrac{1}{2}\left(-\,a_i\,b_i\,a_k\,b_k\right) = 0$

$I_3 = \dfrac{1}{6}\left(T_{ii}\,T_{jj}\,T_{kk} + 2\,T_{ij}\,T_{jk}\,T_{ki} - 3\,T_{ij}\,T_{ji}\,T_{kk}\right) = \dfrac{1}{6}\,2\left(a_i\,b_i\right)\left(a_j\,b_j\right)\left(a_k\,b_k\right) = 0$

Note that the invariants $I_1 = I_2 = I_3 = 0$ although $\vec T \neq 0$.
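Examples 9-4 and 9-5 can be checked numerically: the eigenvalues of the given symmetric tensor must be the roots of the characteristic equation built from the invariants. A sketch (not from the book) assuming Python with numpy:

```python
import numpy as np

b = np.array([[1.0, 4.0, 0.0],
              [4.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

# eigenvalues of the symmetric tensor (Example 9-4)
lam = np.sort(np.linalg.eigvalsh(b))            # -3, 0, 5

# invariants I1, I2, I3 (equations (9.5-11), (9.5-13), (9.5-15))
I1 = np.trace(b)                                # 2
I2 = 0.5 * (np.trace(b)**2 - np.trace(b @ b))   # -15
I3 = np.linalg.det(b)                           # 0

# the eigenvalues are the roots of lambda^3 - I1 lambda^2 + I2 lambda - I3 = 0
for L in lam:
    assert abs(L**3 - I1 * L**2 + I2 * L - I3) < 1e-9
print(lam, I1, I2, I3)
```

This is the content of equation (9.5-10): the same three numbers emerge whether one diagonalizes directly or solves the secular equation from the invariants.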


If we transform to a coordinate system such that the base vectors $\vec e_k$ are in the directions of the three eigenvectors of $b_{ij}$, the tensor $b_{ij}$ will be reduced to the components $b_{11}$, $b_{22}$, and $b_{33}$. The directions corresponding to the base vectors $\vec e_k$ are then known as the principal axes for $b_{ij}$. Equation (9.5-9) becomes:

$\begin{vmatrix} b_{11}-\lambda & 0 & 0 \\ 0 & b_{22}-\lambda & 0 \\ 0 & 0 & b_{33}-\lambda \end{vmatrix} = 0$   (9.5-20)

or

$\left(b_{11}-\lambda\right)\left(b_{22}-\lambda\right)\left(b_{33}-\lambda\right) = 0$   (9.5-21)

Therefore, the three eigenvalues are given by: $\lambda^{I} = b_{11}$, $\lambda^{II} = b_{22}$, and $\lambda^{III} = b_{33}$. When referred to principal axes, the number of independent tensor components $b_{ij}$ is reduced to three, which are:

$\{b_{ij}\} = \begin{bmatrix} \lambda^{I} & 0 & 0 \\ 0 & \lambda^{II} & 0 \\ 0 & 0 & \lambda^{III} \end{bmatrix}$   (9.5-22)

We can then express the invariants $I_1$, $I_2$, and $I_3$ in terms of the eigenvalues $\lambda^{I}$, $\lambda^{II}$, and $\lambda^{III}$:

$I_1 = \lambda^{I} + \lambda^{II} + \lambda^{III}$   (9.5-23)

$I_2 = \lambda^{I}\lambda^{II} + \lambda^{II}\lambda^{III} + \lambda^{I}\lambda^{III}$   (9.5-24)

$I_3 = \lambda^{I}\lambda^{II}\lambda^{III}$   (9.5-25)

For coordinate systems that are not orthonormal, it is still possible to modify equation (9.5-6) so that we can obtain an equation of the form of equation (9.5-8). This is done by first multiplying equation (9.5-4) by $g^{ki}$:

$\left(g^{ki}\,b_{ij} - \lambda\,g^{ki}\,g_{ij}\right)a^j = 0$   (9.5-26)

We then have:

$\left(b^{k}_{\;j} - \lambda\,\delta^{k}_{\;j}\right)a^j = 0$   (9.5-27)

and so we must have:

$\det\left\{b^{k}_{\;j} - \lambda\,\delta^{k}_{\;j}\right\} = 0$   (9.5-28)

Example 9-7

Show that eigenvalues of rank two tensors are independent of the choice of a coordinate system.

Solution:

For a $\left(x^1, x^2, x^3\right)$ coordinate system, we have from equation (9.5-27):

$b^{k}_{\;j}\,a^j = \lambda\,\delta^{k}_{\;j}\,a^j$

Transforming to a $\left(x'^1, x'^2, x'^3\right)$ coordinate system:

$\dfrac{\partial x^k}{\partial x'^p}\,\dfrac{\partial x'^q}{\partial x^j}\,b'^{\,p}_{\;q}\,\dfrac{\partial x^j}{\partial x'^r}\,a'^r = \lambda\,\dfrac{\partial x^k}{\partial x'^p}\,\dfrac{\partial x'^q}{\partial x^j}\,\delta'^{\,p}_{\;q}\,\dfrac{\partial x^j}{\partial x'^r}\,a'^r$

or

$\dfrac{\partial x^k}{\partial x'^p}\,\dfrac{\partial x'^q}{\partial x'^r}\left(b'^{\,p}_{\;q}\,a'^r - \lambda\,\delta'^{\,p}_{\;q}\,a'^r\right) = 0$

From equation (5.4-6), we have:

$\dfrac{\partial x^k}{\partial x'^p}\,\delta^{q}_{\;r}\left(b'^{\,p}_{\;q}\,a'^r - \lambda\,\delta'^{\,p}_{\;q}\,a'^r\right) = 0$

or

$\dfrac{\partial x^k}{\partial x'^p}\left(b'^{\,p}_{\;r}\,a'^r - \lambda\,\delta'^{\,p}_{\;r}\,a'^r\right) = 0$

and so we must have:

$b'^{\,p}_{\;r}\,a'^r = \lambda\,\delta'^{\,p}_{\;r}\,a'^r$

Since this equation has the same form as the corresponding equation in the $\left(x^1, x^2, x^3\right)$ coordinate system, eigenvalues are independent of the choice of coordinate system.

9.6  DEVIATORS AND SPHERICAL TENSORS

If a tensor has only main diagonal terms, and these terms are all equal, the tensor is known as a spherical tensor. If the trace (invariant $I_1$) of a tensor of rank two is zero, the tensor is known as a deviator. It is possible to write any rank two tensor $b_{ij}$ as the sum of a spherical tensor $S_{ij}$ and a deviator $D_{ij}$. To show that this is true we will write:

$b_{ij} = S_{ij} + D_{ij}$   (9.6-1)

where

$S_{ij} = \dfrac{1}{3}\,b_{kk}\,\delta_{ij} = \dfrac{1}{3}\,I_1\,\delta_{ij}$   (9.6-2)

$D_{ij} = b_{ij} - S_{ij} = b_{ij} - \dfrac{1}{3}\,I_1\,\delta_{ij}$   (9.6-3)

We see that equations (9.6-2) and (9.6-3) satisfy equation (9.6-1). From equation (9.6-2), we also see that $S_{ij}$ has only main diagonal terms and that these terms are all equal. Therefore $S_{ij}$ is a spherical tensor. Moreover, $S_{ij}$ is equal to the product of a scalar and an isotropic tensor. Therefore, $S_{ij}$ is also an isotropic tensor.

$\{S_{ij}\} = \begin{bmatrix} \dfrac{I_1}{3} & 0 & 0 \\ 0 & \dfrac{I_1}{3} & 0 \\ 0 & 0 & \dfrac{I_1}{3} \end{bmatrix}$   (9.6-4)

We must now determine if $D_{ij}$ as given by equation (9.6-3) is a deviator. We have:

$\{D_{ij}\} = \begin{bmatrix} b_{11}-\dfrac{I_1}{3} & b_{12} & b_{13} \\ b_{21} & b_{22}-\dfrac{I_1}{3} & b_{23} \\ b_{31} & b_{32} & b_{33}-\dfrac{I_1}{3} \end{bmatrix}$   (9.6-5)

The trace of $D_{ij}$ is:

$D_{ii} = b_{ii} - \dfrac{1}{3}\,I_1\,3 = b_{ii} - \dfrac{1}{3}\,b_{ii}\,3 = 0$   (9.6-6)

Therefore $D_{ij}$ is a deviator tensor.

In summary, we see from equation (9.6-1) that any tensor of rank two can be represented as the sum of a spherical tensor and a deviator. We also see that any tensor of rank two can be represented as the sum of an isotropic and a non-isotropic tensor.

The tensor $S_{ij}$ can be represented by a sphere whose volume changes with the magnitude of $I_1$. It is for this reason that the tensor $S_{ij}$ is known as a spherical tensor. Since the mathematical representation of a sphere is invariant to coordinate system transformation, it is not surprising that $S_{ij}$ is an isotropic tensor. The spherical tensor can be used to characterize a change in volume without having a change in shape.

The deviator can be used to characterize a change of shape in a tensor ellipsoid without having a change in volume. The deviator is non-spherical. The mathematical representation of a deviator is not invariant to coordinate system transformation. The deviator is, therefore, not an isotropic tensor.

Example 9-8

If $a_{ij}$ is a deviator tensor of rank two, determine the three invariants $I_1$, $I_2$, and $I_3$.

Solution:

$I_1 = 0$

$I_2 = -\dfrac{1}{2}\,a_{ik}\,a_{ki}$

$I_3 = \dfrac{1}{3}\,a_{ij}\,a_{jk}\,a_{ki}$

9.7! !

RIEMANN TENSORS We will now consider an arbitrary point vector function Vj

having continuous first and second order derivatives. As we have seen in Section 7.1.2, the notation for the first order covariant derivative of Vj is Vj, k . We have also seen in Section 7.1.2 that the covariant derivative of a tensor is another tensor. Therefore, we can compute second order covariant derivatives 294

of a point vector. We will use the following notation for the

After first changing the dummy index i to r in the third term

second order covariant derivative:

on the right side of both equations (9.7-4) and (9.7-5), we can

!

(V )

j, k , l

≡ Vj, k l !

(9.7-1)

To see if second order covariant differentiation is orderdependent, we can calculate and compare Vj, k l and Vj,

l k . From

equation (7.2-5), we have: !

!

Vj, k = Vj, k l =

∂Vj ∂x

k

∂x l

(9.7-2)

− Γjrl

Vr, k −

Γkrl

Vj, r !

∂ Vj

∂Vi ∂Vr = l k− Vi − Γjik l − Γjrl k ∂x ∂x ∂x ∂x ∂x + Γjrl Γrik Vi − Γkrl

! !

lk

Vj, l k =

∂Vj ∂x

k

+ Γkrl Γjir Vi ! (9.7-4)

we can interchange k and l in equation (9.7-4):

∂ 2Vj ∂x ∂x k

Vi +

Γrik Vi

+

∂Γjil ∂x

k

Vi − Γjrk Γril Vi ! (9.7-6)

we have:

Vj, k l − Vj, l k

(9.7-3)

∂Γjik l

! To obtain Vj,

∂x

l

Γjrl

⎡ ∂Γjil ∂Γjik ⎤ r i r i =⎢ k − + Γ Γ − Γ Γ ⎥ Vi ! (9.7-7) j l r k j k r l l ∂x ∂x ⎢⎣ ⎦⎥

or

2

Vj, k l

∂Γjik

are equal by calculating their difference:

where we have used the symmetry relations for the Christoffel

!

or !

! Vj, k l − Vj, l k = −

lk

symbols given in equation (7.3-1). Rewriting equation (9.7-6),

− Γjik Vi !

∂Vj, k

check if Vj, k l and Vj,

l



∂Γjil ∂x

k

Vi − Γjil

∂Vi r ∂Vr k − Γj k ∂x ∂x l

+ Γjrk Γril Vi − Γl rk

∂Vj ∂x

l

+ Γl rk Γjir Vi ! (9.7-5)

Vj, k l − Vj, l k = R•i j k l Vi !

!

(9.7-8)

where !

R•ji k l

=

∂Γjil ∂x k



∂Γjik ∂x l

+ Γjrl Γrik − Γjrk Γril !

(9.7-9)

This equation can be written in determinant form as:

!

!

R•i j k l

∂ k = ∂x Γjik

∂ ∂x l Γjil

+

Γrik

Γril

Γjrk

Γjrl

!

(9.7-10)

Since the vector Vj is arbitrary, equation (9.7-8) is valid for

all vectors. From the quotient law and equation (9.7-8), we see 295

that R•ji k l is a mixed tensor of rank four. It is termed the Riemann (or Riemann-Christoffel) tensor of the second kind. It is also known as the curvature tensor of the coordinate space. The Riemann tensor consists only of the metric tensor and derivatives of the metric tensor. Therefore the Riemann tensor relates to properties of the coordinate space. !

From equation (9.7-8), we see that second order covariant

differentiation does not commute since R•ji k l ≠ 0 in general. We will show later, however, that second order covariant differentiation does commute for coordinate systems in Euclidean

coordinate

space.

For

contravariant

vector

components, we have: V i, k l − V i, l k = − R•i j k l V j !

!

(9.7-11)

Using the metric tensor to lower an index, we can obtain

the Riemann (or Riemann-Christoffel) tensor of the first kind

Ri j k l : !

Ri j k l =

gis R•sj k l !

(9.7-12)

Note that Ri j k l is the associated covariant tensor of the Riemann

∂x

k

∂Γjsk

− gi s

∂x

l

+ gi s Γjrl Γrsk − gi s Γjrk Γrsl !

(9.7-13)

or !

Ri j k l =

(

∂ gi s Γjsl ∂x

) − ∂g

is k

∂x

k

Γjsl



(

∂ gi s Γjsk ∂x

l

) + ∂g

is l

∂x

Γjsk

+ Γjrl Γr k i − Γjrk Γr l i ! (9.7-14)

!

where we have used the metric tensor to lower the index s of the last two terms on the right side of equation (9.7-13). Consolidating terms, we have: ∂Γj l i ∂x

k



∂Γj k i ∂x

l

∂gi r ∂gi r ⎤ ⎤ r ⎡ r ⎡ + Γj k ⎢ l − Γr l i ⎥ − Γj l ⎢ k − Γr k i ⎥ ! (9.7-15) ⎣ ∂x ⎦ ⎣ ∂x ⎦

where we have lowered the index of the first and third terms and have changed dummy indices of the second and fourth terms on the right side of equation (9.7-14). Using equation (7.3-4), we then have: !

Ri j k l =

∂Γj l i ∂x

k



∂Γj k i ∂x

l

+ Γj k Γ i l r − Γj l Γ i k r ! r

r

(9.7-16)

This equation can be written in determinant form as:

tensor of the second kind (it is also known as the covariant curvature tensor). We have then:

R i j k l = gi s

! Ri j k l =

as is shown in Example 9-9. !

!

∂Γjsl

!

∂ k R i j k l = ∂x Γj k i

∂ ∂x l Γj l i

+

Γjrk

Γjrl

Γi k r

Γi l r

!

(9.7-17)

296

From equations (9.7-16) and (7.2-17), we can write the Riemann

Example 9-9

tensor of the first kind in the form: !

1 Ri j k l = 2

Show that:

⎡ ∂ 2 gi l ∂ 2 gj k ∂ 2 gi k ∂ 2 gj l ⎤ + − − ⎢ j k ⎥ ∂x i ∂x l ∂x j ∂x l ∂xi ∂xk ⎥⎦ ⎢⎣ ∂x ∂x

+ Γj k Γ i l r − Γj l Γ i k r ! r

!

r

V ,i k l − V ,i l k = − R•i j k l V j

!

Solution:

(9.7-18)

!

From equation (9.7-18), the Riemann tensor of the first kind can be seen to have the symmetries: !

R i j k l = − Rj i k l = − Ri j l k = Rk l i j = Rj i l k = Rl k j i !

From equation (9.7-18) it can also be shown that: !

R i j k l + Ri k l j + Ri l j k = 0 !

(

V s, k l − V s, l k = g s j gi r R•i j k l V r = g s j Rr j k l V r = − g s j Rj r k l V r

!

Multiplying equation (9.7-8) by $g^{sj}$, we have:

$$g^{sj}\left(V_{j,kl} - V_{j,lk}\right) = g^{sj}\,R_{j}{}^{i}{}_{kl}\,V_{i}$$

or

$$V^{s}{}_{,kl} - V^{s}{}_{,lk} = -R_{r}{}^{s}{}_{kl}\,V^{r}$$

Changing indices:

$$V^{i}{}_{,kl} - V^{i}{}_{,lk} = -R_{j}{}^{i}{}_{kl}\,V^{j} \tag{9.7-20}$$

Equation (9.7-20) is known as Bianchi's first identity. Using equations (9.7-20) and (9.7-19), other relations for the Riemann tensor can be derived.

From equations (9.7-12) and (9.7-19), the Riemann tensor of the second kind can be seen to be antisymmetric with respect to the last two covariant indices:

$$R_{j}{}^{i}{}_{kl} = -R_{j}{}^{i}{}_{lk} \tag{9.7-21}$$

Therefore if these last two indices are the same, the Riemann tensor will equal zero.

Example 9-10

Show that:

$$R_{ijkl} = -R_{jikl}$$

Solution:

From equation (9.7-18), interchanging the indices $i$ and $j$, we have:

$$R_{jikl} = \frac{1}{2}\left[\frac{\partial^{2}g_{il}}{\partial x^{j}\partial x^{k}} + \frac{\partial^{2}g_{jk}}{\partial x^{i}\partial x^{l}} - \frac{\partial^{2}g_{ik}}{\partial x^{j}\partial x^{l}} - \frac{\partial^{2}g_{jl}}{\partial x^{i}\partial x^{k}}\right] + g^{pr}\,\Gamma_{ilp}\,\Gamma_{jkr} - g^{pr}\,\Gamma_{ikp}\,\Gamma_{jlr}$$

or

$$R_{jikl} = -\frac{1}{2}\left[\frac{\partial^{2}g_{jl}}{\partial x^{i}\partial x^{k}} + \frac{\partial^{2}g_{ik}}{\partial x^{j}\partial x^{l}} - \frac{\partial^{2}g_{jk}}{\partial x^{i}\partial x^{l}} - \frac{\partial^{2}g_{il}}{\partial x^{j}\partial x^{k}}\right] + g^{pr}\,\Gamma_{ilp}\,\Gamma_{jkr} - g^{pr}\,\Gamma_{ikp}\,\Gamma_{jlr}$$

Changing the factor order in the last two terms and exchanging the dummy indices $p$ and $r$:

$$g^{pr}\,\Gamma_{ilp}\,\Gamma_{jkr} - g^{pr}\,\Gamma_{ikp}\,\Gamma_{jlr} = g^{pr}\,\Gamma_{jkp}\,\Gamma_{ilr} - g^{pr}\,\Gamma_{jlp}\,\Gamma_{ikr} = -\left(g^{pr}\,\Gamma_{jlp}\,\Gamma_{ikr} - g^{pr}\,\Gamma_{jkp}\,\Gamma_{ilr}\right)$$

and so, comparing with equation (9.7-18):

$$R_{ijkl} = -R_{jikl}$$

9.8  RIEMANN TENSORS IN EUCLIDEAN COORDINATE SPACE

In rectangular coordinates the Christoffel symbols are all zero, as given by equation (7.2-6), and so the covariant derivative becomes simply the partial derivative. We then have:

$$\frac{\partial^{2}V_{j}}{\partial x^{k}\partial x^{l}} = \frac{\partial^{2}V_{j}}{\partial x^{l}\partial x^{k}} \tag{9.8-1}$$

or

$$V_{j,kl} = V_{j,lk} \tag{9.8-2}$$

Equation (9.8-2) will be valid in any coordinate system existing in a coordinate space that allows the rectangular coordinate system. By definition, equation (9.8-2) is then valid in Euclidean coordinate space, and so, for Euclidean coordinate space, we have from equation (9.7-8):

$$R_{j}{}^{i}{}_{kl} = 0 \tag{9.8-3}$$

and from equation (9.7-12):

$$R_{ijkl} = 0 \tag{9.8-4}$$

Therefore in Euclidean coordinate space (flat space), the curvature tensor is zero. Using this fact, the curvature tensor can be used to classify the type of coordinate space.
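The flatness statement (9.8-3) lends itself to a quick symbolic check. The sketch below (not from the book) computes the Christoffel symbols and the mixed Riemann tensor for plane polar coordinates, whose metric $g_{11} = 1$, $g_{22} = r^{2}$ describes the Euclidean plane; every component should vanish. The sign convention chosen for the Riemann tensor here is an assumption, but flatness is convention-independent.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])    # Euclidean plane in polar coordinates
ginv = g.inv()
n = 2

# Christoffel symbols of the second kind, Gamma[i][j][k] = Gamma^i_{jk}
Gamma = [[[sum(ginv[i, m] * (sp.diff(g[m, j], x[k]) + sp.diff(g[m, k], x[j])
                             - sp.diff(g[j, k], x[m])) for m in range(n)) / 2
           for k in range(n)] for j in range(n)] for i in range(n)]

def riemann(i, j, k, l):
    """Mixed Riemann tensor R^i_{jkl} (one common sign convention)."""
    expr = sp.diff(Gamma[i][j][l], x[k]) - sp.diff(Gamma[i][j][k], x[l])
    expr += sum(Gamma[i][m][k] * Gamma[m][j][l]
                - Gamma[i][m][l] * Gamma[m][j][k] for m in range(n))
    return sp.simplify(expr)

flat = all(riemann(i, j, k, l) == 0 for i in range(n) for j in range(n)
           for k in range(n) for l in range(n))
print(flat)   # True: every component vanishes, as equation (9.8-3) requires
```

The same machinery applied to a genuinely curved metric (for example, a sphere) produces nonzero components, which is what makes the curvature tensor usable as a classifier of coordinate spaces.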

Equation (9.8-4) represents conditions on the components of the metric tensor for coordinate systems that exist in Euclidean 3-dimensional coordinate space. Although 81 equations result from equation (9.8-4), because of the symmetries specified by equation (9.7-19) only six of these 81 equations are distinct. In particular, the antisymmetry with respect to the indices $i$ and $j$, and with respect to the indices $k$ and $l$, means that all components $R_{11kl}$, $R_{22kl}$, $R_{33kl}$, $R_{ij11}$, $R_{ij22}$, and $R_{ij33}$ will be zero. The six independent components in 3-dimensional coordinate space are:

$$R_{1212}\qquad R_{1223}\qquad R_{1231}\qquad R_{2323}\qquad R_{2331}\qquad R_{3131} \tag{9.8-5}$$

This suggests that the content of the Riemann tensor can be contained in a symmetric rank two tensor, since such tensors have six independent components (see Section 5.6). The Ricci tensor of the first kind is a symmetric rank two tensor derived from the Riemann tensor (see Section 9.10).

9.9  HIGHER ORDER COVARIANT DERIVATIVES FOR RANK TWO TENSORS

For a tensor $A$ of rank two, equation (9.7-8) becomes:

$$A_{ij,kl} - A_{ij,lk} = R_{i}{}^{m}{}_{kl}\,A_{mj} + R_{j}{}^{m}{}_{kl}\,A_{im} \tag{9.9-1}$$

as shown in Example 9-11. We also have:

$$A^{ij}{}_{,kl} - A^{ij}{}_{,lk} = -R_{m}{}^{i}{}_{kl}\,A^{mj} - R_{m}{}^{j}{}_{kl}\,A^{im} \tag{9.9-2}$$

$$A_{i}{}^{j}{}_{,kl} - A_{i}{}^{j}{}_{,lk} = R_{i}{}^{m}{}_{kl}\,A_{m}{}^{j} - R_{m}{}^{j}{}_{kl}\,A_{i}{}^{m} \tag{9.9-3}$$

Similar relations obtain for tensors of higher rank than two. Relations such as those given in equations (9.9-1), (9.9-2), and (9.9-3) are called Ricci identities. When the coordinate space is Euclidean, these Ricci identities become:

$$A_{ij,kl} = A_{ij,lk} \tag{9.9-4}$$

$$A^{ij}{}_{,kl} = A^{ij}{}_{,lk} \tag{9.9-5}$$

$$A_{i}{}^{j}{}_{,kl} = A_{i}{}^{j}{}_{,lk} \tag{9.9-6}$$

Example 9-11

Show that:

$$A_{ij,kl} - A_{ij,lk} = R_{i}{}^{m}{}_{kl}\,A_{mj} + R_{j}{}^{m}{}_{kl}\,A_{im}$$

Solution:

From equation (9.7-8) we have:

$$V_{j,kl} - V_{j,lk} = R_{j}{}^{i}{}_{kl}\,V_{i}$$

Setting

$$V_{i} = V^{j}A_{ij}$$

where $V^{j}$ is an arbitrary vector, we can write:

$$\left(V^{j}A_{ij}\right)_{,kl} - \left(V^{j}A_{ij}\right)_{,lk} = R_{i}{}^{m}{}_{kl}\,V^{j}A_{mj}$$

Now we have:

$$\left(V^{j}A_{ij}\right)_{,k} = V^{j}{}_{,k}A_{ij} + V^{j}A_{ij,k}$$

$$\left(V^{j}A_{ij}\right)_{,kl} = V^{j}{}_{,kl}A_{ij} + V^{j}{}_{,k}A_{ij,l} + V^{j}{}_{,l}A_{ij,k} + V^{j}A_{ij,kl}$$

$$\left(V^{j}A_{ij}\right)_{,lk} = V^{j}{}_{,lk}A_{ij} + V^{j}{}_{,l}A_{ij,k} + V^{j}{}_{,k}A_{ij,l} + V^{j}A_{ij,lk}$$

Therefore

$$\left(V^{j}A_{ij}\right)_{,kl} - \left(V^{j}A_{ij}\right)_{,lk} = \left(V^{j}{}_{,kl} - V^{j}{}_{,lk}\right)A_{ij} + \left(A_{ij,kl} - A_{ij,lk}\right)V^{j}$$

Changing the dummy index from $j$ to $m$ in the first term on the right side:

$$R_{i}{}^{m}{}_{kl}\,V^{j}A_{mj} = \left(V^{m}{}_{,kl} - V^{m}{}_{,lk}\right)A_{im} + \left(A_{ij,kl} - A_{ij,lk}\right)V^{j}$$

Using equation (9.7-11):

$$R_{i}{}^{m}{}_{kl}\,V^{j}A_{mj} = -R_{j}{}^{m}{}_{kl}\,A_{im}\,V^{j} + \left(A_{ij,kl} - A_{ij,lk}\right)V^{j}$$

or

$$\left[A_{ij,kl} - A_{ij,lk} - R_{i}{}^{m}{}_{kl}\,A_{mj} - R_{j}{}^{m}{}_{kl}\,A_{im}\right]V^{j} = 0$$

Finally, since $V^{j}$ is arbitrary we must have:

$$A_{ij,kl} - A_{ij,lk} = R_{i}{}^{m}{}_{kl}\,A_{mj} + R_{j}{}^{m}{}_{kl}\,A_{im}$$

9.10  THE RICCI TENSOR

By contracting the Riemann tensor, another tensor known as the Ricci tensor can be formed. Since the Riemann tensor $R_{ijkl}$ is antisymmetric in the indices $i$ and $j$, contraction over only these indices will result in a zero tensor. Similarly, $R_{ijkl}$ is antisymmetric in the indices $k$ and $l$, and so contraction in these indices will result in a zero tensor. Therefore we will contract the indices $j$ and $k$. First we form:

$$R_{i}{}^{s}{}_{kl} = g^{sj}R_{ijkl} \tag{9.10-1}$$

Contracting $R_{i}{}^{s}{}_{kl}$, we have:

$$R_{il} = R_{i}{}^{k}{}_{kl} \tag{9.10-2}$$

We can also contract the indices $i$ and $l$ of $R_{ijkl}$:

$$R^{s}{}_{jkl} = g^{si}R_{ijkl} \tag{9.10-3}$$

and

$$R_{jk} = R^{l}{}_{jkl} \tag{9.10-4}$$

The rank two tensors $R_{il}$ and $R_{jk}$ are known as Ricci tensors of the first kind. From equations (9.10-4) and (9.7-9) we can write:

$$R_{jk} = R^{l}{}_{jkl} = \frac{\partial \Gamma^{l}_{jl}}{\partial x^{k}} - \frac{\partial \Gamma^{l}_{jk}}{\partial x^{l}} + \Gamma^{r}_{jl}\,\Gamma^{l}_{rk} - \Gamma^{r}_{jk}\,\Gamma^{l}_{rl} \tag{9.10-5}$$

or, using equation (7.3-15):

$$R_{jk} = \frac{\partial^{2}\left(\ln\sqrt{G}\right)}{\partial x^{j}\partial x^{k}} - \frac{\partial \Gamma^{l}_{jk}}{\partial x^{l}} + \Gamma^{r}_{jl}\,\Gamma^{l}_{rk} - \Gamma^{r}_{jk}\,\frac{\partial\left(\ln\sqrt{G}\right)}{\partial x^{r}} \tag{9.10-6}$$

and so:

$$R_{jk} = \frac{\partial^{2}\left(\ln\sqrt{G}\right)}{\partial x^{j}\partial x^{k}} - \left[\frac{\partial \Gamma^{l}_{jk}}{\partial x^{l}} + \frac{1}{\sqrt{G}}\,\frac{\partial\sqrt{G}}{\partial x^{l}}\,\Gamma^{l}_{jk}\right] + \Gamma^{r}_{jl}\,\Gamma^{l}_{rk} \tag{9.10-7}$$

or

$$R_{jk} = \frac{\partial^{2}\left(\ln\sqrt{G}\right)}{\partial x^{j}\partial x^{k}} - \frac{1}{\sqrt{G}}\,\frac{\partial\left(\sqrt{G}\,\Gamma^{l}_{jk}\right)}{\partial x^{l}} + \Gamma^{r}_{jl}\,\Gamma^{l}_{rk} \tag{9.10-8}$$

From equation (9.10-8) we see that the Ricci tensor of the first kind is symmetric:

$$R_{jk} = R_{kj} \tag{9.10-9}$$

By raising an index of the Ricci tensor of the first kind, we obtain the Ricci tensor of the second kind:

$$R^{m}{}_{j} = g^{mk}R_{jk} \tag{9.10-10}$$

If we now contract the Ricci tensor of the second kind, we obtain the Ricci curvature, Einstein curvature, or Ricci scalar $\xi$:

$$\xi = R^{m}{}_{m} = g^{mk}R_{mk} \tag{9.10-11}$$

9.11  THE EINSTEIN TENSOR

In a rectangular coordinate system, the Christoffel symbols are zero, but their derivatives are not necessarily zero. Therefore we can then use equation (9.7-9) to write:

$$R_{j}{}^{i}{}_{kl,m} + R_{j}{}^{i}{}_{lm,k} + R_{j}{}^{i}{}_{mk,l} = \frac{\partial^{2}\Gamma^{i}_{jl}}{\partial x^{m}\partial x^{k}} - \frac{\partial^{2}\Gamma^{i}_{jk}}{\partial x^{m}\partial x^{l}} + \frac{\partial^{2}\Gamma^{i}_{jm}}{\partial x^{k}\partial x^{l}} - \frac{\partial^{2}\Gamma^{i}_{jl}}{\partial x^{k}\partial x^{m}} + \frac{\partial^{2}\Gamma^{i}_{jk}}{\partial x^{l}\partial x^{m}} - \frac{\partial^{2}\Gamma^{i}_{jm}}{\partial x^{l}\partial x^{k}} \tag{9.11-1}$$

The terms on the right side cancel in pairs, and so:

$$R_{j}{}^{i}{}_{kl,m} + R_{j}{}^{i}{}_{lm,k} + R_{j}{}^{i}{}_{mk,l} = 0 \tag{9.11-2}$$

Equation (9.11-2) is valid in Euclidean coordinate space. This equation is known as Bianchi's second identity.

Since the covariant derivative of the metric tensor $g_{is}$ is zero, we can use $g_{is}$ to rewrite equation (9.11-2) as:

$$R_{sjkl,m} + R_{sjlm,k} + R_{sjmk,l} = 0 \tag{9.11-3}$$

Changing the index $s$ to $i$ and using the antisymmetry of the Riemann tensor of the first kind given in equation (9.7-19), we have:

$$R_{jilk,m} - R_{jilm,k} - R_{ijkm,l} = 0 \tag{9.11-4}$$

Multiplying equation (9.11-4) by $g^{il}g^{jk}$ and contracting, we can write equation (9.11-4) in terms of the Ricci tensor of the first kind:

$$g^{jk}R_{jk,m} - g^{jk}R_{jm,k} - g^{il}R_{im,l} = 0 \tag{9.11-5}$$

or, raising an index of the second and third terms and changing dummy indices:

$$g^{jk}R_{jk,m} - 2R^{k}{}_{m,k} = 0 \tag{9.11-6}$$

Using equation (9.10-11), we then have:

$$\xi_{,m} - 2R^{k}{}_{m,k} = 0 \tag{9.11-7}$$

or

$$\delta^{k}_{m}\,\xi_{,k} - 2R^{k}{}_{m,k} = 0 \tag{9.11-8}$$

and so, in Euclidean coordinate space:

$$\left[R^{k}{}_{m} - \frac{1}{2}\,\delta^{k}_{m}\,\xi\right]_{,k} = G^{k}{}_{m,k} = 0 \tag{9.11-9}$$

where

$$G^{k}{}_{m} = R^{k}{}_{m} - \frac{1}{2}\,\delta^{k}_{m}\,\xi \tag{9.11-10}$$

is known as the Einstein tensor. The divergence of the Einstein tensor is zero in Euclidean coordinate space. From equation (9.11-7) we also have:

$$R^{k}{}_{m,k} = \frac{1}{2}\,\xi_{,m} = \frac{1}{2}\,\frac{\partial\xi}{\partial x^{m}} \tag{9.11-11}$$

We see from equation (9.11-9) that the Einstein tensor can also be written in terms of covariant or contravariant components:

$$G_{km} = R_{km} - \frac{1}{2}\,g_{km}\,\xi \tag{9.11-12}$$

$$G^{km} = R^{km} - \frac{1}{2}\,g^{km}\,\xi \tag{9.11-13}$$
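As a numerical companion (not the book's code) to the contractions (9.10-4) and (9.10-11) and the definition (9.11-10), the sketch below builds the Ricci tensor, the Ricci scalar, and the mixed Einstein tensor for the unit 2-sphere. In two dimensions the Einstein tensor vanishes identically, so every component should come out zero; the overall sign convention assumed for the Riemann tensor cancels out of $G^{k}{}_{m}$.

```python
import sympy as sp

th, ph = sp.symbols('theta phi', positive=True)
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])    # unit 2-sphere metric
ginv = g.inv()
n = 2

# Christoffel symbols of the second kind, Gamma[i][j][k] = Gamma^i_{jk}
Gamma = [[[sum(ginv[i, m] * (sp.diff(g[m, j], x[k]) + sp.diff(g[m, k], x[j])
                             - sp.diff(g[j, k], x[m])) for m in range(n)) / 2
           for k in range(n)] for j in range(n)] for i in range(n)]

def R_up(i, j, k, l):
    """Mixed Riemann tensor R^i_{jkl} (sign convention assumed; it cancels below)."""
    e = sp.diff(Gamma[i][j][l], x[k]) - sp.diff(Gamma[i][j][k], x[l])
    e += sum(Gamma[i][m][k] * Gamma[m][j][l]
             - Gamma[i][m][l] * Gamma[m][j][k] for m in range(n))
    return sp.simplify(e)

# Ricci tensor of the first kind by contraction, then the Ricci scalar xi
Ricci = sp.Matrix(n, n, lambda j, k: sp.simplify(sum(R_up(l, j, k, l) for l in range(n))))
xi = sp.simplify(sum(ginv[m, k] * Ricci[m, k] for m in range(n) for k in range(n)))

# Einstein tensor of equation (9.11-10): G^k_m = R^k_m - (1/2) delta^k_m xi
G_mixed = sp.simplify(ginv * Ricci - sp.Rational(1, 2) * xi * sp.eye(n))
print(G_mixed)   # zero matrix: in two dimensions G^k_m vanishes identically
```

The computed Ricci tensor also comes out symmetric and proportional to the metric, consistent with equation (9.10-9).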

9.12  ISOTROPIC TENSORS

In Section 5.4 we noted that all tensors having components that remain the same in all coordinate systems are termed isotropic tensors. If $A$ is an isotropic tensor of any rank, the components of $A$ must then satisfy the coordinate system transformation relation:

$$a'^{\,ik\cdots}{}_{jm\cdots} = a^{ik\cdots}{}_{jm\cdots} \tag{9.12-1}$$

We can see that the sum or difference of two isotropic tensors is an isotropic tensor. The product of two isotropic tensors is also an isotropic tensor. Since isotropic tensors are unchanged in any coordinate system transformation, including coordinate rotations, we can expect rank two isotropic tensors to be spherical tensors (see Section 9.6).

By definition all scalars are isotropic tensors of rank zero. Therefore, the product of a scalar and an isotropic tensor of any rank is an isotropic tensor of that rank. The zero vector is the only isotropic tensor of rank one. All zero tensors of any rank are isotropic tensors. All isotropic tensors of rank two have the form:

$$\lambda\,\delta^{i}_{j} \tag{9.12-2}$$

where $\lambda$ is a scalar (see Example 9-12). All isotropic Cartesian tensors of rank three have the form:

$$\lambda\,\tilde{\varepsilon}_{ijk} \tag{9.12-3}$$

and all isotropic Cartesian tensors of rank four have the form:

$$\lambda\,\delta_{ij}\,\delta_{kl} + \mu\,\delta_{ik}\,\delta_{jl} + \gamma\,\delta_{il}\,\delta_{jk} \tag{9.12-4}$$

where $\lambda$, $\mu$, and $\gamma$ are scalars (see Example 9-13). If we replace the scalars $\mu$ and $\gamma$ by $\mu = \alpha + \beta$ and $\gamma = \alpha - \beta$, where $\alpha$ and $\beta$ are scalars, we can rewrite equation (9.12-4) as:

$$\lambda\,\delta_{ij}\,\delta_{kl} + \alpha\left(\delta_{ik}\,\delta_{jl} + \delta_{il}\,\delta_{jk}\right) + \beta\left(\delta_{ik}\,\delta_{jl} - \delta_{il}\,\delta_{jk}\right) \tag{9.12-5}$$

Table 9-1 provides a list of isotropic Cartesian tensors from rank zero through rank four.

Table 9-1  Isotropic Cartesian tensors of ranks 0 to 4.

Example 9-12

Show that if $a_{ij}$ is an isotropic Cartesian tensor of rank two, then we must have:

$$a_{ij} = \lambda\,\delta_{ij}$$

where $\lambda$ is a scalar.

Solution:

As shown in Section 9.6, any rank two tensor can be represented as the sum of a spherical tensor and a deviator. Therefore, using equations (9.6-1) and (9.6-2), we have:

$$a_{ij} = S_{ij} + D_{ij} = \frac{1}{3}\,I_{1}\,\delta_{ij} + D_{ij}$$

where $S_{ij}$ is a spherical tensor, $D_{ij}$ is a deviator, and $I_{1}$ is an invariant (scalar). Therefore we can write:

$$a_{ij} = \lambda\,\delta_{ij} + D_{ij}$$

where $\lambda = I_{1}/3$ is a scalar. Since $a_{ij}$ is an isotropic tensor, we must have:

$$D_{ij} = 0$$

because $D_{ij} \ne 0$ is a non-isotropic tensor. Therefore we obtain finally:

$$a_{ij} = \lambda\,\delta_{ij}$$

Example 9-13

Show that if $a_{ijkl}$ is an isotropic Cartesian tensor of rank four, then we must have:

$$a_{ijkl} = \lambda\,\delta_{ij}\,\delta_{kl} + \mu\,\delta_{ik}\,\delta_{jl} + \gamma\,\delta_{il}\,\delta_{jk}$$

where $\lambda$, $\mu$, and $\gamma$ are scalars.

Solution:

An isotropic rank four tensor $a_{ijkl}$ can be represented by the sum of the following products of symmetric rank two tensors:

$$a_{ijkl} = u_{ij}\,u_{kl} + v_{ik}\,v_{jl} + w_{il}\,w_{jk}$$

These products represent all the possible combinations of symmetric rank two tensors that can result in the isotropic rank four tensor $a_{ijkl}$. Since $a_{ijkl}$ is isotropic, the rank two tensors in the products must not only be symmetric, but must also be isotropic. Therefore we can use equation (9.12-2) to write:

$$a_{ijkl} = \left(\alpha\,\delta_{ij}\right)\left(\beta\,\delta_{kl}\right) + \left(\eta\,\delta_{ik}\right)\left(\theta\,\delta_{jl}\right) + \left(\tau\,\delta_{il}\right)\left(\phi\,\delta_{jk}\right)$$

where $\alpha$, $\beta$, $\eta$, $\theta$, $\tau$, and $\phi$ are all scalars. Letting $\lambda = \alpha\beta$, $\mu = \eta\theta$, and $\gamma = \tau\phi$, we have:

$$a_{ijkl} = \lambda\,\delta_{ij}\,\delta_{kl} + \mu\,\delta_{ik}\,\delta_{jl} + \gamma\,\delta_{il}\,\delta_{jk}$$
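The defining property (9.12-1) can be spot-checked numerically. The sketch below (not from the book, with arbitrarily chosen test values for $\lambda$, $\mu$, and $\gamma$) builds the rank four form (9.12-4) and verifies that its components are unchanged by a randomly generated orthogonal transformation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = np.eye(3)
lam, mu, gam = 2.0, -1.5, 0.7                      # arbitrary test scalars
a = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * np.einsum('ik,jl->ijkl', d, d)
     + gam * np.einsum('il,jk->ijkl', d, d))

# a random orthogonal transformation from the QR decomposition
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# transform all four indices: a'_{ijkl} = q_ip q_jq q_kr q_ls a_{pqrs}
a_rot = np.einsum('ip,jq,kr,ls,pqrs->ijkl', q, q, q, q, a)
print(np.allclose(a_rot, a))   # True: the components are unchanged
```

The invariance follows from $q\,q^{T} = I$: each $\delta\delta$ product contracts back to itself under the four-fold transformation.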

Appendix A

THE GREEK ALPHABET

Alpha      α      Α
Beta       β      Β
Gamma      γ      Γ
Delta      δ      Δ
Epsilon    ε      Ε
Zeta       ζ      Ζ
Eta        η      Η
Theta      θ      Θ
Iota       ι      Ι
Kappa      κ      Κ
Lambda     λ      Λ
Mu         µ      Μ
Nu         ν      Ν
Xi         ξ      Ξ
Omicron    ο      Ο
Pi         π      Π
Rho        ρ      Ρ
Sigma      σ      Σ
Tau        τ      Τ
Upsilon    υ      ϒ
Phi        φ, ϕ   Φ
Chi        χ      Χ
Psi        ψ      Ψ
Omega      ω      Ω

Appendix B

VECTOR FIELD OPERATIONS

In this appendix we will obtain solutions for the scalar and vector Poisson's equations derived in Section 3.10. We will do this by considering vector fields in terms of source/sink points and field points. Poisson's equations are:

$$\nabla^{2}\varphi = \pm\,4\pi\sigma \tag{B-1}$$

$$\nabla^{2}\vec{A} = \pm\,4\pi\vec{\psi} \tag{B-2}$$

where $\varphi$ and $\vec{A}$ are scalar and vector potentials, respectively, and where $\sigma$ is a scalar source or sink and $\vec{\psi}$ is a vector vortex. The entities $\sigma$ and $\vec{\psi}$ are assumed to be known. If the potentials $\varphi$ and $\vec{A}$ can be determined, the vector field $\vec{B}$ can then be obtained from Helmholtz' partition theorem given in equation (3.11-1):

$$\vec{B} = \vec{\nabla}\varphi + \vec{\nabla}\times\vec{A} \tag{B-3}$$

We will also demonstrate in this appendix the uniqueness, within a volume $V$, of the vector field $\vec{B}$ derived from Helmholtz' partition theorem. The vector field $\vec{B}$ is assumed to be specified over the surface $S$ enclosing $V$.

B.1  SOURCE/SINK POINTS AND FIELD POINTS

A source/sink point for a vector field is a point at which is located a source or a sink of the vector field. A field point is any point in the vector field at which a source or a sink of the field does not exist. The field that exists at a field point, which may be located some distance from the source/sink point, is a direct result of the existence of the source or sink at the source/sink point.

Vector operations at the field point will now be calculated using two different coordinate systems: one coordinate system will have its origin at the field point, and the other coordinate system will have its origin at the source/sink point. A subscript on a term will indicate which of the coordinate systems the term is with respect to. For example, gradient operations at the field point with respect to the field coordinates will be written using the operator $\vec{\nabla}_{f}$, while gradient operations at the field point with respect to the source/sink coordinates will be written using the operator $\vec{\nabla}_{s}$.

If $\vec{r}$ is the position vector of the field point with respect to source/sink coordinates, at the field point we then have:

$$\vec{\nabla}_{f}\,r = \vec{\nabla}_{s}\,r \tag{B.1-1}$$

since the gradient of $r$ is independent of coordinate system (the gradient is a point vector). For a scalar field $\varphi$, we can then use equation (B.1-1) to write:

$$\vec{\nabla}_{f}\,\varphi = \frac{\partial\varphi}{\partial r}\,\vec{\nabla}_{f}\,r = \frac{\partial\varphi}{\partial r}\,\vec{\nabla}_{s}\,r = \vec{\nabla}_{s}\,\varphi \tag{B.1-2}$$

If $\vec{D}(\varphi)$ is a vector field that is a function of the scalar field $\varphi$, using the vector relation given in equation (3.1-26), we have:

$$\vec{\nabla}\bullet\vec{D} = \frac{d\vec{D}}{d\varphi}\bullet\vec{\nabla}\varphi \tag{B.1-3}$$

Equation (B.1-1) then gives:

$$\vec{\nabla}_{f}\bullet\vec{D} = \frac{d\vec{D}}{d\varphi}\bullet\vec{\nabla}_{f}\,\varphi = \frac{d\vec{D}}{d\varphi}\bullet\vec{\nabla}_{s}\,\varphi = \vec{\nabla}_{s}\bullet\vec{D} \tag{B.1-4}$$

From equations (B.1-4) and (B.1-2), we can write:

$$\vec{\nabla}_{f}\bullet\vec{\nabla}_{f}\,\varphi = \vec{\nabla}_{s}\bullet\vec{\nabla}_{f}\,\varphi = \vec{\nabla}_{s}\bullet\vec{\nabla}_{s}\,\varphi \tag{B.1-5}$$

or

$$\nabla_{f}^{2}\,\varphi = \nabla_{s}^{2}\,\varphi \tag{B.1-6}$$

B.2  DIRAC DELTA FUNCTION

We will now examine the integral:

$$\iiint_{V}\nabla^{2}\!\left[\frac{1}{r}\right]dV \tag{B.2-1}$$

If $r \ne 0$, we have the vector relation given in equation (3.4-27):

$$\nabla^{2}\!\left[\frac{1}{r}\right] = \vec{\nabla}\bullet\vec{\nabla}\!\left[\frac{1}{r}\right] = 0 \tag{B.2-2}$$

To evaluate equation (B.2-1) when $r = 0$, we use Gauss's theorem given in equation (3.5-1) to obtain:

$$\iiint_{V}\nabla^{2}\!\left[\frac{1}{r}\right]dV = \iiint_{V}\vec{\nabla}\bullet\vec{\nabla}\!\left[\frac{1}{r}\right]dV = \iint_{S}\vec{\nabla}\!\left[\frac{1}{r}\right]\bullet d\vec{S} \tag{B.2-3}$$

where the surface $S$ is taken to be a small sphere surrounding the point $r = 0$. Integrating over the sphere and using the gradient operator for spherical coordinates:

$$\iint_{S}\vec{\nabla}\!\left[\frac{1}{r}\right]\bullet\hat{e}_{r}\,dS = -\int_{0}^{2\pi}\!\!\int_{0}^{\pi}\frac{1}{r^{2}}\,r^{2}\sin\theta\,d\theta\,d\varphi \tag{B.2-4}$$

or

$$\iint_{S}\vec{\nabla}\!\left[\frac{1}{r}\right]\bullet\hat{e}_{r}\,dS = -4\pi \tag{B.2-5}$$

If we now define the Dirac delta function $\delta$ to have the following characteristics:

$$\iiint_{V}\delta(r)\,dV = 1 \qquad (r = 0\ \text{somewhere in } V) \tag{B.2-6}$$

and

$$\iiint_{V}\varphi(r)\,\delta(r - r_{0})\,dV = \varphi(r_{0}) \tag{B.2-7}$$

we then have from equations (B.2-3), (B.2-5), and (B.2-6):

$$\nabla^{2}\!\left[\frac{1}{r}\right] = -4\pi\,\delta(r) \tag{B.2-8}$$

Using equation (2.12-40), we can also write:

$$\nabla^{2}\!\left[\frac{1}{r}\right] = \vec{\nabla}\bullet\vec{\nabla}\!\left[\frac{1}{r}\right] = \vec{\nabla}\bullet\left[-\frac{\vec{r}}{r^{3}}\right] = \vec{\nabla}\bullet\left[-\frac{\hat{r}}{r^{2}}\right] \tag{B.2-9}$$

and so from equations (B.2-9) and (B.2-8):

$$\vec{\nabla}\bullet\left[\frac{\vec{r}}{r^{3}}\right] = \vec{\nabla}\bullet\left[\frac{\hat{r}}{r^{2}}\right] = 4\pi\,\delta(r) \tag{B.2-10}$$
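Equation (B.2-5) is easy to confirm numerically. The sketch below (not from the book) integrates the flux of $\vec{\nabla}(1/r)$ over spheres of several radii with a midpoint rule; the result is always $-4\pi$, independent of the radius, which is what makes the delta-function representation (B.2-8) consistent.

```python
import math

def flux(radius, n=400):
    """Midpoint-rule flux of grad(1/r) through a sphere of the given radius."""
    # On the sphere, grad(1/r) . r_hat = -1/r^2 and dS = r^2 sin(theta) dtheta dphi,
    # so the integrand reduces to -sin(theta): the radius cancels exactly.
    dth = math.pi / n
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * dth
        total += (-1.0 / radius**2) * radius**2 * math.sin(th) * dth
    return total * 2.0 * math.pi   # the phi integration contributes 2*pi

for a in (0.1, 1.0, 37.0):
    print(a, flux(a))              # always close to -4*pi = -12.566...
```

Shrinking the sphere changes nothing, so the entire $-4\pi$ must be attributed to the point $r = 0$: exactly the content of equations (B.2-6) and (B.2-8).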

B.3  SOLUTION TO POISSON'S EQUATION

To establish solutions to equations (B-1) and (B-2), we begin by assuming that:

$$\varphi = \pm\iiint_{V}\frac{\sigma}{r}\,dV \tag{B.3-1}$$

$$\vec{A} = \pm\iiint_{V}\frac{\vec{\psi}}{r}\,dV \tag{B.3-2}$$

are the solutions that we seek. We will also assume that the field created by $\sigma$ and $\vec{\psi}$ decreases sufficiently rapidly at large distances from the source/sink points so that the integrals in equations (B.3-1) and (B.3-2) exist. We now proceed to show that these two equations provide the solutions to equations (B-1) and (B-2). We will begin by substituting the potential given by equation (B.3-1) into equation (B-1):

$$\nabla^{2}\varphi = \pm\,\nabla_{f}^{2}\iiint_{V}\frac{\sigma}{r}\,dV = \pm\iiint_{V}\sigma\,\nabla_{f}^{2}\!\left[\frac{1}{r}\right]dV \tag{B.3-3}$$

Equation (B.3-3) is valid since $\vec{\nabla}_{f}$ and $\nabla_{f}^{2}$ operate on field coordinates, and so these operators commute with integration over source coordinates (see Section B.1). Using equations (B.1-6) and (B.2-8), we have:

$$\nabla^{2}\varphi = \pm\iiint_{V}\sigma\,\nabla_{s}^{2}\!\left[\frac{1}{r}\right]dV = \mp\iiint_{V}\sigma\,4\pi\,\delta(r)\,dV \tag{B.3-4}$$

or

$$\nabla^{2}\varphi = \mp\,4\pi\sigma \tag{B.3-5}$$

Therefore equation (B.3-1) is a solution of equation (B-1).

We will now follow a similar procedure by substituting equation (B.3-2) into equation (B-2):

$$\nabla^{2}\vec{A} = \pm\iiint_{V}\vec{\psi}\,\nabla_{f}^{2}\!\left[\frac{1}{r}\right]dV = \pm\iiint_{V}\vec{\psi}\,\nabla_{s}^{2}\!\left[\frac{1}{r}\right]dV \tag{B.3-6}$$

and so:

$$\nabla^{2}\vec{A} = \mp\iiint_{V}\vec{\psi}\,4\pi\,\delta(r)\,dV \tag{B.3-7}$$

or

$$\nabla^{2}\vec{A} = \mp\,4\pi\vec{\psi} \tag{B.3-8}$$

Therefore equation (B.3-2) is a solution of equation (B-2). The scalar potential $\varphi$ and the vector potential $\vec{A}$ obtained from equations (B.3-1) and (B.3-2) can then be used in equation (B-3) to determine the vector field.
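A numerical sketch of the solution (B.3-1), using hypothetical test values: for a uniform source density $\sigma = 1$ filling the unit ball, the potential integral evaluated at an exterior point a distance $d$ from the center should equal the total source strength $4\pi/3$ divided by $d$, exactly as for a point source. A Monte Carlo estimate of the volume integral confirms this.

```python
import math, random

random.seed(1)
d = 2.5                        # distance of the field point from the ball's center
N = 200000
acc = 0.0
for _ in range(N):
    # sample the bounding cube [-1, 1]^3; points outside the unit ball contribute 0
    xs = random.uniform(-1.0, 1.0)
    ys = random.uniform(-1.0, 1.0)
    zs = random.uniform(-1.0, 1.0)
    if xs * xs + ys * ys + zs * zs <= 1.0:
        acc += 1.0 / math.sqrt((d - xs)**2 + ys**2 + zs**2)

phi = acc / N * 8.0            # 8 is the volume of the bounding cube
print(phi, 4.0 * math.pi / (3.0 * d))   # the two agree to Monte Carlo accuracy
```

The agreement illustrates why the kernel $1/r$ in (B.3-1) is the right one: outside its support, a distributed source looks like a point source of the same total strength.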

B.4  UNIQUENESS OF THE VECTOR FIELD

We will now consider a region bounded by a surface $S$ in which a vector field $\vec{B}$ is specified by its curl and divergence as in equations (3.10-1) and (3.10-2):

$$\vec{\nabla}\bullet\vec{B} = K \tag{B.4-1}$$

$$\vec{\nabla}\times\vec{B} = \vec{D} \tag{B.4-2}$$

If the scalar and vector terms $K$ and $\vec{D}$ are known, it remains to show that the vector field $\vec{B}$ is uniquely determined. We will show that it is uniquely determined by assuming just the opposite: that another vector field $\vec{B}^{*}$ is also a solution of equations (B.4-1) and (B.4-2). We will let:

$$\vec{C} = \vec{B} - \vec{B}^{*} \tag{B.4-3}$$

We then have:

$$\vec{\nabla}\bullet\vec{C} = \vec{\nabla}\bullet\vec{B} - \vec{\nabla}\bullet\vec{B}^{*} = K - K = 0 \tag{B.4-4}$$

and

$$\vec{\nabla}\times\vec{C} = \vec{\nabla}\times\vec{B} - \vec{\nabla}\times\vec{B}^{*} = \vec{D} - \vec{D} = 0 \tag{B.4-5}$$

From equations (B.4-5) and (3.3-2), we have:

$$\vec{C} = \vec{\nabla}\varphi \tag{B.4-6}$$

where $\varphi$ is a scalar. From equation (B.4-4) we then obtain:

$$\vec{\nabla}\bullet\vec{C} = \nabla^{2}\varphi = 0 \tag{B.4-7}$$

Both $\vec{B}$ and $\vec{B}^{*}$ must have the same normal component over the surface $S$ since the vector field is specified for $S$. We then have:

$$\vec{C}\bullet\hat{n} = \vec{B}\bullet\hat{n} - \vec{B}^{*}\bullet\hat{n} = 0 \tag{B.4-8}$$

Using Green's first theorem given in equation (3.8-15) with $p = q = \varphi$, we have:

$$\iiint_{V}\left(\varphi\,\nabla^{2}\varphi + \vec{\nabla}\varphi\bullet\vec{\nabla}\varphi\right)dV = \iint_{S}\varphi\,\vec{\nabla}\varphi\bullet d\vec{S} \tag{B.4-9}$$

Using equations (B.4-6) and (B.4-7), we can rewrite equation (B.4-9) as:

$$\iiint_{V}\left(\vec{C}\bullet\vec{C}\right)dV = \iint_{S}\varphi\,\vec{C}\bullet\hat{n}\,dS \tag{B.4-10}$$

Using equation (B.4-8), we have:

$$\iiint_{V}\left(\vec{C}\bullet\vec{C}\right)dV = 0 \tag{B.4-11}$$

or

$$\iiint_{V}\left|\vec{C}\right|^{2}dV = 0 \tag{B.4-12}$$

Since the integrand is nonnegative, we therefore have:

$$\vec{C} = \vec{B} - \vec{B}^{*} = 0 \tag{B.4-13}$$

and so:

$$\vec{B} = \vec{B}^{*} \tag{B.4-14}$$

This demonstrates that the vector field $\vec{B}$ determined from equations (B.4-1) and (B.4-2) is unique. This is sometimes referred to as the uniqueness theorem.

B.5  GREEN'S FORMULA

Green's second theorem as given in equation (3.8-19) is:

$$\iiint_{V}\left(p\,\nabla^{2}q - q\,\nabla^{2}p\right)dV = \iint_{S}\left[p\,\frac{dq}{dn} - q\,\frac{dp}{dn}\right]dS \tag{B.5-1}$$

We will now use this theorem to determine the scalar field $f$ existing at a field point $P$ within a volume $V$. The volume $V$ is enclosed by a surface $S$. Letting

$$p = f \qquad q = \frac{1}{r} \tag{B.5-2}$$

we can rewrite Green's second theorem as:

$$\iiint_{V}\left(f\,\nabla^{2}\!\left[\frac{1}{r}\right] - \frac{1}{r}\,\nabla^{2}f\right)dV = \iint_{S}\left[f\,\frac{d}{dn}\!\left(\frac{1}{r}\right) - \frac{1}{r}\,\frac{df}{dn}\right]dS \tag{B.5-3}$$

or

$$\iiint_{V}f\,\nabla^{2}\!\left[\frac{1}{r}\right]dV = \iiint_{V}\frac{1}{r}\,\nabla^{2}f\,dV + \iint_{S}\left[f\,\frac{d}{dn}\!\left(\frac{1}{r}\right) - \frac{1}{r}\,\frac{df}{dn}\right]dS \tag{B.5-4}$$

Using equation (B.2-8), we then have:

$$-4\pi\iiint_{V}f\,\delta(r)\,dV = \iiint_{V}\frac{1}{r}\,\nabla^{2}f\,dV + \iint_{S}\left[f\,\frac{d}{dn}\!\left(\frac{1}{r}\right) - \frac{1}{r}\,\frac{df}{dn}\right]dS \tag{B.5-5}$$

and so $f$ is:

$$f = -\frac{1}{4\pi}\iiint_{V}\frac{1}{r}\,\nabla^{2}f\,dV - \frac{1}{4\pi}\iint_{S}\left[f\,\frac{d}{dn}\!\left(\frac{1}{r}\right) - \frac{1}{r}\,\frac{df}{dn}\right]dS \tag{B.5-6}$$

where $V$ is a volume that includes the coordinate system origin $(r = 0)$. This equation is Green's formula.
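One consequence of Green's formula (B.5-6) worth illustrating: when $\nabla^{2}f = 0$ the volume term drops out, and over a sphere the surface terms reduce to the mean-value property, i.e. $f$ at the center equals the average of $f$ over any surrounding sphere. The sketch below (not from the book) checks this numerically for the harmonic test function $f = x^{2} - z^{2} + 5$, which was chosen arbitrarily for the test.

```python
import math

def sphere_average(f, radius, n=200):
    """Midpoint-rule average of f over a sphere of the given radius about the origin."""
    dth = math.pi / n
    dph = 2.0 * math.pi / n
    total = 0.0
    area = 0.0
    for i in range(n):
        th = (i + 0.5) * dth
        for j in range(n):
            ph = (j + 0.5) * dph
            x = radius * math.sin(th) * math.cos(ph)
            y = radius * math.sin(th) * math.sin(ph)
            z = radius * math.cos(th)
            w = radius**2 * math.sin(th) * dth * dph   # surface element
            total += f(x, y, z) * w
            area += w
    return total / area

f = lambda x, y, z: x * x - z * z + 5.0   # harmonic: grad^2 f = 2 + 0 - 2 = 0
print(sphere_average(f, 2.0))             # close to f(0, 0, 0) = 5
```

The average is independent of the radius chosen, as the mean-value property requires.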

Appendix C

DIFFERENTIAL OPERATIONS IN ORTHOGONAL CURVILINEAR COORDINATE SYSTEMS

C.1  OPERATIONS IN ORTHOGONAL CURVILINEAR COORDINATES

From the vector and tensor relations developed in this book, expressions for differential operations in orthogonal curvilinear coordinate systems will be obtained in this appendix. These expressions will then be written in terms of tensor physical components and used to derive equations for differential operations in certain orthogonal curvilinear coordinate systems.

For orthogonal curvilinear coordinate systems, we can obtain the components of the metric tensor using equations (6.1-7), (6.6-3), and (6.6-5):

$$g_{ij} = 0 \qquad (i \ne j) \tag{C.1-1}$$

$$G = g_{11}\,g_{22}\,g_{33} \tag{C.1-2}$$

$$g^{11} = \frac{1}{g_{11}} \qquad g^{22} = \frac{1}{g_{22}} \qquad g^{33} = \frac{1}{g_{33}} \tag{C.1-3}$$

From equation (6.9-2), we can obtain the scale factors:

$$h_{1} = \sqrt{g_{11}} \qquad h_{2} = \sqrt{g_{22}} \qquad h_{3} = \sqrt{g_{33}} \tag{C.1-4}$$

As discussed in Section 6.6, the components of the metric tensor $g_{ij}$ can be obtained for a specific coordinate system by considering the expression for the square of the differential displacement $ds$ in that coordinate system. For the case of orthogonal curvilinear coordinates, the expression for $(ds)^{2}$ is given by equation (6.6-1):

$$(ds)^{2} = g_{11}\left(dx^{1}\right)^{2} + g_{22}\left(dx^{2}\right)^{2} + g_{33}\left(dx^{3}\right)^{2} \tag{C.1-5}$$

From equation (C.1-4), we then have:

$$(ds)^{2} = \left(h_{1}\,dx^{1}\right)^{2} + \left(h_{2}\,dx^{2}\right)^{2} + \left(h_{3}\,dx^{3}\right)^{2} = d\vec{r}\bullet d\vec{r} \tag{C.1-6}$$

and, as given in equation (4.10-4):

$$d\vec{r} = h_{1}\,dx^{1}\,\hat{e}_{1} + h_{2}\,dx^{2}\,\hat{e}_{2} + h_{3}\,dx^{3}\,\hat{e}_{3} \tag{C.1-7}$$

We can also write an equation for the differential volume element $dV$ (see Figure C-1) using equation (8.8-4):

$$dV = h_{1}h_{2}h_{3}\,dx^{1}\,dx^{2}\,dx^{3} \tag{C.1-8}$$

Figure C-1  Differential volume $dV$ in orthogonal curvilinear coordinates $x^{1}, x^{2}, x^{3}$. The coordinate system $(x'_{1}, x'_{2}, x'_{3})$ is rectangular.

The physical components for a point vector $\vec{A}$ are given by equations (6.11-1) and (6.11-2):

$$\bar{a}_{1} = h_{1}\,a^{1} = \sqrt{g_{11}}\,a^{1} = \frac{a_{1}}{\sqrt{g_{11}}} = \frac{a_{1}}{h_{1}} \tag{C.1-9}$$

$$\bar{a}_{2} = h_{2}\,a^{2} = \sqrt{g_{22}}\,a^{2} = \frac{a_{2}}{\sqrt{g_{22}}} = \frac{a_{2}}{h_{2}} \tag{C.1-10}$$

$$\bar{a}_{3} = h_{3}\,a^{3} = \sqrt{g_{33}}\,a^{3} = \frac{a_{3}}{\sqrt{g_{33}}} = \frac{a_{3}}{h_{3}} \tag{C.1-11}$$

and, for higher order tensors, by equations (6.11-7) through (6.11-9):

$$\bar{T}^{\,i_{1}i_{2}\cdots}{}_{j_{1}j_{2}\cdots} = \left[\frac{g_{i_{1}i_{1}}\,g_{i_{2}i_{2}}\cdots}{g_{j_{1}j_{1}}\,g_{j_{2}j_{2}}\cdots}\right]^{\frac{1}{2}} T^{\,i_{1}i_{2}\cdots}{}_{j_{1}j_{2}\cdots} \qquad (\text{no sum}) \tag{C.1-12}$$

$$\bar{T}_{\,i_{1}i_{2}\cdots j_{1}j_{2}\cdots} = \left[g^{i_{1}i_{1}}\,g^{i_{2}i_{2}}\cdots\,g^{j_{1}j_{1}}\,g^{j_{2}j_{2}}\cdots\right]^{\frac{1}{2}} T_{i_{1}i_{2}\cdots j_{1}j_{2}\cdots} \qquad (\text{no sum}) \tag{C.1-13}$$

$$\bar{T}^{\,i_{1}i_{2}\cdots j_{1}j_{2}\cdots} = \left[g_{i_{1}i_{1}}\,g_{i_{2}i_{2}}\cdots\,g_{j_{1}j_{1}}\,g_{j_{2}j_{2}}\cdots\right]^{\frac{1}{2}} T^{\,i_{1}i_{2}\cdots j_{1}j_{2}\cdots} \qquad (\text{no sum}) \tag{C.1-14}$$

Expressions for differential operations are generally more useful when vector functions are expressed in terms of physical components rather than in terms of contravariant or covariant components.

From equations (4.12-8) and (4.12-5), a vector $\vec{A}$ can be written:

$$\vec{A} = \bar{a}_{1}\,\hat{e}_{1} + \bar{a}_{2}\,\hat{e}_{2} + \bar{a}_{3}\,\hat{e}_{3} = \bar{a}_{i}\,\hat{e}_{i} \tag{C.1-15}$$

$$\vec{A} = h_{1}\,a^{1}\,\hat{e}_{1} + h_{2}\,a^{2}\,\hat{e}_{2} + h_{3}\,a^{3}\,\hat{e}_{3} \tag{C.1-16}$$

C.1.1  GRADIENT

The gradient of a scalar function $f$ is a point vector. The physical components of this vector can be obtained by using the covariant components given in equation (8.13-8) with equations (C.1-9) through (C.1-11). We then have:

$$\vec{\nabla} f = \frac{1}{\sqrt{g_{11}}}\,\frac{\partial f}{\partial x^{1}}\,\hat{e}_{1} + \frac{1}{\sqrt{g_{22}}}\,\frac{\partial f}{\partial x^{2}}\,\hat{e}_{2} + \frac{1}{\sqrt{g_{33}}}\,\frac{\partial f}{\partial x^{3}}\,\hat{e}_{3} \tag{C.1-17}$$

$$\vec{\nabla} f = \frac{1}{h_{1}}\,\frac{\partial f}{\partial x^{1}}\,\hat{e}_{1} + \frac{1}{h_{2}}\,\frac{\partial f}{\partial x^{2}}\,\hat{e}_{2} + \frac{1}{h_{3}}\,\frac{\partial f}{\partial x^{3}}\,\hat{e}_{3} \tag{C.1-18}$$

C.1.2  DIVERGENCE

The divergence of a vector function $\vec{A}$ can be obtained from equations (9.1-8) and (9.1-9):

$$\vec{\nabla}\bullet\vec{A} = \frac{1}{\sqrt{G}}\,\frac{\partial\left(a^{i}\sqrt{G}\right)}{\partial x^{i}} \tag{C.1-19}$$

$$\vec{\nabla}\bullet\vec{A} = \frac{1}{\sqrt{G}}\,\frac{\partial\left(g^{ik}\,a_{k}\sqrt{G}\right)}{\partial x^{i}} \tag{C.1-20}$$

To write the divergence equation with the point vector $\vec{A}$ expressed in physical components, we can use equations (C.1-9) through (C.1-11). In terms of physical components, both equations (C.1-19) and (C.1-20) become:

$$\vec{\nabla}\bullet\vec{A} = \frac{1}{\sqrt{G}}\,\frac{\partial}{\partial x^{1}}\!\left(\frac{\sqrt{G}}{\sqrt{g_{11}}}\,\bar{a}_{1}\right) + \frac{1}{\sqrt{G}}\,\frac{\partial}{\partial x^{2}}\!\left(\frac{\sqrt{G}}{\sqrt{g_{22}}}\,\bar{a}_{2}\right) + \frac{1}{\sqrt{G}}\,\frac{\partial}{\partial x^{3}}\!\left(\frac{\sqrt{G}}{\sqrt{g_{33}}}\,\bar{a}_{3}\right) \tag{C.1-21}$$

Using equation (C.1-2), we then have:

$$\vec{\nabla}\bullet\vec{A} = \frac{1}{\sqrt{G}}\left[\frac{\partial}{\partial x^{1}}\left(\sqrt{g_{22}g_{33}}\,\bar{a}_{1}\right) + \frac{\partial}{\partial x^{2}}\left(\sqrt{g_{33}g_{11}}\,\bar{a}_{2}\right) + \frac{\partial}{\partial x^{3}}\left(\sqrt{g_{11}g_{22}}\,\bar{a}_{3}\right)\right] \tag{C.1-22}$$

and, using equation (C.1-4), we have:

$$\vec{\nabla}\bullet\vec{A} = \frac{1}{h_{1}h_{2}h_{3}}\left[\frac{\partial}{\partial x^{1}}\left(h_{2}h_{3}\,\bar{a}_{1}\right) + \frac{\partial}{\partial x^{2}}\left(h_{3}h_{1}\,\bar{a}_{2}\right) + \frac{\partial}{\partial x^{3}}\left(h_{1}h_{2}\,\bar{a}_{3}\right)\right] \tag{C.1-23}$$

C.1.3  CURL

The curl of a vector function $\vec{A}$ expressed in covariant components can be obtained from equation (9.2-8):

$$\vec{\nabla}\times\vec{A} = \frac{\varepsilon^{ijk}}{\sqrt{G}}\,\frac{\partial a_{k}}{\partial x^{j}}\,\vec{e}_{i} \tag{C.1-24}$$

Expressing the components of both the point vector $\vec{A}$ and the curl of $\vec{A}$ in physical components, we have:

$$\vec{\nabla}\times\vec{A} = \frac{\sqrt{g_{11}}}{\sqrt{G}}\left\{\frac{\partial}{\partial x^{2}}\left[\sqrt{g_{33}}\,\bar{a}_{3}\right] - \frac{\partial}{\partial x^{3}}\left[\sqrt{g_{22}}\,\bar{a}_{2}\right]\right\}\hat{e}_{1} + \frac{\sqrt{g_{22}}}{\sqrt{G}}\left\{\frac{\partial}{\partial x^{3}}\left[\sqrt{g_{11}}\,\bar{a}_{1}\right] - \frac{\partial}{\partial x^{1}}\left[\sqrt{g_{33}}\,\bar{a}_{3}\right]\right\}\hat{e}_{2} + \frac{\sqrt{g_{33}}}{\sqrt{G}}\left\{\frac{\partial}{\partial x^{1}}\left[\sqrt{g_{22}}\,\bar{a}_{2}\right] - \frac{\partial}{\partial x^{2}}\left[\sqrt{g_{11}}\,\bar{a}_{1}\right]\right\}\hat{e}_{3} \tag{C.1-25}$$

or, using equation (C.1-2):

$$\vec{\nabla}\times\vec{A} = \frac{1}{\sqrt{g_{22}g_{33}}}\left\{\frac{\partial}{\partial x^{2}}\left[\sqrt{g_{33}}\,\bar{a}_{3}\right] - \frac{\partial}{\partial x^{3}}\left[\sqrt{g_{22}}\,\bar{a}_{2}\right]\right\}\hat{e}_{1} + \frac{1}{\sqrt{g_{11}g_{33}}}\left\{\frac{\partial}{\partial x^{3}}\left[\sqrt{g_{11}}\,\bar{a}_{1}\right] - \frac{\partial}{\partial x^{1}}\left[\sqrt{g_{33}}\,\bar{a}_{3}\right]\right\}\hat{e}_{2} + \frac{1}{\sqrt{g_{11}g_{22}}}\left\{\frac{\partial}{\partial x^{1}}\left[\sqrt{g_{22}}\,\bar{a}_{2}\right] - \frac{\partial}{\partial x^{2}}\left[\sqrt{g_{11}}\,\bar{a}_{1}\right]\right\}\hat{e}_{3} \tag{C.1-26}$$

The curl can also be expressed in terms of scale factors using equation (C.1-4):

$$\vec{\nabla}\times\vec{A} = \frac{1}{h_{2}h_{3}}\left\{\frac{\partial}{\partial x^{2}}\left[h_{3}\,\bar{a}_{3}\right] - \frac{\partial}{\partial x^{3}}\left[h_{2}\,\bar{a}_{2}\right]\right\}\hat{e}_{1} + \frac{1}{h_{1}h_{3}}\left\{\frac{\partial}{\partial x^{3}}\left[h_{1}\,\bar{a}_{1}\right] - \frac{\partial}{\partial x^{1}}\left[h_{3}\,\bar{a}_{3}\right]\right\}\hat{e}_{2} + \frac{1}{h_{1}h_{2}}\left\{\frac{\partial}{\partial x^{1}}\left[h_{2}\,\bar{a}_{2}\right] - \frac{\partial}{\partial x^{2}}\left[h_{1}\,\bar{a}_{1}\right]\right\}\hat{e}_{3} \tag{C.1-27}$$

or

$$\vec{\nabla}\times\vec{A} = \frac{1}{h_{1}h_{2}h_{3}}\begin{vmatrix} h_{1}\hat{e}_{1} & h_{2}\hat{e}_{2} & h_{3}\hat{e}_{3}\\[4pt] \dfrac{\partial}{\partial x^{1}} & \dfrac{\partial}{\partial x^{2}} & \dfrac{\partial}{\partial x^{3}}\\[4pt] h_{1}\bar{a}_{1} & h_{2}\bar{a}_{2} & h_{3}\bar{a}_{3}\end{vmatrix} \tag{C.1-28}$$
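As a consistency sketch (not from the book), specializing the general divergence formula (C.1-23) to the cylindrical scale factors $(h_{1}, h_{2}, h_{3}) = (1, R, 1)$ should reproduce the familiar cylindrical divergence; SymPy confirms the reduction symbolically. The function names `a1`, `a2`, `a3` below are placeholders for the physical components.

```python
import sympy as sp

R, ph, z = sp.symbols('R phi z', positive=True)
x = [R, ph, z]
h = [sp.Integer(1), R, sp.Integer(1)]          # cylindrical scale factors (1, R, 1)
a = [sp.Function('a1')(R, ph, z),
     sp.Function('a2')(R, ph, z),
     sp.Function('a3')(R, ph, z)]               # physical components of A

# equation (C.1-23): div A = (1/h1h2h3) * sum_i d/dx^i (h1h2h3/h_i * a_i)
H = h[0] * h[1] * h[2]
div = (sp.diff(H / h[0] * a[0], x[0]) + sp.diff(H / h[1] * a[1], x[1])
       + sp.diff(H / h[2] * a[2], x[2])) / H

# the familiar cylindrical-coordinate divergence
expected = sp.diff(R * a[0], R) / R + sp.diff(a[1], ph) / R + sp.diff(a[2], z)
print(sp.simplify(div - expected))   # 0
```

Substituting other scale-factor triples (for example, spherical) into the same two lines gives the corresponding divergence formulas with no further work.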

C.1.4  LAPLACIAN

The Laplacian of a scalar function $f$ is given by:

$$\nabla^{2}f = \vec{\nabla}\bullet\vec{\nabla}f \tag{C.1-29}$$

From equations (C.1-17) and (C.1-22), the Laplacian of a scalar function $f$ can be written as:

$$\nabla^{2}f = \frac{1}{\sqrt{G}}\,\frac{\partial}{\partial x^{1}}\!\left[\sqrt{\frac{g_{22}g_{33}}{g_{11}}}\,\frac{\partial f}{\partial x^{1}}\right] + \frac{1}{\sqrt{G}}\,\frac{\partial}{\partial x^{2}}\!\left[\sqrt{\frac{g_{33}g_{11}}{g_{22}}}\,\frac{\partial f}{\partial x^{2}}\right] + \frac{1}{\sqrt{G}}\,\frac{\partial}{\partial x^{3}}\!\left[\sqrt{\frac{g_{11}g_{22}}{g_{33}}}\,\frac{\partial f}{\partial x^{3}}\right] \tag{C.1-30}$$

or, using equation (C.1-4):

$$\nabla^{2}f = \frac{1}{h_{1}h_{2}h_{3}}\left\{\frac{\partial}{\partial x^{1}}\!\left[\frac{h_{2}h_{3}}{h_{1}}\,\frac{\partial f}{\partial x^{1}}\right] + \frac{\partial}{\partial x^{2}}\!\left[\frac{h_{3}h_{1}}{h_{2}}\,\frac{\partial f}{\partial x^{2}}\right] + \frac{\partial}{\partial x^{3}}\!\left[\frac{h_{1}h_{2}}{h_{3}}\,\frac{\partial f}{\partial x^{3}}\right]\right\} \tag{C.1-31}$$

The Laplacian of a vector function $\vec{A}$ can be calculated using the relation given in equation (3.4-15):

$$\nabla^{2}\vec{A} = \vec{\nabla}\left(\vec{\nabla}\bullet\vec{A}\right) - \vec{\nabla}\times\vec{\nabla}\times\vec{A} \tag{C.1-32}$$

The $i$th component of the first term on the right side of this equation is obtained by using equations (C.1-18) and (C.1-23):

$$\left[\vec{\nabla}\left(\vec{\nabla}\bullet\vec{A}\right)\right]_{i} = \frac{1}{h_{i}}\,\frac{\partial}{\partial x^{i}}\left\{\frac{1}{h_{1}h_{2}h_{3}}\left[\frac{\partial}{\partial x^{1}}\left(h_{2}h_{3}\,\bar{a}_{1}\right) + \frac{\partial}{\partial x^{2}}\left(h_{1}h_{3}\,\bar{a}_{2}\right) + \frac{\partial}{\partial x^{3}}\left(h_{1}h_{2}\,\bar{a}_{3}\right)\right]\right\} \qquad (\text{no sum}) \tag{C.1-33}$$

The second term on the right side of equation (C.1-32) is obtained by using equation (C.1-27):

$$\vec{\nabla}\times\vec{\nabla}\times\vec{A} = \frac{\hat{e}_{1}}{h_{2}h_{3}}\left\{\frac{\partial}{\partial x^{2}}\left[\frac{h_{3}}{h_{1}h_{2}}\left(\frac{\partial}{\partial x^{1}}\left[h_{2}\bar{a}_{2}\right] - \frac{\partial}{\partial x^{2}}\left[h_{1}\bar{a}_{1}\right]\right)\right] - \frac{\partial}{\partial x^{3}}\left[\frac{h_{2}}{h_{1}h_{3}}\left(\frac{\partial}{\partial x^{3}}\left[h_{1}\bar{a}_{1}\right] - \frac{\partial}{\partial x^{1}}\left[h_{3}\bar{a}_{3}\right]\right)\right]\right\}$$
$$+ \frac{\hat{e}_{2}}{h_{1}h_{3}}\left\{\frac{\partial}{\partial x^{3}}\left[\frac{h_{1}}{h_{2}h_{3}}\left(\frac{\partial}{\partial x^{2}}\left[h_{3}\bar{a}_{3}\right] - \frac{\partial}{\partial x^{3}}\left[h_{2}\bar{a}_{2}\right]\right)\right] - \frac{\partial}{\partial x^{1}}\left[\frac{h_{3}}{h_{1}h_{2}}\left(\frac{\partial}{\partial x^{1}}\left[h_{2}\bar{a}_{2}\right] - \frac{\partial}{\partial x^{2}}\left[h_{1}\bar{a}_{1}\right]\right)\right]\right\}$$
$$+ \frac{\hat{e}_{3}}{h_{1}h_{2}}\left\{\frac{\partial}{\partial x^{1}}\left[\frac{h_{2}}{h_{1}h_{3}}\left(\frac{\partial}{\partial x^{3}}\left[h_{1}\bar{a}_{1}\right] - \frac{\partial}{\partial x^{1}}\left[h_{3}\bar{a}_{3}\right]\right)\right] - \frac{\partial}{\partial x^{2}}\left[\frac{h_{1}}{h_{2}h_{3}}\left(\frac{\partial}{\partial x^{2}}\left[h_{3}\bar{a}_{3}\right] - \frac{\partial}{\partial x^{3}}\left[h_{2}\bar{a}_{2}\right]\right)\right]\right\} \tag{C.1-34}$$

The Laplacian of the point vector function $\vec{A}$ can then be calculated from equation (C.1-32) by subtracting equation (C.1-34) from equation (C.1-33). We then obtain:

$$\nabla^{2}\vec{A} = \frac{1}{h_{1}h_{2}h_{3}}\left\{\frac{\partial}{\partial x^{1}}\left[\frac{h_{2}h_{3}}{h_{1}}\,\frac{\partial}{\partial x^{1}}\left(\bar{a}_{1}\hat{e}_{1} + \bar{a}_{2}\hat{e}_{2} + \bar{a}_{3}\hat{e}_{3}\right)\right] + \frac{\partial}{\partial x^{2}}\left[\frac{h_{1}h_{3}}{h_{2}}\,\frac{\partial}{\partial x^{2}}\left(\bar{a}_{1}\hat{e}_{1} + \bar{a}_{2}\hat{e}_{2} + \bar{a}_{3}\hat{e}_{3}\right)\right] + \frac{\partial}{\partial x^{3}}\left[\frac{h_{1}h_{2}}{h_{3}}\,\frac{\partial}{\partial x^{3}}\left(\bar{a}_{1}\hat{e}_{1} + \bar{a}_{2}\hat{e}_{2} + \bar{a}_{3}\hat{e}_{3}\right)\right]\right\} \tag{C.1-35}$$
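The scalar formula (C.1-31) admits the same kind of symbolic check as the divergence: with cylindrical scale factors $(1, R, 1)$ it must collapse to the standard cylindrical Laplacian. A minimal sketch (not the book's code, with `f` a generic placeholder function):

```python
import sympy as sp

R, ph, z = sp.symbols('R phi z', positive=True)
x = [R, ph, z]
h = [sp.Integer(1), R, sp.Integer(1)]    # cylindrical scale factors
f = sp.Function('f')(R, ph, z)

# equation (C.1-31): (1/h1h2h3) * sum_i d/dx^i ((h1h2h3/h_i^2) df/dx^i)
H = h[0] * h[1] * h[2]
lap = sum(sp.diff(H / h[i]**2 * sp.diff(f, x[i]), x[i]) for i in range(3)) / H

expected = (sp.diff(f, R, 2) + sp.diff(f, R) / R
            + sp.diff(f, ph, 2) / R**2 + sp.diff(f, z, 2))
print(sp.simplify(lap - expected))   # 0
```

The divergence form $h_{1}h_{2}h_{3}/h_{i}^{2}$ inside the derivative is exactly what (C.1-30) predicts once the orthogonal-metric relations (C.1-2) through (C.1-4) are inserted.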

C.1.5  CHRISTOFFEL SYMBOLS

Expressions for the Christoffel symbols are also very useful in deriving differential operations in orthogonal curvilinear coordinate systems. For these coordinate systems, the Christoffel symbols of the first kind are given by equations (7.4-2) through (7.4-8):

$$\Gamma_{jkp} = 0 \qquad (j \ne k \ne p) \tag{C.1-36}$$

$$\Gamma_{jjj} = \frac{1}{2}\,\frac{\partial g_{jj}}{\partial x^{j}} = h_{j}\,\frac{\partial h_{j}}{\partial x^{j}} \qquad (\text{no sum}) \tag{C.1-37}$$

$$\Gamma_{jjp} = -\frac{1}{2}\,\frac{\partial g_{jj}}{\partial x^{p}} = -h_{j}\,\frac{\partial h_{j}}{\partial x^{p}} \qquad (j \ne p,\ \text{no sum}) \tag{C.1-38}$$

$$\Gamma_{jkj} = \Gamma_{kjj} = \frac{1}{2}\,\frac{\partial g_{jj}}{\partial x^{k}} = h_{j}\,\frac{\partial h_{j}}{\partial x^{k}} \qquad (j \ne k,\ \text{no sum}) \tag{C.1-39}$$

Expressions for the Christoffel symbols of the second kind can be obtained from equations (7.4-9) through (7.4-12):

$$\Gamma^{p}_{jk} = g^{pi}\,\Gamma_{jki} = 0 \qquad (j \ne p \ne k) \tag{C.1-40}$$

$$\Gamma^{j}_{jj} = \frac{1}{2g_{jj}}\,\frac{\partial g_{jj}}{\partial x^{j}} = \frac{\partial}{\partial x^{j}}\ln\sqrt{g_{jj}} = \frac{\partial}{\partial x^{j}}\ln h_{j} = \frac{1}{h_{j}}\,\frac{\partial h_{j}}{\partial x^{j}} \qquad (\text{no sum}) \tag{C.1-41}$$

$$\Gamma^{p}_{jj} = -\frac{1}{2g_{pp}}\,\frac{\partial g_{jj}}{\partial x^{p}} = -\frac{h_{j}}{h_{p}^{2}}\,\frac{\partial h_{j}}{\partial x^{p}} \qquad (j \ne p,\ \text{no sum}) \tag{C.1-42}$$

$$\Gamma^{j}_{jk} = \Gamma^{j}_{kj} = \frac{1}{2g_{jj}}\,\frac{\partial g_{jj}}{\partial x^{k}} = \frac{\partial}{\partial x^{k}}\ln h_{j} = \frac{1}{h_{j}}\,\frac{\partial h_{j}}{\partial x^{k}} \qquad (j \ne k,\ \text{no sum}) \tag{C.1-43}$$

C.1.6  VOLUME ELEMENT

The volume element in orthogonal curvilinear coordinate systems is given by equation (8.8-4):

$$dV = h_{1}h_{2}h_{3}\,dx^{1}\,dx^{2}\,dx^{3} \tag{C.1-44}$$
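The scale-factor formulas (C.1-40) through (C.1-43) can be packaged into a small helper and evaluated for a concrete coordinate system; for cylindrical coordinates, $h = (1, R, 1)$, the only nonzero symbols recovered are $\Gamma^{R}_{\varphi\varphi} = -R$ and $\Gamma^{\varphi}_{R\varphi} = \Gamma^{\varphi}_{\varphi R} = 1/R$. The function name and structure below are illustrative, not the book's.

```python
import sympy as sp

R, ph, z = sp.symbols('R phi z', positive=True)
x = [R, ph, z]
h = [sp.Integer(1), R, sp.Integer(1)]             # cylindrical scale factors

def gamma(i, j, k):
    """Christoffel symbol of the second kind Gamma^i_{jk} for an orthogonal metric."""
    if i == j == k:
        return sp.diff(h[j], x[j]) / h[j]             # (C.1-41)
    if j == k:
        return -h[j] * sp.diff(h[j], x[i]) / h[i]**2  # (C.1-42), i != j
    if i == j:
        return sp.diff(h[i], x[k]) / h[i]             # (C.1-43)
    if i == k:
        return sp.diff(h[i], x[j]) / h[i]             # (C.1-43), symmetric partner
    return sp.Integer(0)                              # (C.1-40), all indices distinct

print(gamma(0, 1, 1))                   # -R   (this is Gamma^R_{phi phi})
print(gamma(1, 0, 1), gamma(1, 1, 0))   # 1/R and 1/R
```

Swapping in spherical scale factors reproduces the corresponding spherical symbols, which is the main convenience of the scale-factor form.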

C.2  DIFFERENTIAL OPERATIONS IN RECTANGULAR COORDINATE SYSTEMS

The rectangular coordinate system is shown in Figure C-2. For rectangular coordinates $(x, y, z)$, we have:

$$\left(x^{1}, x^{2}, x^{3}\right) = (x, y, z) \tag{C.2-1}$$

Figure C-2  Rectangular coordinate system.

The transformation to rectangular coordinates is:

$$x^{1} = x \qquad x^{2} = y \qquad x^{3} = z \tag{C.2-2}$$

From equation (C.2-2), we have:

$$dx^{1} = dx \qquad dx^{2} = dy \qquad dx^{3} = dz \tag{C.2-3}$$

For rectangular coordinates, the expression for $(ds)^{2}$ is then:

$$(ds)^{2} = dx^{i}\,dx^{i} = (dx)^{2} + (dy)^{2} + (dz)^{2} \tag{C.2-4}$$

and so:

$$\left\{g_{ij}\right\} = \left\{\delta_{ij}\right\} = \begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix} \tag{C.2-5}$$

Therefore from equation (C.1-3):

$$\left\{g^{ij}\right\} = \left\{\delta^{ij}\right\} = \begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix} \tag{C.2-6}$$

From equations (C.2-5) and (C.1-2), we also have:

$$G = 1 \tag{C.2-7}$$

From equations (C.1-4) and (C.2-5), the scale factors are:

$$h_{1} = h_{x} = 1 \qquad h_{2} = h_{y} = 1 \qquad h_{3} = h_{z} = 1 \tag{C.2-8}$$

The volume element $dV$ from equation (C.1-8) is:

$$dV = dx\,dy\,dz \tag{C.2-9}$$

The unit base vectors are:

$$\hat{e}_{1} = \hat{e}_{x} = \hat{i} \qquad \hat{e}_{2} = \hat{e}_{y} = \hat{j} \qquad \hat{e}_{3} = \hat{e}_{z} = \hat{k} \tag{C.2-10}$$

The physical components of a point vector $\vec{A}$ are:

$$\bar{a}_{x} = a_{x} \qquad \bar{a}_{y} = a_{y} \qquad \bar{a}_{z} = a_{z} \tag{C.2-11}$$

The vector $\vec{A}$ expressed in physical components is then:

$$\vec{A} = a_{x}\,\hat{i} + a_{y}\,\hat{j} + a_{z}\,\hat{k} \tag{C.2-12}$$

The gradient of a scalar function $f$ is:

$$\vec{\nabla} f = \frac{\partial f}{\partial x}\,\hat{i} + \frac{\partial f}{\partial y}\,\hat{j} + \frac{\partial f}{\partial z}\,\hat{k} \tag{C.2-13}$$

The divergence of a vector function $\vec{A}$ is:

$$\vec{\nabla}\bullet\vec{A} = \frac{\partial a_{x}}{\partial x} + \frac{\partial a_{y}}{\partial y} + \frac{\partial a_{z}}{\partial z} \tag{C.2-14}$$

The curl of a vector function $\vec{A}$ is:

$$\vec{\nabla}\times\vec{A} = \left[\frac{\partial a_{z}}{\partial y} - \frac{\partial a_{y}}{\partial z}\right]\hat{i} + \left[\frac{\partial a_{x}}{\partial z} - \frac{\partial a_{z}}{\partial x}\right]\hat{j} + \left[\frac{\partial a_{y}}{\partial x} - \frac{\partial a_{x}}{\partial y}\right]\hat{k} \tag{C.2-15}$$

or

$$\vec{\nabla}\times\vec{A} = \begin{vmatrix}\hat{e}_{x} & \hat{e}_{y} & \hat{e}_{z}\\[4pt] \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\[4pt] a_{x} & a_{y} & a_{z}\end{vmatrix} \tag{C.2-16}$$

Table C-1  Position vector $\vec{r}$ in rectangular coordinates.

The Laplacian of a scalar function $f$ is:

$$\nabla^{2}f = \frac{\partial^{2}f}{\partial x^{2}} + \frac{\partial^{2}f}{\partial y^{2}} + \frac{\partial^{2}f}{\partial z^{2}} \tag{C.2-17}$$

The Laplacian of a vector function $\vec{A}$ is:

$$\nabla^{2}\vec{A} = \nabla^{2}a_{x}\,\hat{e}_{x} + \nabla^{2}a_{y}\,\hat{e}_{y} + \nabla^{2}a_{z}\,\hat{e}_{z} \tag{C.2-18}$$

The Christoffel symbols of the first kind are:

$$\Gamma_{ijk} = 0 \qquad (\text{all } i, j, k) \tag{C.2-19}$$

The Christoffel symbols of the second kind are:

$$\Gamma^{i}_{jk} = 0 \qquad (\text{all } i, j, k) \tag{C.2-20}$$

Table C-2  Derivatives of unit base vectors in rectangular coordinates.

The equations given in Table C-2 can be expressed in matrix form:

$$\left\{\frac{\partial\hat{e}_{i}}{\partial x^{j}}\right\} = \begin{bmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix} \tag{C.2-21}$$

The volume element in rectangular coordinates is:

$$dV = dx\,dy\,dz \tag{C.2-22}$$

Figure C-3! Cylindrical coordinate system. The transformation to rectangular coordinates is:

C.3! !

DIFFERENTIAL OPERATIONS IN CYLINDRICAL COORDINATE SYSTEMS

The cylindrical coordinate system is shown in Figure C-3.

For cylindrical coordinates ( R, φ , z ) , we have:

!

x = R cos φ !

y = R sin φ !

z = z!

(C.3-2)

where !

R=

( x )2 + ( y )2 !

(0 ≤ R < ∞) !

(C.3-3)

320

    φ = tan⁻¹(y/x)  (0 ≤ φ ≤ 2π)  (C.3-4)

From equation (C.3-2), we have:

    dx = (∂x/∂R) dR + (∂x/∂φ) dφ + (∂x/∂z) dz = cos φ dR − R sin φ dφ  (C.3-5)

    dy = (∂y/∂R) dR + (∂y/∂φ) dφ + (∂y/∂z) dz = sin φ dR + R cos φ dφ  (C.3-6)

    dz = (∂z/∂R) dR + (∂z/∂φ) dφ + (∂z/∂z) dz = dz  (C.3-7)

For cylindrical coordinates, the expression for (ds)² is:

    (ds)² = (dR)² + (R)² (dφ)² + (dz)²  (C.3-8)

and so:

            ⎡ 1   0    0 ⎤
    {gij} = ⎢ 0  (R)²  0 ⎥  (C.3-9)
            ⎣ 0   0    1 ⎦

From equations (C.3-9) and (C.1-2), we also have:

    G = (R)²  (C.3-10)

Therefore from equation (C.1-3):

             ⎡ 1     0     0 ⎤
    {g^ij} = ⎢ 0  1/(R)²   0 ⎥  (C.3-11)
             ⎣ 0     0     1 ⎦

From equations (C.1-4) and (C.3-9), the scale factors are:

    h1 = hR = 1    h2 = hφ = R    h3 = hz = 1  (C.3-12)

The volume element dV from equation (C.1-8) is:

    dV = R dR dφ dz  (C.3-13)

The covariant unit base vectors for the cylindrical coordinate system are:

    ê1 = êR    ê2 = êφ    ê3 = êz  (C.3-14)

The physical components of a point vector A are:

    a(R) = aR    a(φ) = R aφ    a(z) = az  (C.3-15)

The vector A expressed in physical components is then:

    A = aR êR + R aφ êφ + az êz  (C.3-16)

The gradient of a scalar function f is:

    ∇f = (∂f/∂R) êR + (1/R)(∂f/∂φ) êφ + (∂f/∂z) êz  (C.3-17)

The divergence of a vector function A is:

    ∇•A = (1/R) ∂(R aR)/∂R + (1/R) ∂aφ/∂φ + ∂az/∂z  (C.3-18)
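The line element (C.3-8) can be checked directly against the transformation (C.3-2): map two nearby cylindrical points to rectangular coordinates and compare the Cartesian distance with the metric prediction. A minimal sketch (the base point and displacements are arbitrary choices):

```python
import math


def to_cart(R, phi, z):
    # (C.3-2): x = R cos(phi), y = R sin(phi), z = z
    return (R * math.cos(phi), R * math.sin(phi), z)


R, phi, z = 2.0, 0.7, 1.5
dR, dphi, dz = 1e-6, 1e-6, 1e-6

p1 = to_cart(R, phi, z)
p2 = to_cart(R + dR, phi + dphi, z + dz)
ds_cart = math.dist(p1, p2)  # Cartesian distance between the nearby points

# (C.3-8): (ds)^2 = (dR)^2 + R^2 (dphi)^2 + (dz)^2
ds_metric = math.sqrt(dR ** 2 + (R * dphi) ** 2 + dz ** 2)

rel_err = abs(ds_cart - ds_metric) / ds_metric
```

For first-order displacements of size 10⁻⁶ the two lengths agree to roughly the same order, confirming the diagonal metric (C.3-9).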

The curl of a vector function A is:

    ∇×A = [(1/R) ∂az/∂φ − ∂aφ/∂z] êR + [∂aR/∂z − ∂az/∂R] êφ
          + [(1/R) ∂(R aφ)/∂R − (1/R) ∂aR/∂φ] êz  (C.3-19)

or

                | êR    R êφ   êz   |
    ∇×A = (1/R) | ∂/∂R  ∂/∂φ  ∂/∂z |  (C.3-20)
                | aR    R aφ   az   |

The Laplacian of the scalar function f is:

    ∇²f = ∂²f/∂R² + (1/R) ∂f/∂R + (1/(R)²) ∂²f/∂φ² + ∂²f/∂z²  (C.3-21)

The Laplacian of a vector function A is:

    ∇²A = [∇²aR − aR/(R)² − (2/(R)²) ∂aφ/∂φ] êR
          + [∇²aφ − aφ/(R)² + (2/(R)²) ∂aR/∂φ] êφ + [∇²az] êz  (C.3-22)

The Christoffel symbols of the first kind are:

    Γ22,1 = Γφφ,R = −R  (C.3-23)

    Γ12,2 = Γ21,2 = ΓRφ,φ = ΓφR,φ = R  (C.3-24)

    Γij,k = 0  (for all other i, j, k)  (C.3-25)

Table C-3. Position vector r in cylindrical coordinates.

The Christoffel symbols of the second kind are:

    Γ^1_22 = Γ^R_φφ = −R  (C.3-26)

    Γ^2_12 = Γ^2_21 = Γ^φ_Rφ = Γ^φ_φR = 1/R  (C.3-27)

    Γ^i_jk = 0  (for all other i, j, k)  (C.3-28)
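The Christoffel symbols of the second kind listed above follow from the metric (C.3-9) through the standard formula Γ^i_jk = (1/2) g^il (∂g_lj/∂x^k + ∂g_lk/∂x^j − ∂g_jk/∂x^l). A sketch that recovers (C.3-26) and (C.3-27) numerically from g_ij = diag(1, R², 1) (the evaluation point R = 2 is an arbitrary choice):

```python
h = 1e-6  # finite-difference step


def metric(x):
    # (C.3-9): g_ij = diag(1, R^2, 1) with coordinates x = (R, phi, z)
    R = x[0]
    return [[1.0, 0.0, 0.0], [0.0, R * R, 0.0], [0.0, 0.0, 1.0]]


def metric_inv(x):
    # (C.3-11): g^ij = diag(1, 1/R^2, 1)
    R = x[0]
    return [[1.0, 0.0, 0.0], [0.0, 1.0 / (R * R), 0.0], [0.0, 0.0, 1.0]]


def dg(l, j, k, x):
    # central difference of g_lj with respect to x^k
    xp = list(x); xp[k] += h
    xm = list(x); xm[k] -= h
    return (metric(xp)[l][j] - metric(xm)[l][j]) / (2 * h)


def christoffel(i, j, k, x):
    # Gamma^i_jk = 1/2 g^il (dg_lj/dx^k + dg_lk/dx^j - dg_jk/dx^l)
    gi = metric_inv(x)
    return 0.5 * sum(gi[i][l] * (dg(l, j, k, x) + dg(l, k, j, x) - dg(j, k, l, x))
                     for l in range(3))


x = (2.0, 0.5, 1.0)                   # R = 2
G_R_phiphi = christoffel(0, 1, 1, x)  # (C.3-26): -R = -2
G_phi_Rphi = christoffel(1, 0, 1, x)  # (C.3-27): 1/R = 0.5
```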

Table C-4. Derivatives of unit base vectors in cylindrical coordinates.

The equations given in Table C-4 can be expressed in matrix form:

                ⎡ 0   êφ   0 ⎤
    {∂êi/∂xj} = ⎢ 0  −êR   0 ⎥  (C.3-29)
                ⎣ 0   0    0 ⎦

The relations between base vectors of rectangular coordinate systems and base vectors of cylindrical coordinate systems are obtained from equations (4.8-10) and (C.3-2) as shown in Examples 4-2 and 4-3:

    ⎡ êR ⎤   ⎡  cos φ   sin φ   0 ⎤ ⎡ êx ⎤
    ⎢ êφ ⎥ = ⎢ −sin φ   cos φ   0 ⎥ ⎢ êy ⎥  (C.3-30)
    ⎣ êz ⎦   ⎣   0       0      1 ⎦ ⎣ êz ⎦

    ⎡ êx ⎤   ⎡ cos φ   −sin φ   0 ⎤ ⎡ êR ⎤
    ⎢ êy ⎥ = ⎢ sin φ    cos φ   0 ⎥ ⎢ êφ ⎥  (C.3-31)
    ⎣ êz ⎦   ⎣  0        0      1 ⎦ ⎣ êz ⎦

The Jacobian for the transformation from the cylindrical coordinate system to the rectangular coordinate system is obtained using equation (4.13-10):

    J = ∂(x, y, z)/∂(R, φ, z) = h1 h2 h3 = R  (C.3-32)

The volume element in cylindrical coordinates is:

    dV = R dR dφ dz  (C.3-33)

C.4  DIFFERENTIAL OPERATIONS IN SPHERICAL COORDINATE SYSTEMS

The spherical coordinate system is shown in Figure C-4. For spherical coordinates (R, θ, φ), we have:

    (x¹, x², x³) = (R, θ, φ)  (C.4-1)

The transformation to rectangular coordinates is given by:

    x = R sin θ cos φ    y = R sin θ sin φ    z = R cos θ  (C.4-2)

where

    R = √[(x)² + (y)² + (z)²]  (0 ≤ R < ∞)  (C.4-3)
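The Jacobian value (C.3-32) can be confirmed by differentiating the map (C.3-2) numerically and taking the determinant. A minimal sketch (the evaluation point is an arbitrary choice):

```python
import math


def jacobian(R, phi, z, h=1e-6):
    # numerical Jacobian matrix d(x,y,z)/d(R,phi,z) for the map (C.3-2)
    def f(q):
        return (q[0] * math.cos(q[1]), q[0] * math.sin(q[1]), q[2])

    q0 = (R, phi, z)
    J = [[0.0] * 3 for _ in range(3)]
    for col in range(3):
        qp = list(q0); qp[col] += h
        qm = list(q0); qm[col] -= h
        fp, fm = f(qp), f(qm)
        for row in range(3):
            J[row][col] = (fp[row] - fm[row]) / (2 * h)
    return J


def det3(M):
    # cofactor expansion of a 3x3 determinant
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))


R = 3.0
J_det = det3(jacobian(R, 0.8, -1.0))  # (C.3-32): J = h1 h2 h3 = R
```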

Figure C-4. Spherical coordinate system.

    θ = tan⁻¹[√((x)² + (y)²) / z]  (0 ≤ θ ≤ π)  (C.4-4)

    φ = tan⁻¹(y/x)  (0 ≤ φ ≤ 2π)  (C.4-5)

From equation (C.4-2), we have:

    dx = (∂x/∂R) dR + (∂x/∂θ) dθ + (∂x/∂φ) dφ  (C.4-6)

    dx = sin θ cos φ dR + R cos θ cos φ dθ − R sin θ sin φ dφ  (C.4-7)

    dy = (∂y/∂R) dR + (∂y/∂θ) dθ + (∂y/∂φ) dφ  (C.4-8)

    dy = sin θ sin φ dR + R cos θ sin φ dθ + R sin θ cos φ dφ  (C.4-9)

    dz = (∂z/∂R) dR + (∂z/∂θ) dθ + (∂z/∂φ) dφ  (C.4-10)

    dz = cos θ dR − R sin θ dθ  (C.4-11)

For spherical coordinates, the expression for (ds)² is:

    (ds)² = (dR)² + (R)² (dθ)² + (R)² sin²θ (dφ)²  (C.4-12)

and so:

            ⎡ 1   0        0      ⎤
    {gij} = ⎢ 0  (R)²      0      ⎥  (C.4-13)
            ⎣ 0   0   (R)² sin²θ  ⎦

Therefore from equation (C.1-3):

             ⎡ 1     0          0         ⎤
    {g^ij} = ⎢ 0  1/(R)²        0         ⎥  (C.4-14)
             ⎣ 0     0   1/((R)² sin²θ)   ⎦

From equations (C.4-13) and (C.1-2), we also have:

    G = (R)⁴ sin²θ  (C.4-15)

From equations (C.1-4) and (C.4-13), the scale factors are:

    h1 = hR = 1    h2 = hθ = R    h3 = hφ = R sin θ  (C.4-16)

The volume element dV from equation (C.1-8) is:

    dV = (R)² sin θ dR dθ dφ  (C.4-17)

The covariant unit base vectors for the spherical coordinate system are:

    ê1 = êR    ê2 = êθ    ê3 = êφ  (C.4-18)

The physical components of a point vector A are:

    a(R) = aR    a(θ) = R aθ    a(φ) = R sin θ aφ  (C.4-19)

The vector A expressed in physical components is then:

    A = aR êR + R aθ êθ + R sin θ aφ êφ  (C.4-20)

The gradient of a scalar function f is:

    ∇f = (∂f/∂R) êR + (1/R)(∂f/∂θ) êθ + (1/(R sin θ))(∂f/∂φ) êφ  (C.4-21)

The divergence of a vector A is:

    ∇•A = (1/(R)²) ∂[(R)² aR]/∂R + (1/(R sin θ)) ∂[aθ sin θ]/∂θ
          + (1/(R sin θ)) ∂aφ/∂φ  (C.4-22)

Table C-5. Position vector r in spherical coordinates.

The curl of a vector function A is:

    ∇×A = (1/(R sin θ)) [∂(aφ sin θ)/∂θ − ∂aθ/∂φ] êR
          + (1/R) [(1/sin θ) ∂aR/∂φ − ∂(R aφ)/∂R] êθ
          + (1/R) [∂(R aθ)/∂R − ∂aR/∂θ] êφ  (C.4-23)

or

                           | êR    R êθ   R sin θ êφ  |
    ∇×A = (1/((R)² sin θ)) | ∂/∂R  ∂/∂θ   ∂/∂φ       |  (C.4-24)
                           | aR    R aθ   R sin θ aφ  |
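The spherical gradient formula (C.4-21) can be checked against a known rectangular result. For f = z = R cos θ, the spherical components of ∇f should be (cos θ, −sin θ, 0), which is exactly the decomposition of k̂ in the spherical basis. A minimal sketch (evaluation point chosen arbitrarily):

```python
import math


def f_sph(R, th, ph):
    # f = z expressed in spherical coordinates: z = R cos(theta)
    return R * math.cos(th)


h = 1e-6
R, th, ph = 2.0, 1.1, 0.4

df_dR = (f_sph(R + h, th, ph) - f_sph(R - h, th, ph)) / (2 * h)
df_dth = (f_sph(R, th + h, ph) - f_sph(R, th - h, ph)) / (2 * h)
df_dph = (f_sph(R, th, ph + h) - f_sph(R, th, ph - h)) / (2 * h)

# (C.4-21): grad f = (df/dR, (1/R) df/dtheta, (1/(R sin theta)) df/dphi)
grad = (df_dR, df_dth / R, df_dph / (R * math.sin(th)))
expected = (math.cos(th), -math.sin(th), 0.0)  # spherical components of k-hat
```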

The Laplacian of a scalar function f is:

    ∇²f = (1/(R)²) ∂/∂R [(R)² ∂f/∂R] + (1/((R)² sin θ)) ∂/∂θ [sin θ ∂f/∂θ]
          + (1/((R)² sin²θ)) ∂²f/∂φ²  (C.4-25)

The Laplacian of a vector function A is:

    ∇²A = [∇²aR − 2aR/(R)² − (2/(R)²) ∂aθ/∂θ − (2 cos θ/((R)² sin θ)) aθ
              − (2/((R)² sin θ)) ∂aφ/∂φ] êR
          + [∇²aθ − aθ/((R)² sin²θ) + (2/(R)²) ∂aR/∂θ
              − (2 cos θ/((R)² sin²θ)) ∂aφ/∂φ] êθ
          + [∇²aφ − aφ/((R)² sin²θ) + (2/((R)² sin θ)) ∂aR/∂φ
              + (2 cos θ/((R)² sin²θ)) ∂aθ/∂φ] êφ  (C.4-26)

The Christoffel symbols of the first kind are:

    Γ22,1 = Γθθ,R = −R  (C.4-27)

    Γ33,1 = Γφφ,R = −R sin²θ  (C.4-28)

    Γ21,2 = Γ12,2 = ΓθR,θ = ΓRθ,θ = R  (C.4-29)

    Γ33,2 = Γφφ,θ = −(R)² sin θ cos θ  (C.4-30)

    Γ31,3 = Γ13,3 = ΓφR,φ = ΓRφ,φ = R sin²θ  (C.4-31)

    Γ32,3 = Γ23,3 = Γφθ,φ = Γθφ,φ = (R)² sin θ cos θ  (C.4-32)

    Γij,k = 0  (for all other i, j, k)  (C.4-33)

The Christoffel symbols of the second kind are:

    Γ^1_22 = Γ^R_θθ = −R  (C.4-34)

    Γ^1_33 = Γ^R_φφ = −R sin²θ  (C.4-35)

    Γ^2_12 = Γ^2_21 = Γ^θ_Rθ = Γ^θ_θR = 1/R  (C.4-36)

    Γ^2_33 = Γ^θ_φφ = −sin θ cos θ  (C.4-37)

    Γ^3_13 = Γ^3_31 = Γ^φ_Rφ = Γ^φ_φR = 1/R  (C.4-38)

    Γ^3_23 = Γ^3_32 = Γ^φ_θφ = Γ^φ_φθ = cot θ  (C.4-39)

    Γ^i_jk = 0  (for all other i, j, k)  (C.4-40)
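As in the cylindrical case, the spherical Christoffel symbols above can be recovered numerically from the metric (C.4-13). A sketch checking three of the listed values (evaluation point chosen arbitrarily):

```python
import math

h = 1e-6  # finite-difference step


def metric(x):
    # (C.4-13): g_ij = diag(1, R^2, R^2 sin^2(theta)), with x = (R, theta, phi)
    R, th = x[0], x[1]
    s = math.sin(th)
    return [[1.0, 0.0, 0.0], [0.0, R * R, 0.0], [0.0, 0.0, R * R * s * s]]


def metric_inv(x):
    # (C.4-14): g^ij = diag(1, 1/R^2, 1/(R^2 sin^2 theta))
    g = metric(x)
    return [[1.0 / g[0][0], 0.0, 0.0],
            [0.0, 1.0 / g[1][1], 0.0],
            [0.0, 0.0, 1.0 / g[2][2]]]


def dg(l, j, k, x):
    xp = list(x); xp[k] += h
    xm = list(x); xm[k] -= h
    return (metric(xp)[l][j] - metric(xm)[l][j]) / (2 * h)


def christoffel(i, j, k, x):
    # Gamma^i_jk = 1/2 g^il (dg_lj/dx^k + dg_lk/dx^j - dg_jk/dx^l)
    gi = metric_inv(x)
    return 0.5 * sum(gi[i][l] * (dg(l, j, k, x) + dg(l, k, j, x) - dg(j, k, l, x))
                     for l in range(3))


x = (2.0, 0.9, 0.3)
G_R_phiphi = christoffel(0, 2, 2, x)   # (C.4-35): -R sin^2(theta)
G_th_phiphi = christoffel(1, 2, 2, x)  # (C.4-37): -sin(theta) cos(theta)
G_phi_thphi = christoffel(2, 1, 2, x)  # (C.4-39): cot(theta)
```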

Table C-6. Derivatives of unit base vectors in spherical coordinates.

The equations given in Table C-6 can be expressed in matrix form:

                ⎡ 0   êθ    sin θ êφ             ⎤
    {∂êi/∂xj} = ⎢ 0  −êR    cos θ êφ             ⎥  (C.4-41)
                ⎣ 0   0   −sin θ êR − cos θ êθ   ⎦

The relations between base vectors of rectangular coordinate systems and base vectors of spherical coordinate systems are obtained from equations (4.8-10) and (C.4-2):

    ⎡ êR ⎤   ⎡ sin θ cos φ   sin θ sin φ    cos θ ⎤ ⎡ êx ⎤
    ⎢ êθ ⎥ = ⎢ cos θ cos φ   cos θ sin φ   −sin θ ⎥ ⎢ êy ⎥  (C.4-42)
    ⎣ êφ ⎦   ⎣   −sin φ        cos φ         0    ⎦ ⎣ êz ⎦

    ⎡ êx ⎤   ⎡ sin θ cos φ   cos θ cos φ   −sin φ ⎤ ⎡ êR ⎤
    ⎢ êy ⎥ = ⎢ sin θ sin φ   cos θ sin φ    cos φ ⎥ ⎢ êθ ⎥  (C.4-43)
    ⎣ êz ⎦   ⎣    cos θ        −sin θ         0   ⎦ ⎣ êφ ⎦

The Jacobian for the transformation from the spherical coordinate system to the rectangular coordinate system is obtained using equation (4.13-10):

    J = ∂(x, y, z)/∂(R, θ, φ) = h1 h2 h3 = (R)² sin θ  (C.4-44)

The volume element in spherical coordinates is:

    dV = (R)² sin θ dR dθ dφ  (C.4-45)

Appendix D

VECTOR IDENTITIES

In this appendix are listed some of the more important vector identities and relations. The following notation is used:

    A, B, C, D, E, and F are arbitrary vectors.
    ax, ay, and az are rectangular components of the vector A.
    f, g, p, and q are arbitrary scalar functions.
    λ and µ are scalar constants.
    m is an integer.
    r is a line position vector.
    î, ĵ, and k̂ are unit base vectors along the x, y, and z rectangular coordinate axes, respectively.
    n̂ is a unit normal vector.

Vector in component form:

    A = ⟨ax, ay, az⟩  (D-1)

    A = ax î + ay ĵ + az k̂  (D-2)

Vector magnitude:

    |A| = A = √[(ax)² + (ay)² + (az)²] = √(A•A)  (D-3)

Unit vector:

    Â = A/|A| = A/A  (D-4)

    A = A Â  (D-5)

    Â = cos θx î + cos θy ĵ + cos θz k̂  (D-6)

Vector components:

    A•î = ax    A•ĵ = ay    A•k̂ = az  (D-7)

    A = (A•î) î + (A•ĵ) ĵ + (A•k̂) k̂  (D-8)

Position vector:

    r = x î + y ĵ + z k̂  (D-9)

    |r| = r = √[(x)² + (y)² + (z)²]  (D-10)

Vector addition and subtraction:

    A ± B = (ax ± bx) î + (ay ± by) ĵ + (az ± bz) k̂  (D-11)

    A + B = B + A  (D-12)

    (A + B) + C = A + (B + C)  (D-13)

Covariant components:

    W′i = (∂x^j/∂x′^i) Wj  (D-14)

Contravariant components:

    V′^i = (∂x′^i/∂x^j) V^j  (D-15)

Multiplication by a scalar:

    λA = λax î + λay ĵ + λaz k̂  (D-16)

    λ(A + B) = λA + λB  (D-17)

    (λ + µ)A = λA + µA  (D-18)

    (λµ)A = λ(µA) = λµA  (D-19)

Scalar product:

    A•B = |A||B| cos θ  (0 ≤ θ ≤ π)  (D-20)

    A•B = ax bx + ay by + az bz  (D-21)

    A•B = B•A  (D-22)

    A•(B + C) = A•B + A•C  (D-23)

    λ(A•B) = (λA)•B = A•(λB) = (A•B)λ  (D-24)

Angle between vectors:

    cos θ = A•B/(|A||B|) = A•B/√[(A•A)(B•B)]  (D-25)

Vector product:

    A×B = |A||B| sin θ n̂  (0 ≤ θ ≤ π)  (D-26)

    A×B = (ay bz − az by) î + (az bx − ax bz) ĵ + (ax by − ay bx) k̂  (D-27)

          | î   ĵ   k̂  |   | ê1  ê2  ê3 |
    A×B = | ax  ay  az |  = | a1  a2  a3 |  (D-28)
          | bx  by  bz |    | b1  b2  b3 |

    A×B = −B×A  (D-29)

    A×A = 0  (D-30)

    A×(B + C) = A×B + A×C  (D-31)

    λ(A×B) = (λA)×B = A×(λB) = (A×B)λ  (D-32)

Unit normal to the plane defined by A and B:

    n̂ = A×B/|A×B|  (D-33)

Scalar triple product:

              | ax  ay  az |
    A•B×C =   | bx  by  bz |  (D-34)
              | cx  cy  cz |

    A•B×C = A×B•C = B•C×A = B×C•A = C•A×B = C×A•B  (D-35)

    A•B×C = −A•C×B = −A×C•B = −B•A×C = −B×A•C = −C•B×A = −C×B•A  (D-36)

    A•A×B = A×A•B = A•B×A = A×B•A = A•B×B = A×B•B = 0  (D-37)

    [A B C] = [B C A] = [C A B]  (D-38)

    [A B C] = −[A C B] = −[B A C] = −[C B A]  (D-39)

    [A A B] = [A B B] = [A B A] = 0  (D-40)

Vector triple product:

    A×(B×C) ≠ (A×B)×C  (D-41)
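The scalar triple product identities (D-34), (D-38), and (D-39) are easy to exercise with a few lines of Python (the three sample vectors are arbitrary choices):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


def cross(u, v):
    # component form of the vector product, (D-27)
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])


def triple(a, b, c):
    # [A B C] = A . (B x C), equal to the 3x3 determinant of the components (D-34)
    return dot(a, cross(b, c))


A, B, C = (1, 2, 3), (4, 5, 6), (7, 8, 10)
t1 = triple(A, B, C)   # determinant value
t2 = triple(B, C, A)   # cyclic permutation, unchanged per (D-38)
t3 = triple(A, C, B)   # one swap, sign flips per (D-39)
```

With these integer vectors the determinant evaluates to −3, so t2 = −3 and t3 = +3.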

    A×(B×C) = (A•C)B − (A•B)C  (D-42)

    (A×B)×C = (A•C)B − (B•C)A  (D-43)

    A×(B×C) + B×(C×A) + C×(A×B) = 0  (D-44)

    (A×B)•(C×D) = (A•C)(B•D) − (A•D)(B•C)  (D-45)

                  | A•C  A•D |
    (A×B)•(C×D) = | B•C  B•D |  (D-46)

    (A×B)•(B×C) = (A•B)(B•C) − (A•C)(B)²  (D-47)

             | A•A  A•B |
    |A×B|² = | A•B  B•B | = (A)²(B)² − (A•B)²  (D-48)

    (A×B)×(C×D) = [A B D]C − [A B C]D  (D-49)

    (A×B)×(C×D) = [A C D]B − [B C D]A  (D-50)

    (A×B)×(A×C) = [A B C]A  (D-51)

    [A×B  B×C  C×A] = [A B C]²  (D-52)

    [A×B  C×D  E×F] = [A C D][B E F] − [A E F][B C D]  (D-53)

    [A×B  C×D  E×F] = [A B D][C E F] − [A B C][D E F]  (D-54)

    [A×B  C×D  E×F] = [A B E][C D F] − [A B F][C D E]  (D-55)

Other products:

    [B C D]A − [A C D]B + [A B D]C − [A B C]D = 0  (D-56)

    A = (A•n̂)n̂ + n̂×(A×n̂)  (D-57)

                    | A•D  A•E  A•F |
    (A×B•C)(D×E•F) = | B•D  B•E  B•F |  (D-58)
                    | C•D  C•E  C•F |

              | A•A  A•B  A•C |
    [A B C]² = | B•A  B•B  B•C |  (D-59)
              | C•A  C•B  C•C |

Vector differentiation:

    dA(t)/dt = (dax(t)/dt) î + (day(t)/dt) ĵ + (daz(t)/dt) k̂  (D-60)

    dA(t) = dax(t) î + day(t) ĵ + daz(t) k̂  (D-61)
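The triple-product expansion (D-42) and the Lagrange identity (D-45) can be confirmed exactly with integer vectors (the four sample vectors below are arbitrary choices):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])


def scale(s, u):
    return tuple(s * a for a in u)


def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))


A, B, C, D = (1, -2, 3), (2, 0, 1), (-1, 4, 2), (3, 1, -2)

# (D-42): A x (B x C) = (A.C)B - (A.B)C
lhs = cross(A, cross(B, C))
rhs = sub(scale(dot(A, C), B), scale(dot(A, B), C))

# (D-45): (A x B).(C x D) = (A.C)(B.D) - (A.D)(B.C)
lagrange_lhs = dot(cross(A, B), cross(C, D))
lagrange_rhs = dot(A, C) * dot(B, D) - dot(A, D) * dot(B, C)
```

All arithmetic here is integer, so both sides agree exactly (both Lagrange sides evaluate to −12 for these vectors).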

    d(A ± B)/dt = dA/dt ± dB/dt  (D-62)

    d(fA)/dt = f dA/dt + (df/dt) A  (D-63)

    d(A•B)/dt = A•(dB/dt) + (dA/dt)•B  (D-64)

    d(A×B)/dt = A×(dB/dt) + (dA/dt)×B  (D-65)

    A•(dA/dt) = |A| d|A|/dt  (D-66)

    d|A|/dt = Â•(dA/dt)  (D-67)

    d[A B C]/dt = [A B dC/dt] + [A dB/dt C] + [dA/dt B C]  (D-68)

    d[A×(B×C)]/dt = A×[B×(dC/dt)] + A×[(dB/dt)×C] + (dA/dt)×[B×C]  (D-69)

Position vector:

    r(t) = x(t) î + y(t) ĵ + z(t) k̂  (D-70)

    dr = dx¹ ê1 + dx² ê2 + dx³ ê3 = dx^i êi  (D-71)

    |dr| = dr = √[(dx¹)² + (dx²)² + (dx³)²] = ds  (D-72)

Space curves:

    T̂ = dr/ds = (dr/dt)/(ds/dt) = (dr/dt)/|dr/dt|  (D-73)

Arc length:

    s = ∫[t1,t2] (ds/dt) dt = ∫[t1,t2] |dr/dt| dt = ∫[t1,t2] √(dr/dt • dr/dt) dt  (D-74)

    s = ∫[t1,t2] √[(dx/dt)² + (dy/dt)² + (dz/dt)²] dt  (D-75)
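The arc-length integral (D-74) can be evaluated numerically for a concrete curve. A sketch using a circular helix r(t) = (cos t, sin t, t), a hypothetical example not taken from the text, whose speed |dr/dt| = √2 is constant, so one full turn has length 2π√2:

```python
import math


def r(t):
    # sample space curve: circular helix
    return (math.cos(t), math.sin(t), t)


def speed(t, h=1e-6):
    # |dr/dt| by central differences, the integrand of (D-74)
    return math.dist(r(t + h), r(t - h)) / (2 * h)


# arc length over one turn [0, 2*pi] by the midpoint rule
n = 2000
a, b = 0.0, 2 * math.pi
dt = (b - a) / n
s = sum(speed(a + (k + 0.5) * dt) for k in range(n)) * dt

s_exact = 2 * math.pi * math.sqrt(2)
```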

Gradient:

    ∇ = (∂/∂x) î + (∂/∂y) ĵ + (∂/∂z) k̂  (D-76)

    ∇f = (∂f/∂x) î + (∂f/∂y) ĵ + (∂f/∂z) k̂  (D-77)

    ∇f = (∂f/∂xi) êi  (D-78)

    df/ds = ∇f•T̂ = T̂•∇f  (D-79)

    df = ∇f•dr  (D-80)

    df/dn = ∇f•n̂  (D-81)

    (df/ds)max = |∇f| = √[(∂f/∂x)² + (∂f/∂y)² + (∂f/∂z)²]  (D-82)

    n̂ = ±∇f/|∇f|  (D-83)

    ∇(λf) = λ∇f  (D-84)

    ∇(f + g) = ∇f + ∇g  (D-85)

    ∇(fg) = f∇g + g∇f  (D-86)

    ∇(f/g) = (g∇f − f∇g)/(g)²  (D-87)

    ∇(f)^m = m (f)^(m−1) ∇f  (D-88)

    ∇r^m = m r^(m−1) ∇r = m r^(m−1) r̂ = m r^(m−2) r  (D-89)

    ∇(1/r) = −r/r³ = −r̂/r²  (D-90)

    ∇f(r) = (∂f(r)/∂r) ∇r = (∂f(r)/∂r)(r/r) = (∂f(r)/∂r) r̂  (D-91)

    Df/Dt = ∂f/∂t + v•∇f  (D-92)

    ∇g(f) = (∂g/∂f) ∇f  (D-93)
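Identity (D-90) is a convenient numerical test case for the gradient: for f = 1/r the components of ∇f should equal −x/r³, −y/r³, −z/r³. A minimal sketch (evaluation point chosen so that r = 3):

```python
import math

h = 1e-6


def f(x, y, z):
    # f = 1/r with r the distance from the origin
    return 1.0 / math.sqrt(x * x + y * y + z * z)


def grad(x, y, z):
    # central-difference gradient, per (D-77)
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))


p = (1.0, 2.0, 2.0)                       # r = 3
r = math.sqrt(sum(c * c for c in p))
g = grad(*p)
expected = tuple(-c / r ** 3 for c in p)  # (D-90): grad(1/r) = -r_vec / r^3
```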

Vector integration:

    ∫ A(t) dt = î ∫ ax(t) dt + ĵ ∫ ay(t) dt + k̂ ∫ az(t) dt  (D-94)

    ∫ λA(t) dt = λ ∫ A(t) dt  (D-95)

    ∫ [A(t) + B(t)] dt = ∫ A(t) dt + ∫ B(t) dt  (D-96)

    ∫[a,b] A(t) dt + ∫[b,c] A(t) dt = ∫[a,c] A(t) dt  (D-97)

    ∫_C f dr = î ∫ f dx + ĵ ∫ f dy + k̂ ∫ f dz  (D-98)

    ∫_C A•dr = ∫ (ax dx + ay dy + az dz)  (D-99)

    ∫_C A×dr = î ∫ (ay dz − az dy) + ĵ ∫ (az dx − ax dz) + k̂ ∫ (ax dy − ay dx)  (D-100)

    ∫_C f dr = ∫_C f T̂ ds  (D-101)

    ∬_S f dS = ∬_S f n̂ dS  (D-102)

    ∬_S A•dS = ∬_S A•n̂ dS  (D-103)

    ∬_S A×dS = ∬_S A×n̂ dS  (D-104)

    ∭_V A dV = î ∭_V ax dV + ĵ ∭_V ay dV + k̂ ∭_V az dV  (D-105)

Divergence:

    ∇•B = ∂bx/∂x + ∂by/∂y + ∂bz/∂z  (D-106)

    ∇•B = ∂bi/∂xi  (D-107)

    ∇•(A + B) = ∇•A + ∇•B  (D-108)

    ∇•(fA) = A•∇f + f ∇•A  (D-109)

    ∇•(f∇g) = f ∇²g + ∇f•∇g  (D-110)

    ∇•r = 3  (D-111)

    ∇•r̂ = 2/r  (D-112)

    ∇•(r^m r) = (m + 3) r^m  (D-113)

    ∇•[(r)^m A] = m (r)^(m−2) (r•A)  (A constant)  (D-114)

    ∇•(A×r) = 0  (A constant)  (D-115)

    (dr•∇)f = df  (D-116)

    ∇•[f(r) r] = r df(r)/dr + 3 f(r)  (D-117)

    (A•∇)f = A•∇f  (D-118)

    (A•∇)B = ax ∂B/∂x + ay ∂B/∂y + az ∂B/∂z  (D-119)

    (A•∇)r = A  (D-120)

    ∇•A(f) = (dA/df)•∇f  (D-121)

    ∇•B = lim[ΔV→0] (1/ΔV) ∬_S B•n̂ dS  (D-122)

Curl:

    ∇×B = [∂bz/∂y − ∂by/∂z] î + [∂bx/∂z − ∂bz/∂x] ĵ + [∂by/∂x − ∂bx/∂y] k̂  (D-123)

          | î     ĵ     k̂   |
    ∇×B = | ∂/∂x  ∂/∂y  ∂/∂z |  (D-124)
          | bx    by    bz   |

    ∇×(A + B) = ∇×A + ∇×B  (D-125)

    ∇×(fA) = ∇f×A + f ∇×A  (D-126)

    ∇×∇f = 0  (D-127)

    ∇×(f∇g) = ∇f×∇g  (D-128)

    ∇×(f∇f) = 0  (D-129)

    ∇•(∇f×∇g) = 0  (D-130)

    ∇×r = 0  (D-131)

    ∇×[f(r) r] = 0  (D-132)

    ∇×(A×r) = 2A  (A constant)  (D-133)

    (A×∇)f = A×∇f  (D-134)

    ∇×A(f) = ∇f × (dA/df)  (D-135)

    n̂•(∇×B) = lim[ΔS→0] (1/ΔS) ∮_C B•dr  (D-136)
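Identity (D-133) gives a simple exact test for the curl formula (D-123): for constant A the field F = A×r has curl 2A. Since F is linear in the coordinates, central differences reproduce this to rounding error (the constant vector and evaluation point are arbitrary choices):

```python
h = 1e-5
A = (1.5, -2.0, 0.5)  # constant vector


def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])


def F(x, y, z):
    # F = A x r_vec
    return cross(A, (x, y, z))


def partial(comp, axis, p):
    qp = list(p); qp[axis] += h
    qm = list(p); qm[axis] -= h
    return (F(*qp)[comp] - F(*qm)[comp]) / (2 * h)


p = (0.3, -1.2, 2.0)
curl_F = (partial(2, 1, p) - partial(1, 2, p),
          partial(0, 2, p) - partial(2, 0, p),
          partial(1, 0, p) - partial(0, 1, p))
expected = tuple(2 * a for a in A)  # (D-133): curl(A x r_vec) = 2A
```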

Gradient operator identities:

    ∇•(∇×A) = 0  (D-137)

    (A×∇)•B = A•(∇×B)  (D-138)

    (∇×A)•B = ∇•(A×B) + A•(∇×B)  (D-139)

    ∇•(f ∇×A) = (∇×A)•∇f  (D-140)

    ∇×(A×B) = (∇•B)A − (∇•A)B + (B•∇)A − (A•∇)B  (D-141)

    ∇(A•B) = (A•∇)B + (B•∇)A + A×(∇×B) + B×(∇×A)  (D-142)

    ∇×(∇×A) = ∇(∇•A) − ∇²A  (D-144)

    ∇•[A×(B×C)] = (A•C) ∇•B − (A•B) ∇•C + B•∇(A•C) − C•∇(A•B)  (D-145)

    ∇•(A×B) = B•(∇×A) − A•(∇×B)  (D-146)

    A×(∇×A) = (1/2) ∇(A)² − (A•∇)A  (D-147)

    (∇×A)×A = (A•∇)A − (1/2) ∇(A)²  (D-148)

Laplacian:

    ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²  (D-150)

    ∇²f = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z²  (D-151)

    ∇•∇f = ∇²f  (D-152)

    ∇²(∇f) = ∇(∇²f)  (D-153)

    ∇•(∇²A) = ∇²(∇•A)  (D-154)

    ∇²(A•r) = 2 ∇•A + r•∇²A  (D-155)

    ∇⁴f = ∇²(∇²f)  (D-156)

    ∇²(fg) = g ∇²f + 2 ∇f•∇g + f ∇²g  (D-157)

    ∇²r^m = m(m + 1) r^(m−2)  (D-158)

    ∇²(1/r) = 0  (r ≠ 0)  (D-159)

    ∇²[(r)⁻³ r] = 0  (r ≠ 0)  (D-160)

    ∇²A = ∇(∇•A) − ∇×(∇×A)  (D-161)

    ∇²r = 0  (D-162)

    ∇²(r)² = 6  (D-163)

    ∇²[(r)^m A] = m(m + 1)(r)^(m−2) A  (A constant)  (D-164)

Gauss’s theorem:

    ∭_V ∇•B dV = ∬_S B•n̂ dS  (D-165)

    ∭_V ∇•B dV = ∬_S B•dS  (D-166)

Curl theorem:

    ∭_V ∇×B dV = ∬_S n̂×B dS  (D-167)

    ∭_V ∇×B dV = −∬_S B×dS  (D-168)

Gradient theorem:

    ∭_V ∇f dV = ∬_S f dS  (D-169)

    ∇f = lim[ΔV→0] (1/ΔV) ∬_S f dS  (D-170)
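Gauss's theorem (D-165) can be illustrated on a region where both sides are computable by hand. For the hypothetical field B = (x², y², z²) on the unit cube, ∇•B = 2x + 2y + 2z, and both the volume and surface integrals equal 3 (only the faces x = 1, y = 1, z = 1 contribute, each giving 1). A sketch evaluating the volume side by the midpoint rule:

```python
n = 50          # midpoint-rule points per axis
hs = 1.0 / n


def mid(k):
    return (k + 0.5) * hs


# volume integral of div B = 2(x + y + z) over the unit cube
vol = sum(2 * (mid(i) + mid(j) + mid(k))
          for i in range(n) for j in range(n) for k in range(n)) * hs ** 3

# surface integral of B . n over the six faces: the faces at x = 1, y = 1, z = 1
# each contribute the integral of 1 over a unit square; the faces at 0 contribute 0
surf = 3.0

gauss_gap = abs(vol - surf)
```

The midpoint rule is exact for the linear integrand, so the two sides agree to rounding error.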

Stokes’s theorem:

    ∬_S (∇×B)•dS = ∮_C B•dr  (D-171)

    ∬_S (∇×B)•n̂ dS = ∮_C B•T̂ ds  (D-172)

    ∬_S ∇f×dS = −∮_C f dr  (D-173)

    ∬_S n̂×∇f dS = ∮_C f T̂ ds  (D-174)

    ∬_S (n̂×∇)×A dS = ∮_C T̂×A ds  (D-175)

    ∬_S (∇f×∇g)•dS = −∮_C g∇f•dr = −∮_C g df  (D-176)

Green’s first theorem:

    ∭_V (p ∇²q + ∇p•∇q) dV = ∬_S p∇q•dS  (D-177)

Green’s second theorem:

    ∭_V (p ∇²q − q ∇²p) dV = ∬_S [p dq/dn − q dp/dn] dS  (D-178)

    ∭_V (p ∇²q − q ∇²p) dV = ∬_S (p∇q − q∇p)•n̂ dS  (D-179)

    ∭_V (A•∇²B − B•∇²A) dV = ∬_S [A×(∇×B) + A(∇•B)]•dS
                              − ∬_S [B×(∇×A) + B(∇•A)]•dS  (D-180)

Green’s third theorem:

    ∭_V [A•∇×(∇×B) − B•∇×(∇×A)] dV = ∬_S [B×(∇×A) − A×(∇×B)]•dS  (D-181)

Base vectors:

    ei = ∂r/∂x^i    ei = (∂x′^j/∂x^i) e′j  (D-182)

    e′j = ∂r/∂x′^j    e′j = (∂x^i/∂x′^j) ei  (D-183)

Vector fields:

Solenoidal vector field B:

    ∇•B = 0    B = ∇×A  (D-184)

Irrotational vector field B:

    ∇×B = 0    B = ∇f  (D-185)

References

Apostol, T. M., 1969, Calculus: multi variable calculus and linear algebra, with applications to differential equations and probability, 2nd Edition, John Wiley & Sons, New York. Arfken, G. B., and H. J. Weber, 2005, Mathematical methods for physicists, 6th Edition, Academic Press, New York.

Abraham, R., J. E. Marsden, and T. S. Ratin, 1988, Manifolds, tensor analysis, and applications, Springer-Verlag, New York. Abram, J., 1965, Tensor calculus through differential geometry, Butterworths, London. Akivis, M. A., and V. V. Goldberg, 1977, An introduction to linear algebra and tensors, Dover Publications, New York. Akivis, M. A., and V. V. Goldberg, 2003, Tensor calculus with applications, World Scientific, New Jersey. Anton, H., 1992, Calculus, with analytic geometry, 4th Edition, John Wiley & Sons, New York. Apostol, T. M., 1967, Calculus: one-variable calculus, with an introduction to linear algebra, 2nd Edition, John Wiley & Sons, New York.

Aris, R., 1989, Vectors, tensors, and the basic equations of fluid mechanics, Dover Publications, New York. Attenborough, M., 2003, Mathematics for electrical engineering and computing, Newnes, Amsterdam. Barnett, R. A., and J. N. Fujii, 1963, Vectors, John Wiley & Sons, New York. Barr, T. H., 1997, Vector calculus, Prentice Hall, Upper Saddle River, New Jersey. Battaglia, F., and T. F. George, 2013, Tensors: a guide for undergraduate students, American Journal of Physicists, Volume 81, pages 498-511. Bauer, E., 1955, Champs de vecteurs et de tenseurs: introduction à l’électro-magnétisme, Masson et Cie, Paris. Baxandall, P., and H. Liebeck, 1986, Vector calculus, Oxford University Press, Oxford.

340

Becker, R., 1964, Electromagnetic fields and interactions, Volume 1, Electromagnetic theory and relativity, edited by F. Sauter, Blaisdell Publishing Company, New York. Bedford, A., and D. S. Drumheller, 1994, Introduction to elastic wave propagation, John Wiley & Sons, Chichester, England. Bedford, F. W., 1970, Vector calculus, McGraw-Hill, New York. Beju, I., E. Soós, and P. P. Teodorescu, 1983, Euclidean tensor calculus with applications, Abacus Press, Tunbridge Wells, Kent, England. Bergmann, P. G., 1976, Introduction to the theory of relativity, Dover Publications, New York. Bhattacharyya, D., 1920, Vector calculus, Calcutta University Press, Calcutta. Bickley, W. G., and R. E. Gibson, 1962, Via vector to tensor, an introduction to the concepts and techniques of the vector calculus, John Wiley & Sons, New York. Bird, R. B., W. E. Stewart, and E. N. Lightfoot, 2002, Transport Phenomena, 2nd Edition, John Wiley & Sons, New York. Bishop, R. L., and S. I. Goldberg, 1980, Tensor analysis on manifolds, Dover Publications, New York.

Bloch, H. D., 1962, Introduction to tensor analysis, C. E. Merrill Books, Columbus, Ohio. Boas, M. L., 2006, Mathematical methods in the physical sciences, 3rd Edition, John Wiley & Sons, New York. Boast, W. B., 1964, Vector fields: a vector foundation of electric and magnetic fields, Harper & Row Publishers, New York. Boothby, W. M., 1986, An introduction to differentiable manifolds and Riemannian geometry, 2nd Edition, Academic Press, New York. Borisenko, A. I., and I. E. Tarapov, 1979, Vector and tensor analysis with applications, Dover Publications, New York. Bourne, D. E., and P. C. Kendall, 1992, Vector analysis and Cartesian tensors, 3rd Edition, Chapman & Hall, New York. Bowen, R. M., and C.-C. Wang, 1976, Introduction to vectors and tensors, Plenum Press, New York. Brand, L., 1930, Vectorial mechanics, John Wiley & Sons, London. Brand, L., 1947, Vector and tensor analysis, John Wiley & Sons, New York. Brillouin, L., 1964, Tensors in mechanics and elasticity, Academic Press, New York. 341

Bronwell, A., 1953, Advanced mathematics in physics and engineering, McGraw-Hill Book Company, New York.

Chorlton, F., 1976, Vector & tensor methods, Halsted Press, New York.

Budiansky, B., 1983, Tensors, in “Handbook of Applied Mathematics: Selected Results and Methods,” 2nd Edition, edited by C. E. Pearson, pages 179-225, Van Nostrand Reinhold, New York.

Chou, P. C., and N. J. Pagano, 1992, Elasticity: tensor, dyadic, and engineering approaches, Dover Publications, New York.

Butkov, E., 1968, Mathematical physics, Addison-Wesley, Reading, Massachusetts.

Chow, T. L., 2003, Mathematical methods for physicists: a concise introduction, Cambridge University Press, Cambridge.

Byron, Jr., F. W., and R. W. Fuller, 1992, Mathematics of classical and quantum physics, Dover Publications, New York.

Coburn, N., 1955, Vector and tensor analysis, The Macmillan Company, New York.

Cahill, K., 2013, Physical mathematics, Cambridge University Press, Cambridge.

Coffin, J. G., 1911, Vector analysis: an introduction to vectormethods and their various applications to physics and mathematics, 2nd Edition, John Wiley & Sons, New York.

Campbell, J. E., 1926, A course of differential geometry, Oxford University Press, Oxford. Carroll, S., 2004, Spacetime and geometry: an introduction to general relativity, Addison Wesley, San Francisco. Chambers, L. G., 1969, A course in vector analysis, Chapman and Hall, London. Chandrasekharaiah, D. S., and L. Debnath, 1994, Continuum mechanics, Academic Press, New York. Chisholm, J. S. R., 1978, Vectors in three-dimensional space, Cambridge University Press, Cambridge.

Colley, S. J., 1998, Vector calculus, Prentice Hall, Upper Saddle River, New Jersey. Collins, R. E., 1999, Mathematical methods for physicists and engineers, 2nd Edition, Dover Publications, New York. Corwin, L. J., 1979, Calculus in vector spaces, M. Dekker, New York. Coulson, A. E., 1967, An introduction to vectors, Longmans, London.

342

Courant, R., 1988, Differential and integral calculus, Volume II, John Wiley & Sons, New York.

Das, A., 2007, Tensors: the mathematics of relativity theory and continuum mechanics, Springer, New York.

Cox, G. N., and F. J. Germano, 1941, Fluid mechanics, D. Van Nostrand Company, New York.

Davis, H. F., and A. D. Snyder, 2000, Introduction to vector analysis, 7th Edition, Hawkes Publishing.

Craig, H. C., 1943, Vector and tensor analysis, McGraw-Hill Book Co., New York.

Dawber, P. G. 1987, Vectors and vector operators, Adam Hilger, Bristol, England.

Crenshaw, J. W., 1992, Understanding vectors, Embedded Systems Programming, pp. 52-65.

De, U. C., A. A. Shaikh, and J. Sengupta, 2005, Tensor calculus, Alpha Science International Ltd., Harrow, UK.

Crowe, M. J., 1994, A history of vector analysis: the evolution of the idea of a vectorial system, Dover Publications, New York.

Dennery, P., and A. Krzywicki, 1996, Mathematics for physicists, Dover Publications, Mineola, New York.

Crowell, R. H., and R. E. Williamson, 1962, Calculus of vector functions, Prentice-Hall, Englewood Cliffs, New Jersey. Curtis, P. C., 1972, Calculus with an introduction to vectors, John Wiley & Sons, New York. Dalarsson, M., and N. Dalarsson, 2005, Tensor calculus, relativity, and cosmology: a first course, Elsevier Academic Press, Amsterdam. Danielson, D. A., 1992, Vectors and tensors in engineering and physics, Addison-Wesley, Redwood City, California. Darling, R. W. R., 1995, Differential forms and connections, Cambridge University Press, Cambridge.

d’Inverno, R., 1992, Introducing Einstein’s relativity, Oxford University Press, Oxford. Dixon, C., 1971, Applied mathematics of science and engineering, Johne Wiley & Sons, London. Doherty, R. E., and E. G. Keller, 1936, Mathematics of modern engineering, Volume I, John Wiley & Sons, New York. Drew, T. B., 1961, Handbook of vector and polyadic analysis, Reinhold Publishing Corporation, New York. Durrant, A. V., 1996, Vectors in physics and engineering, Chapman & Hall, London. Eisele, J. A., and R. M. Mason, 1970, Applied matrix and tensor analysis, Wiley-Interscience, New York. 343

Eisenhart, L. P., 1909, A treatise on the differential geometry of curves and surfaces, Ginn and Company, Boston. Eisenhart, L. P., 1926, Riemannian geometry, Princeton University Press, Princeton. Eisenhart, L. P., 1947, An introduction to differential geometry, with use of the tensor calculus, Princeton University Press, Princeton. Eliezer, C. J., 1963, Concise vector analysis, Pergamon Press, New York. Ericksen, J. L., 1960, Tensor fields, in “Encyclopedia of Physics,” Volume III/1, Principles of Classical Mechanics and Field Theory, edited by S. Flugge, pages 794-857, SpringerVerlag, Berlin. Eringen, A. C., 1967, Mechanics of continua, John Wiley & Sons, New York. Eringen, A. C., and E. S. Suhubi, 1974a, Elastodynamics, Volume 1, Finite motions, Academic Press, New York. Eringen, A. C., and E. S. Suhubi, 1974b, Elastodynamics, Volume 2, Linear theory, Academic Press, New York. Fadell, A. G., 1968, Vector calculus: and differential equations, Van Nostrand, Princeton.

Faraday, M., 1844 (1952), Experimental researches in electricity: Volume 1, in Great Books of the Western World, Volume 45, edited by R. M. Hutchins, Encyclopedia Britannica, London. Faraday, M., 1847 (1952), Experimental researches in electricity: Volume 2, in Great Books of the Western World, Volume 45, edited by R. M. Hutchins, Encyclopedia Britannica, London. Faraday, M., 1855 (1952), Experimental researches in electricity: Volume 3, in Great Books of the Western World, Volume 45, edited by R. M. Hutchins, Encyclopedia Britannica, London. Felder, G. N., and K. M. Felder, 2016, Mathematical methods in engineering and physics, John Wiley & Sons, Hoboken, New Jersey. Finzi, B., and M. Pastori, 1961, Calcolo tensoriale e applicazioni, Zanichelli, Bologna. Flanders, H., 1989, Differential forms, with applications to the physical sciences, Dover Publications, New York. Fleisch, D. A., 2012, A student’s guide to vectors and tensors, Cambridge University Press, Cambridge. Flügge, W., 1972, Tensor analysis and continuum mechanics, Springer-Verlag, Berlin. 344

Fomenko, A. T., O. V. Manturov, and V. V. Trofimov, 1998, Tensor and vector analysis: geometry, mechanics, and physics, Gordon and Breach Science Publishers, Amsterdam, The Netherlands. Foster, J., and J. D. Nightingale, 1995, A short course in general relativity, 2nd Edition, Springer-Verlag, New York. Frankel, T., 1997, The geometry of physics: an introduction, Cambridge University Press, Cambridge. Fung, Y. C., 1977, A first course in continuum mechanics, 2nd Edition, Prentice-Hall, Englewood Cliffs, New Jersey. Gans, R., 1932, Vector analysis with applications to physics, Blackie & Son, London. Gerretsen, J., 1962, Lectures on tensor calculus and differential geometry, P. Noordhoff, Groningen. Goldberg, S. I., 1998, Curvature and homology, Dover Publications, Mineola, New York. Golub, S., 1974, Tensor calculus, Elsevier Scientific Publishing Company. Goodbody, A. M., 1982, Cartesian tensors: with applications to mechanics, fluid mechanics and elasticity, John Wiley & Sons, New York.

Gooding, D., 1980, Faraday, Thomson, and the concept of the magnetic field, The British Journal for the History of Science, Volume 13, pages 91-120.

Graustein, W. C., 1935, Differential geometry, The Macmillan Company, New York.

Green, A. E., and W. Zerna, 1992, Theoretical elasticity, 2nd Edition, Dover Publications, New York.

Green, Jr., B. A., 1967, Vector calculus, Appleton-Century-Crofts, New York.

Greenberg, M. D., 1998, Advanced engineering mathematics, 2nd Edition, Prentice Hall, Upper Saddle River, New Jersey.

Greig, D. M., and T. H. Wise, 1962, Hydrodynamics and vector field theory, Volumes 1 and 2, D. Van Nostrand, Princeton, New Jersey.

Grinfeld, P., 2013, Introduction to tensor analysis and the calculus of moving surfaces, Springer, New York.

Guggenheimer, H. W., 1977, Differential geometry, Dover Publications, New York.

Haas, A. E., 1922, Vektoranalysis in ihren Grundzügen und wichtigsten physikalischen Anwendungen, Walter de Gruyter & Company, Berlin.


Hague, B., 1945, An introduction to vector analysis for physicists and engineers, 3rd Edition, Methuen & Company, London.

Hakim, R., 1999, An introduction to relativistic gravitation, Cambridge University Press, Cambridge.

Harper, C., 1976, Introduction to mathematical physics, Prentice-Hall, Englewood Cliffs, New Jersey.

Hassani, S., 2009, Mathematical methods: for students of physics and related fields, 2nd Edition, Springer-Verlag, New York.

Hay, G. E., 1953, Vector and tensor analysis, Dover Publications, New York.

Heading, J., 1970, Mathematical methods in science and engineering, 2nd Edition, Edward Arnold, London.

Heinbockel, J. H., 2001, Introduction to tensor calculus and continuum mechanics, Trafford Publishing, Victoria, British Columbia.

Hess, S., 2015, Tensors for physics, Springer, Heidelberg.

Hildebrand, F. B., 1962, Advanced calculus for applications, Prentice-Hall, Englewood Cliffs, New Jersey.

Hinchey, F. A., 1976, Vectors and tensors for engineers and scientists, John Wiley & Sons, New York.

Hobson, M. P., G. P. Efstathiou, and A. N. Lasenby, 2006, General relativity: an introduction for physicists, Cambridge University Press, Cambridge.

Hoffmann, B., 1975, About vectors, Dover Publications, New York.

Hooper, D. L. A., 1970, Vectors, Methuen Educational, London.

Houston, W. V., 1948, Principles of mathematical physics, 2nd Edition, McGraw-Hill Book Company, New York.

Hsu, H. P., 1984, Applied vector analysis, Harcourt Brace Jovanovich, New York.

Hubbard, J. H., and B. B. Hubbard, 1999, Vector calculus, linear algebra, and differential forms: a unified approach, Prentice Hall, Upper Saddle River, New Jersey.

Hughston, L. P., and K. P. Tod, 1990, An introduction to general relativity, Cambridge University Press, Cambridge.

Hummel, J. A., 1965, Introduction to vector functions, Addison-Wesley Publishing Company, Reading, Massachusetts.

Hunter, S. C., 1976, Mechanics of continuous media, John Wiley & Sons, New York.

Hyde, E. W., 1890, The directional calculus: based upon the methods of Hermann Grassmann, Ginn & Company, Boston.

Itskov, M., 2007, Tensor algebra and tensor analysis for engineers: with applications to continuum mechanics, Springer-Verlag, Berlin.

Jaeger, J. C., 1956, An introduction to applied mathematics, Oxford University Press, Oxford.

Jaeger, L., 1966, Cartesian tensors in engineering science, Pergamon Press, Cambridge, UK.

Jänich, K., 2001, Vector analysis, Springer-Verlag, New York.

Jaunzemis, W., 1967, Continuum mechanics, The Macmillan Company, New York.

Jeffrey, A., 1971, Mathematics for engineers and scientists, Thomas Nelson and Sons, London.

Jeffreys, H., 1969, Cartesian tensors, Cambridge University Press, Cambridge.

Jeffreys, H., and B. S. Jeffreys, 1956, Methods of mathematical physics, 3rd Edition, Cambridge University Press, Cambridge.

Joglekar, S. D., 2005, Mathematical physics: the basics, Universities Press, Hyderabad, India.

Joos, G., 1950, Theoretical physics, 3rd Edition, Hafner Publishing Company, New York.

Juvet, G., 1922, Introduction au calcul tensoriel et au calcul différentiel absolu, Albert Blanchard, Paris.

Kaplan, W., 1959, Advanced calculus, Addison-Wesley Publishing Company, Reading, Massachusetts.

Karamcheti, K., 1967, Vector analysis and Cartesian tensors, with selected applications, Holden-Day, San Francisco.

Kay, D. C., 1988, Theory and problems of tensor calculus, Schaum’s Outline Series, McGraw-Hill, New York.

Kejla, F., 1969, Vector algebra, in Survey of Applicable Mathematics, edited by K. Rektorys, pages 263-269, The M.I.T. Press, Cambridge, Massachusetts.

Kellogg, O. D., 1953, Foundations of potential theory, Dover Publications, New York.

Kemmer, N., 1977, Vector analysis: a physicist’s guide to the mathematics of fields in three dimensions, Cambridge University Press, Cambridge.

Khan, Q., 2015, Tensor analysis and its applications, Partridge Publishing, India.

Kilmister, C. W., 1973, General theory of relativity, Pergamon Press, Oxford.


Kolecki, J. C., 2002, An introduction to tensors for students of physics and engineering, NASA Technical Publication NASA/TP–2002-211716, Cleveland, Ohio.

Kolecki, J. C., 2005, Foundations of tensor analysis for students of physics and engineering with an introduction to the theory of relativity, NASA Technical Publication NASA/TP–2005-213115, Cleveland, Ohio.

Kolsky, H., 1963, Stress waves in solids, Dover Publications, New York.

Kopczynski, W., and A. Trautman, 1992, Spacetime and gravitation, John Wiley & Sons, New York.

Korn, G. A., and T. M. Korn, 2000, Mathematical handbook for scientists and engineers, Dover Publications, Mineola, New York.

Krasnov, M. L., A. I. Kiselev, and G. I. Makarenko, 1983, Vector analysis, Mir Publishers, Moscow.

Kraut, E. A., 1967, Fundamentals of mathematical physics, McGraw-Hill Book Company, New York.

Kreyszig, E., 1991, Differential geometry, Dover Publications, New York.

Kreyszig, E., 2011, Advanced engineering mathematics, 10th Edition, John Wiley & Sons, New York.

Krogdahl, W. S., 1978, Tensor analysis: fundamentals and applications, University Press of America, Washington, D. C.

Kron, G., 1942, A short course in tensor analysis for electrical engineers, John Wiley & Sons, New York.

Kusse, B. R., and E. A. Westwig, 2006, Mathematical physics: applied mathematics for scientists and engineers, 2nd Edition, Wiley-VCH, Weinheim.

Lambe, C. G., 1969, Applied mathematics for engineers and scientists, The English Universities Press, London.

Lass, H., 1950, Vector and tensor analysis, McGraw-Hill Book Company, New York.

Lawden, D. F., 1986, An introduction to tensor calculus, relativity and cosmology, 3rd Edition, John Wiley & Sons, New York.

Leaton, E. H., 1968, Vectors, John Wiley & Sons, New York.

Lebedev, L. P., and M. J. Cloud, 2003, Tensor analysis, World Scientific, New Jersey.

Leipholz, H., 1974, Theory of elasticity, Noordhoff International Publications, Leyden.

Lennox, S. C., and M. Chadwick, 1987, Mathematics for engineers and applied scientists, Heinemann, London.

Levi-Civita, T., 1977, The absolute differential calculus (calculus of tensors), Dover Publications, New York.

Lewis, P. E., and J. P. Ward, 1989, Vector analysis for engineers and scientists, Addison-Wesley Publishing Company, Reading, Massachusetts.

Li, W.-H., and S.-H. Lam, 1964, Principles of fluid mechanics, Addison-Wesley Publishing Company, Reading, Massachusetts.

Lichnerowicz, A., 1962, Elements of tensor calculus, Methuen & Company, London.

Lindgren, B. W., 1964, Vector calculus, The Macmillan Company, New York.

Lindsay, R. B., and H. Margenau, 1981, Foundations of physics, Ox Bow Press, Woodbridge, Connecticut.

Long, R. R., 1963, Engineering Science Mechanics, Prentice Hall, Englewood Cliffs, New Jersey.

Loomis, L. H., and S. Sternberg, 1990, Advanced calculus, Jones and Bartlett Publishers, Boston.

Lopez, R. J., 2001, Advanced engineering mathematics, Addison-Wesley, Boston.

Love, A. E. H., 1944, A treatise on the mathematical theory of elasticity, 4th Edition, Dover Publications, New York.

Lovelock, D., and H. Rund, 1989, Tensors, differential forms, and variational principles, Dover Publications, New York.

Low, F. E., 1997, Classical field theory: electromagnetism and gravitation, John Wiley & Sons, New York.

MacBeath, A. M., 1964, Elementary vector algebra, Oxford University Press, Oxford.

MacFarlane, A., 1906, Vector analysis and quaternions, 4th Edition, John Wiley & Sons, New York.

MacMillan, W. D., 1958, Theoretical mechanics: the theory of the potential, Dover Publications, New York.

Malvern, L. E., 1969, Introduction to the mechanics of a continuous medium, Prentice-Hall, Englewood Cliffs, New Jersey.

Marder, L., 1972, Vector fields, Allen and Unwin, London.

Margenau, H., and G. M. Murphy, 1965, The mathematics of physics and chemistry, 2nd Edition, pages 137-161, D. Van Nostrand Company, New York.

Marion, J. B., 1965, Principles of vector analysis, Academic Press, New York.

Mariwalla, K. H., 1975, An introduction to vectors, tensors and relativity, Institute of Mathematical Sciences, Madras.


Marsden, J. E., and T. J. R. Hughes, 1994, Mathematical foundations of elasticity, Dover Publications, New York.

Marsden, J. E., and A. J. Tromba, 1996, Vector calculus, 4th Edition, W. H. Freeman and Company, New York.

Marsden, J. E., T. Ratiu, and R. Abraham, 2001, Manifolds, tensor analysis, and applications, 3rd Edition, Springer-Verlag, New York.

Mase, G. E., 1970, Theory and problems of continuum mechanics, Schaum’s Outline Series, McGraw-Hill Book Company, New York.

Mathews, J., and R. L. Walker, 1964, Mathematical methods of physics, W. A. Benjamin, New York.

Matthews, P. C., 1998, Vector calculus, Springer, New York.

Maxwell, E. A., 1958, Coordinate geometry: with vectors and tensors, Oxford University Press, Oxford.

Maxwell, J. C., 1856, On Faraday’s lines of force, Transactions of the Cambridge Philosophical Society, Volume 10, pages 27-83.

Maxwell, J. C., 1861a, On physical lines of force: Part I, The theory of molecular vortices applied to magnetic phenomena, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science (The Philosophical Magazine), Volume 21, pages 161-175.

Maxwell, J. C., 1861b, On physical lines of force: Part II, The theory of molecular vortices applied to electric currents, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science (The Philosophical Magazine), Volume 21, pages 281-291.

Maxwell, J. C., 1862a, On physical lines of force: Part III, The theory of molecular vortices applied to statical electricity, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science (The Philosophical Magazine), Volume 23, pages 12-24.

Maxwell, J. C., 1862b, On physical lines of force: Part IV, The theory of molecular vortices applied to the action of magnetism on polarized light, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science (The Philosophical Magazine), Volume 23, pages 85-95.

Maxwell, J. C., 1865, A dynamical theory of the electromagnetic field, Philosophical Transactions of the Royal Society of London, Volume 155, pages 459-512.

Maxwell, J. C., 1873, On action at a distance, Proceedings of the Royal Institution of Great Britain, Volume 7, pages 44-54.

Maxwell, J. C., 1954, A treatise on electricity and magnetism, 3rd Edition, Volumes 1 and 2, Dover Publications, New York.


McCleary, J., 1994, Geometry from a differentiable viewpoint, Cambridge University Press, Cambridge.

McConnell, A. J., 1957, Applications of tensor analysis, Dover Publications, New York.

McDougle, P., 1971, Vector calculus with vector algebra, Wadsworth Publishing Company, Belmont, California.

McKelvey, J. P., and H. Grotch, 1978, Physics for science and engineering, Harper & Row Publishers, New York.

McMullin, E., 2002, The origins of the field concept in physics, Physics in Perspective, Volume 4, pages 13-39.

McQuistan, R. B., 1965, Scalar and vector fields: a physical interpretation, John Wiley & Sons, New York.

McVittie, G. C., 1965, General relativity and cosmology, 2nd Edition, The University of Illinois Press, Urbana.

Mercier, J. L., 1971, An introduction to tensor calculus, Wolters-Noordhoff, Groningen.

Meyer, R. E., 1982, Introduction to mathematical fluid dynamics, Dover Publications, New York.

Michal, A. D., 2008, Matrix and tensor calculus, with applications to mechanics, elasticity and aeronautics, Dover Publications, Mineola, New York.

Middleman, S., 1998, An introduction to fluid dynamics: principles of analysis and design, John Wiley & Sons, New York.

Milewski, E. G., 1995, The vector analysis problem solver, Research and Education Association, Piscataway, New Jersey.

Miller, K. S., 1962, A short course in vector analysis, Charles E. Merrill Books, Columbus, Ohio.

Milne-Thomson, L. M., 1968, Theoretical hydrodynamics, 5th Edition, The Macmillan Company, New York.

Mishra, R. S., 1964, A course in vector algebra and its applications, Prakashan Kendra, Lucknow, India.

Misner, C. W., and J. A. Wheeler, 1957, Classical physics as geometry: gravitation, electromagnetism, unquantized charge, and mass as properties of curved empty space, Annals of Physics, Volume 2, pages 525-603.

Misner, C. W., K. S. Thorne, and J. A. Wheeler, 1973, Gravitation, W. H. Freeman and Company, New York.

Møller, C., 1972, The theory of relativity, 2nd Edition, Oxford University Press, Oxford.

Moon, P., and D. E. Spencer, 1965, Vectors, D. Van Nostrand, New York.


Morse, P. M., and H. Feshbach, 1953, Methods of theoretical physics, Parts I and II, McGraw-Hill Book Company, New York.

Moyer, D. F., 1978, Continuum mechanics and field theory: Thomson and Maxwell, Studies in History and Philosophy of Science, Volume 9, pages 35-50.

Munson, B. R., D. F. Young, and T. H. Okiishi, 1998, Fundamentals of fluid mechanics, 3rd Edition, John Wiley & Sons, New York.

Murnaghan, F. D., 1922, Vector analysis and the theory of relativity, The Johns Hopkins Press, Baltimore.

Myklestad, N. O., 1967, Cartesian tensors: the mathematical language of engineering, Van Nostrand, Princeton, New Jersey.

Myškis, A. D., 1972, Introductory mathematics for engineers, Mir Publishers, Moscow.

Nadeau, G., 1964, Introduction to elasticity, Holt, Rinehart and Winston, New York.

Narasimhan, M. N. L., 1993, Principles of continuum mechanics, John Wiley & Sons, New York.

Narayan, S., 1956, A text book of Cartesian tensors: (with an introduction to general tensors), S. Chand, Delhi.

Nelson, E., 1967, Tensor analysis, Princeton University Press, Princeton, New Jersey.

Neuenschwander, D. E., 2015, Tensor calculus for physics: a concise guide, Johns Hopkins University Press, Baltimore.

Newell, H. E., 1955, Vector analysis, McGraw-Hill Book Company, New York.

Nye, J. F., 1960, Physical properties of crystals: their representation by tensors and matrices, Oxford University Press, Oxford.

Oates, G. C., 1983, Vector analysis, in “Handbook of Applied Mathematics: Selected Results and Methods,” 2nd Edition, edited by C. E. Pearson, pages 129-178, Van Nostrand Reinhold, New York.

Ockendon, H., and A. B. Tayler, 1983, Inviscid Fluid Flows, Springer-Verlag, Berlin.

Ogawa, A., 1993, Vortex flows, CRC Press, Boca Raton, Florida.

Ohanian, H. C., and R. Ruffini, 1994, Gravitation and spacetime, 2nd Edition, W. W. Norton & Company, New York.

O’Neill, B., 1966, Elementary differential geometry, Academic Press, New York.


O'Neil, P. V., 2007, Advanced engineering mathematics, Thomson, Toronto.

O’Neill, M. E., and F. Chorlton, 1986, Ideal and incompressible fluid dynamics, Ellis Horwood, Chichester.

Panton, R. L., 1984, Incompressible flow, John Wiley & Sons, New York.

Papapetrou, A., 1974, Lectures on general relativity, D. Reidel Publishing Company, Dordrecht, The Netherlands.

Papastavridis, J. G., 1999, Tensor calculus and analytical dynamics, CRC Press, Boca Raton, Florida.

Parton, V. Z., and P. I. Perlin, 1984, Mathematical methods of the theory of elasticity, Volume 1, Mir Publishers, Moscow.

Pathria, R. K., 2003, The theory of relativity, 2nd Edition, Dover Publications, New York.

Pauli, W., 1981, Theory of relativity, Dover Publications, New York.

Pawlikowski, G. J., 1994, The men responsible for the development of vectors, in From Five Fingers to Infinity: A Journey through the History of Mathematics, edited by F. J. Swetz, Open Court Publishing Company, Chicago, pages 621-624.

Pearson, C. E., 1959, Theoretical elasticity, Harvard University Press, Cambridge, Massachusetts.

Pettofrezzo, A. J., 1966, Vectors and their applications, Prentice-Hall, Englewood Cliffs, New Jersey.

Phillips, H. B., 1959, Vector analysis, John Wiley & Sons, New York.

Pipes, L. A., 1946, Applied mathematics for engineers and physicists, McGraw-Hill Book Company, New York.

Planck, M., 1957, The mechanics of deformable bodies: introduction to theoretical physics, Volume II, 3rd Edition, The Macmillan Company, New York.

Poisson, S. D., 1813, Remarques sur une équation qui se présente dans la théorie de l’attraction des sphéroïdes, Nouveau Bulletin par la Société philomathique de Paris, Volume 3, pages 388-392.

Poisson, S. D., 1829a, Mémoire sur l’équilibre et le mouvement des corps élastiques, Mémoires de l’Academie des Sciences, Paris, Volume 8, pages 357-570.

Poisson, S. D., 1829b, Addition, Mémoires de l’Academie des Sciences, Paris, Volume 8, pages 623-627.

Queen, N. M., 1967, Vector analysis, McGraw-Hill, New York.


Rahman, M., 2007, Advanced vector analysis for scientists and engineers, WIT Press, Southampton, UK.

Rahman, M., and I. Mulolani, 2001, Applied vector analysis, CRC Press, Boca Raton, Florida.

Rainich, G. Y., 1932, Mathematics of relativity: lecture notes, Edwards Brothers, Ann Arbor, Michigan.

Ramsey, A. S., 1964, An introduction to the theory of Newtonian attraction, Cambridge University Press, Cambridge.

Reddick, H. W., and F. H. Miller, 1955, Advanced mathematics for engineers, 3rd Edition, John Wiley & Sons, New York.

Rektorys, K., 1969, Vector analysis, in Survey of Applicable Mathematics, edited by K. Rektorys, pages 269-279, The M.I.T. Press, Cambridge, Massachusetts.

Riemann, B., 1953, Ein Beitrag zur Electrodynamik, in Gesammelte mathematische Werke, 2nd Edition, edited by H. Weber, pages 288-293, Dover Publications, New York.

Riemann, B., 1953, Neue mathematische Prinzipien der Naturphilosophie, in Gesammelte mathematische Werke, 2nd Edition, edited by H. Weber, pages 526-538, Dover Publications, New York.

Riemann, B., 1953, Neue Theorie des Rückstandes in electrischen Bindungsapparaten, in Gesammelte mathematische Werke, 2nd Edition, edited by H. Weber, pages 367-378, Dover Publications, New York.

Riemann, B., 1953, Ueber die Gesetze der Vertheilung von Spannungselectricität in ponderabeln Körpern, wenn diese nicht als vollkommene Leiter oder Nichtleiter, sondern als dem Enthalten von Spannungselectricität mit endlicher Kraft widerstrebend, in Gesammelte mathematische Werke, 2nd Edition, edited by H. Weber, pages 49-54, Dover Publications, New York.

Riley, K. F., M. P. Hobson, and S. J. Bence, 2006, Mathematical methods for physics and engineering, 3rd Edition, Cambridge University Press, Cambridge.

Rindler, W., 2006, Relativity: special, general, and cosmological, 2nd Edition, Oxford University Press, Oxford.

Robertson, J. M., 1965, Hydrodynamics in theory and application, Prentice-Hall, Englewood Cliffs, New Jersey.

Rose, I. H., 1968, Vectors and analytic geometry, Prindle, Weber & Schmidt, Boston.

Rouse, H., 1978, Elementary mechanics of fluids, Dover Publications, New York.

Rucker, R. von B., 1977, Geometry, relativity and the fourth dimension, Dover Publications, New York.

Ruíz-Tolosa, J. Ramón, and E. Castillo, 2005, From vectors to tensors, Springer-Verlag, Berlin.

Runge, C., 1923, Vector analysis, E. P. Dutton and Company, Publishers, New York.

Rutherford, D. E., 1957, Vector methods, applied to differential geometry, mechanics, and potential theory, Interscience Publishers, New York.

Ryder, L., 2009, Introduction to general relativity, Cambridge University Press, Cambridge.

Sands, D. E., 1995, Vectors and tensors in crystallography, Dover Publications, New York.

Schelkunoff, S. A., 1965, Applied mathematics for engineers and scientists, D. Van Nostrand Company, Princeton, New Jersey.

Schey, H. M., 1973, Div, grad, curl, and all that, W. W. Norton & Company, New York.

Schouten, J. A., 1989, Tensor analysis for physicists, 2nd Edition, Dover Publications, New York.

Schuster, S., 1962, Elementary vector geometry, John Wiley & Sons, New York.

Schutz, B. F., 2009, A first course in general relativity, 2nd Edition, Cambridge University Press, Cambridge.

Schwartz, M., S. Green, and W. A. Rutledge, 1960, Vector analysis with applications to geometry and physics, Harper & Brothers, New York.

Segel, L. A., 1987, Mathematics applied to continuum mechanics, Dover Publications, New York.

Serrin, J., 1959, Mathematical principles of classical fluid mechanics, in Handbuch der Physik, Volume VIII/1, edited by S. Flügge and C. Truesdell, pages 125-263, Springer-Verlag, Berlin.

Shames, I. H., 1962, Mechanics of fluids, McGraw-Hill Book Company, New York.

Shaw, J. B., 1922, Vector calculus with applications to physics, D. Van Nostrand Company, New York.

Sheppard, W. F., 1923, From determinant to tensor, Oxford University Press, Oxford.

Shercliff, J. A., 1977, Vector fields: vector analysis developed through its application to engineering and physics, Cambridge University Press, Cambridge.

Shima, H., and T. Nakayama, 2010, Higher mathematics for physics and engineering, Springer, Heidelberg.

Shorter, L. R., 1961, Problems and worked solutions in vector analysis, Dover Publications, New York.


Silberstein, L., 1913, Vectorial mechanics, Macmillan and Company, London.

Silberstein, L., 1919, Elements of vector algebra, Longmans, Green and Company, London.

Simmonds, J. G., 1982, A brief on tensor analysis, Springer-Verlag, New York.

Simons, S., 1970, Vector analysis for mathematics, scientists and engineers, 2nd Edition, Pergamon Press, New York.

Skinner, R., 1982, Relativity for scientists and engineers, Dover Publications, New York.

Smith, D. W., and K. K. Risch, 1999, Vectors, Pocket Books, New York.

Smith, G. D., 1962, Vector analysis, including the dynamics of a rigid body, Oxford University Press, New York.

Smith, L. P., 1961, Mathematical methods for scientists and engineers, Dover Publications, New York.

Smith, M. S., 1963, Principles & applications of tensor analysis, H. W. Sams, Indianapolis.

Sneddon, I. N., and D. S. Berry, 1958, The classical theory of elasticity, in Handbuch der Physik, Volume VI, edited by S. Flügge, pages 1-126, Springer-Verlag, Berlin.

Snieder, R., 2004, A guided tour of mathematical methods for the physical sciences, Cambridge University Press, Cambridge.

Sokolnikoff, I. S., 1956, Mathematical theory of elasticity, 2nd Edition, McGraw-Hill Book Company, New York.

Sokolnikoff, I. S., 1964, Tensor analysis, theory and applications to geometry and mechanics of continua, 2nd Edition, John Wiley & Sons, New York.

Sokolnikoff, I. S., and R. M. Redheffer, 1966, Mathematics of physics and modern engineering, 2nd Edition, McGraw-Hill Book Company, New York.

Sommerfeld, A., 1964, Mechanics of deformable bodies: lectures on theoretical physics, Volume II, Academic Press, New York.

Southwell, R. V., 1969, An introduction to the theory of elasticity: for engineers and physicists, Dover Publications, New York.

Sowerby, L., 1974, Vector field theory with applications, Longman, New York.

Spain, B., 1960, Tensor calculus, 3rd Edition, Oliver and Boyd, Edinburgh.

Spain, B., 1967, Vector analysis, 2nd Edition, D. Van Nostrand Company, London.

Spiegel, M. R., 1971, Advanced mathematics for engineers and scientists, Schaum’s Outline Series, McGraw-Hill, New York.

Spiegel, M. R., 1991, Theory and problems of vector analysis and an introduction to tensor analysis, Schaum’s Outline Series, McGraw-Hill, New York.

Springer, C. E., 1962, Tensor and vector analysis with applications to differential analysis, The Ronald Press Company, New York.

Staff of Research and Education Association, 1984, The vector analysis problem solver, Research and Education Association, Piscataway, New Jersey.

Stein, F. M., 1963, An introduction to vector analysis, Harper and Row, New York.

Stephani, H., 1990, General relativity: an introduction to the theory of the gravitational field, 2nd Edition, edited by J. Stewart, Cambridge University Press, Cambridge.

Stephenson, G., and P. M. Radmore, 1990, Advanced mathematical methods for engineering and science students, Cambridge University Press, Cambridge.

Stone, M., and P. Goldbart, 2008, Mathematics for physics: a guided tour for graduate students, Pimander-Casaubon, London.

Straumann, N., 1984, General relativity and relativistic astrophysics, Springer-Verlag, Berlin.

Stroud, K. A., and D. J. Booth, 2003, Advanced engineering mathematics, 4th Edition, Palgrave Macmillan, London.

Stroud, K. A., and D. J. Booth, 2005, Vector analysis, Industrial Press, New York.

Symon, K. R., 1960, Mechanics, 2nd Edition, Addison-Wesley Publishing Company, Reading, Massachusetts.

Synge, J. L., and A. Schild, 1978, Tensor calculus, Dover Publications, New York.

Synge, J. L., 1964, Relativity: the general theory, North-Holland Publishing Company, Amsterdam.

Synge, J. L., 1964, Introduction to general relativity, in Relativity, Groups and Topology, edited by C. DeWitt and B. S. DeWitt, pages 1-88, Gordon and Breach, Science Publishers, New York.

Tai, C. T., 1992, Generalized vector and dyadic analysis: applied mathematics in field theory, IEEE Press, New York.

Tallack, J. C., 1966, Introduction to elementary vector analysis, Cambridge University Press, Cambridge.


Tang, K. T., 2007, Mathematical methods for engineers and scientists, Volume 2, Vector analysis, ordinary differential equations and Laplace transforms, Springer-Verlag, Berlin.

Taylor, A. E., and W. R. Mann, 1983, Advanced calculus, 3rd Edition, John Wiley & Sons, New York.

Taylor, J. H., 1939, Vector analysis, with an introduction to tensor analysis, Prentice-Hall, New York.

Teichmann, H., 1969, Physical applications of vectors and tensors, Harrap, London.

Temple, G., 1960, Cartesian tensors, an introduction, Methuen & Company, London.

Thomas, Jr., G. B., and R. L. Finney, 1988, Calculus and analytic geometry, 7th Edition, Addison-Wesley Publishing Company, Reading, Massachusetts.

Thomas, T. Y., 1931, The elementary theory of tensors with applications to geometry and mechanics, McGraw-Hill Book Company, New York.

Thomas, T. Y., 1961, Concepts from tensor analysis and differential geometry, Academic Press, New York.

Thorpe, J. A., 1979, Elementary topics in differential geometry, Springer-Verlag, New York.

Tiller, F. M., 1963, Introduction to vector analysis: vector algebra, solid analytic geometry, and vector calculus, The Ronald Press Company, New York.

Timoshenko, S. P., and J. N. Goodier, 1970, Theory of elasticity, 3rd Edition, McGraw-Hill Book Company, New York.

Tolman, R. C., 1987, Relativity, thermodynamics, and cosmology, Dover Publications, New York.

Torretti, R., 1996, Relativity and geometry, Dover Publications, New York.

Tourrenc, P., 1997, Relativity and gravitation, Cambridge University Press, Cambridge.

Triebel, H., 1986, Analysis and mathematical physics, D. Reidel Publishing Company, Dordrecht, Holland.

Truesdell, C., 1954, The kinematics of vorticity, Indiana University Publications Science Series No. 19, Indiana University Press, Bloomington, Indiana.

Truesdell, C., and R. A. Toupin, 1960, The classical field theories, in “Encyclopedia of Physics,” Volume III/1, edited by S. Flügge, pages 226-793, Springer-Verlag, Berlin.

Tyldesley, J. R., 1975, An introduction to tensor analysis for engineers and applied scientists, Longman, London.


Urwin, K. M., 1966, Advanced calculus and vector field theory, Pergamon Press, New York.

Valentiner, S., 1907, Vektoranalysis, G. J. Göschen’sche Verlagshandlung, Leipzig.

Vaughn, M. T., 2007, Introduction to mathematical physics, Wiley-VCH Verlag GmbH & Company, Weinheim.

Vilhelm, V., 1969, Tensor calculus, in Survey of Applicable Mathematics, edited by K. Rektorys, pages 280-297, The M.I.T. Press, Cambridge, Massachusetts.

Visconti, A., 1992, Introductory differential geometry for physicists, World Scientific, New Jersey.

Wald, R. M., 1984, General relativity, The University of Chicago Press, Chicago.

Walshaw, A. C., and D. A. Jobson, 1979, Mechanics of fluids, 3rd Edition, Longman, London.

Wasley, R. J., 1973, Stress wave propagation in solids: an introduction, Marcel Dekker, New York.

Weatherburn, C. E., 1921, Elementary vector analysis, with application to geometry and physics, G. Bell and Sons, London.

Weatherburn, C. E., 1962, Advanced vector analysis, with application to mathematical physics, G. Bell and Sons, London.

Weatherburn, C. E., 1966, An introduction to Riemannian geometry and the tensor calculus, Cambridge University Press, Cambridge.

Weinreich, G., 1998, Geometrical vectors, The University of Chicago Press, Chicago.

Widder, D. V., 1989, Advanced calculus, 2nd Edition, Dover Publications, New York.

Williams, L. P., 1966, The origins of field theory, Random House, New York.

Williamson, R. E., R. H. Crowell, and H. F. Trotter, 1972, Calculus of vector functions, 3rd Edition, Prentice-Hall, Englewood Cliffs, New Jersey.

Wills, A. P., 1958, Vector analysis with an introduction to tensor analysis, Dover Publications, New York.

Wilson, E. B., 1929, Vector analysis: a text-book for the use of students of mathematics and physics founded upon the lectures of J. Willard Gibbs, Yale University Press, New Haven.

Wolstenholme, E. Œ., 1964, Elementary vectors, Pergamon Press, Oxford.

Wong, C. W., 1991, Introduction to mathematical physics: methods and concepts, Oxford University Press, New York.

Wrede, R. C., 1972, Introduction to vector and tensor analysis, Dover Publications, New York.

Wrede, R. C., and M. R. Spiegel, 2010, Advanced calculus, 3rd Edition, Schaum’s Outline Series, McGraw-Hill, New York.

Wylie, C. R., and L. C. Barrett, 1982, Advanced engineering mathematics, 5th Edition, McGraw-Hill Book Company, New York.

Young, E. C., 1978, Vector and tensor analysis, M. Dekker, New York.

Yuan, S. W., 1967, Foundations of fluid mechanics, Prentice-Hall, Englewood Cliffs, New Jersey.

Zeldovich, Ya. B., and A. D. Myškis, 1976, Elements of applied mathematics, Mir Publishers, Moscow.

Zvi, E. Ben, 1965, Introduction to theory of tensors, University of Kentucky, Lexington.
