Introduction to Tensor Calculus and Continuum Mechanics

by J.H. Heinbockel
Department of Mathematics and Statistics
Old Dominion University

PREFACE

This is an introductory text which presents fundamental concepts from the subject areas of tensor calculus, differential geometry and continuum mechanics. The material presented is suitable for a two semester course in applied mathematics and is flexible enough to be presented to either upper level undergraduate or beginning graduate students majoring in applied mathematics, engineering or physics. The presentation assumes the students have some knowledge from the areas of matrix theory, linear algebra and advanced calculus. Each section includes many illustrative worked examples. At the end of each section there is a large collection of exercises which range in difficulty. Many new ideas are presented in the exercises and so the students should be encouraged to read all the exercises.

The purpose of preparing these notes is to condense into an introductory text the basic definitions and techniques arising in tensor calculus, differential geometry and continuum mechanics. In particular, the material is presented to (i) develop a physical understanding of the mathematical concepts associated with tensor calculus and (ii) develop the basic equations of tensor calculus, differential geometry and continuum mechanics which arise in engineering applications. From these basic equations one can go on to develop more sophisticated models of applied mathematics. The material is presented in an informal manner and uses mathematics which minimizes excessive formalism.

The material has been divided into two parts. The first part deals with an introduction to tensor calculus and differential geometry which covers such things as the indicial notation, tensor algebra, covariant differentiation, dual tensors, bilinear and multilinear forms, special tensors, the Riemann Christoffel tensor, space curves, surface curves, curvature and fundamental quadratic forms. The second part emphasizes the application of tensor algebra and calculus to a wide variety of applied areas from engineering and physics. The selected applications are from the areas of dynamics, elasticity, fluids and electromagnetic theory. The continuum mechanics portion focuses on an introduction of the basic concepts from linear elasticity and fluids. Appendix A contains units of measurement from the Système International d'Unités along with some selected physical constants. Appendix B contains a listing of Christoffel symbols of the second kind associated with various coordinate systems. Appendix C is a summary of useful vector identities.

J.H. Heinbockel, 1996

PART 1: INTRODUCTION TO TENSOR CALCULUS

A scalar field describes a one-to-one correspondence between a single scalar number and a point. An n-dimensional vector field is described by a one-to-one correspondence between n numbers and a point. Let us generalize these concepts by assigning n-squared numbers to a single point, or n-cubed numbers to a single point. When these numbers obey certain transformation laws they become examples of tensor fields. In general, scalar fields are referred to as tensor fields of rank or order zero, whereas vector fields are called tensor fields of rank or order one.
Closely associated with tensor calculus is the indicial or index notation. In section 1 the indicial notation is defined and illustrated. We also define and investigate scalar, vector and tensor fields when they are subjected to various coordinate transformations. It turns out that tensors have certain properties which are independent of the coordinate system used to describe the tensor. Because of these useful properties, we can use tensors to represent various fundamental laws occurring in physics, engineering, science and mathematics. These representations are extremely useful as they are independent of the coordinate systems considered.

§1.1 INDEX NOTATION

Two vectors A and B can be expressed in the component form A = A1 ê1 + A2 ê2 + A3 ê3 and B = B1 ê1 + B2 ê2 + B3 ê3, where ê1, ê2 and ê3 are orthogonal unit basis vectors. Often, when no confusion arises, the vectors A and B are expressed for brevity's sake as number triples. For example, we can write A = (A1, A2, A3) and B = (B1, B2, B3), where it is understood that only the components of the vectors A and B are given. The unit vectors would be represented ê1 = (1, 0, 0), ê2 = (0, 1, 0), ê3 = (0, 0, 1).

A still shorter notation, depicting the vectors A and B, is the index or indicial notation. In the index notation, the quantities Ai, i = 1, 2, 3 and Bp, p = 1, 2, 3 represent the components of the vectors A and B. This notation focuses attention only on the components of the vectors and employs a dummy subscript whose range over the integers is specified. The symbol Ai refers to all of the components of the vector A simultaneously. The dummy subscript i can have any of the integer values 1, 2 or 3. For i = 1 we focus attention on the A1 component of the vector A. Setting i = 2 focuses attention on the second component A2 of the vector A, and similarly when i = 3 we focus attention on the third component of A. The subscript i is a dummy subscript and may be replaced by another letter, say p, so long as one specifies the integer values that this dummy subscript can have.

It is also convenient at this time to mention that higher dimensional vectors may be defined as ordered n-tuples. For example, the vector X = (X1, X2, . . . , XN) with components Xi, i = 1, 2, . . . , N is called an N-dimensional vector. Another notation used to represent this vector is X = X1 ê1 + X2 ê2 + · · · + XN êN, where ê1, ê2, . . . , êN are linearly independent unit base vectors. Note that many of the operations that occur in the use of the index notation apply not only to three dimensional vectors, but also to N-dimensional vectors.

In future sections it is necessary to define quantities which can be represented by a letter with subscripts or superscripts attached. Such quantities are referred to as systems. When these quantities obey certain transformation laws they are referred to as tensor systems. For example, quantities like A^k_ij, e_ijk, δ_ij, δ^j_i, A^i, B_j, a_ij are systems. The subscripts or superscripts are referred to as indices or suffixes. When such quantities arise, the indices must conform to the following rules:
1. They are lower case Latin or Greek letters.
2. The letters at the end of the alphabet (u, v, w, x, y, z) are never employed as indices.

The number of subscripts and superscripts determines the order of the system. A system with one index is a first order system. A system with two indices is called a second order system. In general, a system with N indices is called an Nth order system. A system with no indices is called a scalar or zeroth order system.
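These ideas translate directly into array indexing. The following minimal sketch (not part of the text) uses NumPy; note that Python arrays are indexed from 0, so the index values i = 1, 2, 3 of the text correspond to array positions 0, 1, 2.

```python
import numpy as np

# The components A_i and B_p of two vectors, stored as arrays.
A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

# The orthogonal unit basis vectors e_1, e_2, e_3 are the rows of the identity matrix.
e = np.eye(3)

# Reassemble A = A_1 e_1 + A_2 e_2 + A_3 e_3 from its components.
assert np.allclose(sum(A[i] * e[i] for i in range(3)), A)

# Systems of higher order are arrays with more indices:
a = np.zeros((3, 3))        # a_ij, a second order system
T = np.zeros((3, 3, 3))     # A^k_ij, a third order system (order = number of indices)
```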
The type of system depends upon the number of subscripts or superscripts occurring in an expression. For example, Aijk and B m st , (all indices range 1 to N), are of the same type because they have the same number of subscripts and superscripts. In contrast, the systems Aijk and C mn p are not of the same type because one system has two superscripts and the other system has only one superscript. For certain systems the number of subscripts and superscripts is important. In other systems it is not of importance. The meaning and importance attached to sub- and superscripts will be addressed later in this section. In the use of superscripts one must not confuse “powers ”of a quantity with the superscripts. For example, if we replace the independent variables (x, y, z) by the symbols (x1, x2, x3), then we are letting y = x2 where x2 is a variable and not x raised to a power. Similarly, the substitution z = x3 is the replacement of z by the variable x3 and this should not be confused with x raised to a power. In order to write a superscript quantity to a power, use parentheses. For example, (x2)3 is the variable x2 cubed. One of the reasons for introducing the superscript variables is that many equations of mathematics and physics can be made to take on a concise and compact form. There is a range convention associated with the indices. This convention states that whenever there is an expression where the indices occur unrepeated it is to be understood that each of the subscripts or superscripts can take on any of the integer values 1, 2, . . . , N where N is a specified integer. For example, 3 the Kronecker delta symbol δij , defined by δij = 1 if i = j and δij = 0 for i = j, with i, j ranging over the values 1,2,3, represents the 9 quantities δ11 = 1 δ21 = 0 δ31 = 0 δ12 = 0 δ22 = 1 δ32 = 0 δ13 = 0 δ23 = 0 δ33 = 1. The symbol δij refers to all of the components of the system simultaneously. As another example, consider the equation êm · ên = δmn m,n = 1, 2, 3 (1.1.1) the subscripts m, n occur unrepeated on the left side of the equation and hence must also occur on the right hand side of the equation. These indices are called “free ”indices and can take on any of the values 1, 2 or 3 as specified by the range. Since there are three choices for the value for m and three choices for a value of n we find that equation (1.1.1) represents nine equations simultaneously. These nine equations are ê1 · ê1 = 1 ê2 · ê1 = 0 ê3 · ê1 = 0 ê1 · ê2 = 0 ê2 · ê2 = 1 ê3 · ê2 = 0 ê1 · ê3 = 0 ê2 · ê3 = 0 ê3 · ê3 = 1. Symmetric and Skew-Symmetric Systems A system defined by subscripts and superscripts ranging over a set of values is said to be symmetric in two of its indices if the components are unchanged when the indices are interchanged. For example, the third order system Tijk is symmetric in the indices i and k if Tijk = Tkji for all values of i, j and k. A system defined by subscripts and superscripts is said to be skew-symmetric in two of its indices if the components change sign when the indices are interchanged. For example, the fourth order system Tijkl is skew-symmetric in the indices i and l if Tijkl = −Tljki for all values of ijk and l. As another example, consider the third order system aprs, p, r, s = 1, 2, 3 which is completely skew- symmetric in all of its indices. We would then have aprs = −apsr = aspr = −asrp = arsp = −arps. It is left as an exercise to show this completely skew- symmetric systems has 27 elements, 21 of which are zero. 
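Both the nine relations of equation (1.1.1) and the element count for a completely skew-symmetric third order system can be checked by brute force. A rough sketch (not from the text; the helper `parity` and the value 5.0 are illustrative choices) follows.

```python
import numpy as np
from itertools import permutations

# e_m . e_n = delta_mn : all nine equations of (1.1.1) checked at once
e, delta = np.eye(3), np.eye(3)
assert np.allclose(e @ e.T, delta)

def parity(p):
    """Sign of a permutation p of (0, 1, 2), found by sorting with swaps."""
    p, sign = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

# A completely skew-symmetric third order system: a_123 fixes every nonzero element.
a = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    a[p] = parity(p) * 5.0          # the arbitrary value 5.0 plays the role of a_123
assert (a == 0).sum() == 21 and (a != 0).sum() == 6
```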
The 6 nonzero elements are all related to one another thru the above equations when (p, r, s) = (1, 2, 3). This is expressed as saying that the above system has only one independent component. 6 Addition, Multiplication and Contraction The algebraic operation of addition or subtraction applies to systems of the same type and order. That is we can add or subtract like components in systems. For example, the sum of Aijk and B i jk is again a system of the same type and is denoted by Cijk = A i jk +B i jk, where like components are added. The product of two systems is obtained by multiplying each component of the first system with each component of the second system. Such a product is called an outer product. The order of the resulting product system is the sum of the orders of the two systems involved in forming the product. For example, if Aij is a second order system and B mnl is a third order system, with all indices having the range 1 to N, then the product system is fifth order and is denoted Cimnlj = A i jB mnl. The product system represents N5 terms constructed from all possible products of the components from Aij with the components from B mnl. The operation of contraction occurs when a lower index is set equal to an upper index and the summation convention is invoked. For example, if we have a fifth order system Cimnlj and we set i = j and sum, then we form the system Cmnl = Cjmnlj = C 1mnl 1 + C 2mnl 2 + · · ·+ CNmnlN . Here the symbol Cmnl is used to represent the third order system that results when the contraction is performed. Whenever a contraction is performed, the resulting system is always of order 2 less than the original system. Under certain special conditions it is permissible to perform a contraction on two lower case indices. These special conditions will be considered later in the section. The above operations will be more formally defined after we have explained what tensors are. The e-permutation symbol and Kronecker delta Two symbols that are used quite frequently with the indicial notation are the e-permutation symbol and the Kronecker delta. The e-permutation symbol is sometimes referred to as the alternating tensor. The e-permutation symbol, as the name suggests, deals with permutations. A permutation is an arrangement of things. When the order of the arrangement is changed, a new permutation results. A transposition is an interchange of two consecutive terms in an arrangement. As an example, let us change the digits 1 2 3 to 3 2 1 by making a sequence of transpositions. Starting with the digits in the order 1 2 3 we interchange 2 and 3 (first transposition) to obtain 1 3 2. Next, interchange the digits 1 and 3 ( second transposition) to obtain 3 1 2. Finally, interchange the digits 1 and 2 (third transposition) to achieve 3 2 1. Here the total number of transpositions of 1 2 3 to 3 2 1 is three, an odd number. Other transpositions of 1 2 3 to 3 2 1 can also be written. However, these are also an odd number of transpositions. 7 EXAMPLE 1.1-4. The total number of possible ways of arranging the digits 1 2 3 is six. We have three choices for the first digit. Having chosen the first digit, there are only two choices left for the second digit. Hence the remaining number is for the last digit. The product (3)(2)(1) = 3! = 6 is the number of permutations of the digits 1, 2 and 3. These six permutations are 1 2 3 even permutation 1 3 2 odd permutation 3 1 2 even permutation 3 2 1 odd permutation 2 3 1 even permutation 2 1 3 odd permutation. 
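The counts quoted above are easy to reproduce with the standard library. In the following sketch (not from the text) the helper `parity` counts transpositions in one particular way, which is enough to determine whether a permutation is even or odd.

```python
from math import comb, factorial, perm

assert factorial(3) == 6          # the six arrangements of the digits 1 2 3
assert perm(4, 2) == 4 * 3        # P(n, m) = n(n-1)...(n-m+1)
assert comb(4, 3) == 4            # the combinations (123), (124), (134), (234)

def parity(seq):
    """+1 for an even, -1 for an odd permutation of 1, 2, ..., n."""
    seq, swaps = list(seq), 0
    for i in range(len(seq)):
        while seq[i] != i + 1:
            j = seq.index(i + 1)
            seq[i], seq[j] = seq[j], seq[i]
            swaps += 1
    return (-1) ** swaps

assert parity([3, 2, 1]) == -1    # 3 2 1 is an odd permutation of 1 2 3
assert parity([2, 3, 1]) == +1    # 2 3 1 is an even permutation of 1 2 3
```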
Here a permutation of 1 2 3 is called even or odd depending upon whether there is an even or odd number of transpositions of the digits. A mnemonic device to remember the even and odd permutations of 123 is illustrated in the figure 1.1-1. Note that even permutations of 123 are obtained by selecting any three consecutive numbers from the sequence 123123 and the odd permutations result by selecting any three consecutive numbers from the sequence 321321. Figure 1.1-1. Permutations of 123. In general, the number of permutations of n things taken m at a time is given by the relation P (n,m) = n(n− 1)(n− 2) · · · (n−m+ 1). By selecting a subset of m objects from a collection of n objects, m ≤ n, without regard to the ordering is called a combination of n objects taken m at a time. For example, combinations of 3 numbers taken from the set {1, 2, 3, 4} are (123), (124), (134), (234). Note that ordering of a combination is not considered. That is, the permutations (123), (132), (231), (213), (312), (321) are considered equal. In general, the number of combinations of n objects taken m at a time is given by C(n,m) = ( n m ) = n! m!(n−m)! where ( n m ) are the binomial coefficients which occur in the expansion (a+ b)n = n∑ m=0 ( n m ) an−mbm. 8 The definition of permutations can be used to define the e-permutation symbol. Definition: (e-Permutation symbol or alternating tensor) The e-permutation symbol is defined eijk...l = eijk...l =  1 if ijk . . . l is an even permutation of the integers 123 . . . n −1 if ijk . . . l is an odd permutation of the integers 123 . . . n 0 in all other cases EXAMPLE 1.1-5. Find e612453. Solution: To determine whether 612453 is an even or odd permutation of 123456 we write down the given numbers and below them we write the integers 1 through 6. Like numbers are then connected by a line and we obtain figure 1.1-2. Figure 1.1-2. Permutations of 123456. In figure 1.1-2, there are seven intersections of the lines connecting like numbers. The number of intersections is an odd number and shows that an odd number of transpositions must be performed. These results imply e612453 = −1. Another definition used quite frequently in the representation of mathematical and engineering quantities is the Kronecker delta which we now define in terms of both subscripts and superscripts. Definition: (Kronecker delta) The Kronecker delta is defined: δij = δ j i = { 1 if i equals j 0 if i is different from j 11 more general results. Consider (p, q, r) as some permutation of the integers (1, 2, 3), and observe that the determinant can be expressed ∆ = ∣∣∣∣∣∣ ap1 ap2 ap3 aq1 aq2 aq3 ar1 ar2 ar3 ∣∣∣∣∣∣ = eijkapiaqjark. If (p, q, r) is an even permutation of (1, 2, 3) then ∆ = |A| If (p, q, r) is an odd permutation of (1, 2, 3) then ∆ = −|A| If (p, q, r) is not a permutation of (1, 2, 3) then ∆ = 0. We can then write eijkapiaqjark = epqr|A|. Each of the above results can be verified by performing the indicated summations. A more formal proof of the above result is given in EXAMPLE 1.1-25, later in this section. EXAMPLE 1.1-10. The expression eijkBijCi is meaningless since the index i repeats itself more than twice and the summation convention does not allow this. If you really did want to sum over an index which occurs more than twice, then one must use a summation sign. For example the above expression would be written n∑ i=1 eijkBijCi. EXAMPLE 1.1-11. 
The cross product of the unit vectors ê1, ê2, ê3 can be represented in the index notation by êi × êj =  êk if (i, j, k) is an even permutation of (1, 2, 3) − êk if (i, j, k) is an odd permutation of (1, 2, 3) 0 in all other cases This result can be written in the form êi× êj = ekij êk. This later result can be verified by summing on the index k and writing out all 9 possible combinations for i and j. EXAMPLE 1.1-12. Given the vectors Ap, p = 1, 2, 3 and Bp, p = 1, 2, 3 the cross product of these two vectors is a vector Cp, p = 1, 2, 3 with components Ci = eijkAjBk, i, j, k = 1, 2, 3. (1.1.2) The quantities Ci represent the components of the cross product vector C = A× B = C1 ê1 + C2 ê2 + C3 ê3. The equation (1.1.2), which defines the components of C, is to be summed over each of the indices which repeats itself. We have summing on the index k Ci = eij1AjB1 + eij2AjB2 + eij3AjB3. (1.1.3) 12 We next sum on the index j which repeats itself in each term of equation (1.1.3). This gives Ci = ei11A1B1 + ei21A2B1 + ei31A3B1 + ei12A1B2 + ei22A2B2 + ei32A3B2 + ei13A1B3 + ei23A2B3 + ei33A3B3. (1.1.4) Now we are left with i being a free index which can have any of the values of 1, 2 or 3. Letting i = 1, then letting i = 2, and finally letting i = 3 produces the cross product components C1 = A2B3 −A3B2 C2 = A3B1 −A1B3 C3 = A1B2 −A2B1. The cross product can also be expressed in the form A × B = eijkAjBk êi. This result can be verified by summing over the indices i,j and k. EXAMPLE 1.1-13. Show eijk = −eikj = ejki for i, j, k = 1, 2, 3 Solution: The array i k j represents an odd number of transpositions of the indices i j k and to each transposition there is a sign change of the e-permutation symbol. Similarly, j k i is an even transposition of i j k and so there is no sign change of the e-permutation symbol. The above holds regardless of the numerical values assigned to the indices i, j, k. The e-δ Identity An identity relating the e-permutation symbol and the Kronecker delta, which is useful in the simpli- fication of tensor expressions, is the e-δ identity. This identity can be expressed in different forms. The subscript form for this identity is eijkeimn = δjmδkn − δjnδkm, i, j, k,m, n = 1, 2, 3 where i is the summation index and j, k,m, n are free indices. A device used to remember the positions of the subscripts is given in the figure 1.1-3. The subscripts on the four Kronecker delta’s on the right-hand side of the e-δ identity then are read (first)(second)-(outer)(inner). This refers to the positions following the summation index. Thus, j,m are the first indices after the sum- mation index and k, n are the second indices after the summation index. The indices j, n are outer indices when compared to the inner indices k,m as the indices are viewed as written on the left-hand side of the identity. 13 Figure 1.1-3. Mnemonic device for position of subscripts. Another form of this identity employs both subscripts and superscripts and has the form eijkeimn = δjmδ k n − δjnδkm. (1.1.5) One way of proving this identity is to observe the equation (1.1.5) has the free indices j, k,m, n. Each of these indices can have any of the values of 1, 2 or 3. There are 3 choices we can assign to each of j, k,m or n and this gives a total of 34 = 81 possible equations represented by the identity from equation (1.1.5). By writing out all 81 of these equations we can verify that the identity is true for all possible combinations that can be assigned to the free indices. 
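Rather than writing out the 81 equations by hand, both the index form of the cross product and the e-δ identity can be checked exhaustively in a few lines. The sketch below (not part of the text, zero-based indices, arbitrary test vectors) uses NumPy's einsum for the repeated-index sums.

```python
import numpy as np

# The e-permutation symbol as a 3x3x3 array (indices shifted to 0, 1, 2).
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# C_i = e_ijk A_j B_k, equation (1.1.2), reproduces the usual cross product.
A = np.array([1.0, -2.0, 3.0])
B = np.array([4.0, 0.5, -1.0])
C = np.einsum('ijk,j,k->i', eps, A, B)
assert np.allclose(C, np.cross(A, B))

# The e-delta identity e_ijk e_imn = delta_jm delta_kn - delta_jn delta_km,
# verified for all 81 combinations of the free indices j, k, m, n at once.
d = np.eye(3)
lhs = np.einsum('ijk,imn->jkmn', eps, eps)
rhs = np.einsum('jm,kn->jkmn', d, d) - np.einsum('jn,km->jkmn', d, d)
assert np.allclose(lhs, rhs)
```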
An alternate proof of the e− δ identity is to consider the determinant∣∣∣∣∣∣ δ11 δ 1 2 δ 1 3 δ21 δ 2 2 δ 2 3 δ31 δ 3 2 δ 3 3 ∣∣∣∣∣∣ = ∣∣∣∣∣∣ 1 0 0 0 1 0 0 0 1 ∣∣∣∣∣∣ = 1. By performing a permutation of the rows of this matrix we can use the permutation symbol and write∣∣∣∣∣∣ δi1 δ i 2 δ i 3 δj1 δ j 2 δ j 3 δk1 δ k 2 δ k 3 ∣∣∣∣∣∣ = eijk. By performing a permutation of the columns, we can write∣∣∣∣∣∣ δir δ i s δ i t δjr δ j s δ j t δkr δ k s δ k t ∣∣∣∣∣∣ = eijkerst. Now perform a contraction on the indices i and r to obtain∣∣∣∣∣∣ δii δ i s δ i t δji δ j s δ j t δki δ k s δ k t ∣∣∣∣∣∣ = eijkeist. Summing on i we have δii = δ 1 1 + δ22 + δ33 = 3 and expand the determinant to obtain the desired result δjsδ k t − δjt δks = eijkeist. 16 since eijk = ejki. We also observe from the expression Fi = eijkCjAk that we may obtain, by permuting the symbols, the equivalent expression Fj = ejkiCkAi. This allows us to write A · ( B × C) = BjFj = B · F = B · (C × A) which was to be shown. The quantity A · ( B × C) is called a triple scalar product. The above index representation of the triple scalar product implies that it can be represented as a determinant (See example 1.1-9). We can write A · ( B × C) = ∣∣∣∣∣∣ A1 A2 A3 B1 B2 B3 C1 C2 C3 ∣∣∣∣∣∣ = eijkAiBjCk A physical interpretation that can be assigned to this triple scalar product is that its absolute value represents the volume of the parallelepiped formed by the three noncoplaner vectors A, B, C. The absolute value is needed because sometimes the triple scalar product is negative. This physical interpretation can be obtained from an analysis of the figure 1.1-4. Figure 1.1-4. Triple scalar product and volume 17 In figure 1.1-4 observe that: (i) | B × C| is the area of the parallelogram PQRS. (ii) the unit vector ên = B × C | B × C| is normal to the plane containing the vectors B and C. (iii) The dot product ∣∣ A · ên∣∣ = ∣∣∣∣ A · B × C| B × C| ∣∣∣∣ = h equals the projection of A on ên which represents the height of the parallelepiped. These results demonstrate that ∣∣∣ A · ( B × C)∣∣∣ = | B × C|h = (area of base)(height) = volume. EXAMPLE 1.1-16. Verify the vector identity ( A× B)× (C × D) = C( D · A× B)− D(C · A× B) Solution: Let F = A× B = Fi êi and E = C × D = Ei êi. These vectors have the components Fi = eijkAjBk and Em = emnpCnDp where all indices have the range 1, 2, 3. The vector G = F × E = Gi êi has the components Gq = eqimFiEm = eqimeijkemnpAjBkCnDp. From the identity eqim = emqi this can be expressed Gq = (emqiemnp)eijkAjBkCnDp which is now in a form where we can use the e− δ identity applied to the term in parentheses to produce Gq = (δqnδip − δqpδin)eijkAjBkCnDp. Simplifying this expression we have: Gq = eijk [(Dpδip)(Cnδqn)AjBk − (Dpδqp)(Cnδin)AjBk] = eijk [DiCqAjBk −DqCiAjBk] = Cq [DieijkAjBk]−Dq [CieijkAjBk] which are the vector components of the vector C( D · A× B)− D(C · A× B). 18 Transformation Equations Consider two sets of N independent variables which are denoted by the barred and unbarred symbols xi and xi with i = 1, . . . , N. The independent variables xi, i = 1, . . . , N can be thought of as defining the coordinates of a point in a N−dimensional space. Similarly, the independent barred variables define a point in some other N−dimensional space. These coordinates are assumed to be real quantities and are not complex quantities. Further, we assume that these variables are related by a set of transformation equations. xi = xi(x1, x2, . . . , xN ) i = 1, . . . , N. 
(1.1.7) It is assumed that these transformation equations are independent. A necessary and sufficient condition that these transformation equations be independent is that the Jacobian determinant be different from zero, that is J( x x ) = ∣∣∣∣ ∂xi∂x̄j ∣∣∣∣ = ∣∣∣∣∣∣∣∣∣∣ ∂x1 ∂x1 ∂x1 ∂x2 · · · ∂x1 ∂xN ∂x2 ∂x1 ∂x2 ∂x2 · · · ∂x2 ∂xN ... ... . . . ... ∂xN ∂x1 ∂xN ∂x2 · · · ∂xN ∂xN ∣∣∣∣∣∣∣∣∣∣ = 0. This assumption allows us to obtain a set of inverse relations xi = xi(x1, x2, . . . , xN ) i = 1, . . . , N, (1.1.8) where the x′s are determined in terms of the x′s. Throughout our discussions it is to be understood that the given transformation equations are real and continuous. Further all derivatives that appear in our discussions are assumed to exist and be continuous in the domain of the variables considered. EXAMPLE 1.1-17. The following is an example of a set of transformation equations of the form defined by equations (1.1.7) and (1.1.8) in the case N = 3. Consider the transformation from cylindrical coordinates (r, α, z) to spherical coordinates (ρ, β, α). From the geometry of the figure 1.1-5 we can find the transformation equations r = ρ sinβ α = α 0 < α < 2π z = ρ cosβ 0 < β < π with inverse transformation ρ = √ r2 + z2 α = α β = arctan( r z ) Now make the substitutions (x1, x2, x3) = (r, α, z) and (x1, x2, x3) = (ρ, β, α). 21 Gradient. In Cartesian coordinates the gradient of a scalar field is gradφ = ∂φ ∂x ê1 + ∂φ ∂y ê2 + ∂φ ∂z ê3. The index notation focuses attention only on the components of the gradient. In Cartesian coordinates these components are represented using a comma subscript to denote the derivative êj · gradφ = φ,j = ∂φ ∂xj , j = 1, 2, 3. The comma notation will be discussed in section 4. For now we use it to denote derivatives. For example φ ,j = ∂φ ∂xj , φ ,jk = ∂2φ ∂xj∂xk , etc. Divergence. In Cartesian coordinates the divergence of a vector field A is a scalar field and can be represented ∇ · A = div A = ∂A1 ∂x + ∂A2 ∂y + ∂A3 ∂z . Employing the summation convention and index notation, the divergence in Cartesian coordinates can be represented ∇ · A = div A = Ai,i = ∂Ai ∂xi = ∂A1 ∂x1 + ∂A2 ∂x2 + ∂A3 ∂x3 where i is the dummy summation index. Curl. To represent the vector B = curl A = ∇ × A in Cartesian coordinates, we note that the index notation focuses attention only on the components of this vector. The components Bi, i = 1, 2, 3 of B can be represented Bi = êi · curl A = eijkAk,j , for i, j, k = 1, 2, 3 where eijk is the permutation symbol introduced earlier and Ak,j = ∂Ak∂xj . To verify this representation of the curl A we need only perform the summations indicated by the repeated indices. We have summing on j that Bi = ei1kAk,1 + ei2kAk,2 + ei3kAk,3. Now summing each term on the repeated index k gives us Bi = ei12A2,1 + ei13A3,1 + ei21A1,2 + ei23A3,2 + ei31A1,3 + ei32A2,3 Here i is a free index which can take on any of the values 1, 2 or 3. Consequently, we have For i = 1, B1 = A3,2 −A2,3 = ∂A3 ∂x2 − ∂A2 ∂x3 For i = 2, B2 = A1,3 −A3,1 = ∂A1 ∂x3 − ∂A3 ∂x1 For i = 3, B3 = A2,1 −A1,2 = ∂A2 ∂x1 − ∂A1 ∂x2 which verifies the index notation representation of curl A in Cartesian coordinates. 22 Other Operations. The following examples illustrate how the index notation can be used to represent additional vector operators in Cartesian coordinates. 1. In index notation the components of the vector ( B · ∇) A are {( B · ∇) A} · êp = Ap,qBq p, q = 1, 2, 3 This can be verified by performing the indicated summations. 
We have by summing on the repeated index q Ap,qBq = Ap,1B1 +Ap,2B2 +Ap,3B3. The index p is now a free index which can have any of the values 1, 2 or 3. We have: for p = 1, A1,qBq = A1,1B1 +A1,2B2 +A1,3B3 = ∂A1 ∂x1 B1 + ∂A1 ∂x2 B2 + ∂A1 ∂x3 B3 for p = 2, A2,qBq = A2,1B1 +A2,2B2 +A2,3B3 = ∂A2 ∂x1 B1 + ∂A2 ∂x2 B2 + ∂A2 ∂x3 B3 for p = 3, A3,qBq = A3,1B1 +A3,2B2 +A3,3B3 = ∂A3 ∂x1 B1 + ∂A3 ∂x2 B2 + ∂A3 ∂x3 B3 2. The scalar ( B · ∇)φ has the following form when expressed in the index notation: ( B · ∇)φ = Biφ,i = B1φ,1 +B2φ,2 +B3φ,3 = B1 ∂φ ∂x1 +B2 ∂φ ∂x2 +B3 ∂φ ∂x3 . 3. The components of the vector ( B ×∇)φ is expressed in the index notation by êi · [ ( B ×∇)φ ] = eijkBjφ,k. This can be verified by performing the indicated summations and is left as an exercise. 4. The scalar ( B ×∇) · A may be expressed in the index notation. It has the form ( B ×∇) · A = eijkBjAi,k. This can also be verified by performing the indicated summations and is left as an exercise. 5. The vector components of ∇2 A in the index notation are represented êp · ∇2 A = Ap,qq . The proof of this is left as an exercise. 23 EXAMPLE 1.1-19. In Cartesian coordinates prove the vector identity curl (f A) = ∇× (f A) = (∇f)× A+ f(∇× A). Solution: Let B = curl (f A) and write the components as Bi = eijk(fAk),j = eijk [fAk,j + f,jAk] = feijkAk,j + eijkf,jAk. This index form can now be expressed in the vector form B = curl (f A) = f(∇× A) + (∇f)× A EXAMPLE 1.1-20. Prove the vector identity ∇ · ( A+ B) = ∇ · A+∇ · B Solution: Let A+ B = C and write this vector equation in the index notation as Ai + Bi = Ci. We then have ∇ · C = Ci,i = (Ai +Bi),i = Ai,i +Bi,i = ∇ · A+∇ · B. EXAMPLE 1.1-21. In Cartesian coordinates prove the vector identity ( A · ∇)f = A · ∇f Solution: In the index notation we write ( A · ∇)f = Aif,i = A1f,1 +A2f,2 +A3f,3 = A1 ∂f ∂x1 +A2 ∂f ∂x2 +A3 ∂f ∂x3 = A · ∇f. EXAMPLE 1.1-22. In Cartesian coordinates prove the vector identity ∇× ( A× B) = A(∇ · B)− B(∇ · A) + ( B · ∇) A− ( A · ∇) B Solution: The pth component of the vector ∇× ( A× B) is êp · [∇× ( A× B)] = epqk[ekjiAjBi],q = epqkekjiAjBi,q + epqkekjiAj,qBi By applying the e− δ identity, the above expression simplifies to the desired result. That is, êp · [∇× ( A× B)] = (δpjδqi − δpiδqj)AjBi,q + (δpjδqi − δpiδqj)Aj,qBi = ApBi,i −AqBp,q +Ap,qBq −Aq,qBp In vector form this is expressed ∇× ( A× B) = A(∇ · B)− ( A · ∇) B + ( B · ∇) A− B(∇ · A) 26 Other forms of this identity are eijkari a s ja t k = |A|erst and eijkairajsakt = |A|erst. (1.1.19) Consider the representation of the determinant |A| = ∣∣∣∣∣∣ a11 a 1 2 a 1 3 a21 a 2 2 a 2 3 a31 a 3 2 a 3 3 ∣∣∣∣∣∣ by use of the indicial notation. By column expansions, this determinant can be represented |A| = erstar1as2at3 (1.1.20) and if one uses row expansions the determinant can be expressed as |A| = eijka1i a2ja3k. (1.1.21) Define Aim as the cofactor of the element ami in the determinant |A|. From the equation (1.1.20) the cofactor of ar1 is obtained by deleting this element and we find A1r = ersta s 2a t 3. (1.1.22) The result (1.1.20) can then be expressed in the form |A| = ar1A1r = a11A11 + a21A12 + a31A13. (1.1.23) That is, the determinant |A| is obtained by multiplying each element in the first column by its corresponding cofactor and summing the result. Observe also that from the equation (1.1.20) we find the additional cofactors A2s = ersta r 1a t 3 and A 3 t = ersta r 1a s 2. 
(1.1.24) Hence, the equation (1.1.20) can also be expressed in one of the forms |A| = as2A2s = a12A21 + a22A22 + a32A23 |A| = at3A3t = a13A31 + a23A32 + a33A33 The results from equations (1.1.22) and (1.1.24) can be written in a slightly different form with the indicial notation. From the notation for a generalized Kronecker delta defined by eijkelmn = δ ijk lmn, the above cofactors can be written in the form A1r = e 123ersta s 2a t 3 = 1 2! e1jkersta s ja t k = 1 2! δ1jkrst a s ja t k A2r = e 123esrta s 1a t 3 = 1 2! e2jkersta s ja t k = 1 2! δ2jkrst a s ja t k A3r = e 123etsra t 1a s 2 = 1 2! e3jkersta s ja t k = 1 2! δ3jkrst a s ja t k. 27 These cofactors are then combined into the single equation Air = 1 2! δijkrsta s ja t k (1.1.25) which represents the cofactor of ari . When the elements from any row (or column) are multiplied by their corresponding cofactors, and the results summed, we obtain the value of the determinant. Whenever the elements from any row (or column) are multiplied by the cofactor elements from a different row (or column), and the results summed, we get zero. This can be illustrated by considering the summation amr A i m = 1 2! δijkmsta s ja t ka m r = 1 2! eijkemsta m r a s ja t k = 1 2! eijkerjk|A| = 12!δ ijk rjk|A| = δir|A| Here we have used the e− δ identity to obtain δijkrjk = e ijkerjk = ejikejrk = δirδ k k − δikδkr = 3δir − δir = 2δir which was used to simplify the above result. As an exercise one can show that an alternate form of the above summation of elements by its cofactors is armA m i = |A|δri . EXAMPLE 1.1-26. In N-dimensions the quantity δj1j2...jNk1k2...kN is called a generalized Kronecker delta. It can be defined in terms of permutation symbols as ej1j2...jN ek1k2...kN = δ j1j2...jN k1k2...kN (1.1.26) Observe that δj1j2...jNk1k2...kN e k1k2...kN = (N !) ej1j2...jN This follows because ek1k2...kN is skew-symmetric in all pairs of its superscripts. The left-hand side denotes a summation of N ! terms. The first term in the summation has superscripts j1j2 . . . jN and all other terms have superscripts which are some permutation of this ordering with minus signs associated with those terms having an odd permutation. Because ej1j2...jN is completely skew-symmetric we find that all terms in the summation have the value +ej1j2...jN . We thus obtain N ! of these terms. 28 EXERCISE 1.1  1. Simplify each of the following by employing the summation property of the Kronecker delta. Perform sums on the summation indices only if your are unsure of the result. (a) eijkδkn (b) eijkδisδjm (c) eijkδisδjmδkn (d) aijδin (e) δijδjn (f) δijδjnδni  2. Simplify and perform the indicated summations over the range 1, 2, 3 (a) δii (b) δijδij (c) eijkAiAjAk (d) eijkeijk (e) eijkδjk (f) AiBjδji −BmAnδmn  3. Express each of the following in index notation. Be careful of the notation you use. Note that A = Ai is an incorrect notation because a vector can not equal a scalar. The notation A · êi = Ai should be used to express the ith component of a vector. (a) A · ( B × C) (b) A× ( B × C) (c) B( A · C) (d) B( A · C)− C( A · B)  4. Show the e permutation symbol satisfies: (a) eijk = ejki = ekij (b) eijk = −ejik = −eikj = −ekji  5. Use index notation to verify the vector identity A× ( B × C) = B( A · C)− C( A · B)  6. Let yi = aijxj and xm = aimzi where the range of the indices is 1, 2 (a) Solve for yi in terms of zi using the indicial notation and check your result to be sure that no index repeats itself more than twice. 
(b) Perform the indicated summations and write out expressions for y1, y2 in terms of z1, z2 (c) Express the above equations in matrix form. Expand the matrix equations and check the solution obtained in part (b).  7. Use the e− δ identity to simplify (a) eijkejik (b) eijkejki  8. Prove the following vector identities: (a) A · ( B × C) = B · (C × A) = C · ( A× B) triple scalar product (b) ( A× B)× C = B( A · C)− A( B · C)  9. Prove the following vector identities: (a) ( A× B) · (C × D) = ( A · C)( B · D)− ( A · D)( B · C) (b) A× ( B × C) + B × (C × A) + C × ( A× B) = 0 (c) ( A× B)× (C × D) = B( A · C × D)− A( B · C × D) 31  26. (Generalized Kronecker delta) Define the generalized Kronecker delta as the n×n determinant δij...kmn...p = ∣∣∣∣∣∣∣∣∣ δim δ i n · · · δip δjm δ j n · · · δjp ... ... . . . ... δkm δ k n · · · δkp ∣∣∣∣∣∣∣∣∣ where δrs is the Kronecker delta. (a) Show eijk = δ123ijk (b) Show eijk = δijk123 (c) Show δijmn = e ijemn (d) Define δrsmn = δ rsp mnp (summation on p) and show δrsmn = δ r mδ s n − δrnδsm Note that by combining the above result with the result from part (c) we obtain the two dimensional form of the e− δ identity ersemn = δrmδsn − δrnδsm. (e) Define δrm = 1 2δ rn mn (summation on n) and show δ rst pst = 2δ r p (f) Show δrstrst = 3!  27. Let Air denote the cofactor of ari in the determinant ∣∣∣∣∣∣ a11 a 1 2 a 1 3 a21 a 2 2 a 2 3 a31 a 3 2 a 3 3 ∣∣∣∣∣∣ as given by equation (1.1.25). (a) Show erstAir = e ijkasja t k (b) Show erstA r i = eijka j sa k t  28. (a) Show that if Aijk = Ajik , i, j, k = 1, 2, 3 there is a total of 27 elements, but only 18 are distinct. (b) Show that for i, j, k = 1, 2, . . . , N there are N3 elements, but only N2(N + 1)/2 are distinct.  29. Let aij = BiBj for i, j = 1, 2, 3 where B1, B2, B3 are arbitrary constants. Calculate det(aij) = |A|.  30. (a) For A = (aij), i, j = 1, 2, 3, show |A| = eijkai1aj2ak3. (b) For A = (aij), i, j = 1, 2, 3, show |A| = eijkai1aj2ak3 . (c) For A = (aij), i, j = 1, 2, 3, show |A| = eijka1i a2ja3k. (d) For I = (δij), i, j = 1, 2, 3, show |I| = 1.  31. Let |A| = eijkai1aj2ak3 and define Aim as the cofactor of aim. Show the determinant can be expressed in any of the forms: (a) |A| = Ai1ai1 where Ai1 = eijkaj2ak3 (b) |A| = Aj2aj2 where Ai2 = ejikaj1ak3 (c) |A| = Ak3ak3 where Ai3 = ejkiaj1ak2 32  32. Show the results in problem 31 can be written in the forms: Ai1 = 1 2! e1steijkajsakt, Ai2 = 1 2! e2steijkajsakt, Ai3 = 1 2! e3steijkajsakt, or Aim = 1 2! emsteijkajsakt  33. Use the results in problems 31 and 32 to prove that apmAim = |A|δip.  34. Let (aij) =  1 2 11 0 3 2 3 2  and calculate C = aijaij , i, j = 1, 2, 3.  35. Let a111 = −1, a112 = 3, a121 = 4, a122 = 2 a211 = 1, a212 = 5, a221 = 2, a222 = −2 and calculate the quantity C = aijkaijk , i, j, k = 1, 2.  36. Let a1111 = 2, a1112 = 1, a1121 = 3, a1122 = 1 a1211 = 5, a1212 = −2, a1221 = 4, a1222 = −2 a2111 = 1, a2112 = 0, a2121 = −2, a2122 = −1 a2211 = −2, a2212 = 1, a2221 = 2, a2222 = 2 and calculate the quantity C = aijklaijkl, i, j, k, l = 1, 2.  37. Simplify the expressions: (a) (Aijkl +Ajkli +Aklij +Alijk)xixjxkxl (b) (Pijk + Pjki + Pkij)xixjxk (c) ∂xi ∂xj (d) aij ∂2xi ∂xt∂xs ∂xj ∂xr − ami ∂ 2xm ∂xs∂xt ∂xi ∂xr  38. Let g denote the determinant of the matrix having the components gij , i, j = 1, 2, 3. Show that (a) g erst = ∣∣∣∣∣∣ g1r g1s g1t g2r g2s g2t g3r g3s g3t ∣∣∣∣∣∣ (b) g ersteijk = ∣∣∣∣∣∣ gir gis git gjr gjs gjt gkr gks gkt ∣∣∣∣∣∣  39. Show that eijkemnp = δijkmnp = ∣∣∣∣∣∣ δim δ i n δ i p δjm δ j n δ j p δkm δ k n δ k p ∣∣∣∣∣∣  40. 
Show that eijkemnpAmnp = Aijk −Aikj +Akij −Ajik +Ajki −Akji Hint: Use the results from problem 39.  41. Show that (a) eijeij = 2! (b) eijkeijk = 3! (c) eijkleijkl = 4! (d) Guess at the result ei1i2...in ei1i2...in 33  42. Determine if the following statement is true or false. Justify your answer. eijkAiBjCk = eijkAjBkCi.  43. Let aij , i, j = 1, 2 denote the components of a 2× 2 matrix A, which are functions of time t. (a) Expand both |A| = eijai1aj2 and |A| = ∣∣∣∣ a11 a12a21 a22 ∣∣∣∣ to verify that these representations are the same. (b) Verify the equivalence of the derivative relations d|A| dt = eij dai1 dt aj2 + eijai1 daj2 dt and d|A| dt = ∣∣∣∣ da11dt da12dta21 a22 ∣∣∣∣+ ∣∣∣∣ a11 a12da21 dt da22 dt ∣∣∣∣ (c) Let aij , i, j = 1, 2, 3 denote the components of a 3× 3 matrix A, which are functions of time t. Develop appropriate relations, expand them and verify, similar to parts (a) and (b) above, the representation of a determinant and its derivative.  44. For f = f(x1, x2, x3) and φ = φ(f) differentiable scalar functions, use the indicial notation to find a formula to calculate gradφ .  45. Use the indicial notation to prove (a) ∇×∇φ = 0 (b) ∇ · ∇ × A = 0  46. If Aij is symmetric and Bij is skew-symmetric, i, j = 1, 2, 3, then calculate C = AijBij .  47. Assume Aij = Aij(x1, x2, x3) and Aij = Aij(x1, x2, x3) for i, j = 1, 2, 3 are related by the expression Amn = Aij ∂xi ∂xm ∂xj ∂xn . Calculate the derivative ∂Amn ∂xk .  48. Prove that if any two rows (or two columns) of a matrix are interchanged, then the value of the determinant of the matrix is multiplied by minus one. Construct your proof using 3× 3 matrices.  49. Prove that if two rows (or columns) of a matrix are proportional, then the value of the determinant of the matrix is zero. Construct your proof using 3× 3 matrices.  50. Prove that if a row (or column) of a matrix is altered by adding some constant multiple of some other row (or column), then the value of the determinant of the matrix remains unchanged. Construct your proof using 3× 3 matrices.  51. Simplify the expression φ = eijkemnAiAjmAkn.  52. Let Aijk denote a third order system where i, j, k = 1, 2. (a) How many components does this system have? (b) Let Aijk be skew-symmetric in the last pair of indices, how many independent components does the system have?  53. Let Aijk denote a third order system where i, j, k = 1, 2, 3. (a) How many components does this system have? (b) In addition let Aijk = Ajik and Aikj = −Aijk and determine the number of distinct nonzero components for Aijk . 36 similar manner it can be demonstrated that for ( E1, E2, E3) a given set of basis vectors, then the reciprocal basis vectors are determined from the relations E1 = 1 V E2 × E3, E2 = 1 V E3 × E1, E3 = 1 V E1 × E2, where V = E1 · ( E2 × E3) = 0 is a triple scalar product and represents the volume of the parallelepiped having the basis vectors for its sides. Let ( E1, E2, E3) and ( E1, E2, E3) denote a system of reciprocal bases. We can represent any vector A with respect to either of these bases. If we select the basis ( E1, E2, E3) and represent A in the form A = A1 E1 +A2 E2 +A3 E3, (1.2.1) then the components (A1, A2, A3) of A relative to the basis vectors ( E1, E2, E3) are called the contravariant components of A. These components can be determined from the equations A · E1 = A1, A · E2 = A2, A · E3 = A3. 
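For a concrete oblique basis these relations are easy to verify numerically. The following minimal sketch (not from the text; the basis vectors and components are arbitrary choices) builds the reciprocal basis from the cross-product formulas above and recovers the contravariant components from A · E^i = A^i.

```python
import numpy as np

# Rows of E are the basis vectors E_1, E_2, E_3 (deliberately non-orthogonal).
E = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 2.0]])
V = np.dot(E[0], np.cross(E[1], E[2]))            # V = E_1 . (E_2 x E_3), nonzero
Erec = np.array([np.cross(E[1], E[2]),            # reciprocal basis E^1, E^2, E^3
                 np.cross(E[2], E[0]),
                 np.cross(E[0], E[1])]) / V
assert np.allclose(Erec @ E.T, np.eye(3))         # E^i . E_j = delta^i_j

# A vector with contravariant components A^1, A^2, A^3 relative to E_1, E_2, E_3 ...
Ac = np.array([1.0, -2.0, 0.5])
Avec = Ac @ E                                     # A = A^1 E_1 + A^2 E_2 + A^3 E_3
# ... and those components are recovered from the dot products A . E^i = A^i.
assert np.allclose(Erec @ Avec, Ac)
```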
Similarly, if we choose the reciprocal basis ( E1, E2, E3) and represent A in the form A = A1 E1 +A2 E2 +A3 E3, (1.2.2) then the components (A1, A2, A3) relative to the basis ( E1, E2, E3) are called the covariant components of A. These components can be determined from the relations A · E1 = A1, A · E2 = A2, A · E3 = A3. The contravariant and covariant components are different ways of representing the same vector with respect to a set of reciprocal basis vectors. There is a simple relationship between these components which we now develop. We introduce the notation Ei · Ej = gij = gji, and Ei · Ej = gij = gji (1.2.3) where gij are called the metric components of the space and gij are called the conjugate metric components of the space. We can then write A · E1 = A1( E1 · E1) +A2( E2 · E1) +A3( E3 · E1) = A1 A · E1 = A1( E1 · E1) +A2( E2 · E1) +A3( E3 · E1) = A1 or A1 = A1g11 +A2g12 +A3g13. (1.2.4) In a similar manner, by considering the dot products A · E2 and A · E3 one can establish the results A2 = A1g21 +A2g22 +A3g23 A3 = A1g31 +A2g32 +A3g33. These results can be expressed with the index notation as Ai = gikAk. (1.2.6) Forming the dot products A · E1, A · E2, A · E3 it can be verified that Ai = gikAk. (1.2.7) The equations (1.2.6) and (1.2.7) are relations which exist between the contravariant and covariant compo- nents of the vector A. Similarly, if for some value j we have Ej = α E1 + β E2 + γ E3, then one can show that Ej = gij Ei. This is left as an exercise. 37 Coordinate Transformations Consider a coordinate transformation from a set of coordinates (x, y, z) to (u, v, w) defined by a set of transformation equations x = x(u, v, w) y = y(u, v, w) z = z(u, v, w) (1.2.8) It is assumed that these transformations are single valued, continuous and possess the inverse transformation u = u(x, y, z) v = v(x, y, z) w = w(x, y, z). (1.2.9) These transformation equations define a set of coordinate surfaces and coordinate curves. The coordinate surfaces are defined by the equations u(x, y, z) = c1 v(x, y, z) = c2 w(x, y, z) = c3 (1.2.10) where c1, c2, c3 are constants. These surfaces intersect in the coordinate curves r(u, c2, c3), r(c1, v, c3), r(c1, c2, w), (1.2.11) where r(u, v, w) = x(u, v, w) ê1 + y(u, v, w) ê2 + z(u, v, w) ê3. The general situation is illustrated in the figure 1.2-1. Consider the vectors E1 = gradu = ∇u, E2 = gradv = ∇v, E3 = gradw = ∇w (1.2.12) evaluated at the common point of intersection (c1, c2, c3) of the coordinate surfaces. The system of vectors ( E1, E2, E3) can be selected as a system of basis vectors which are normal to the coordinate surfaces. Similarly, the vectors E1 = ∂r ∂u , E2 = ∂r ∂v , E3 = ∂r ∂w (1.2.13) when evaluated at the common point of intersection (c1, c2, c3) forms a system of vectors ( E1, E2, E3) which we can select as a basis. This basis is a set of tangent vectors to the coordinate curves. It is now demonstrated that the normal basis ( E1, E2, E3) and the tangential basis ( E1, E2, E3) are a set of reciprocal bases. Recall that r = x ê1 + y ê2 + z ê3 denotes the position vector of a variable point. By substitution for x, y, z from (1.2.8) there results r = r(u, v, w) = x(u, v, w) ê1 + y(u, v, w) ê2 + z(u, v, w) ê3. (1.2.14) 38 Figure 1.2-1. Coordinate curves and coordinate surfaces. A small change in r is denoted dr = dx ê1 + dy ê2 + dz ê3 = ∂r ∂u du+ ∂r ∂v dv + ∂r ∂w dw (1.2.15) where ∂r ∂u = ∂x ∂u ê1 + ∂y ∂u ê2 + ∂z ∂u ê3 ∂r ∂v = ∂x ∂v ê1 + ∂y ∂v ê2 + ∂z ∂v ê3 ∂r ∂w = ∂x ∂w ê1 + ∂y ∂w ê2 + ∂z ∂w ê3. 
(1.2.16) In terms of the u, v, w coordinates, this change can be thought of as moving along the diagonal of a paral- lelepiped having the vector sides ∂r ∂u du, ∂r ∂v dv, and ∂r ∂w dw. Assume u = u(x, y, z) is defined by equation (1.2.9) and differentiate this relation to obtain du = ∂u ∂x dx+ ∂u ∂y dy + ∂u ∂z dz. (1.2.17) The equation (1.2.15) enables us to represent this differential in the form: du = gradu · dr du = gradu · ( ∂r ∂u du+ ∂r ∂v dv + ∂r ∂w dw ) du = ( gradu · ∂r ∂u ) du+ ( gradu · ∂r ∂v ) dv + ( gradu · ∂r ∂w ) dw. (1.2.18) By comparing like terms in this last equation we find that E1 · E1 = 1, E1 · E2 = 0, E1 · E3 = 0. (1.2.19) Similarly, from the other equations in equation (1.2.9) which define v = v(x, y, z), and w = w(x, y, z) it can be demonstrated that dv = ( grad v · ∂r ∂u ) du+ ( grad v · ∂r ∂v ) dv + ( grad v · ∂r ∂w ) dw (1.2.20) 41 Transformations Form a Group A group G is a nonempty set of elements together with a law, for combining the elements. The combined elements are denoted by a product. Thus, if a and b are elements in G then no matter how you define the law for combining elements, the product combination is denoted ab. The set G and combining law forms a group if the following properties are satisfied: (i) For all a, b ∈ G, then ab ∈ G. This is called the closure property. (ii) There exists an identity element I such that for all a ∈ G we have Ia = aI = a. (iii) There exists an inverse element. That is, for all a ∈ G there exists an inverse element a−1 such that a a−1 = a−1a = I. (iv) The associative law holds under the combining law and a(bc) = (ab)c for all a, b, c ∈ G. For example, the set of elements G = {1,−1, i,−i}, where i2 = −1 together with the combining law of ordinary multiplication, forms a group. This can be seen from the multiplication table. × 1 -1 i -i 1 1 -1 i -i -1 -1 1 -i i -i -i i 1 -1 i i -i -1 1 The set of all coordinate transformations of the form found in equation (1.2.30), with Jacobian different from zero, forms a group because: (i) The product transformation, which consists of two successive transformations, belongs to the set of transformations. (closure) (ii) The identity transformation exists in the special case that x and x are the same coordinates. (iii) The inverse transformation exists because the Jacobian of each individual transformation is different from zero. (iv) The associative law is satisfied in that the transformations satisfy the property T3(T2T1) = (T3T2)T1. When the given transformation equations contain a parameter the combining law is often times repre- sented as a product of symbolic operators. For example, we denote by Tα a transformation of coordinates having a parameter α. The inverse transformation can be denoted by T−1α and one can write Tαx = x or x = T−1α x. We let Tβ denote the same transformation, but with a parameter β, then the transitive property is expressed symbolically by TαTβ = Tγ where the product TαTβ represents the result of performing two successive transformations. The first coordinate transformation uses the given transformation equations and uses the parameter α in these equations. This transformation is then followed by another coordinate trans- formation using the same set of transformation equations, but this time the parameter value is β. The above symbolic product is used to demonstrate that the result of applying two successive transformations produces a result which is equivalent to performing a single transformation of coordinates having the parameter value γ. 
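The group axioms for the example G = {1, −1, i, −i} can be confirmed mechanically. A small sketch (not from the text), using Python's complex arithmetic so that 1j stands for i:

```python
G = [1, -1, 1j, -1j]

# (i) closure: every product of two elements lies in G
assert all(a * b in G for a in G for b in G)
# (ii) identity: 1 a = a 1 = a for every a in G
assert all(1 * a == a for a in G)
# (iii) inverses: each a has an inverse in G with a a^-1 = 1
assert all(any(a * b == 1 for b in G) for a in G)
# (iv) associativity holds for complex multiplication
assert all(a * (b * c) == (a * b) * c for a in G for b in G for c in G)
```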
Usually some relationship can then be established between the parameter values α, β and γ. 42 Figure 1.2-2. Cylindrical coordinates. In this symbolic notation, we let Tθ denote the identity transformation. That is, using the parameter value of θ in the given set of transformation equations produces the identity transformation. The inverse transformation can then be expressed in the form of finding the parameter value β such that TαTβ = Tθ. Cartesian Coordinates At times it is convenient to introduce an orthogonal Cartesian coordinate system having coordinates yi, i = 1, 2, . . . , N. This space is denoted EN and represents an N-dimensional Euclidean space. Whenever the generalized independent coordinates xi, i = 1, . . . , N are functions of the y′s, and these equations are functionally independent, then there exists independent transformation equations yi = yi(x1, x2, . . . , xN ), i = 1, 2, . . . , N, (1.2.34) with Jacobian different from zero. Similarly, if there is some other set of generalized coordinates, say a barred system xi, i = 1, . . . , N where the x′s are independent functions of the y′s, then there will exist another set of independent transformation equations yi = yi(x1, x2, . . . , xN ), i = 1, 2, . . . , N, (1.2.35) with Jacobian different from zero. The transformations found in the equations (1.2.34) and (1.2.35) imply that there exists relations between the x′s and x′s of the form (1.2.30) with inverse transformations of the form (1.2.32). It should be remembered that the concepts and ideas developed in this section can be applied to a space VN of any finite dimension. Two dimensional surfaces (N = 2) and three dimensional spaces (N = 3) will occupy most of our applications. In relativity, one must consider spaces where N = 4. EXAMPLE 1.2-1. (cylindrical coordinates (r, θ, z)) Consider the transformation x = x(r, θ, z) = r cos θ y = y(r, θ, z) = r sin θ z = z(r, θ, z) = z from rectangular coordinates (x, y, z) to cylindrical coordinates (r, θ, z), illustrated in the figure 1.2-2. By letting y1 = x, y2 = y, y3 = z x1 = r, x2 = θ, x3 = z the above set of equations are examples of the transformation equations (1.2.8) with u = r, v = θ, w = z as the generalized coordinates. 43 EXAMPLE 1.2.2. (Spherical Coordinates) (ρ, θ, φ) Consider the transformation x = x(ρ, θ, φ) = ρ sin θ cosφ y = y(ρ, θ, φ) = ρ sin θ sinφ z = z(ρ, θ, φ) = ρ cos θ from rectangular coordinates (x, y, z) to spherical coordinates (ρ, θ, φ). By letting y1 = x, y2 = y, y3 = z x1 = ρ, x2 = θ , x3 = φ the above set of equations has the form found in equation (1.2.8) with u = ρ, v = θ, w = φ the generalized coordinates. One could place bars over the x′s in this example in order to distinguish these coordinates from the x′s of the previous example. The spherical coordinates (ρ, θ, φ) are illustrated in the figure 1.2-3. Figure 1.2-3. Spherical coordinates. Scalar Functions and Invariance We are now at a point where we can begin to define what tensor quantities are. The first definition is for a scalar invariant or tensor of order zero. 46 From the fact that ∂x i ∂xj ∂xj ∂xm = ∂x i ∂xm , the equation (1.2.44) simplifies to A i (x) = ∂x i ∂xm Am(x) (1.2.45) and hence this transformation is also contravariant. We express this by saying that the above are transitive with respect to the group of coordinate transformations. Note that from the chain rule one can write ∂xm ∂xj ∂xj ∂xn = ∂xm ∂x1 ∂x1 ∂xn + ∂xm ∂x2 ∂x2 ∂xn + ∂xm ∂x3 ∂x3 ∂xn = ∂xm ∂xn = δmn . 
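In matrix language the chain-rule relation just written says that the two Jacobian matrices are inverses of one another. A sketch (not from the text) using SymPy and the cylindrical transformation of EXAMPLE 1.2-1, assuming r > 0 so that the square roots simplify cleanly:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
th, z = sp.symbols('theta z', real=True)
xs, ys, zs = sp.symbols('x y z')

fwd = sp.Matrix([r * sp.cos(th), r * sp.sin(th), z])                 # x, y, z as functions of r, theta, z
inv = sp.Matrix([sp.sqrt(xs**2 + ys**2), sp.atan2(ys, xs), zs])      # r, theta, z as functions of x, y, z

J_fwd = fwd.jacobian([r, th, z])
J_inv = inv.jacobian([xs, ys, zs]).subs({xs: fwd[0], ys: fwd[1], zs: fwd[2]})

# (d xbar^m / d x^j)(d x^j / d xbar^n) = delta^m_n : the Jacobians multiply to the identity
product = (J_inv * J_fwd).applyfunc(sp.simplify)
assert product == sp.eye(3)
```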
Do not make the mistake of writing ∂xm ∂x2 ∂x2 ∂xn = ∂xm ∂xn or ∂xm ∂x3 ∂x3 ∂xn = ∂xm ∂xn as these expressions are incorrect. Note that there are no summations in these terms, whereas there is a summation index in the representation of the chain rule. Vector Transformation, Covariant Components Consider a scalar invariant A(x) = A(x) which is a shorthand notation for the equation A(x1, x2, . . . , xn) = A(x1, x2, . . . , xn) involving the coordinate transformation of equation (1.2.30). By the chain rule we differentiate this invariant and find that the components of the gradient must satisfy ∂A ∂xi = ∂A ∂xj ∂xj ∂xi . (1.2.46) Let Aj = ∂A ∂xj and Ai = ∂A ∂xi , then equation (1.2.46) can be expressed as the transformation law Ai = Aj ∂xj ∂xi . (1.2.47) This is the transformation law for an absolute covariant tensor of rank or order one. A more general definition is 47 Definition: (Covariant tensor) Whenever N quantities Ai in a coordinate system (x1, . . . , xN ) are related to N quantities Ai in a co- ordinate system (x1, . . . , xN ), with Jacobian J different from zero, such that the transformation law Ai = JW ∂xj ∂xi Aj (1.2.48) is satisfied, then these quantities are called the components of a relative covariant tensor of rank or order one having a weight of W . When- ever W = 0, these quantities are called the components of an absolute covariant tensor of rank or order one. Again we note that the above transformation satisfies the group properties. Absolute tensors of rank or order one are referred to as vectors while absolute tensors of rank or order zero are referred to as scalars. EXAMPLE 1.2-4. (Transitive Property of Covariant Transformation) Consider a sequence of transformation laws of the type defined by the equation (1.2.47) x→ x x→ x Ai(x) = Aj(x) ∂xj ∂xi Ak(x) = Am(x) ∂xm ∂x k We can therefore express the transformation of the components associated with the coordinate transformation x→ x and Ak(x) = ( Aj(x) ∂xj ∂xm ) ∂xm ∂x k = Aj(x) ∂xj ∂x k , which demonstrates the transitive property of a covariant transformation. Higher Order Tensors We have shown that first order tensors are quantities which obey certain transformation laws. Higher order tensors are defined in a similar manner and also satisfy the group properties. We assume that we are given transformations of the type illustrated in equations (1.2.30) and (1.2.32) which are single valued and continuous with Jacobian J different from zero. Further, the quantities xi and xi, i = 1, . . . , n represent the coordinates in any two coordinate systems. The following transformation laws define second order and third order tensors. 48 Definition: (Second order contravariant tensor) Whenever N-squared quantities Aij in a coordinate system (x1, . . . , xN ) are related to N-squared quantities A mn in a coordinate system (x1, . . . , xN ) such that the transformation law A mn (x) = Aij(x)JW ∂xm ∂xi ∂xn ∂xj (1.2.49) is satisfied, then these quantities are called components of a relative contravariant tensor of rank or order two with weightW . WheneverW = 0 these quantities are called the components of an absolute contravariant tensor of rank or order two. Definition: (Second order covariant tensor) Whenever N-squared quantities Aij in a coordinate system (x1, . . . , xN ) are related to N-squared quantities Amn in a coordinate system (x1, . . . 
, xN ) such that the transformation law Amn(x) = Aij(x)JW ∂xi ∂xm ∂xj ∂xn (1.2.50) is satisfied, then these quantities are called components of a relative covariant tensor of rank or order two with weight W . Whenever W = 0 these quantities are called the components of an absolute covariant tensor of rank or order two. Definition: (Second order mixed tensor) Whenever N-squared quantities Aij in a coordinate system (x 1, . . . , xN ) are related to N-squared quantities A m n in a coordinate system (x1, . . . , xN ) such that the transformation law A m n (x) = A i j(x)J W ∂x m ∂xi ∂xj ∂xn (1.2.51) is satisfied, then these quantities are called components of a relative mixed tensor of rank or order two with weight W . Whenever W = 0 these quantities are called the components of an absolute mixed tensor of rank or order two. It is contravariant of order one and covariant of order one. Higher order tensors are defined in a similar manner. For example, if we can find N-cubed quantities Amnp such that A i jk(x) = A γ αβ(x)J W ∂x i ∂xγ ∂xα ∂xj ∂xβ ∂xk (1.2.52) then this is a relative mixed tensor of order three with weight W . It is contravariant of order one and covariant of order two. 51 If a dyadic equals its conjugate A = Ac, then Aij = Aji and the dyadic is called symmetric. If a dyadic equals the negative of its conjugate A = −Ac, then Aij = −Aji and the dyadic is called skew-symmetric. A special dyadic called the identical dyadic or idemfactor is defined by J = ê1 ê1 + ê2 ê2 + ê3 ê3. This dyadic has the property that pre or post dot product multiplication of J with a vector V produces the same vector V . For example, V · J = (V1 ê1 + V2 ê2 + V3 ê3) · J = V1 ê1 · ê1 ê1 + V2 ê2 · ê2 ê2 + V3 ê3 · ê3 ê3 = V and J · V = J · (V1 ê1 + V2 ê2 + V3 ê3) = V1 ê1 ê1 · ê1 + V2 ê2 ê2 · ê2 + V3 ê3 ê3 · ê3 = V A dyadic operation often used in physics and chemistry is the double dot product A : B where A and B are both dyadics. Here both dyadics are expanded using the distributive law of multiplication, and then each unit dyad pair êi êj : êm ên are combined according to the rule êi êj : êm ên = ( êi · êm)( êj · ên). For example, if A = Aij êi êj and B = Bij êi êj, then the double dot product A : B is calculated as follows. A : B = (Aij êi êj) : (Bmn êm ên) = AijBmn( êi êj : êm ên) = AijBmn( êi · êm)( êj · ên) = AijBmnδimδjn = AmjBmj = A11B11 +A12B12 +A13B13 +A21B21 +A22B22 +A23B23 +A31B31 +A32B32 +A33B33 When operating with dyads, triads and polyads, there is a definite order to the way vectors and polyad components are represented. For example, for A = Ai êi and B = Bi êi vectors with outer product AB = AmBn êm ên = φ there is produced the dyadic φ with components AmBn. In comparison, the outer product B A = BmAn êm ên = ψ produces the dyadic ψ with components BmAn. That is φ = AB =A1B1 ê1 ê1 +A1B2 ê1 ê2 +A1B3 ê1 ê3 A2B1 ê2 ê1 +A2B2 ê2 ê2 +A2B3 ê2 ê3 A3B1 ê3 ê1 +A3B2 ê3 ê2 +A3B3 ê3 ê3 and ψ = B A =B1A1 ê1 ê1 +B1A2 ê1 ê2 +B1A3 ê1 ê3 B2A1 ê2 ê1 +B2A2 ê2 ê2 +B2A3 ê2 ê3 B3A1 ê3 ê1 +B3A2 ê3 ê2 +B3A3 ê3 ê3 are different dyadics. The scalar dot product of a dyad with a vector C is defined for both pre and post multiplication as φ · C = AB · C = A( B · C) C · φ = C · AB =(C · A) B These products are, in general, not equal. 52 Operations Using Tensors The following are some important tensor operations which are used to derive special equations and to prove various identities. Addition and Subtraction Tensors of the same type and weight can be added or subtracted. 
For example, two third order mixed tensors, when added, produce another third order mixed tensor. Let Aijk and B i jk denote two third order mixed tensors. Their sum is denoted Cijk = A i jk +B i jk. That is, like components are added. The sum is also a mixed tensor as we now verify. By hypothesis Aijk and Bijk are third order mixed tensors and hence must obey the transformation laws A i jk = A m np ∂xi ∂xm ∂xn ∂xj ∂xp ∂xk B i jk = B m np ∂xi ∂xm ∂xn ∂xj ∂xp ∂xk . We let C i jk = A i jk + B i jk denote the sum in the transformed coordinates. Then the addition of the above transformation equations produces C i jk = ( A i jk +B i jk ) = ( Amnp +B m np ) ∂xi ∂xm ∂xn ∂xj ∂xp ∂xk = Cmnp ∂xi ∂xm ∂xn ∂xj ∂xp ∂xk . Consequently, the sum transforms as a mixed third order tensor. Multiplication (Outer Product) The product of two tensors is also a tensor. The rank or order of the resulting tensor is the sum of the ranks of the tensors occurring in the multiplication. As an example, let Aijk denote a mixed third order tensor and let Blm denote a mixed second order tensor. The outer product of these two tensors is the fifth order tensor Ciljkm = A i jkB l m, i, j, k, l,m = 1, 2, . . . , N. Here all indices are free indices as i, j, k, l,m take on any of the integer values 1, 2, . . . , N. Let A i jk and B l m denote the components of the given tensors in the barred system of coordinates. We define C il jkm as the outer product of these components. Observe that Ciljkm is a tensor for by hypothesis A i jk and B l m are tensors and hence obey the transformation laws A α βγ = A i jk ∂xα ∂xi ∂xj ∂xβ ∂xk ∂xγ B δ # = B l m ∂xδ ∂xl ∂xm ∂x# . (1.2.55) The outer product of these components produces C αδ βγ# = A α βγB δ # = A i jkB l m ∂xα ∂xi ∂xj ∂xβ ∂xk ∂xγ ∂xδ ∂xl ∂xm ∂x# = Ciljkm ∂xα ∂xi ∂xj ∂xβ ∂xk ∂xγ ∂xδ ∂xl ∂xm ∂x# (1.2.56) which demonstrates that Ciljkm transforms as a mixed fifth order absolute tensor. Other outer products are analyzed in a similar way. 53 Contraction The operation of contraction on any mixed tensor of rank m is performed when an upper index is set equal to a lower index and the summation convention is invoked. When the summation is performed over the repeated indices the resulting quantity is also a tensor of rank or order (m − 2). For example, let Aijk, i, j, k = 1, 2, . . . , N denote a mixed tensor and perform a contraction by setting j equal to i.We obtain Aiik = A 1 1k +A 2 2k + · · ·+ANNk = Ak (1.2.57) where k is a free index. To show that Ak is a tensor, we let A i ik = Ak denote the contraction on the transformed components of Aijk. By hypothesis A i jk is a mixed tensor and hence the components must satisfy the transformation law A i jk = A m np ∂xi ∂xm ∂xn ∂xj ∂xp ∂xk . Now execute a contraction by setting j equal to i and perform a summation over the repeated index. We find A i ik = Ak = A m np ∂xi ∂xm ∂xn ∂xi ∂xp ∂xk = Amnp ∂xn ∂xm ∂xp ∂xk = Amnpδ n m ∂xp ∂xk = Annp ∂xp ∂xk = Ap ∂xp ∂xk . (1.2.58) Hence, the contraction produces a tensor of rank two less than the original tensor. Contractions on other mixed tensors can be analyzed in a similar manner. New tensors can be constructed from old tensors by performing a contraction on an upper and lower index. This process can be repeated as long as there is an upper and lower index upon which to perform the contraction. Each time a contraction is performed the rank of the resulting tensor is two less than the rank of the original tensor. 
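The contraction property derived above can be checked numerically. The following minimal NumPy sketch is an illustrative aside, not part of the text's development: it assumes a linear change of coordinates x̄ = C x (so that both Jacobian matrices are constant), builds a mixed third order tensor with arbitrary sample components, applies the transformation law, contracts on an upper and a lower index in each system, and verifies that the contracted quantity transforms as a covariant tensor of order one. All array names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 3

    # Mixed third order tensor A^i_jk in the unbarred system (sample components only)
    A = rng.standard_normal((N, N, N))

    # Linear coordinate change xbar^i = C^i_j x^j ; the Jacobians are the constant
    # matrices  d(xbar^i)/d(x^m) = C[i, m]  and  d(x^n)/d(xbar^j) = Cinv[n, j]
    C = rng.standard_normal((N, N))
    Cinv = np.linalg.inv(C)

    # Transformation law  Abar^i_jk = A^m_np (dxbar^i/dx^m)(dx^n/dxbar^j)(dx^p/dxbar^k)
    Abar = np.einsum('mnp,im,nj,pk->ijk', A, C, Cinv, Cinv)

    # Contract on the first two indices in each system:  a_k = A^i_ik
    a = np.einsum('iik->k', A)
    abar = np.einsum('iik->k', Abar)

    # The contraction should obey the covariant law  abar_k = a_p dx^p/dxbar^k
    print(np.allclose(abar, np.einsum('p,pk->k', a, Cinv)))   # True

The check succeeds because the inner Jacobian factors combine to a Kronecker delta during the contraction, exactly as in equation (1.2.58).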
Multiplication (Inner Product) The inner product of two tensors is obtained by: (i) first taking the outer product of the given tensors and (ii) performing a contraction on two of the indices. EXAMPLE 1.2-5. (Inner product) Let Ai and Bj denote the components of two first order tensors (vectors). The outer product of these tensors is Cij = A iBj , i, j = 1, 2, . . . , N. The inner product of these tensors is the scalar C = AiBi = A1B1 +A2B2 + · · ·+ANBN . Note that in some situations the inner product is performed by employing only subscript indices. For example, the above inner product is sometimes expressed as C = AiBi = A1B1 +A2B2 + · · ·ANBN . This notation is discussed later when Cartesian tensors are considered. 56 Figure 1.2-4. Cylindrical coordinates (r, β, z).  6. For the cylindrical coordinates (r, β, z) illustrated in the figure 1.2-4. (a) Write out the transformation equations from rectangular (x, y, z) coordinates to cylindrical (r, β, z) coordinates. Also write out the inverse transformation. (b) Determine the following basis vectors in cylindrical coordinates and represent your results in terms of cylindrical coordinates. (i) The tangential basis E1, E2, E3. (ii)The normal basis E1, E2, E3. (iii) êr, êβ , êz where êr, êβ, êz are normalized vectors in the directions of the tangential basis. (c) A vector A = Ax ê1 +Ay ê2 +Az ê3 can be represented in any of the forms: A = A1 E1 +A2 E2 +A3 E3 A = A1 E1 +A2 E2 +A3 E3 A = Arêr +Aβ êβ +Azêz depending upon the basis vectors selected . In terms of the components Ax, Ay, Az (i) Solve for the contravariant components A1, A2, A3. (ii) Solve for the covariant components A1, A2, A3. (iii) Solve for the components Ar, Aβ , Az. Express all results in cylindrical coordinates. (Note the components Ar, Aβ , Az are referred to as physical components. Physical components are considered in more detail in a later section.) 57 Figure 1.2-5. Spherical coordinates (ρ, α, β).  7. For the spherical coordinates (ρ, α, β) illustrated in the figure 1.2-5. (a) Write out the transformation equations from rectangular (x, y, z) coordinates to spherical (ρ, α, β) co- ordinates. Also write out the equations which describe the inverse transformation. (b) Determine the following basis vectors in spherical coordinates (i) The tangential basis E1, E2, E3. (ii) The normal basis E1, E2, E3. (iii) êρ, êα, êβ which are normalized vectors in the directions of the tangential basis. Express all results in terms of spherical coordinates. (c) A vector A = Ax ê1 +Ay ê2 +Az ê3 can be represented in any of the forms: A = A1 E1 +A2 E2 +A3 E3 A = A1 E1 +A2 E2 +A3 E3 A = Aρêρ +Aαêα +Aβ êβ depending upon the basis vectors selected . Calculate, in terms of the coordinates (ρ, α, β) and the components Ax, Ay, Az (i) The contravariant components A1, A2, A3. (ii) The covariant components A1, A2, A3. (iii) The components Aρ, Aα, Aβ which are called physical components.  8. Work the problems 6,7 and then let (x1, x2, x3) = (r, β, z) denote the coordinates in the cylindrical system and let (x1, x2, x3) = (ρ, α, β) denote the coordinates in the spherical system. (a) Write the transformation equations x → x from cylindrical to spherical coordinates. Also find the inverse transformations. ( Hint: See the figures 1.2-4 and 1.2-5.) (b) Use the results from part (a) and the results from problems 6,7 to verify that Ai = Aj ∂xj ∂xi for i = 1, 2, 3. (i.e. Substitute Aj from problem 6 to get Āi given in problem 7.) 
58 (c) Use the results from part (a) and the results from problems 6,7 to verify that A i = Aj ∂xi ∂xj for i = 1, 2, 3. (i.e. Substitute Aj from problem 6 to get Āi given by problem 7.)  9. Pick two arbitrary noncolinear vectors in the x, y plane, say V1 = 5 ê1 + ê2 and V2 = ê1 + 5 ê2 and let V3 = ê3 be a unit vector perpendicular to both V1 and V2. The vectors V1 and V2 can be thought of as defining an oblique coordinate system, as illustrated in the figure 1.2-6. (a) Find the reciprocal basis (V 1, V 2, V 3). (b) Let r = x ê1 + y ê2 + z ê3 = αV1 + βV2 + γV3 and show that α = 5x 24 − y 24 β = − x 24 + 5y 24 γ = z (c) Show x = 5α+ β y = α+ 5β z = γ (d) For γ = γ0 constant, show the coordinate lines are described by α = constant and β = constant, and sketch some of these coordinate lines. (See figure 1.2-6.) (e) Find the metrics gij and conjugate metrices gij associated with the (α, β, γ) space. Figure 1.2-6. Oblique coordinates. 61  17. The equation of a plane is defined in terms of two parameters u and v and has the form xi = αi u+ βi v + γi i = 1, 2, 3, where αi βi and γi are constants. Find the equation of the plane which passes through the points (1, 2, 3), (14, 7,−3) and (5, 5, 5). What does this problem have to do with the position vector r(u, v), the vectors ∂'r ∂u , ∂'r ∂v and r(0, 0)? Hint: See problem 15.  18. Determine the points of intersection of the curve x1 = t, x2 = (t)2, x3 = (t)3 with the plane 8 x1 − 5 x2 + x3 − 4 = 0.  19. Verify the relations V eijk Ek = Ei × Ej and v−1 eijk Ek = Ei × Ej where v = E1 · ( E2 × E3) and V = E1 · ( E2 × E3)..  20. Let x̄i and xi, i = 1, 2, 3 be related by the linear transformation x̄i = cijxj , where cij are constants such that the determinant c = det(cij) is different from zero. Let γ n m denote the cofactor of c m n divided by the determinant c. (a) Show that cijγ j k = γ i jc j k = δ i k. (b) Show the inverse transformation can be expressed xi = γij x̄ j . (c) Show that if Ai is a contravariant vector, then its transformed components are Āp = cpqAq. (d) Show that if Ai is a covariant vector, then its transformed components are Āi = γ p i Ap.  21. Show that the outer product of two contravariant vectors Ai and Bi, i = 1, 2, 3 results in a second order contravariant tensor.  22. Show that for the position vector r = yi(x1, x2, x3) êi the element of arc length squared is ds2 = dr · dr = gijdxidxj where gij = Ei · Ej = ∂y m ∂xi ∂ym ∂xj .  23. For Aijk, Bmn and C p tq absolute tensors, show that if AijkB k n = Cijn then A i jkB k n = C i jn.  24. Let Aij denote an absolute covariant tensor of order 2. Show that the determinant A = det(Aij) is an invariant of weight 2 and √ (A) is an invariant of weight 1.  25. Let Bij denote an absolute contravariant tensor of order 2. Show that the determinant B = det(Bij) is an invariant of weight −2 and √B is an invariant of weight −1.  26. (a) Write out the contravariant components of the following vectors (i) E1 (ii) E2 (iii) E3 where Ei = ∂r ∂xi for i = 1, 2, 3. (b) Write out the covariant components of the following vectors (i) E1 (ii) E2 (ii) E3 where Ei = gradxi, for i = 1, 2, 3. 62  27. Let Aij and Aij denote absolute second order tensors. Show that λ = AijAij is a scalar invariant.  28. Assume that aij , i, j = 1, 2, 3, 4 is a skew-symmetric second order absolute tensor. (a) Show that bijk = ∂ajk ∂xi + ∂aki ∂xj + ∂aij ∂xk is a third order tensor. (b) Show bijk is skew-symmetric in all pairs of indices and (c) determine the number of independent components this tensor has.  29. 
Show the linear forms A1x + B1y + C1 and A2x + B2y + C2, with respect to the group of rotations and translations x = x cos θ − y sin θ + h and y = x sin θ + y cos θ + k, have the forms A1x+ B1y + C1 and A2x+B2y + C2. Also show that the quantities A1B2 −A2B1 and A1A2 +B1B2 are invariants.  30. Show that the curvature of a curve y = f(x) is κ = ± y′′(1+ y′2)−3/2 and that this curvature remains invariant under the group of rotations given in the problem 1. Hint: Calculate dydx = dy dx dx dx .  31. Show that when the equation of a curve is given in the parametric form x = x(t), y = y(t), then the curvature is κ = ± ẋÿ − ẏẍ (ẋ2 + ẏ2)3/2 and remains invariant under the change of parameter t = t(t), where ẋ = dxdt , etc.  32. Let Aijk denote a third order mixed tensor. (a) Show that the contraction A ij i is a first order contravariant tensor. (b) Show that contraction of i and j produces Aiik which is not a tensor. This shows that in general, the process of contraction does not always apply to indices at the same level.  33. Let φ = φ(x1, x2, . . . , xN ) denote an absolute scalar invariant. (a) Is the quantity ∂φ∂xi a tensor? (b) Is the quantity ∂ 2φ ∂xi∂xj a tensor?  34. Consider the second order absolute tensor aij , i, j = 1, 2 where a11 = 1, a12 = 2,a21 = 3 and a22 = 4. Find the components of aij under the transformation of coordinates x1 = x1 + x2 and x2 = x1 − x2.  35. Let Ai,Bi denote the components of two covariant absolute tensors of order one. Show that Cij = AiBj is an absolute second order covariant tensor.  36. Let Ai denote the components of an absolute contravariant tensor of order one and let Bi denote the components of an absolute covariant tensor of order one, show that Cij = A iBj transforms as an absolute mixed tensor of order two.  37. (a) Show the sum and difference of two tensors of the same kind is also a tensor of this kind. (b) Show that the outer product of two tensors is a tensor. Do parts (a) (b) in the special case where one tensor Ai is a relative tensor of weight 4 and the other tensor Bjk is a relative tensor of weight 3. What is the weight of the outer product tensor T ijk = A iBjk in this special case?  38. Let Aijkm denote the components of a mixed tensor of weight M . Form the contraction Bjm = A ij im and determine how Bjm transforms. What is its weight?  39. Let Aij denote the components of an absolute mixed tensor of order two. Show that the scalar contraction S = Aii is an invariant. 63  40. Let Ai = Ai(x1, x2, . . . , xN ) denote the components of an absolute contravariant tensor. Form the quantity Bij = ∂Ai ∂xj and determine if B i j transforms like a tensor.  41. Let Ai denote the components of a covariant vector. (a) Show that aij = ∂Ai ∂xj − ∂Aj ∂xi are the components of a second order tensor. (b) Show that ∂aij ∂xk + ∂ajk ∂xi + ∂aki ∂xj = 0.  42. Show that xi = K eijkAjBk, with K = 0 and arbitrary, is a general solution of the system of equations Aix i = 0, Bixi = 0, i = 1, 2, 3. Give a geometric interpretation of this result in terms of vectors.  43. Given the vector A = y ê1 + z ê2 + x ê3 where ê1, ê2, ê3 denote a set of unit basis vectors which define a set of orthogonal x, y, z axes. Let E1 = 3 ê1 + 4 ê2, E2 = 4 ê1 + 7 ê2 and E3 = ê3 denote a set of basis vectors which define a set of u, v, w axes. (a) Find the coordinate transformation between these two sets of axes. (b) Find a set of reciprocal vectors E1, E3, E3. (c) Calculate the covariant components of A. (d) Calculate the contravariant components of A.  44. 
Let A = Aij êi êj denote a dyadic. Show that A : Ac = A11A11 +A12A21 +A13A31 +A21A12 +A22A22 +A23A32 +A31A13 +A32A23 +A23A33  45. Let A = Ai êi, B = Bi êi, C = Ci êi, D = Di êi denote vectors and let φ = AB, ψ = C D denote dyadics which are the outer products involving the above vectors. Show that the double dot product satisfies φ : ψ = AB : C D = ( A · C)( B · D)  46. Show that if aij is a symmetric tensor in one coordinate system, then it is symmetric in all coordinate systems.  47. Write the transformation laws for the given tensors. (a) Akij (b) Aijk (c) Aijkm  48. Show that if Ai = Aj ∂xj ∂xi , then Ai = Aj ∂x j ∂xi . Note that this is equivalent to interchanging the bar and unbarred systems.  49. (a) Show that under the linear homogeneous transformation x1 =a11x1 + a 2 1x2 x2 =a12x1 + a 2 2x2 the quadratic form Q(x1, x2) = g11(x1)2 + 2g12x1x2 + g22(x2)2 becomes Q(x1, x2) = g11(x1) 2 + 2g12x1x2 + g22(x2) 2 where gij = g11a j 1a i 1 + g12(a i 1a j 2 + a j 1a i 2) + g22a i 2a j 2. (b) Show F = g11g22− (g12)2 is a relative invariant of weight 2 of the quadratic form Q(x1, x2) with respect to the group of linear homogeneous transformations. i.e. Show that F = ∆2F where F = g11g22−(g12)2 and ∆ = (a11a 2 2 − a21a12). 65 §1.3 SPECIAL TENSORS Knowing how tensors are defined and recognizing a tensor when it pops up in front of you are two different things. Some quantities, which are tensors, frequently arise in applied problems and you should learn to recognize these special tensors when they occur. In this section some important tensor quantities are defined. We also consider how these special tensors can in turn be used to define other tensors. Metric Tensor Define yi, i = 1, . . . , N as independent coordinates in an N dimensional orthogonal Cartesian coordinate system. The distance squared between two points yi and yi + dyi, i = 1, . . . , N is defined by the expression ds2 = dymdym = (dy1)2 + (dy2)2 + · · ·+ (dyN )2. (1.3.1) Assume that the coordinates yi are related to a set of independent generalized coordinates xi, i = 1, . . . , N by a set of transformation equations yi = yi(x1, x2, . . . , xN ), i = 1, . . . , N. (1.3.2) To emphasize that each yi depends upon the x coordinates we sometimes use the notation yi = yi(x), for i = 1, . . . , N. The differential of each coordinate can be written as dym = ∂ym ∂xj dxj , m = 1, . . . , N, (1.3.3) and consequently in the x-generalized coordinates the distance squared, found from the equation (1.3.1), becomes a quadratic form. Substituting equation (1.3.3) into equation (1.3.1) we find ds2 = ∂ym ∂xi ∂ym ∂xj dxidxj = gij dxidxj (1.3.4) where gij = ∂ym ∂xi ∂ym ∂xj , i, j = 1, . . . , N (1.3.5) are called the metrices of the space defined by the coordinates xi, i = 1, . . . , N. Here the gij are functions of the x coordinates and is sometimes written as gij = gij(x). Further, the metrices gij are symmetric in the indices i and j so that gij = gji for all values of i and j over the range of the indices. If we transform to another coordinate system, say xi, i = 1, . . . , N , then the element of arc length squared is expressed in terms of the barred coordinates and ds2 = gij dx idxj , where gij = gij(x) is a function of the barred coordinates. The following example demonstrates that these metrices are second order covariant tensors. 66 EXAMPLE 1.3-1. Show the metric components gij are covariant tensors of the second order. Solution: In a coordinate system xi, i = 1, . . . 
, N the element of arc length squared is ds2 = gijdxidxj (1.3.6) while in a coordinate system xi, i = 1, . . . , N the element of arc length squared is represented in the form ds2 = gmndx mdxn. (1.3.7) The element of arc length squared is to be an invariant and so we require that gmndx mdxn = gijdxidxj (1.3.8) Here it is assumed that there exists a coordinate transformation of the form defined by equation (1.2.30) together with an inverse transformation, as in equation (1.2.32), which relates the barred and unbarred coordinates. In general, if xi = xi(x), then for i = 1, . . . , N we have dxi = ∂xi ∂xm dxm and dxj = ∂xj ∂xn dxn (1.3.9) Substituting these differentials in equation (1.3.8) gives us the result gmndx mdxn = gij ∂xi ∂xm ∂xj ∂xn dxmdxn or ( gmn − gij ∂xi ∂xm ∂xj ∂xn ) dxmdxn = 0 For arbitrary changes in dxm this equation implies that gmn = gij ∂xi ∂xm ∂xj ∂xn and consequently gij transforms as a second order absolute covariant tensor. EXAMPLE 1.3-2. (Curvilinear coordinates) Consider a set of general transformation equations from rectangular coordinates (x, y, z) to curvilinear coordinates (u, v, w). These transformation equations and the corresponding inverse transformations are represented x = x(u, v, w) y = y(u, v, w) z = z(u, v, w). u = u(x, y, z) v = v(x, y, z) w = w(x, y, z) (1.3.10) Here y1 = x, y2 = y, y3 = z and x1 = u, x2 = v, x3 = w are the Cartesian and generalized coordinates and N = 3. The intersection of the coordinate surfaces u = c1,v = c2 and w = c3 define coordinate curves of the curvilinear coordinate system. The substitution of the given transformation equations (1.3.10) into the position vector r = x ê1 + y ê2 + z ê3 produces the position vector which is a function of the generalized coordinates and r = r(u, v, w) = x(u, v, w) ê1 + y(u, v, w) ê2 + z(u, v, w) ê3 67 and consequently dr = ∂r ∂u du+ ∂r ∂v dv + ∂r ∂w dw, where E1 = ∂r ∂u = ∂x ∂u ê1 + ∂y ∂u ê2 + ∂z ∂u ê3 E2 = ∂r ∂v = ∂x ∂v ê1 + ∂y ∂v ê2 + ∂z ∂v ê3 E3 = ∂r ∂w = ∂x ∂w ê1 + ∂y ∂w ê2 + ∂z ∂w ê3. (1.3.11) are tangent vectors to the coordinate curves. The element of arc length in the curvilinear coordinates is ds2 = dr · dr = ∂r ∂u · ∂r ∂u dudu+ ∂r ∂u · ∂r ∂v dudv + ∂r ∂u · ∂r ∂w dudw + ∂r ∂v · ∂r ∂u dvdu + ∂r ∂v · ∂r ∂v dvdv + ∂r ∂v · ∂r ∂w dvdw + ∂r ∂w · ∂r ∂u dwdu + ∂r ∂w · ∂r ∂v dwdv + ∂r ∂w · ∂r ∂w dwdw. (1.3.12) Utilizing the summation convention, the above can be expressed in the index notation. Define the quantities g11 = ∂r ∂u · ∂r ∂u g21 = ∂r ∂v · ∂r ∂u g31 = ∂r ∂w · ∂r ∂u g12 = ∂r ∂u · ∂r ∂v g22 = ∂r ∂v · ∂r ∂v g32 = ∂r ∂w · ∂r ∂v g13 = ∂r ∂u · ∂r ∂w g23 = ∂r ∂v · ∂r ∂w g33 = ∂r ∂w · ∂r ∂w and let x1 = u, x2 = v, x3 = w. Then the above element of arc length can be expressed as ds2 = Ei · Ej dxidxj = gijdxidxj , i, j = 1, 2, 3 where gij = Ei · Ej = ∂r ∂xi · ∂r ∂xj = ∂ym ∂xi ∂ym ∂xj , i, j free indices (1.3.13) are called the metric components of the curvilinear coordinate system. The metric components may be thought of as the elements of a symmetric matrix, since gij = gji. In the rectangular coordinate system x, y, z, the element of arc length squared is ds2 = dx2 + dy2 + dz2. In this space the metric components are gij =  1 0 00 1 0 0 0 1  . 70 Figure 1.3-2. Spherical coordinates. The coordinate curves, illustrated in the figure 1.3-3, are formed by the intersection of the coordinate surfaces x2 = −2ξ2(y − ξ 2 2 ) Parabolic cylinders x2 = 2η2(y + η2 2 ) Parabolic cylinders z = Constant Planes. Figure 1.3-3. Parabolic cylindrical coordinates in plane z = 0. 5. 
Parabolic coordinates (ξ, η, φ) x = ξη cosφ y = ξη sinφ z = 1 2 (ξ2 − η2) ξ ≥ 0 η ≥ 0 0 < φ < 2π h1 = √ ξ2 + η2 h2 = √ ξ2 + η2 h3 = ξη 71 The coordinate curves, illustrated in the figure 1.3-4, are formed by the intersection of the coordinate surfaces x2 + y2 = −2ξ2(z − ξ 2 2 ) Paraboloids x2 + y2 = 2η2(z + η2 2 ) Paraboloids y = x tanφ Planes. Figure 1.3-4. Parabolic coordinates, φ = π/4. 6. Elliptic cylindrical coordinates (ξ, η, z) x = cosh ξ cos η y = sinh ξ sin η z = z ξ ≥ 0 0 ≤ η ≤ 2π −∞ < z <∞ h1 = √ sinh2 ξ + sin2 η h2 = √ sinh2 ξ + sin2 η h3 = 1 The coordinate curves, illustrated in the figure 1.3-5, are formed by the intersection of the coordinate surfaces x2 cosh2 ξ + y2 sinh2 ξ = 1 Elliptic cylinders x2 cos2 η − y 2 sin2 η = 1 Hyperbolic cylinders z = Constant Planes. 72 Figure 1.3-5. Elliptic cylindrical coordinates in the plane z = 0. 7. Elliptic coordinates (ξ, η, φ) x = √ (1− η2)(ξ2 − 1) cosφ y = √ (1− η2)(ξ2 − 1) sinφ z = ξη 1 ≤ ξ <∞ − 1 ≤ η ≤ 1 0 ≤ φ < 2π h1 = √ ξ2 − η2 ξ2 − 1 h2 = √ ξ2 − η2 1− η2 h3 = √ (1 − η2)(ξ2 − 1) The coordinate curves, illustrated in the figure 1.3-6, are formed by the intersection of the coordinate surfaces x2 ξ2 − 1 + y2 ξ2 − 1 + z2 ξ2 = 1 Prolate ellipsoid z2 η2 − x 2 1− η2 − y2 1− η2 = 1 Two-sheeted hyperboloid y = x tanφ Planes 8. Bipolar coordinates (u, v, z) x = a sinh v cosh v − cosu, 0 ≤ u < 2π y = a sinu cosh v − cosu, −∞ < v <∞ z = z −∞ < z <∞ h21 = h 2 2 h22 = a2 (cosh v − cosu)2 h23 = 1 75 Figure 1.3-9. Prolate spheroidal coordinates 11. Oblate spheroidal coordinates (ξ, η, φ) x = a cosh ξ cos η cosφ, ξ ≥ 0 y = a cosh ξ cos η sinφ, −π 2 ≤ η ≤ π 2 z = a sinh ξ sin η, 0 ≤ φ ≤ 2π h21 = h 2 2 h22 = a 2(sinh2 ξ + sin2 η) h23 = a 2 cosh2 ξ cos2 η The coordinate curves, illustrated in the figure 1.3-10, are formed by the intersection of the coordinate surfaces x2 (a cosh ξ)2 + y2 (a cosh ξ)2 + z2 (a sinh ξ)2 = 1, Oblate ellipsoids x2 (a cos η)2 + y2 (a cos η)2 − z 2 (a sin η)2 = 1, One-sheet hyperboloids y = x tanφ, Planes. 12. Toroidal coordinates (u, v, φ) x = a sinh v cosφ cosh v − cosu, 0 ≤ u < 2π y = a sinh v sinφ cosh v − cosu, −∞ < v <∞ z = a sinu cosh v − cosu, 0 ≤ φ < 2π h21 = h 2 2 h22 = a2 (cosh v − cosu)2 h23 = a2 sinh2 v (cosh v − cosu)2 The coordinate curves, illustrated in the figure 1.3-11, are formed by the intersection of the coordinate surfaces x2 + y2 + ( z − a cosu sinu )2 = a2 sin2 u , Spheres(√ x2 + y2 − acosh v sinh v )2 + z2 = a2 sinh2 v , Torus y = x tanφ, planes 76 Figure 1.3-10. Oblate spheroidal coordinates Figure 1.3-11. Toroidal coordinates EXAMPLE 1.3-4. Show the Kronecker delta δij is a mixed second order tensor. Solution: Assume we have a coordinate transformation xi = xi(x), i = 1, . . . , N of the form (1.2.30) and possessing an inverse transformation of the form (1.2.32). Let δ i j and δij denote the Kronecker delta in the barred and unbarred system of coordinates. By definition the Kronecker delta is defined δ i j = δ i j = { 0, if i = j 1, if i = j . 77 Employing the chain rule we write ∂xm ∂xn = ∂xm ∂xi ∂xi ∂xn = ∂xm ∂xi ∂xk ∂xn δik (1.3.14) By hypothesis, the xi, i = 1, . . . , N are independent coordinates and therefore we have ∂x m ∂xn = δ m n and (1.3.14) simplifies to δ m n = δ i k ∂xm ∂xi ∂xk ∂xn . Therefore, the Kronecker delta transforms as a mixed second order tensor. Conjugate Metric Tensor Let g denote the determinant of the matrix having the metric tensor gij , i, j = 1, . . . , N as its elements. In our study of cofactor elements of a matrix we have shown that cof(g1j)g1k + cof(g2j)g2k + . 
. .+ cof(gNj)gNk = gδ j k. (1.3.15) We can use this fact to find the elements in the inverse matrix associated with the matrix having the components gij . The elements of this inverse matrix are gij = 1 g cof(gij) (1.3.16) and are called the conjugate metric components. We examine the summation gijgik and find: gijgik = g1jg1k + g2jg2k + . . .+ gNjgNk = 1 g [cof(g1j)g1k + cof(g2j)g2k + . . .+ cof(gNj)gNk] = 1 g [ gδjk ] = δjk The equation gijgik = δ j k (1.3.17) is an example where we can use the quotient law to show gij is a second order contravariant tensor. Because of the symmetry of gij and gij the equation (1.3.17) can be represented in other forms. EXAMPLE 1.3-5. Let Ai and Ai denote respectively the covariant and contravariant components of a vector A. Show these components are related by the equations Ai = gijAj (1.3.18) Ak = gjkAj (1.3.19) where gij and gij are the metric and conjugate metric components of the space. 80 Riemann Space VN A Riemannian space VN is said to exist if the element of arc length squared has the form ds2 = gijdxidxj (1.3.23) where the metrices gij = gij(x1, x2, . . . , xN ) are continuous functions of the coordinates and are different from constants. In the special case gij = δij the Riemannian space VN reduces to a Euclidean space EN . The element of arc length squared defined by equation (1.3.23) is called the Riemannian metric and any geometry which results by using this metric is called a Riemannian geometry. A space VN is called flat if it is possible to find a coordinate transformation where the element of arclength squared is ds2 = Fi(dxi)2 where each Fi is either +1 or −1. A space which is not flat is called curved. Geometry in VN Given two vectors A = Ai Ei and B = Bj Ej , then their dot product can be represented A · B = AiBj Ei · Ej = gijAiBj = AjBj = AiBi = gijAjBi = | A|| B| cos θ. (1.3.24) Consequently, in an N dimensional Riemannian space VN the dot or inner product of two vectors A and B is defined: gijA iBj = AjBj = AiBi = gijAjBi = AB cos θ. (1.3.25) In this definition A is the magnitude of the vector Ai, the quantity B is the magnitude of the vector Bi and θ is the angle between the vectors when their origins are made to coincide. In the special case that θ = 90◦ we have gijAiBj = 0 as the condition that must be satisfied in order that the given vectors Ai and Bi are orthogonal to one another. Consider also the special case of equation (1.3.25) when Ai = Bi and θ = 0. In this case the equations (1.3.25) inform us that ginAnAi = AiAi = ginAiAn = (A)2. (1.3.26) From this equation one can determine the magnitude of the vector Ai. The magnitudes A and B can be written A = (ginAiAn) 1 2 and B = (gpqBpBq) 1 2 and so we can express equation (1.3.24) in the form cos θ = gijA iBj (gmnAmAn) 1 2 (gpqBpBq) 1 2 . (1.3.27) An import application of the above concepts arises in the dynamics of rigid body motion. Note that if a vector Ai has constant magnitude and the magnitude of dA i dt is different from zero, then the vectors A i and dAi dt must be orthogonal to one another due to the fact that gijA i dAj dt = 0. As an example, consider the unit vectors ê1, ê2 and ê3 on a rotating system of Cartesian axes. We have for constants ci, i = 1, 6 that d ê1 dt = c1 ê2 + c2 ê3 d ê2 dt = c3 ê3 + c4 ê1 d ê3 dt = c5 ê1 + c6 ê2 because the derivative of any êi (i fixed) constant vector must lie in a plane containing the vectors êj and êk, (j = i , k = i and j = k), since any vector in this plane must be perpendicular to êi. 
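As an illustration of equation (1.3.27), the following NumPy sketch (an illustrative aside, assuming the cylindrical metric gij = diag(1, r^2, 1) and arbitrary sample components) computes the cosine of the angle between two vectors from their contravariant components and the metric, and then checks the result against the ordinary Cartesian calculation carried out with the tangent basis vectors E1, E2, E3.

    import numpy as np

    # Point in cylindrical coordinates (r, theta, z); sample values only
    r, t = 2.0, 0.7

    # Cylindrical metric  g_ij = diag(1, r^2, 1)
    g = np.diag([1.0, r**2, 1.0])

    # Contravariant components A^i and B^i of two vectors at this point (sample values)
    A = np.array([1.0, 0.3, -2.0])
    B = np.array([0.5, -0.2, 1.0])

    # Equation (1.3.27):  cos(angle) = g_ij A^i B^j / ( (g_mn A^m A^n)^(1/2) (g_pq B^p B^q)^(1/2) )
    cos_curv = (A @ g @ B) / (np.sqrt(A @ g @ A) * np.sqrt(B @ g @ B))

    # Check against the Cartesian calculation using the tangent basis
    E = np.array([[np.cos(t),        np.sin(t),      0.0],   # E_1 = dr/dr
                  [-r*np.sin(t),     r*np.cos(t),    0.0],   # E_2 = dr/dtheta
                  [0.0,              0.0,            1.0]])  # E_3 = dr/dz
    Ac, Bc = A @ E, B @ E                                    # Cartesian components A^i E_i
    cos_cart = (Ac @ Bc) / (np.linalg.norm(Ac) * np.linalg.norm(Bc))

    print(np.allclose(cos_curv, cos_cart))                   # True

The two computations agree because gij = Ei · Ej, so the generalized dot product in the curvilinear system reproduces the ordinary dot product of the corresponding Cartesian vectors.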
81 The above definition of a dot product in VN can be used to define unit vectors in VN . Definition: (Unit vector) Whenever the magnitude of a vec- tor Ai is unity, the vector is called a unit vector. In this case we have gijA iAj = 1. (1.3.28) EXAMPLE 1.3-8. (Unit vectors) In VN the element of arc length squared is expressed ds2 = gij dxidxj which can be expressed in the form 1 = gij dxi ds dxj ds . This equation states that the vector dxi ds , i = 1, . . . , N is a unit vector. One application of this equation is to consider a particle moving along a curve in VN which is described by the parametric equations xi = xi(t), for i = 1, . . . , N. The vector V i = dx i dt , i = 1, . . . , N represents a velocity vector of the particle. By chain rule differentiation we have V i = dxi dt = dxi ds ds dt = V dxi ds , (1.3.29) where V = dsdt is the scalar speed of the particle and dxi ds is a unit tangent vector to the curve. The equation (1.3.29) shows that the velocity is directed along the tangent to the curve and has a magnitude V. That is( ds dt )2 = (V )2 = gijV iV j . EXAMPLE 1.3-9. (Curvilinear coordinates) Find an expression for the cosine of the angles between the coordinate curves associated with the transformation equations x = x(u, v, w), y = y(u, v, w), z = z(u, v, w). 82 Figure 1.3-12. Angles between curvilinear coordinates. Solution: Let y1 = x, y2 = y, y3 = z and x1 = u, x2 = v, x3 = w denote the Cartesian and curvilinear coordinates respectively. With reference to the figure 1.3-12 we can interpret the intersection of the surfaces v = c2 and w = c3 as the curve r = r(u, c2, c3) which is a function of the parameter u. By moving only along this curve we have dr = ∂r ∂u du and consequently ds2 = dr · dr = ∂r ∂u · ∂r ∂u dudu = g11(dx1)2, or 1 = dr ds · dr ds = g11 ( dx1 ds )2 . This equation shows that the vector dx 1 ds = 1√ g11 is a unit vector along this curve. This tangent vector can be represented by tr(1) = 1√ g11 δr1 . The curve which is defined by the intersection of the surfaces u = c1 and w = c3 has the unit tangent vector tr(2) = 1√ g22 δr2. Similarly, the curve which is defined as the intersection of the surfaces u = c1 and v = c2 has the unit tangent vector tr(3) = 1√ g33 δr3 . The cosine of the angle θ12, which is the angle between the unit vectors tr(1) and t r (2), is obtained from the result of equation (1.3.25). We find cos θ12 = gpqt p (1)t q (2) = gpq 1√ g11 δp1 1√ g22 δq2 = g12√ g11 √ g22 . For θ13 the angle between the directions ti(1) and t i (3) we find cos θ13 = g13√ g11 √ g33 . Finally, for θ23 the angle between the directions ti(2) and t i (3) we find cos θ23 = g23√ g22 √ g33 . When θ13 = θ12 = θ23 = 90◦, we have g12 = g13 = g23 = 0 and the coordinate curves which make up the curvilinear coordinate system are orthogonal to one another. In an orthogonal coordinate system we adopt the notation g11 = (h1)2, g22 = (h2)2, g33 = (h3)2 and gij = 0, i = j. 85 Figure 1.3-15. Rotation of axes A similar situation exists in three dimensions. Consider two sets of Cartesian axes, say a barred and unbarred system as illustrated in the figure 1.3-14. Let us translate the origin 0 to 0 and then rotate the (x, y, z) axes until they coincide with the (x, y, z) axes. We consider first the rotation of axes when the origins 0 and 0 coincide as the translational distance can be represented by a vector bk, k = 1, 2, 3. 
When the origin 0 is translated to 0 we have the situation illustrated in the figure 1.3-15, where the barred axes can be thought of as a transformation due to rotation. Let r = x ê1 + y ê2 + z ê3 (1.3.37) denote the position vector of a variable point P with coordinates (x, y, z) with respect to the origin 0 and the unit vectors ê1, ê2, ê3. This same point, when referenced with respect to the origin 0 and the unit vectors ê1, ê2, ê3, has the representation r = x ê1 + y ê2 + z ê3. (1.3.38) By considering the projections of r upon the barred and unbarred axes we can construct the transformation equations relating the barred and unbarred axes. We calculate the projections of r onto the x, y and z axes and find: r · ê1 = x = x( ê1 · ê1) + y( ê2 · ê1) + z( ê3 · ê1) r · ê2 = y = x( ê1 · ê2) + y( ê2 · ê2) + z( ê3 · ê2) r · ê3 = z = x( ê1 · ê3) + y( ê2 · ê3) + z( ê3 · ê3). (1.3.39) We also calculate the projection of r onto the x, y, z axes and find: r · ê1 = x = x( ê1 · ê1) + y( ê2 · ê1) + z( ê3 · ê1) r · ê2 = y = x( ê1 · ê2) + y( ê2 · ê2) + z( ê3 · ê2) r · ê3 = z = x( ê1 · ê3) + y( ê2 · ê3) + z( ê3 · ê3). (1.3.40) By introducing the notation (y1, y2, y3) = (x, y, z) (y1, y2, y3) = (x, y, z) and defining θij as the angle between the unit vectors êi and êj , we can represent the above transformation equations in a more concise 86 form. We observe that the direction cosines can be written as 711 = ê1 · ê1 = cos θ11 721 = ê2 · ê1 = cos θ21 731 = ê3 · ê1 = cos θ31 712 = ê1 · ê2 = cos θ12 722 = ê2 · ê2 = cos θ22 732 = ê3 · ê2 = cos θ32 713 = ê1 · ê3 = cos θ13 723 = ê2 · ê3 = cos θ23 733 = ê3 · ê3 = cos θ33 (1.3.41) which enables us to write the equations (1.3.39) and (1.3.40) in the form yi = 7ijyj and yi = 7jiyj . (1.3.42) Using the index notation we represent the unit vectors as: êr = 7pr êp or êp = 7pr êr (1.3.43) where 7pr are the direction cosines. In both the barred and unbarred system the unit vectors are orthogonal and consequently we must have the dot products êr · êp = δrp and êm · ên = δmn (1.3.44) where δij is the Kronecker delta. Substituting equation (1.3.43) into equation (1.3.44) we find the direction cosines 7ij must satisfy the relations: êr · ês = 7pr êp · 7ms êm = 7pr7ms êp · êm = 7pr7msδpm = 7mr7ms = δrs and êr · ês = 7rm êm · 7sn ên = 7rm7sn êm · ên = 7rm7snδmn = 7rm7sm = δrs. The relations 7mr7ms = δrs and 7rm7sm = δrs, (1.3.45) with summation index m, are important relations which are satisfied by the direction cosines associated with a rotation of axes. Combining the rotation and translation equations we find yi = 7ijyj︸ ︷︷ ︸ rotation + bi︸︷︷︸ translation . (1.3.46) We multiply this equation by 7ik and make use of the relations (1.3.45) to find the inverse transformation yk = 7ik(yi − bi). (1.3.47) These transformations are called linear or affine transformations. Consider the xi axes as fixed, while the xi axes are rotating with respect to the xi axes where both sets of axes have a common origin. Let A = Ai êi denote a vector fixed in and rotating with the xi axes. We denote by d A dt ∣∣∣∣ f and d A dt ∣∣∣∣ r the derivatives of A with respect to the fixed (f) and rotating (r) axes. We can 87 write, with respect to the fixed axes, that d A dt ∣∣∣∣ f = dAi dt êi + Ai d êi dt . Note that d êi dt is the derivative of a vector with constant magnitude. Therefore there exists constants ωi, i = 1, . . . , 6 such that d ê1 dt = ω3 ê2 − ω2 ê3 d ê2 dt = ω1 ê3 − ω4 ê1 d ê3 dt = ω5 ê1 − ω6 ê2 i.e. see page 80. 
From the dot product ê1 · ê2 = 0 we obtain by differentiation ê1 · d ê2dt + d ê1dt · ê2 = 0 which implies ω4 = ω3. Similarly, from the dot products ê1 · ê3 and ê2 · ê3 we obtain by differentiation the additional relations ω5 = ω2 and ω6 = ω1. The derivative of A with respect to the fixed axes can now be represented d A dt ∣∣∣∣ f = dAi dt êi + (ω2A3 − ω3A2) ê1 + (ω3A1 − ω1A3) ê2 + (ω1A2 − ω2A1) ê3 = d A dt ∣∣∣∣ r + ω × A where ω = ωi êi is called an angular velocity vector of the rotating system. The term ω × A represents the velocity of the rotating system relative to the fixed system and d A dt ∣∣∣∣ r = dAi dt êi represents the derivative with respect to the rotating system. Employing the special transformation equations (1.3.46) let us examine how tensor quantities transform when subjected to a translation and rotation of axes. These are our special transformation laws for Cartesian tensors. We examine only the transformation laws for first and second order Cartesian tensor as higher order transformation laws are easily discerned. We have previously shown that in general the first and second order tensor quantities satisfy the transformation laws: Ai = Aj ∂yj ∂yi (1.3.48) A i = Aj ∂yi ∂yj (1.3.49) A mn = Aij ∂ym ∂yi ∂yn ∂yj (1.3.50) Amn = Aij ∂yi ∂ym ∂yj ∂yn (1.3.51) A m n = A i j ∂ym ∂yi ∂yj ∂yn (1.3.52) For the special case of Cartesian tensors we assume that yi and yi, i = 1, 2, 3 are linearly independent. We differentiate the equations (1.3.46) and (1.3.47) and find ∂yi ∂yk = 7ij ∂yj ∂yk = 7ijδjk = 7ik, and ∂yk ∂ym = 7ik ∂yi ∂ym = 7ikδim = 7mk. Substituting these derivatives into the transformation equations (1.3.48) through (1.3.52) we produce the transformation equations Ai = Aj7ji A i = Aj7ji A mn = Aij7im7jn Amn = Aij7im7jn A m n = A i j7im7jn. 90 It is readily verified that the reciprocal basis is E1 = γ ê1 − β ê2, E2 = α ê2, E3 = ê3. Consider the problem of representing the vector A = Ax ê1 +Ay ê2 in the contravariant vector form A = A1 E1 +A2 E2 or tensor form Ai, i = 1, 2. This vector has the contravariant components A1 = A · E1 = γAx − βAy and A2 = A · E2 = αAy . Alternatively, this same vector can be represented as the covariant vector A = A1 E1 +A2 E2 which has the tensor form Ai, i = 1, 2. The covariant components are found from the relations A1 = A · E1 = αAx A2 = A · E2 = βAx + γAy. The physical components of A in the directions E1 and E2 are found to be: A · E1 | E1| = A1 | E1| = γAx − βAy√ γ2 + β2 = A(1) A · E2 | E2| = A2 | E2| = αAy α = Ay = A(2). Note that these same results are obtained from the dot product relations using either form of the vector A. For example, we can write A · E1 | E1| = A1( E1 · E1) +A2( E2 · E1) | E1| = A(1) and A · E2 | E2| = A1( E1 · E2) +A2( E2 · E2) | E2| = A(2). In general, the physical components of a vector A in a direction of a unit vector λi is the generalized dot product in VN . This dot product is an invariant and can be expressed gijA iλj = Aiλi = Aiλi = projection of A in direction of λi 91 Physical Components For Orthogonal Coordinates In orthogonal coordinates observe the element of arc length squared in V3 is ds2 = gijdxidxj = (h1)2(dx1)2 + (h2)2(dx2)2 + (h3)2(dx3)2 where gij =  (h1)2 0 00 (h2)2 0 0 0 (h3)2  . (1.3.60) In this case the curvilinear coordinates are orthogonal and h2(i) = g(i)(i) i not summed and gij = 0, i = j. At an arbitrary point in this coordinate system we take λi, i = 1, 2, 3 as a unit vector in the direction of the coordinate x1. We then obtain λ1 = dx1 ds , λ2 = 0, λ3 = 0. 
This is a unit vector since 1 = gijλiλj = g11λ1λ1 = h21(λ 1)2 or λ1 = 1h1 . Here the curvilinear coordinate system is orthogonal and in this case the physical component of a vector Ai, in the direction xi, is the projection of Ai on λi in V3. The projection in the x1 direction is determined from A(1) = gijAiλj = g11A1λ1 = h21A 1 1 h1 = h1A1. Similarly, we choose unit vectors µi and νi, i = 1, 2, 3 in the x2 and x3 directions. These unit vectors can be represented µ1 =0, ν1 =0, µ2 = dx2 ds = 1 h2 , ν2 =0, µ3 =0 ν3 = dx3 ds = 1 h3 and the physical components of the vector Ai in these directions are calculated as A(2) = h2A2 and A(3) = h3A3. In summary, we can say that in an orthogonal coordinate system the physical components of a contravariant tensor of order one can be determined from the equations A(i) = h(i)A(i) = √ g(i)(i)A (i), i = 1, 2 or 3 no summation on i, which is a short hand notation for the physical components (h1A1, h2A2, h3A3). In an orthogonal coordinate system the nonzero conjugate metric components are g(i)(i) = 1 g(i)(i) , i = 1, 2, or 3 no summation on i. 92 These components are needed to calculate the physical components associated with a covariant tensor of order one. For example, in the x1−direction, we have the covariant components λ1 = g11λ1 = h21 1 h1 = h1, λ2 = 0, λ3 = 0 and consequently the projection in V3 can be represented gijA iλj = gijAigjmλm = Ajgjmλm = A1λ1g11 = A1h1 1 h21 = A1 h1 = A(1). In a similar manner we calculate the relations A(2) = A2 h2 and A(3) = A3 h3 for the other physical components in the directions x2 and x3. These physical components can be represented in the short hand notation A(i) = A(i) h(i) = A(i)√ g(i)(i) , i = 1, 2 or 3 no summation on i. In an orthogonal coordinate system the physical components associated with both the contravariant and covariant components are the same. To show this we note that when Aigij = Aj is summed on i we obtain A1g1j +A2g2j +A3g3j = Aj . Since gij = 0 for i = j this equation reduces to A(i)g(i)(i) = A(i), i not summed. Another form for this equation is A(i) = A(i)√g(i)(i) = A(i)√ g(i)(i) i not summed, which demonstrates that the physical components associated with the contravariant and covariant compo- nents are identical. NOTATION The physical components are sometimes expressed by symbols with subscripts which represent the coordinate curve along which the projection is taken. For example, let H i denote the contravariant components of a first order tensor. The following are some examples of the representation of the physical components of H i in various coordinate systems: orthogonal coordinate tensor physical coordinates system components components general (x1, x2, x3) Hi H(1), H(2), H(3) rectangular (x, y, z) H i Hx, Hy, Hz cylindrical (r, θ, z) H i Hr, Hθ, Hz spherical (ρ, θ, φ) H i Hρ, Hθ, Hφ general (u, v, w) H i Hu, Hv, Hw
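The relations A(i) = h(i)A^(i) = A(i)/h(i) (no summation on i) for orthogonal coordinates can be verified numerically. The sketch below is illustrative only; it assumes cylindrical coordinates with scale factors h1 = 1, h2 = r, h3 = 1 and arbitrary sample components, computes the physical components from both the contravariant and the covariant components, and confirms that the two results agree.

    import numpy as np

    # Cylindrical coordinates (r, theta, z): scale factors h_1 = 1, h_2 = r, h_3 = 1
    r = 2.0
    h = np.array([1.0, r, 1.0])
    g = np.diag(h**2)                        # metric components g_ij

    # Sample contravariant components A^i of a vector
    A_contra = np.array([3.0, 0.5, -1.0])
    A_cov = g @ A_contra                     # covariant components A_i = g_ij A^j

    # Physical components computed two ways (no summation on i)
    phys_from_contra = h * A_contra          # A(i) = h_(i) A^(i)
    phys_from_cov = A_cov / h                # A(i) = A_(i) / h_(i)

    print(np.allclose(phys_from_contra, phys_from_cov))   # True
    # Here the physical components (Hr, Htheta, Hz) = (3.0, 1.0, -1.0)

The agreement reflects the fact that in an orthogonal system the conjugate metric components are simply the reciprocals of the metric components, so raising or lowering an index only rescales each component by the corresponding scale factor.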