
Table of contents:
1. Groups
2. Subgroups
3. Cosets
4. Cyclic groups
5. Permutation groups
6. Conjugation in Sn
7. Isomorphisms
8. Homomorphisms and kernels
9. Quotient Groups
10. The isomorphism theorems
11. The Alternating Groups
12. Presentations and Groups of small order
13. Sylow Theorems and applications
14. Rings
15. Basic Properties of Rings
16. Ring Homomorphisms and Ideals
17. Field of fractions
18. Prime and Maximal Ideals
19. Special Domains
20. Euclidean Domains
21. Polynomial rings
22. A quick primality test
23. Group actions and automorphisms

1. Groups

Suppose that we take an equilateral triangle and look at its symmetry group. There are two obvious sets of symmetries. First one can rotate the triangle through 120°. Suppose that we choose clockwise as the positive direction and denote rotation through 120° by R. It is natural to represent rotation through 240° as R2, where we think of R2 as the effect of applying R twice. If we apply R three times, represented by R3, we would be back where we started. In other words we ought to include the trivial symmetry I as a symmetry of the triangle (in just the same way that we think of zero as being a number). Note that rotation through 120° anticlockwise could be represented as R−1. Of course this is the same as rotation through 240° clockwise, so that R−1 = R2.

The other obvious set of symmetries consists of flips. For example one can draw a vertical line through the top corner and flip about this line. Call this operation F = F1. Note that F 2 = I, representing the fact that flipping twice does nothing. There are two other axes to flip about, corresponding to the fact that there are three corners. Putting all this together, we have the six symmetries pictured in Figure 1.

Figure 1. Symmetries of an equilateral triangle

The set of symmetries we have created so far is then equal to {I, R, R2, F1, F2, F3}. Is this all? The answer is yes, and it is easy to see this once one notices the following fact: any symmetry is determined by its action on the vertices of the triangle. In fact a triangle is determined by its vertices, so this is clear. Label the vertices A, B and C, where A starts at the top, B is the bottom right, and C is the bottom left.

MIT OCW: 18.703 Modern Algebra

Prof. James McKernan

Now in total there are at most six different permutations of the letters A, B and C. We have already given six different symmetries, so we must in fact have exhausted the list of symmetries.

Note that given any two symmetries, we can always consider what happens when we apply first one symmetry and then another. However note that the notation RF is ambiguous. Should we apply R first and then F, or F first and then R? We will adopt the convention that RF means first apply F and then apply R. Now RF is a symmetry of the triangle and we have listed all of them. Which one is it? Well the action of RF on the vertices will take

A −→ A −→ B
B −→ C −→ A
C −→ B −→ C.

In total then A is sent to B, B is sent to A and C is sent to C. As this symmetry fixes one of the vertices, it must be a flip. In fact it is equal to F3. Let us now compute the symmetry FR. Well the action on the vertices is as follows:

A −→ B −→ C
B −→ C −→ B
C −→ A −→ A.

So in total the action on the vertices is given as A goes to C, B goes to B and C goes to A. Again this symmetry fixes the vertex B and so it is equal to F2. Thus

F3 = RF ≠ FR = F2.

Let us step back a minute and consider what (algebraic) structure these examples give us. We are given a set (the set of symmetries) and an operation on this set, that is a rule that tells us how to multiply (in a formal sense) any two elements. We have an identity (the symmetry that does nothing). As this symmetry does nothing, composing with this symmetry does nothing (just as multiplying by the number one does nothing). Finally, given any symmetry there is an inverse symmetry which undoes the action of the symmetry (R represents rotation through 120° clockwise, and R−1 represents rotation through 120° anticlockwise, thus undoing the action of R).
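These composition computations can be checked mechanically by encoding each symmetry as a permutation of the vertices. The following is an illustrative sketch (the tuple encoding and the helper name `mult` are mine, not notation from the text); it verifies that RF = F3, that FR = F2, and hence that the two products differ:

```python
# Each symmetry is a permutation of the vertices (A, B, C) = (0, 1, 2);
# perm[i] is the image of vertex i.
I  = (0, 1, 2)   # identity
R  = (1, 2, 0)   # rotation through 120 degrees clockwise: A->B, B->C, C->A
F1 = (0, 2, 1)   # flip fixing A
F2 = (2, 1, 0)   # flip fixing B
F3 = (1, 0, 2)   # flip fixing C

def mult(g, f):
    """The product gf: first apply f, then apply g (the text's convention)."""
    return tuple(g[f[i]] for i in range(3))

print(mult(R, F1) == F3)            # True: RF = F3
print(mult(F1, R) == F2)            # True: FR = F2
print(mult(R, F1) == mult(F1, R))   # False: RF and FR differ
```

Applying `mult(R, mult(R, R))` returns the identity tuple, matching the claim that R3 = I.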


Definition 1.1. A group G is a set together with two operations (or more simply, functions), one called multiplication m: G × G −→ G and the other called the inverse i: G −→ G. These operations obey the following rules:

(1) Associativity: For every g, h and k ∈ G,

m(m(g, h), k) = m(g, m(h, k)).

(2) Identity: There is an element e in the group such that for every g ∈ G,

m(g, e) = g and m(e, g) = g.

(3) Inverse: For every g ∈ G,

m(g, i(g)) = e = m(i(g), g).

It is customary to use different (but equivalent) notation to denote the operations of multiplication and inverse. One possibility is to use the ordinary notation for multiplication, m(x, y) = xy. The inverse is then denoted i(g) = g−1.

The three rules above will then read as follows:

(1) (gh)k = g(hk).
(2) ge = g = eg.
(3) gg−1 = e = g−1g.

Another alternative is to introduce a slightly different notation for the multiplication rule, something like ∗. In this case the three rules come out as:

(1) (g ∗ h) ∗ k = g ∗ (h ∗ k).
(2) g ∗ e = g = e ∗ g.
(3) g ∗ g−1 = e = g−1 ∗ g.


The key thing to realise is that the multiplication rule need not have any relation to the more usual multiplication rule of ordinary numbers. Let us see some examples of groups.

Can we make the empty set into a group? How would we define the multiplication? Well the answer is that there is nothing to define; we just get the empty map. Is this empty map associative? The answer is yes, since there is nothing to check. Does there exist an identity? No, since the empty set does not have any elements at all. Thus there is no group whose underlying set is empty.

Now suppose that we take a set with one element, call it a. The definition of the multiplication rule is obvious. We only need to know how to multiply a with a:

m(a, a) = aa = a2 = a ∗ a = a.

Is this multiplication rule associative? Well suppose that g, h and k are three elements of G. Then g = h = k = a. We compute the LHS,

m(m(a, a), a) = m(a, a) = a.

Similarly the RHS is

m(a, m(a, a)) = m(a, a) = a.

These two are equal and so this multiplication rule is associative. Is there an identity? Well there is only one element of the group, a. We have to check that if we multiply e = a by any other element g of the group then we get back g. The only possible choice for g is a:

m(g, e) = m(a, a) = a = g, and m(e, g) = m(a, a) = a = g.

So a acts as an identity. Finally does every element have an inverse? Pick an element g of the group G. In fact g = a. The only possibility for an inverse of g is a:

m(g, g−1) = m(a, a) = a = e.

Similarly g−1g = aa = a = e. So there is a unique rule of multiplication for a set with one element, and with this law of multiplication we get a group.

Consider the set {a, b} and define a multiplication rule by

aa = a    ab = b    ba = b    bb = a.

Here a plays the role of the identity; a and b are their own inverses. It is not hard to check that associativity holds and that we therefore get a group.

To see some more examples of groups, it is first useful to prove a general result about associativity.

Lemma 1.2. Let f: A −→ B, g: B −→ C and h: C −→ D be three functions. Then

h ◦ (g ◦ f) = (h ◦ g) ◦ f.

Proof. Both the LHS and RHS are functions from A to D. To prove that two such functions are equal, it suffices to prove that they give the same value when applied to any element a ∈ A. Now

(h ◦ (g ◦ f))(a) = h((g ◦ f)(a)) = h(g(f(a))).

Similarly

((h ◦ g) ◦ f)(a) = (h ◦ g)(f(a)) = h(g(f(a))).

□
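For multiplication tables as small as the ones above, the group axioms can simply be checked by brute force. A minimal sketch (the function `is_group` and the dictionary encoding of the table are my own illustrative choices, not from the text):

```python
from itertools import product

def is_group(elements, m):
    """Brute-force check of associativity, identity and inverses
    for a finite multiplication table m: (x, y) -> xy."""
    elements = list(elements)
    # associativity: (gh)k = g(hk) for all triples
    if any(m[m[g, h], k] != m[g, m[h, k]]
           for g, h, k in product(elements, repeat=3)):
        return False
    # identity: some e with eg = g = ge for all g
    ids = [e for e in elements
           if all(m[e, g] == g and m[g, e] == g for g in elements)]
    if not ids:
        return False
    e = ids[0]
    # inverses: every g has some h with gh = e = hg
    return all(any(m[g, h] == e and m[h, g] == e for h in elements)
               for g in elements)

# the two-element table from the text: a is the identity
table = {('a', 'a'): 'a', ('a', 'b'): 'b', ('b', 'a'): 'b', ('b', 'b'): 'a'}
print(is_group({'a', 'b'}, table))   # True
```

Running it on the two-element table confirms the claim above that {a, b} with this multiplication is a group.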

The set {I, R, R2, F1, F2, F3} is a group, where the multiplication rule is composition of symmetries. Any symmetry can be interpreted as a function R2 −→ R2, and composition of symmetries is just composition of functions. Thus this rule of multiplication is associative by (1.2). I plays the role of an identity. Since we can undo any symmetry, every element of the group has an inverse.

Definition 1.3. The dihedral group Dn of order 2n is the group of symmetries of a regular n-gon.

With this notation, D3 is the group above, the set of symmetries of an equilateral triangle. The same proof as above shows that Dn is a group.

Definition 1.4. We say that a group G is abelian if for every g and h in G, gh = hg.

The groups with one or two elements above are abelian. However D3, as we have already seen, is not abelian. Thus not every group is abelian.

Consider the set of whole numbers W = {1, 2, . . . } under addition. Is this a group?


Lemma 1.5. Addition and multiplication of complex numbers are associative.

Proof. Well-known.

□

So addition of whole numbers is certainly associative. Is there an identity? No. So W is not a group under addition, since there is no identity.

How about if we enlarge this set by adding 0, to get the set of natural numbers N? In this case there is an identity, but there are no inverses. For example 1 has no inverse, since if you add a non-negative number to 1 you get something at least one.

On the other hand (Z, +) is a group under addition, where Z is the set of integers. Similarly Q, R and C are all groups under addition.

How about under multiplication? First consider Z. Multiplication is associative, and there is an identity, one. However not every element has an inverse. For example, 2 does not have an inverse. What about Q under multiplication? Associativity is okay. Again one plays the role of the identity and it looks like every element has an inverse. Well not quite, since 0 has no inverse. Once one removes zero to get Q∗, we do get a group under multiplication. Similarly R∗ and C∗ are groups under multiplication. All of these groups are abelian.

We can create some more interesting groups using these examples. Let Mm,n(C) denote the set of m × n matrices with entries in C. The multiplication rule is addition of matrices (that is, add corresponding entries). This operation is certainly associative, as can be checked entry by entry. The zero matrix (that is, the matrix with zeroes everywhere) plays the role of the identity. Given a matrix A, the inverse matrix is −A, that is, the matrix obtained by changing the sign of every entry. Thus Mm,n(C) is a group under addition, which is easily seen to be abelian. We can then replace the complex numbers by the reals, rationals or integers.

GLn(C) denotes the set of n × n matrices with non-zero determinant. Multiplication is simply matrix multiplication. We check that this is a group.
First note that a matrix corresponds to a (linear) function Cn −→ Cn, and under this identification, matrix multiplication corresponds to composition of functions. Thus matrix multiplication is associative. The matrix with ones on the main diagonal and zeroes everywhere else is the identity matrix.


For example, if n = 2, the identity matrix is

( 1 0 )
( 0 1 ).

The inverse of a matrix is constructed using Gaussian elimination. For a 2 × 2 matrix

( a b )
( c d ),

it is easy to check that the inverse is given by

1/(ad − bc) (  d −b )
            ( −c  a ).

Note that we can replace the complex numbers by the reals or rationals. Note that D3, the group of symmetries, can be thought of as a set of six matrices. In particular matrix multiplication is not abelian.
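The 2 × 2 inverse formula can be checked directly. A sketch using exact rational arithmetic (the helper names `mul2x2` and `inv2x2` are hypothetical, not from the text; a matrix is stored as a 4-tuple of its entries):

```python
from fractions import Fraction

def mul2x2(m, n):
    """Multiply two 2x2 matrices stored as (a, b, c, d) = [[a, b], [c, d]]."""
    a, b, c, d = m
    e, f, g, h = n
    return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

def inv2x2(m):
    """Inverse of a 2x2 matrix via the 1/(ad - bc) formula in the text."""
    a, b, c, d = m
    det = Fraction(a*d - b*c)
    if det == 0:
        raise ValueError("zero determinant: not in GL2")
    return (d/det, -b/det, -c/det, a/det)

A = (1, 2, 3, 4)
I = (1, 0, 0, 1)
print(mul2x2(A, inv2x2(A)) == I)   # True
```

Using `Fraction` keeps the check exact, mirroring the fact that GL2 over the rationals is closed under inversion.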


MIT OpenCourseWare http://ocw.mit.edu

18.703 Modern Algebra Spring 2013

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

2. Subgroups

Consider the chain of inclusions of groups

Z ⊂ Q ⊂ R ⊂ C,

where the law of multiplication is ordinary addition. Then each subset is a group, and the group laws are obviously compatible. That is to say, if you want to add two integers together, it does not matter whether you consider them as integers, rational numbers, real numbers or complex numbers; the sum always comes out the same.

Definition 2.1. Let G be a group and let H be a subset of G. We say that H is a subgroup of G if the restriction to H of the rule of multiplication and inverse makes H into a group.

Notice that this definition hides a subtlety. More often than not, the restriction to H × H of m, the rule of multiplication of G, won't even define a rule of multiplication on H itself, because there is no a priori reason for the product of two elements of H to be an element of H. For example suppose that G is the set of integers under addition, and H is the set of odd numbers. Then if you take two elements of H and add them, you never get an element of H, since you will always get an even number. Similarly, the inverse of an element of H need not be an element of H. For example take H to be the set of natural numbers. Then H is closed under addition (the sum of two positive numbers is positive), but the inverse of a positive element of H does not lie in H.

Definition 2.2. Let G be a group and let S be a subset of G. We say that S is closed under multiplication if whenever a and b are in S, then the product of a and b is in S. We say that S is closed under taking inverses if whenever a is in S, then the inverse of a is in S.

For example, the set of even integers is closed under addition and taking inverses. The set of odd integers is not closed under addition (in a big way, as it were), but it is closed under inverses. The natural numbers are closed under addition, but not under inverses.

Proposition 2.3. Let G be a group and let H be a non-empty subset of G. Then H is a subgroup of G iff H is closed under multiplication and taking inverses. Furthermore, the identity element of H is the identity element of G, and the inverse of an element of H is equal to its inverse in G. If G is abelian then so is H.


Proof. If H is a subgroup of G, then H is closed under multiplication and taking inverses by definition.

So suppose that H is closed under multiplication and taking inverses. Then there is a well defined product on H. We check the axioms of a group for this product.

Associativity holds for free. Indeed to say that the multiplication on H is associative is to say that for all g, h and k ∈ H, we have (gh)k = g(hk). But g, h and k are elements of G and the associative rule holds in G. Hence equality holds above and multiplication is associative in H.

We have to show that H contains an identity element. As H is non-empty we may pick a ∈ H. As H is closed under taking inverses, a−1 ∈ H. But then e = aa−1 ∈ H as H is closed under multiplication. So e ∈ H. Clearly e acts as an identity element in H as it is an identity element in G.

Suppose that h ∈ H. Then h−1 ∈ H, as H is closed under taking inverses. But h−1 is clearly the inverse of h in H as it is the inverse in G.

Finally if G is abelian then H is abelian. The proof follows just like the proof of associativity. □

Example 2.4.
(1) The set of even integers is a subgroup of the set of integers under addition. By (2.3) it suffices to show that the even integers are closed under addition and taking inverses, which is clear.
(2) The set of natural numbers is not a subgroup of the group of integers under addition. The natural numbers are not closed under taking inverses.
(3) The set of rotations of a regular n-gon is a subgroup of the group Dn of symmetries of a regular n-gon. By (2.3) it suffices to check that the set of rotations is closed under multiplication and inverse. Both of these are obvious. For example, suppose that R1 and R2 are two rotations, one through θ radians and the other through φ. Then the product is a rotation through θ + φ. On the other hand the inverse of R1 is rotation through 2π − θ.
(4) The group Dn of symmetries of a regular n-gon is a subgroup of the group of invertible two by two matrices with entries in R. Indeed any symmetry can be interpreted as a matrix. Since we have already seen that the set of symmetries is a group, it is in fact a subgroup.


(5) The following subsets are subgroups:

Mm,n(Z) ⊂ Mm,n(Q) ⊂ Mm,n(R) ⊂ Mm,n(C).

(6) The following subsets are subgroups:

GLn(Q) ⊂ GLn(R) ⊂ GLn(C).

(7) It is interesting to enumerate the subgroups of D3. At one extreme we have D3 and at the other extreme we have {I}. Clearly the set of rotations {I, R, R2} is a subgroup. On the other hand {I, Fi} forms a subgroup as well, since Fi2 = I. Are these the only subgroups?

Suppose that H is a subgroup that contains R. Then H must contain R2 and I, since H must contain all powers of R. Similarly if H contains R2, it must contain R4 = (R2)2. But R4 = R3R = R. Suppose that in addition H contains a flip. By symmetry, we may suppose that this flip is F = F1. But RF1 = F3 and FR = F2. So then H would be equal to G.

The final possibility is that H contains two flips, say F1 and F2. Now F1R = F2, so R = F1−1F2 = F1F2. So if H contains F1 and F2 then it is forced to contain R. In this case H = G as before.

Here are some examples which are less trivial.

Definition-Lemma 2.5. Let G be a group and let g ∈ G be an element of G. The centraliser of g in G is defined to be

Cg = { h ∈ G | hg = gh }.

Then Cg is a subgroup of G.

Proof. By (2.3) it suffices to prove that Cg is closed under multiplication and taking inverses.


Suppose that h and k are two elements of Cg. We show that the product hk is an element of Cg. We have to prove that (hk)g = g(hk). Now

(hk)g = h(kg)    by associativity
      = h(gk)    as k ∈ Cg
      = (hg)k    by associativity
      = (gh)k    as h ∈ Cg
      = g(hk)    by associativity.

Thus hk ∈ Cg and Cg is closed under multiplication.

Now suppose that h ∈ Cg. We show that the inverse of h is in Cg. We have to show that h−1g = gh−1. Suppose we start with the equality hg = gh. Multiply both sides by h−1 on the left. We get h−1(hg) = h−1(gh), so that simplifying we get g = (h−1g)h. Now multiply both sides of this equality by h−1 on the right: gh−1 = (h−1g)(hh−1). Simplifying we get gh−1 = h−1g, which is what we want. Thus h−1 ∈ Cg. Thus Cg is closed under taking inverses, and Cg is a subgroup by (2.3). □

Lemma 2.6. Let G be a group and let H be a non-empty finite subset of G, closed under multiplication. Then H is a subgroup of G.

Proof. It suffices to prove that H is closed under taking inverses. Let a ∈ H. If a = e then a−1 = e and this is obviously in H. So we may assume that a ≠ e. Consider the powers of a,

a, a2, a3, . . . .

As H is closed under products, it is obviously closed under powers (by an easy induction argument). As H is finite and this is an infinite sequence, we must get some repetitions, so for some distinct positive integers m and n,

am = an.


Possibly switching m and n, we may assume m < n. Multiplying both sides by the inverse a−m of am, we get an−m = e. As a ≠ e, n − m ≠ 1. Set k = n − m − 1. Then k > 0 and b = ak ∈ H. But

ba = aka = an−m−1a = an−m = e.

Similarly ab = e. Thus b is the inverse of a. Thus H is closed under taking inverses and so H is a subgroup of G by (2.3). □
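Both (2.5) and the subgroup enumeration in Example 2.4 (7) are easy to experiment with in a small group such as D3. A sketch, again encoding symmetries as permutations of the vertices (the encoding and the names `mult` and `centraliser` are illustrative choices, not from the text):

```python
# D3 as permutations of the vertices (0, 1, 2); perm[i] is the image of i.
I, R, R2 = (0, 1, 2), (1, 2, 0), (2, 0, 1)
F1, F2, F3 = (0, 2, 1), (2, 1, 0), (1, 0, 2)
D3 = [I, R, R2, F1, F2, F3]

def mult(g, h):
    """gh: first apply h, then g."""
    return tuple(g[h[i]] for i in range(3))

def centraliser(g, G):
    """C_g = { h in G | hg = gh }, as in (2.5)."""
    return {h for h in G if mult(h, g) == mult(g, h)}

print(centraliser(R, D3) == {I, R, R2})   # True: only rotations commute with R
print(centraliser(F1, D3) == {I, F1})     # True: no other symmetry commutes with F1
```

Consistent with (2.3), each centraliser computed this way is closed under multiplication and taking inverses.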



3. Cosets

Consider the group of integers Z under addition. Let H be the subgroup of even integers. Notice that if you take the elements of H and add one, then you get all the odd elements of Z. In fact if you take the elements of H and add any odd integer, then you get all the odd elements. On the other hand, every element of Z is either odd or even, and certainly not both (by convention zero is even and not odd). That is, we can partition the elements of Z into two sets, the evens and the odds, and one part of this partition is equal to the original subset H.

Somewhat surprisingly this rather trivial example generalises to the case of an arbitrary group G and subgroup H, and in the case of finite groups imposes rather strong conditions on the size of a subgroup. To go further, we need to recall some basic facts about partitions and equivalence relations.

Definition 3.1. Let X be a set. An equivalence relation ∼ is a relation on X which is
(1) (reflexive) For every x ∈ X, x ∼ x.
(2) (symmetric) For every x and y ∈ X, if x ∼ y then y ∼ x.
(3) (transitive) For every x, y and z ∈ X, if x ∼ y and y ∼ z then x ∼ z.

Example 3.2. Let S be any set and consider the relation

a ∼ b if and only if a = b.

A moment's thought will convince the reader that this is an equivalence relation.

Let S be the set of people in this room and let

a ∼ b if and only if a and b have the same colour top.

Then ∼ is an equivalence relation.

Let S = R and

a ∼ b if and only if a ≥ b.

Then ∼ is reflexive and transitive but not symmetric. It is not an equivalence relation.

Lemma 3.3. Let G be a group and let H be a subgroup. Let ∼ be the relation on G defined by the rule

a ∼ b if and only if b−1a ∈ H.

Then ∼ is an equivalence relation.


Proof. There are three things to check.

First we check reflexivity. Suppose that a ∈ G. Then a−1a = e ∈ H, since H is a subgroup. But then a ∼ a by definition of ∼, and ∼ is reflexive.

Now we check symmetry. Suppose that a and b are elements of G and that a ∼ b. Then b−1a ∈ H. As H is closed under taking inverses, (b−1a)−1 ∈ H. But

(b−1a)−1 = a−1(b−1)−1 = a−1b.

Thus a−1b ∈ H. But then by definition b ∼ a. Thus ∼ is symmetric.

Finally we check transitivity. Suppose that a ∼ b and b ∼ c. Then b−1a ∈ H and c−1b ∈ H. As H is closed under multiplication, (c−1b)(b−1a) ∈ H. On the other hand

(c−1b)(b−1a) = c−1(bb−1)a = c−1(ea) = c−1a.

Thus c−1a ∈ H. But then a ∼ c and ∼ is transitive.

As ∼ is reflexive, symmetric and transitive, it is an equivalence relation. □

On the other hand, if we are given an equivalence relation, the natural thing to do is to look at its equivalence classes.

Definition 3.4. Let ∼ be an equivalence relation on a set X. Let a ∈ X be an element of X. The equivalence class of a is

[a] = { b ∈ X | b ∼ a }.

Example 3.5. In the examples of (3.2), the equivalence classes in the first example are the singleton sets; in the second example the equivalence classes are the colours.

Definition 3.6. Let X be a set. A partition P of X is a collection of subsets Ai, i ∈ I, such that
(1) The Ai cover X, that is, ∪i∈I Ai = X.
(2) The Ai are pairwise disjoint, that is, if i ≠ j then Ai ∩ Aj = ∅.

Lemma 3.7. Given an equivalence relation ∼ on X there is a unique associated partition of X. The elements of the partition are the equivalence classes of ∼, and vice-versa. That is, given a partition P of X we may construct


an equivalence relation ∼ on X such that the partition associated to ∼ is precisely P. Concisely, the data of an equivalence relation is the same as the data of a partition.

Proof. Suppose that ∼ is an equivalence relation. Note that x ∈ [x], as x ∼ x. Thus certainly the set of equivalence classes covers X. The only thing to check is that if two equivalence classes intersect at all, then in fact they are equal.

We first prove a weaker result. We prove that if x ∼ y then [x] = [y]. Since y ∼ x by symmetry, it suffices to prove that [x] ⊂ [y]. Suppose that a ∈ [x]. Then a ∼ x. As x ∼ y it follows that a ∼ y, by transitivity. But then a ∈ [y]. Thus [x] ⊂ [y] and by symmetry [x] = [y].

So suppose that x ∈ X and y ∈ X and that z ∈ [x] ∩ [y]. As z ∈ [x], z ∼ x. As z ∈ [y], z ∼ y. But then by what we just proved, [x] = [z] = [y]. Thus if two equivalence classes overlap, then they coincide, and we have a partition.

Now suppose that we have a partition P = { Ai | i ∈ I }. Define a relation ∼ on X by the rule x ∼ y iff x ∈ Ai and y ∈ Ai (same i, of course). That is, x and y are related iff they belong to the same part. It is straightforward to check that this is an equivalence relation, and that this process reverses the one above. Both of these things are left as an exercise to the reader. □

Example 3.8. Let X be the set of integers. Define an equivalence relation on Z by the rule x ∼ y iff x − y is even. Then the equivalence classes of this relation are the even and odd numbers.

More generally, let n be an integer, and let nZ be the subset consisting of all multiples of n,

nZ = { an | a ∈ Z }.

Since the sum of two multiples of n is a multiple of n, an + bn = (a + b)n, and the inverse of a multiple of n is a multiple of n, −(an) = (−a)n, nZ is closed under addition and inverses. Thus nZ is a subgroup of Z.
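The passage from an equivalence relation to its partition (Lemma 3.7) is easy to carry out on a finite set. A sketch (the function name `classes` is my own; it assumes the relation passed in really is an equivalence relation, so that membership in a class can be tested against any single representative):

```python
def classes(X, related):
    """Group the finite iterable X into the equivalence classes of `related`.

    Assumes `related` is reflexive, symmetric and transitive."""
    parts = []
    for x in X:
        for part in parts:
            if related(x, part[0]):   # one representative suffices
                part.append(x)
                break
        else:
            parts.append([x])
    return parts

# x ~ y iff x - y is a multiple of n, as in Example 3.8 (here n = 3)
parts = classes(range(-6, 7), lambda x, y: (x - y) % 3 == 0)
print(len(parts))   # 3 classes, namely [0], [1] and [2]
```

The three parts returned are exactly the residue classes modulo 3, and they cover the input set without overlapping, as Lemma 3.7 predicts.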


The equivalence relation corresponding to nZ becomes a ∼ b iff a − b ∈ nZ, that is, a − b is a multiple of n. There are n equivalence classes,

[0], [1], [2], [3], . . . , [n − 1].

Definition-Lemma 3.9. Let G be a group, let H be a subgroup and let ∼ be the equivalence relation defined in (3.3). Let g ∈ G. Then

[g] = gH = { gh | h ∈ H }.

gH is called a left coset of H.

Proof. Suppose that k ∈ [g]. Then k ∼ g and so g−1k ∈ H. If we set h = g−1k, then h ∈ H. But then k = gh ∈ gH. Thus [g] ⊂ gH.

Now suppose that k ∈ gH. Then k = gh for some h ∈ H. But then h = g−1k ∈ H. By definition of ∼, k ∼ g. But then k ∈ [g]. □

In the example above, we see that the left cosets are

[0] = { an | a ∈ Z }
[1] = { an + 1 | a ∈ Z }
[2] = { an + 2 | a ∈ Z }
. . .
[n − 1] = { an − 1 | a ∈ Z }.

It is interesting to see what happens in the case G = D3. Suppose we take H = {I, R, R2}. Then

[I] = H = {I, R, R2}.

Pick F1 ∉ H. Then

[F1] = F1H = {F1, F2, F3}.

Thus H partitions G into two sets, the rotations and the flips,

{{I, R, R2}, {F1, F2, F3}}.

Note that both sets have the same size.

Now suppose that we take H = {I, F1} (up to the obvious symmetries, this is the only other interesting example). In this case

[I] = IH = H = {I, F1}.

Now R is not in this equivalence class, so

[R] = RH = {R, RF1} = {R, F3}.

Finally look at the equivalence class containing R2:

[R2] = R2H = {R2, R2F1} = {R2, F2}.


The corresponding partition is

{{I, F1}, {R, F3}, {R2, F2}}.

Note that, once again, each part of the partition has the same size.

Definition 3.10. Let G be a group and let H be a subgroup. The index of H in G, denoted [G : H], is equal to the number of left cosets of H in G.

Note that even though G might be infinite, the index might still be finite. For example, suppose that G is the group of integers and let H be the subgroup of even integers. Then there are two cosets (evens and odds) and so the index is two.

We are now ready to state our first Theorem.

Theorem 3.11 (Lagrange's Theorem). Let G be a group and let H be a subgroup. Then

|H|[G : H] = |G|.

In particular if G is finite then the order of H divides the order of G.

Proof. Since G is a disjoint union of its left cosets, it suffices to prove that the cardinality of each coset is equal to the cardinality of H. Suppose that gH is a left coset of H in G. Define a map

A : H −→ gH, by sending h ∈ H to A(h) = gh.

Define a map

B : gH −→ H, by sending k ∈ gH to B(k) = g−1k.

These maps are both clearly well-defined. We show that B is the inverse of A.

We first compute B ◦ A : H −→ H. Suppose that h ∈ H. Then

(B ◦ A)(h) = B(A(h)) = B(gh) = g−1(gh) = h.

Thus B ◦ A : H −→ H is certainly the identity map. Now consider A ◦ B : gH −→ gH.


Suppose that k ∈ gH. Then

(A ◦ B)(k) = A(B(k)) = A(g−1k) = g(g−1k) = k.

Thus B is indeed the inverse of A. In particular A must be a bijection and so H and gH must have the same cardinality. □
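Lagrange's Theorem can be confirmed directly for the cosets of H = {I, F1} in D3 worked out above. A sketch (the permutation encoding and the names `mult` and `left_cosets` are illustrative choices, not from the text):

```python
# D3 as permutations of the vertices (0, 1, 2), and the subgroup H = {I, F1}
I, R, R2 = (0, 1, 2), (1, 2, 0), (2, 0, 1)
F1, F2, F3 = (0, 2, 1), (2, 1, 0), (1, 0, 2)
G = [I, R, R2, F1, F2, F3]
H = [I, F1]

def mult(g, h):
    """gh: first apply h, then g."""
    return tuple(g[h[i]] for i in range(3))

def left_cosets(G, H):
    """The distinct left cosets gH, each collected as a frozenset."""
    return {frozenset(mult(g, h) for h in H) for g in G}

cosets = left_cosets(G, H)
print(len(cosets))                      # 3, the index [G : H]
print(len(H) * len(cosets) == len(G))   # True; Lagrange: |H| [G : H] = |G|
```

Collecting each coset as a `frozenset` makes the duplicates g1H = g2H collapse automatically, which is exactly the partition property proved in (3.9).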



4. Cyclic groups

Lemma 4.1. Let G be a group and let Hi, i ∈ I, be a collection of subgroups of G. Then the intersection

H = ∩i∈I Hi

is a subgroup of G.

Proof. First note that H is non-empty, as the identity belongs to every Hi. We have to check that H is closed under products and inverses. Suppose that g and h are in H. Then g and h are in Hi for all i. But then gh ∈ Hi for all i, as Hi is closed under products. Thus gh ∈ H. Similarly, as Hi is closed under taking inverses, g−1 ∈ Hi for all i ∈ I. But then g−1 ∈ H. Thus H is indeed a subgroup. □

Definition-Lemma 4.2. Let G be a group and let S be a subset of G. The subgroup H = (S) generated by S is equal to the smallest subgroup of G that contains S.

Proof. The only thing to check is that the word smallest makes sense. Suppose that Hi, i ∈ I, is the collection of subgroups that contain S. By (4.1), the intersection H of the Hi is a subgroup of G. On the other hand H obviously contains S, and it is contained in each Hi. Thus H is the smallest subgroup that contains S. □

Lemma 4.3. Let S be a non-empty subset of G. Then the subgroup H generated by S is equal to the smallest subset of G, containing S, that is closed under taking products and inverses.

Proof. Let K be the smallest subset of G, containing S, that is closed under taking products and inverses. As H contains S and is closed under taking products and inverses, it is clear that H must contain K. On the other hand, as K is a subgroup of G (it is non-empty, as it contains S, and it is closed under products and inverses, so (2.3) applies), K must contain H. But then H = K. □

Definition 4.4. Let G be a group. We say that a subset S of G generates G if the smallest subgroup of G that contains S is G itself.

Definition 4.5. Let G be a group. We say that G is cyclic if it is generated by one element.

Let G = (a) be a cyclic group. By (4.3),

G = { ai | i ∈ Z }.


Definition 4.6. Let G be a group and let g ∈ G be an element of G. The order of g is equal to the cardinality of the subgroup generated by g.

Lemma 4.7. Let G be a finite group and let g ∈ G. Then the order of g divides the order of G.

Proof. Immediate from Lagrange's Theorem. □

Lemma 4.8. Let G be a group of prime order. Then G is cyclic.

Proof. If the order of G is one, there is nothing to prove. Otherwise pick an element g of G not equal to the identity. As g is not equal to the identity, its order is not one. As the order of g divides the order of G and this is prime, it follows that the order of g is equal to the order of G. But then G = (g) and G is cyclic. □

It is interesting to go back to the problem of classifying groups of finite order and see how these results change our picture of what is going on. Now we know that every group of order 1, 2, 3 and 5 must be cyclic.

Suppose that G has order 4. There are two cases. If G has an element a of order 4, then G is cyclic. We get the following group table:

 ∗  | e  a  a2 a3
 ----------------
 e  | e  a  a2 a3
 a  | a  a2 a3 e
 a2 | a2 a3 e  a
 a3 | a3 e  a  a2

Replacing a2 by b and a3 by c, we get:

 ∗ | e a b c
 -----------
 e | e a b c
 a | a b c e
 b | b c e a
 c | c e a b

Now suppose that G does not contain any element of order 4. Since the order of every element divides 4, the order of every element must be 1, 2 or 4. On the other hand, the only element of order 1 is the identity element. Thus if G does not have an element of order 4, then every element, other than the identity, must have order 2.


In other words, every element is its own inverse. The table so far looks like:

 ∗ | e a b c
 -----------
 e | e a b c
 a | a e ?
 b | b   e
 c | c     e

Now ? must in fact be c, simply by a process of elimination. In fact we must put c somewhere in the row that contains a, and we cannot put it in the last column, as this column already contains c. Continuing in this way, it turns out there is only one way to fill in the whole table:

 ∗ | e a b c
 -----------
 e | e a b c
 a | a e c b
 b | b c e a
 c | c b a e

So now we have a complete classification of all finite groups of order up to five (it is easy to see that there is a cyclic group of any order; just take the rotations of a regular n-gon). If the order is not four, then the only possibility is a cyclic group of that order. Otherwise the order is four and there are two possibilities. Either G is cyclic, in which case there are two elements of order 4 (a and a3) and one element of order two (a2), or G has three elements of order two. Note however that G is abelian in either case. So the first non-abelian group has order six (equal to D3).

One reason that cyclic groups are so important is that any group G contains lots of cyclic groups: the subgroups generated by the elements of G. On the other hand, cyclic groups are reasonably easy to understand. First an easy lemma about the order of an element.

Lemma 4.9. Let G be a group and let g ∈ G be an element of G. Then the order of g is the smallest positive integer k such that gk = e.

Proof. Replacing G by the subgroup (g) generated by g, we might as well assume that G is cyclic, generated by g.

Suppose that gl = e. I claim that in this case

G = { e, g, g2, g3, g4, . . . , gl−1 }.

Indeed it suffices to show that the set is closed under multiplication and taking inverses. Suppose that gi and gj are in the set. Then gigj = gi+j. If i + j < l there is nothing to prove. If i + j ≥ l, then use the fact that gl = e to

MIT OCW: 18.703 Modern Algebra

Prof. James McKernan

rewrite g i+j as g i+j−l . In this case i + j − l > 0 and less than l. So the set is closed under products. Given g i , what is its inverse? Well g l−i g i = g l = e. So g l−i is the inverse of g i . Alternatively we could simply use the fact that H is ﬁnite, to conclude that it must be closed under taking inverses. Thus |G| ≤ l and in particular |G| ≤ k. In particular if G is inﬁnite, there is no integer k such that g k = e and the order of g is inﬁnite and the smallest k such that g k = e is inﬁnity. Thus we may assume that the order of g is ﬁnite. Suppose that |G| < k. Then there must be some repetitions in the set { e, g, g 2 , g 3 , g 4 , . . . , g k−1 }. Thus g a = g b for some a = b between 0 and k − 1. Suppose that a < b. Then g b−a = e. But this contradicts the fact that k is the smallest D integer such that g k = e. Lemma 4.10. Let G be a ﬁnite group of order n and let g be an element of G. Then g n = e. Proof. We know that g k = e where k is the order of g. But k divides n. So n = km. But then g n = g km = (g k )m = em = e.

D

Lemma 4.11. Let G be a cyclic group, generated by a. Then
(1) G is abelian.
(2) If G is infinite, the elements of G are precisely
. . . , a^(−3), a^(−2), a^(−1), e, a, a^2, a^3, . . .
(3) If G is finite, of order n, then the elements of G are precisely
e, a, a^2, . . . , a^(n−2), a^(n−1),
and a^n = e.

Proof. We first prove (1). Suppose that g and h are two elements of G. As G is generated by a, there are integers m and n such that g = a^m and h = a^n. Then
gh = a^m a^n = a^(m+n) = a^(n+m) = hg.
Thus G is abelian. Hence (1).
(2) and (3) follow from (4.9). □

Note that we can easily write down a cyclic group of order n. The group of rotations of an n-gon forms a cyclic group of order n. Indeed any rotation may be expressed as a power of a rotation R through 2π/n. On the other hand, R^n = I.

However there is another way to write down a cyclic group of order n. Suppose that one takes the integers Z. Look at the subgroup nZ. Then we get equivalence classes modulo n, the left cosets
[0], [1], [2], [3], . . . , [n − 1].
I claim that this is a group, with a natural method of addition. In fact I define
[a] + [b] = [a + b],
in the obvious way. However we need to check that this is well-defined. The problem is that the notation [a] is somewhat ambiguous, in the sense that there are infinitely many numbers a' such that [a'] = [a]. In other words, if the difference a' − a is a multiple of n, then a and a' represent the same equivalence class.

For example, suppose that n = 3. Then [1] = [4] and [5] = [−1]. So there are two ways to calculate [1] + [5]. One way is to add 1 and 5 and take the equivalence class,
[1] + [5] = [6].
On the other hand we could compute
[1] + [5] = [4] + [−1] = [3].
Of course [6] = [3] = [0], so we are okay.

So now suppose that a' is equal to a modulo n and b' is equal to b modulo n. This means a' = a + pn and b' = b + qn, where p and q are integers. Then
a' + b' = (a + pn) + (b + qn) = (a + b) + (p + q)n.
So we are okay,
[a + b] = [a' + b'],
and addition is well-defined. The set of left cosets with this law of addition is denoted Z/nZ, the integers modulo n.

Is this a group? Well, associativity comes for free: as ordinary addition is associative, so is addition in the integers modulo n. [0] obviously plays the role of the identity. That is,
[a] + [0] = [a + 0] = [a].
Finally inverses obviously exist. Given [a], consider [−a]. Then
[a] + [−a] = [a − a] = [0].
Note that this group is abelian. In fact it is clear that it is generated by [1], as 1 generates the integers Z.

How about the integers modulo n under multiplication? There is an obvious choice of multiplication,
[a] · [b] = [a · b].
Once again we need to check that this is well-defined. Exercise left for the reader. Do we get a group? Again associativity is easy, and [1] plays the role of the identity. Unfortunately, inverses don't exist. For example [0] does not have an inverse. The obvious thing to do is throw away zero. But even then there is a problem. For example, take the integers modulo 4. Then
[2] · [2] = [4] = [0].
So if you throw away [0] then you have to throw away [2]. In fact given n, you should throw away all those integers that are not coprime to n, at the very least. In fact this is enough.

Definition-Lemma 4.12. Let n be a positive integer. The group of units Un for the integers modulo n is the subset of Z/nZ of classes of integers coprime to n, under multiplication.

Proof. We check that Un is a group. First we need to check that Un is closed under multiplication. Suppose that [a] ∈ Un and [b] ∈ Un. Then a and b are coprime to n. This means that if a prime p divides n, then it does not divide a or b. But then p does not divide ab. As this is true for all primes that divide n, it follows that ab is coprime to n. But then [ab] ∈ Un. Hence multiplication is well-defined.


This rule of multiplication is clearly associative. Indeed suppose that [a], [b] and [c] ∈ Un. Then
([a] · [b]) · [c] = [ab] · [c] = [(ab)c] = [a(bc)] = [a] · [bc] = [a] · ([b] · [c]).
So multiplication is associative. Now 1 is coprime to n. But then [1] ∈ Un and this clearly plays the role of the identity. Now suppose that [a] ∈ Un. We need to find an inverse of [a]. We want an integer b such that [ab] = [1]. This means that
ab + mn = 1,
for some integer m. But a and n are coprime. So by Euclid's algorithm, such integers exist. □
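The appeal to Euclid's algorithm at the end of the proof can be illustrated computationally. In this sketch (the helper units is mine, not from the text), Python's built-in three-argument pow with exponent −1 computes the modular inverse via the extended Euclidean algorithm:

```python
from math import gcd

# List the elements of U_n and compute the inverse of each one.
def units(n):
    return [a for a in range(1, n) if gcd(a, n) == 1]

n = 12
U = units(n)
# pow(a, -1, n) (Python 3.8+) finds b with a*b = 1 mod n.
inverses = {a: pow(a, -1, n) for a in U}
print(U)         # [1, 5, 7, 11]
print(inverses)  # in U_12 every element happens to be its own inverse
```

Trying pow(2, -1, 12) instead raises a ValueError, precisely because 2 is not coprime to 12 and so has no inverse, matching the discussion above.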

Definition 4.13. The Euler φ function is the function φ(n) which assigns to n the order of Un.

Lemma 4.14. Let a be any integer which is coprime to the positive integer n. Then
a^φ(n) = 1 mod n.

Proof. Let g = [a] ∈ Un. By (4.10), g^φ(n) = e. But then [a^φ(n)] = [1]. Thus
a^φ(n) = 1 mod n. □

Given this, it would be really nice to have a quick way to compute φ(n).

Lemma 4.15. The Euler φ function is multiplicative. That is, if m and n are coprime positive integers, then
φ(mn) = φ(m)φ(n).

Proof. We will prove this later in the course. □


Given (4.15), and the fact that any number can be factored into prime powers, it suffices to compute φ(p^k), where p is prime and k is a positive integer. Consider first φ(p). Well, every number between 1 and p − 1 is automatically coprime to p. So
φ(p) = p − 1.

Theorem 4.16. (Fermat's Little Theorem) Let p be a prime and let a be any integer. Then
a^p = a mod p.
In particular
a^(p−1) = 1 mod p,
if a is coprime to p.

Proof. Follows from (4.14). □

How about φ(p^k)? Let us do an easy example. Suppose we take p = 3, k = 2. Then of the eight numbers between 1 and 8, two are multiples of 3, namely 3 and 6 = 2 · 3. More generally, if a number between 1 and p^k − 1 is not coprime to p, then it is a multiple of p. But there are p^(k−1) − 1 such multiples,
1 · p, 2p, 3p, . . . , (p^(k−1) − 1)p.
Thus
(p^k − 1) − (p^(k−1) − 1) = p^k − p^(k−1)
numbers between 1 and p^k are coprime to p. We have proved

Lemma 4.17. Let p be a prime number. Then
φ(p^k) = p^k − p^(k−1).

Example 4.18. What is the order of U5000? Well
5000 = 5 · 1000 = 5 · 10^3 = 5^4 · 2^3.
Now
φ(2^3) = 2^3 − 2^2 = 4,
and
φ(5^4) = 5^4 − 5^3 = 5^3 · 4 = 125 · 4.
As the Euler φ function is multiplicative, we get
φ(5000) = 4 · 4 · 125 = 2^4 · 5^3 = 2000.

It is also interesting to see what sort of groups one gets. For example, what is U6? Well
φ(6) = φ(2)φ(3) = 1 · 2 = 2.
Thus we get a cyclic group of order 2. In fact 1 and 5 are the only numbers coprime to 6, and
5^2 = 25 = 1 mod 6.

How about U8? Well φ(8) = 4. So U8 is either cyclic of order 4, or every element has order 2. Now 1, 3, 5 and 7 are the numbers coprime to 8, and
3^2 = 9 = 1 mod 8,
5^2 = 25 = 1 mod 8,
7^2 = 49 = 1 mod 8.
So [3]^2 = [5]^2 = [7]^2 = [1] and every element of U8, other than the identity, has order two. But then U8 cannot be cyclic.
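These computations are easy to mechanise. The sketch below (the helper names phi_count and phi_formula are mine) computes φ both by brute-force counting and via Lemma 4.17 together with multiplicativity, and re-checks Example 4.18 and the claim about U8:

```python
from math import gcd

# Brute force: count the integers in [1, n] coprime to n.
def phi_count(n):
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

# Via Lemma 4.17: factor n and multiply the contributions p^k - p^(k-1).
def phi_formula(n):
    result, p, m = 1, 2, n
    while p * p <= m:
        if m % p == 0:
            pk = 1
            while m % p == 0:
                m //= p
                pk *= p
            result *= pk - pk // p   # p^k - p^(k-1)
        p += 1
    if m > 1:
        result *= m - 1              # leftover prime factor, phi(p) = p - 1
    return result

print(phi_count(9))        # 6, matching the p = 3, k = 2 example
print(phi_formula(5000))   # 2000, as in Example 4.18
# U_8 is not cyclic: every element squares to 1 mod 8.
print(all(pow(a, 2, 8) == 1 for a in [1, 3, 5, 7]))  # True
```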


MIT OpenCourseWare http://ocw.mit.edu

18.703 Modern Algebra Spring 2013

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

5. Permutation groups

Definition 5.1. Let S be a set. A permutation of S is simply a bijection f : S −→ S.

Lemma 5.2. Let S be a set.
(1) Let f and g be two permutations of S. Then the composition of f and g is a permutation of S.
(2) Let f be a permutation of S. Then the inverse of f is a permutation of S.

Proof. Well-known. □

Lemma 5.3. Let S be a set. The set of all permutations of S, under the operation of composition, forms a group A(S).

Proof. (5.2) implies that the set of permutations is closed under composition of functions. We check the three axioms for a group.
We already proved that composition of functions is associative.
Let i : S −→ S be the identity function from S to S. Let f be a permutation of S. Clearly f ◦ i = i ◦ f = f. Thus i acts as an identity.
Let f be a permutation of S. Then the inverse g of f is a permutation of S by (5.2) and f ◦ g = g ◦ f = i, by definition. Thus inverses exist and A(S) is a group. □

Lemma 5.4. Let S be a finite set with n elements. Then A(S) has n! elements.

Proof. Well-known. □

Definition 5.5. The group Sn is the set of permutations of the first n natural numbers.

We want a convenient way to represent an element of Sn. The first way is to write an element σ of Sn as a matrix, with the image of each point listed below it. For example
σ = ( 1 2 3 4 5 )
    ( 3 1 5 4 2 )  ∈ S5.
Thus, for example, σ(3) = 5. With this notation it is easy to write down products and inverses. For example suppose that
σ = ( 1 2 3 4 5 )
    ( 3 1 5 4 2 )
and
τ = ( 1 2 3 4 5 )
    ( 4 3 1 2 5 ).
Then
τσ = ( 1 2 3 4 5 )
     ( 1 4 5 2 3 ).
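Products like this can be checked mechanically. In the sketch below (the tuple convention and the helper compose are mine, not from the text), a permutation in S5 is the tuple of images of 1, . . . , 5, and compose(f, g) applies g first, matching (τσ)(i) = τ(σ(i)):

```python
# One-line form: perm[i - 1] is the image of i.
sigma = (3, 1, 5, 4, 2)
tau   = (4, 3, 1, 2, 5)

def compose(f, g):
    """Return the product f*g, i.e. the map i -> f(g(i))."""
    return tuple(f[g[i] - 1] for i in range(len(g)))

print(compose(tau, sigma))  # (1, 4, 5, 2, 3), the product tau*sigma above
```

The same helper computes στ, or any other product, by swapping the arguments.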


On the other hand
στ = ( 1 2 3 4 5 )
     ( 4 5 3 1 2 ).
In particular S5 is not abelian.

The problem with this way of representing elements of Sn is that we don't see much of the structure of τ this way. For example, it is very hard to figure out the order of τ from this representation.

Definition 5.6. Let τ be an element of Sn. We say that τ is a k-cycle if there are integers a1, a2, . . . , ak such that τ(a1) = a2, τ(a2) = a3, . . . , and τ(ak) = a1, and τ fixes every other integer. More compactly,
τ(a_i) = a_(i+1)  if i < k,
τ(a_k) = a_1,
τ(j)   = j        otherwise.

For example
( 1 2 3 4 )
( 2 3 4 1 )
is a 4-cycle in S4 and
( 1 2 3 4 5 )
( 1 5 3 2 4 )
is a 3-cycle in S5.

Now given a k-cycle τ, there is an obvious way to represent it, which is much more compact than the first notation:
τ = (a1, a2, a3, . . . , ak).
Thus the two examples above become (1, 2, 3, 4) and (2, 5, 4). Note that there is some redundancy in this notation. For example, obviously
(2, 5, 4) = (5, 4, 2) = (4, 2, 5).
Note that a k-cycle has order k.

Definition-Lemma 5.7. Let σ be any element of Sn. Then σ may be expressed as a product of disjoint cycles. This factorisation is unique, ignoring 1-cycles, up to order.
The cycle type of σ is the list of lengths of the corresponding cycles.


Proof. We first prove the existence of such a decomposition. Let a_1 = 1 and define a_(i+1) recursively by the formula a_(i+1) = σ(a_i). Consider the set
{ a_i | i ∈ N }.
As there are only finitely many integers between 1 and n, we must have some repetitions, so that a_i = a_j for some i < j. Pick the smallest i and j for which this happens. Suppose that i ≠ 1. Then
σ(a_(i−1)) = a_i = σ(a_(j−1)).
As σ is injective, a_(i−1) = a_(j−1). But this contradicts our choice of i and j. Thus i = 1. Let τ be the cycle (a_1, a_2, . . . , a_(j−1)). Then ρ = στ^(−1) fixes each element of the set { a_i | i < j }. Thus, by an obvious induction, we may assume that ρ is a product of disjoint cycles τ1, τ2, . . . , τ_(k−1) which fix this set. But then
σ = ρτ = τ1 τ2 . . . τk,
where τ = τk.

Now we prove uniqueness. Suppose that
σ = σ1 σ2 . . . σk   and   σ = τ1 τ2 . . . τl
are two factorisations of σ into disjoint cycles. Suppose that σ1(i) = j ≠ i. Then for some p, τp(i) ≠ i. By disjointness, in fact τp(i) = j. Now consider σ1(j). By the same reasoning, τp(j) = σ1(j). Continuing in this way, we get σ1 = τp. But then just cancel these terms from both sides and continue by induction. □

Example 5.8. Let
σ = ( 1 2 3 4 5 )
    ( 3 4 1 5 2 ).
Look at 1. 1 is sent to 3, and 3 is sent back to 1. Thus part of the cycle decomposition is given by the transposition (1, 3). Now look at what is left, {2, 4, 5}. Look at 2. Then 2 is sent to 4, 4 is sent to 5, and finally 5 is sent to 2. So another part of the cycle decomposition is given by the 3-cycle (2, 4, 5). I claim then that
σ = (1, 3)(2, 4, 5) = (2, 4, 5)(1, 3).
This is easy to check. The cycle type is (2, 3).

As promised, it is easy to compute the order of a permutation, given its cycle type.

Lemma 5.9. Let σ ∈ Sn be a permutation, with cycle type (k1, k2, . . . , kl). Then the order of σ is the least common multiple of k1, k2, . . . , kl.
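Before proving the lemma, note that the orbit-walking procedure of Example 5.8 is exactly the algorithm from the existence half of the proof of (5.7). A sketch (the helper cycles and the one-line tuple convention are mine, not from the text):

```python
# Decompose a permutation (one-line form: perm[i - 1] is the image of i)
# into disjoint cycles by walking each orbit until it closes up.
def cycles(perm):
    n = len(perm)
    seen, result = set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        cycle, i = [], start
        while i not in cycle:
            cycle.append(i)
            seen.add(i)
            i = perm[i - 1]
        if len(cycle) > 1:        # ignore 1-cycles, as in the text
            result.append(tuple(cycle))
    return result

sigma = (3, 4, 1, 5, 2)           # the sigma of Example 5.8
print(cycles(sigma))              # [(1, 3), (2, 4, 5)], cycle type (2, 3)
```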


Proof. Let k be the order of σ and let σ = τ1 τ2 . . . τl be the decomposition of σ into disjoint cycles of lengths k1, k2, . . . , kl. Pick any integer h. As τ1, τ2, . . . , τl are disjoint, it follows that
σ^h = τ1^h τ2^h . . . τl^h.
Moreover the RHS is equal to the identity iff each individual term is equal to the identity. It follows that (τi)^k = e. In particular ki divides k. Thus the least common multiple m of k1, k2, . . . , kl divides k. But
σ^m = τ1^m τ2^m τ3^m . . . τl^m = e.
Thus k divides m, and so k = m. □

Note that (5.7) implies that the cycles generate Sn. It is a natural question to ask if there is a smaller subset which generates Sn. In fact the 2-cycles generate.

Lemma 5.10. The transpositions generate Sn.

Proof. It suffices to prove that every permutation is a product of transpositions. We give two proofs of this fact.
Here is the first proof. As every permutation σ is a product of cycles, it suffices to check that every cycle is a product of transpositions. Consider the k-cycle σ = (a1, a2, . . . , ak). I claim that this is equal to
σ = (a1, ak)(a1, a_(k−1))(a1, a_(k−2)) · · · (a1, a2).
It suffices to check that both sides have the same effect on every integer j between 1 and n. Now if j is not equal to any of the a_i, there is nothing to check, as both sides fix j. Suppose that j = a_i. Then σ(j) = a_(i+1). On the other hand, reading the product on the RHS from the right, the transposition (a1, a_i) sends j to a1, the ones before it do nothing to j, and the next transposition then sends a1 to a_(i+1). None of the remaining transpositions has any effect on a_(i+1). Therefore the RHS also sends j = a_i to a_(i+1). As both sides have the same effect on j, they are equal. This completes the first proof.
To see how the second proof goes, think of a permutation as just being a rearrangement of the n numbers (like a deck of cards). If we can find a product of transpositions that sends this rearrangement back to the trivial one, then we have shown that the inverse of the corresponding permutation is a product of transpositions. Since a transposition is its own inverse, it follows that the original permutation is a product of transpositions (in fact the same product, but in the opposite order). In other words if
τk · · · τ3 · τ2 · τ1 · σ = e,

MIT OCW: 18.703 Modern Algebra

Prof. James McKernan

then, multiplying on the left by τk, τ_(k−1), . . . , τ1 in turn, we get
σ = τ1 · τ2 · τ3 · · · τk.
The idea is to put the cards back into the right position, one at a time. Suppose that the first i − 1 cards are in the right position. Suppose that the ith card is in position j. As the first i − 1 cards are in the right position, j ≥ i. We may assume that j > i, otherwise there is nothing to do. Now look at the transposition (i, j). This puts the ith card into the right position. Thus we are done by induction on i. □
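Lemma 5.9 above is easy to test numerically. The sketch below (the helpers compose and order are mine; permutations are tuples in one-line form) computes the order of the permutation from Example 5.8 by repeated composition and compares it with lcm(2, 3):

```python
from math import lcm  # Python 3.9+

def compose(f, g):
    """The product f*g, i.e. i -> f(g(i))."""
    return tuple(f[g[i] - 1] for i in range(len(g)))

def order(perm):
    """Smallest k >= 1 with perm^k equal to the identity."""
    identity = tuple(range(1, len(perm) + 1))
    p, k = perm, 1
    while p != identity:
        p, k = compose(perm, p), k + 1
    return k

sigma = (3, 4, 1, 5, 2)          # cycle type (2, 3), per Example 5.8
print(order(sigma), lcm(2, 3))   # 6 6
```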



6. Conjugation in Sn

One thing that is very easy to understand in terms of Sn is conjugation.

Definition 6.1. Let g and h be two elements of a group G. The element ghg^(−1) is called the conjugate of h by g.

One reason why conjugation is so important is that it measures how far the group G is from being abelian. Indeed if G were abelian, then gh = hg. Multiplying by g^(−1) on the right, we would have h = ghg^(−1). Thus G is abelian iff the conjugate of every element by any other element is the same element.

Another reason why conjugation is so important is that, in Sn at least, conjugation is really the same as translation, that is, relabelling.

Lemma 6.2. Let σ and τ be two elements of Sn. Suppose that
σ = (a1, a2, . . . , ak)(b1, b2, . . . , bl) . . .
is the cycle decomposition of σ. Then
(τ(a1), τ(a2), . . . , τ(ak))(τ(b1), τ(b2), . . . , τ(bl)) . . .
is the cycle decomposition of τστ^(−1), the conjugate of σ by τ.

Proof. Since both sides of the equation
τστ^(−1) = (τ(a1), τ(a2), . . . , τ(ak))(τ(b1), τ(b2), . . . , τ(bl)) . . .
are permutations, it suffices to check that both sides have the same effect on any integer j from 1 to n. As τ is surjective, j = τ(i) for some i. By symmetry, we may as well assume that j = τ(a1). Then σ(a1) = a2, and the right hand side maps τ(a1) to τ(a2). But
τστ^(−1)(τ(a1)) = τσ(a1) = τ(a2).
Thus the LHS and RHS have the same effect on j, and so they must be equal. □

In other words, to compute the conjugate of σ by τ, just translate the elements of the cycle decomposition of σ. For example suppose
σ = (3, 7, 4, 2)(1, 6, 5)


in S8 and τ is 1 2 3 4 5 6 7 8 . 3 2 5 1 8 7 6 4 Then the conjugate of σ by τ is τ στ −1 = (5, 6, 1, 2)(3, 7, 8). Now given any group G, conjugation deﬁnes an equivalence relation on G. Deﬁnition-Lemma 6.3. Let G be a group. We say that two elements a and b are conjugate, if there is a third element g ∈ G such that b = gag −1 . The corresponding relation, ∼, is an equivalence relation. Proof. We have to prove that ∼ is reﬂexive, symmetric and transitive. Suppose that a ∈ G. Then eae−1 = a so that a ∼ a. Thus ∼ is reﬂexive. Suppose that a ∈ G and b ∈ G and that a ∼ b, that is, a is conjugate to b. By deﬁnition this means that there is an element g ∈ G such that gag −1 = b. But then a = g −1 bg = hbh−1 , where h = g −1 . Thus b ∼ a and ∼ is reﬂexive. Finally suppose that a ∼ b and b ∼ c. Then there are elements g and h of G such that b = gag −1 and c = hbh−1 . Then c = hbh−1 = h(gag −1 )h−1 = (hg)a(hg)−1 = kak −1 ,

where k = gh. But then a ∼ c and ∼ is transitive. But then ∼ is an equivalence relation.

D
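Lemma 6.2 can be verified on the example above. In this sketch (the helpers compose and inverse are mine; permutations are tuples in one-line form, so σ = (3,7,4,2)(1,6,5) in S8 becomes (6, 3, 7, 2, 1, 5, 4, 8)), we check that τστ^(−1) really is (5, 6, 1, 2)(3, 7, 8):

```python
def compose(f, g):
    """The product f*g, i.e. i -> f(g(i))."""
    return tuple(f[g[i] - 1] for i in range(len(g)))

def inverse(perm):
    inv = [0] * len(perm)
    for i, j in enumerate(perm, start=1):
        inv[j - 1] = i           # perm sends i to j, so inverse sends j to i
    return tuple(inv)

sigma = (6, 3, 7, 2, 1, 5, 4, 8)   # (3,7,4,2)(1,6,5) in one-line form
tau   = (3, 2, 5, 1, 8, 7, 6, 4)

conj = compose(compose(tau, sigma), inverse(tau))
expected = (2, 5, 7, 4, 6, 1, 8, 3)  # (5,6,1,2)(3,7,8) in one-line form
print(conj == expected)  # True
```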

Definition 6.4. The equivalence classes of the equivalence relation above are called conjugacy classes.

Given an arbitrary group G, it can be quite hard to determine the conjugacy classes of G. Here is the most that can be said in general.

Lemma 6.5. Let G be a group. Then the conjugacy classes all have exactly one element iff G is abelian.

Proof. Easy exercise. □

Proposition 6.6. The conjugacy classes of the symmetric group Sn are precisely given by cycle type. That is, two permutations σ and σ' are conjugate iff they have the same cycle type.


Proof. Suppose that σ and σ' are conjugate. Then by (6.2), σ and σ' have the same cycle type.
Now suppose that σ and σ' have the same cycle type. We want to find a permutation τ that sends σ to σ'. By assumption the cycles in σ and σ' have the same lengths, so we can pick a correspondence between the cycles of σ and the cycles of σ'. Pick an integer j. Then j belongs to a cycle of σ. Look at the corresponding cycle in σ' and the corresponding entry, call it j'. Then τ should send j to j'. It is easy to check that then τστ^(−1) = σ'. □



7. Isomorphisms

Look at the groups D3 and S3. They are clearly the same group. Given a symmetry of a triangle, the natural thing to do is to look at the corresponding permutation of its vertices. On the other hand, it is not hard to show that every permutation in S3 can be realised as a symmetry of the triangle. It is very useful to have a more formal definition of what it means for two groups to be the same.

Definition 7.1. Let G and H be two groups. We say that G and H are isomorphic if there is a bijective map φ : G −→ H which respects the group structure. That is to say, for every g and h in G,
φ(gh) = φ(g)φ(h).
The map φ is called an isomorphism.

In words, you can first multiply in G and take the image in H, or you can take the images in H first and multiply there, and you will get the same answer either way. With this definition of isomorphic, it is straightforward to check that D3 and S3 are isomorphic groups.

Lemma 7.2. Let G and H be two cyclic groups of the same order. Then G and H are isomorphic.

Proof. Let a be a generator of G and let b be a generator of H. Define a map φ : G −→ H as follows. Suppose that g ∈ G. Then g = a^i for some i; send g to g' = b^i.
We first have to check that this map is well-defined. If G is infinite, then so is H, and every element of G may be uniquely represented in the form a^i. Thus the map is automatically well-defined in this case. Now suppose that G has order k, and suppose that g = a^i = a^j. We have to check that b^i = b^j. As a^i = a^j, a^(i−j) = e and k must divide i − j. In this case b^(i−j) = e, as the order of H is equal to k, and so b^i = b^j. Thus φ is well-defined.
The map H −→ G defined by sending b^i to a^i is clearly the inverse of φ. Thus φ is a bijection.
Now suppose that g = a^i and h = a^j. Then gh = a^(i+j) and the image of this element would be b^(i+j).


On the other hand, the image of a^i is b^i, the image of a^j is b^j, and the product of the images is b^i b^j = b^(i+j). □

Here is a far more non-trivial example.

Lemma 7.3. The group of real numbers under addition and the group of positive real numbers under multiplication are isomorphic.

Proof. Let G be the group of real numbers under addition and let H be the group of positive real numbers under multiplication. Define a map
φ : G −→ H
by the rule φ(x) = e^x. This map is a bijection, by well-known results of calculus. We want to check that it is a group isomorphism. Suppose that x and y ∈ G. Then adding in G, we get x + y. Applying φ we get e^(x+y). On the other hand, applying φ directly we get e^x and e^y. Multiplying these together we get
e^x e^y = e^(x+y). □

Definition 7.4. Let G be a group. An isomorphism of G with itself is called an automorphism.

Definition-Lemma 7.5. Let G be a group and let a ∈ G be an element of G. Define a map φ : G −→ G by the rule φ(x) = axa^(−1). Then φ is an automorphism of G.

Proof. We first check that φ is a bijection. Define a map ψ : G −→ G by the rule ψ(x) = a^(−1)xa. Then
ψ(φ(x)) = ψ(axa^(−1)) = a^(−1)(axa^(−1))a = (a^(−1)a)x(a^(−1)a) = x.
Thus the composition of φ and ψ is the identity. Similarly the composition of ψ and φ is the identity. In particular φ is a bijection.


Now we check that φ is an isomorphism:
φ(x)φ(y) = (axa^(−1))(aya^(−1)) = a(xy)a^(−1) = φ(xy).
Thus φ is an isomorphism. □

There is a particularly simple and easy to understand example of these types of automorphisms. Let us go back to the case of D3. Choosing a labelling of the vertices is somewhat arbitrary. A different choice of labelling corresponds to a permutation of the numbers 1, 2 and 3. These permutations induce automorphisms of S3, given by conjugation by the given permutation.

Theorem 7.6. (Cayley's Theorem) Let G be a group. Then G is isomorphic to a subgroup of a permutation group. If moreover G is finite, then so is the permutation group, so that every finite group is a subgroup of Sn, for some n.

Proof. Let H = A(G), the group of permutations of the set G. Define a map φ : G −→ H by the following rule. Given a ∈ G, send it to the permutation σ = φ(a),
σ : G −→ G,
defined by
σ(g) = ag,
for any g ∈ G. Note that σ is indeed a permutation, that is, σ is a bijection. In fact the inverse of σ is the map that sends g to a^(−1)g.
I claim that φ is an isomorphism onto its image. We first check that φ is an injection. Suppose that a and b are two elements of G. Let σ and τ be the two corresponding elements of A(G). If σ = τ, then σ and τ must have the same effect on elements of G. Look at their effect on e, the identity:
a = ae = σ(e) = τ(e) = be = b.
Thus φ(a) = φ(b) implies a = b, and φ is injective. Thus φ is certainly a bijection onto its image.
Now we check that φ(ab) = φ(a)φ(b). Suppose that σ = φ(a), τ = φ(b) and ρ = φ(ab). We want to check that ρ = στ. This is an equation that involves permutations, so it is


enough to check that both sides have the same effect on elements of G. Let g ∈ G. Then
σ(τ(g)) = σ(bg) = a(bg) = (ab)g = ρ(g).
Thus φ is an isomorphism onto its image. □
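The map in the proof is easy to compute explicitly. As a sketch (the helper left_mult_perm and the choice of example are mine), here is the Cayley embedding for the cyclic group Z/4Z under addition, listing left multiplication by each element as a permutation of positions 1 to 4:

```python
# Cayley embedding of Z/4Z: element g becomes the permutation
# h -> g + h (mod 4) of the underlying set, written in one-line form
# over the fixed ordering of elements below.
elements = [0, 1, 2, 3]

def left_mult_perm(g):
    return tuple(elements.index((g + h) % 4) + 1 for h in elements)

perms = [left_mult_perm(g) for g in elements]
print(perms)
# [(1, 2, 3, 4), (2, 3, 4, 1), (3, 4, 1, 2), (4, 1, 2, 3)]
```

The four images are distinct (so the map is injective, as the proof requires), and each non-identity image is a power of the 4-cycle (1, 2, 3, 4), exhibiting Z/4Z inside S4.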

In practice Cayley's Theorem is not in itself very useful. For example, if G = D3 then G is isomorphic to S3. But if we were to apply the machinery behind Cayley's Theorem, we would exhibit G as a subgroup of S6, a group of order 6! = 720.

One exception to this is the example of trying to construct a group G of order 4. We have already shown that there are at most two groups of order four, up to isomorphism. One is cyclic of order 4. The multiplication table of the other, if it is indeed a group, we decided was

∗ | e  a  b  c
--+-----------
e | e  a  b  c
a | a  e  c  b
b | b  c  e  a
c | c  b  a  e

In fact the only thing left to show is that this rule of multiplication is associative. The idea is to find a subgroup H of Sn whose multiplication table is precisely the one given. The clue to finding H is given by Cayley's Theorem. For a start, Cayley's Theorem shows that we should take n = 4. Now the four permutations of G determined by the multiplication table are

( e a b c )   ( e a b c )   ( e a b c )   ( e a b c )
( e a b c )   ( a e c b )   ( b c e a )   ( c b a e )

Replacing letters by numbers, in the obvious way, we get

( 1 2 3 4 )   ( 1 2 3 4 )   ( 1 2 3 4 )   ( 1 2 3 4 )
( 1 2 3 4 )   ( 2 1 4 3 )   ( 3 4 1 2 )   ( 4 3 2 1 )

This reduces to
H = {e, (1, 2)(3, 4), (1, 3)(2, 4), (1, 4)(2, 3)}.
Now it is easy to see that this subset is in fact a subgroup. In fact the square of any element is the identity and the product of any two


distinct non-trivial elements is the third. Thus H is a subgroup of S4. Now H is a group of order 4, which is not cyclic. Thus there are at least two groups of order 4, up to isomorphism.



8. Homomorphisms and kernels

An isomorphism is a bijection which respects the group structure; that is, it does not matter whether we first multiply and take the image, or take the image and then multiply. This latter property is so important it is actually worth isolating:

Definition 8.1. A map φ : G −→ H between two groups is a homomorphism if for every g and h in G,
φ(gh) = φ(g)φ(h).

Here is an interesting example of a homomorphism. Define a map
φ : G −→ H,
where G = Z and H = Z2 = Z/2Z is the standard group of order two, by the rule
φ(x) = 0 if x is even,
φ(x) = 1 if x is odd.
We check that φ is a homomorphism. Suppose that x and y are two integers. There are four cases: x and y are even; x is even, y is odd; x is odd, y is even; and x and y are both odd.
Now if x and y are both even or both odd, then x + y is even. In this case φ(x + y) = 0. In the first case φ(x) + φ(y) = 0 + 0 = 0, and in the second case φ(x) + φ(y) = 1 + 1 = 0. Otherwise one is even and the other is odd, and x + y is odd. In this case φ(x + y) = 1 and φ(x) + φ(y) = 1 + 0 = 1. Thus we get a homomorphism.

Here are some elementary properties of homomorphisms.

Lemma 8.2. Let φ : G −→ H be a homomorphism.
(1) φ(e) = f, that is, φ maps the identity in G to the identity in H.
(2) φ(a^(−1)) = φ(a)^(−1), that is, φ maps inverses to inverses.
(3) If K is a subgroup of G, then φ(K) is a subgroup of H.

Proof. Let a = φ(e), where e is the identity in G. Then
a = φ(e) = φ(ee) = φ(e)φ(e) = aa.
Thus a^2 = a. Cancelling, we get a = f, the identity in H. Hence (1).


Let b = a^(−1). Then
f = φ(e) = φ(ab) = φ(a)φ(b),
and
f = φ(e) = φ(ba) = φ(b)φ(a).
But then φ(b) is the inverse of φ(a), so that φ(a^(−1)) = φ(a)^(−1). Hence (2).
Let X = φ(K). It suffices to check that X is non-empty and closed under products and inverses. X contains f, the identity of H, by (1). X is closed under inverses by (2), and closed under products almost by definition. Thus X is a subgroup. □

Instead of looking at the image, it turns out to be much more interesting to look at the inverse image of the identity.

Definition-Lemma 8.3. Let φ : G −→ H be a group homomorphism. The kernel of φ, denoted Ker φ, is the inverse image of the identity. Then Ker φ is a subgroup of G.

Proof. We have to show that the kernel is non-empty and closed under products and inverses.
Note that φ(e) = f by (8.2). Thus Ker φ is certainly non-empty. Now suppose that a and b are in the kernel, so that φ(a) = φ(b) = f. Then
φ(ab) = φ(a)φ(b) = ff = f.
Thus ab ∈ Ker φ, and so the kernel is closed under products. Finally suppose that φ(a) = f. Then φ(a^(−1)) = φ(a)^(−1) = f, where we used (8.2). Thus the kernel is closed under inverses, and the kernel is a subgroup. □

Here are some basic results about the kernel.

Lemma 8.4. Let φ : G −→ H be a homomorphism. Then φ is injective iff Ker φ = {e}.


Proof. If φ is injective, then at most one element can be sent to the identity f ∈ H. Since φ(e) = f, it follows that Ker φ = {e}.
Now suppose that Ker φ = {e} and suppose that φ(x) = φ(y). Let g = x^(−1)y. Then
φ(g) = φ(x^(−1)y) = φ(x)^(−1)φ(y) = f.
Thus g is in the kernel of φ, and so g = e. But then x^(−1)y = e, and so x = y. Thus φ is injective. □

It turns out that the kernel of a homomorphism enjoys a much more important property than just being a subgroup.

Definition 8.5. Let G be a group and let H be a subgroup of G. We say that H is normal in G, and write H ◁ G, if for every g ∈ G,
gHg^(−1) ⊂ H.

Lemma 8.6. Let φ : G −→ H be a homomorphism. Then the kernel of φ is a normal subgroup of G.

Proof. We have already seen that the kernel is a subgroup. Suppose that g ∈ G. We want to prove that
g(Ker φ)g^(−1) ⊂ Ker φ.
Suppose that h ∈ Ker φ. We need to prove that ghg^(−1) ∈ Ker φ. Now
φ(ghg^(−1)) = φ(g)φ(h)φ(g)^(−1) = φ(g)f φ(g)^(−1) = φ(g)φ(g)^(−1) = f.
Thus ghg^(−1) ∈ Ker φ. □
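Returning to the parity homomorphism φ : Z −→ Z/2Z from the start of this section: a quick numerical sketch (over a finite range, with names of my choosing) checks the homomorphism property and exhibits the kernel, which is the subgroup of even integers:

```python
# The parity homomorphism phi: Z -> Z/2Z, checked on a finite range.
def phi(x):
    return x % 2

# Homomorphism property: phi(x + y) = phi(x) + phi(y) in Z/2Z.
hom_ok = all(phi(x + y) == (phi(x) + phi(y)) % 2
             for x in range(-20, 21) for y in range(-20, 21))

# The kernel is 2Z, the even integers, sampled here on [-10, 10].
kernel_sample = [x for x in range(-10, 11) if phi(x) == 0]
print(hom_ok)         # True
print(kernel_sample)  # the even integers from -10 to 10
```

Since Z is abelian, this kernel is normal for trivial reasons, in line with Lemma 8.7 below.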

It is interesting to look at some examples of subgroups, to see which are normal and which are not.

Lemma 8.7. Let G be an abelian group and let H be any subgroup. Then H is normal in G.

Proof. Clear, as for every h ∈ H and g ∈ G,
ghg^(−1) = h ∈ H. □


So let us look at the first interesting example of a group which is not abelian. Take G = D3. Let us first look at H = {I, R, R^2}. Then H is normal in G. In fact, pick g ∈ D3. If g belongs to H, there is nothing to prove. Otherwise g is a flip; let us suppose that it is F1. Now pick h ∈ H and consider ghg^(−1). If h = I then it is clear that ghg^(−1) = I ∈ H. So suppose that h = R. Then
ghg^(−1) = F1 R F1 = R^2 ∈ H.
Similarly, if h = R^2, then ghg^(−1) = R ∈ H. Thus H is normal in G.
Now suppose that H = {I, F1}. Take h = F1 and g = R. Then
ghg^(−1) = R F1 R^2 = F2.
So gHg^(−1) ≠ H, and H is not normal in G.

Lemma 8.8. Let H be a subgroup of a group G. TFAE:
(1) H is normal in G.
(2) For every g ∈ G, gHg^(−1) = H.
(3) Ha = aH, for every a ∈ G.
(4) The set of left cosets is equal to the set of right cosets.
(5) H is a union of conjugacy classes.

Proof. Suppose that (1) holds. Suppose that g ∈ G. Then gHg^(−1) ⊂ H. Now replace g with g^(−1); then g^(−1)Hg ⊂ H, so that, multiplying on the left by g and on the right by g^(−1),
H ⊂ gHg^(−1).
But then (2) holds.
If (2) holds, then (3) holds, simply by multiplying the equality aHa^(−1) = H on the right by a.
If (3) holds, then (4) certainly holds.
Suppose that (4) holds. Let g ∈ G. Then g ∈ gH and g ∈ Hg. If the set of left cosets is equal to the set of right cosets, then this means


gH = Hg. Now take this equality and multiply it on the right by g−1. Then certainly gHg−1 ⊂ H, so that H is normal in G. Hence (1) holds. Thus (1), (2), (3) and (4) are all equivalent.

Suppose that (5) holds. Then H = ∪Ai, where the Ai are conjugacy classes. Then gHg−1 = ∪gAig−1 = ∪Ai = H. Thus H is normal.

Finally suppose that (2) holds. Suppose that a ∈ H and that A is the conjugacy class to which a belongs. Pick b ∈ A. Then there is an element g ∈ G such that gag−1 = b. Then b ∈ gHg−1 = H. So A ⊂ H. But then H is a union of conjugacy classes. □

Using this, the example above becomes much easier if we work with S3. In the first case we are looking at H = {e, (1, 2, 3), (1, 3, 2)}. In this case H is in fact a union of conjugacy classes. (Recall that the conjugacy classes of Sn are entirely determined by the cycle type.) So H is obviously normal.

Now take H = {e, (1, 2)}, and let g = (2, 3). Then

gHg−1 = {geg−1, g(1, 2)g−1} = {e, (1, 3)}.

Thus H is not normal in this case.

Given this, we can give one more interesting example of a normal subgroup. Let G = S4 and let

H = {e, (1, 2)(3, 4), (1, 3)(2, 4), (1, 4)(2, 3)}.

We have already seen that H is a subgroup of G. On the other hand, H is a union of conjugacy classes. Indeed the three non-trivial elements of H are the only permutations with cycle type (2, 2). Thus H is normal in G.
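Since the elements of S3 are just bijections of a three-element set, all of these conjugation computations can be verified by brute force. The following sketch (the encoding and helper names are my own) represents a permutation of {0, 1, 2} as a tuple and tests closure under conjugation:

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p(q(i)); a permutation is a tuple mapping i -> p[i]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def is_normal(H, G):
    # H is normal in G iff g h g^-1 lies in H for every g in G, h in H
    return all(compose(compose(g, h), inverse(g)) in H
               for g in G for h in H)

S3 = set(permutations(range(3)))
rotations = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}   # {e, (1,2,3), (1,3,2)}
flip = {(0, 1, 2), (1, 0, 2)}                    # {e, (1,2)}

print(is_normal(rotations, S3))  # True
print(is_normal(flip, S3))       # False
```

Tuples are used rather than lists so that permutations can be stored in sets; the same check works verbatim for the subgroup of S4 above.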


MIT OpenCourseWare http://ocw.mit.edu

18.703 Modern Algebra Spring 2013

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

9. Quotient Groups

Given a group G and a subgroup H, under what circumstances can we find a homomorphism φ : G −→ G', such that H is the kernel of φ? Clearly a necessary condition is that H is normal in G. Somewhat surprisingly this trivially necessary condition is also in fact sufficient. The idea is as follows. Given G and H there is an obvious map of sets, where H is the inverse image of a point. We just put X to be the collection of left cosets of H in G. Then there is an obvious function φ : G −→ X. The map φ just does the obvious thing; it sends g to φ(g) = [g] = gH, the left coset corresponding to g. The real question is, can we make X into a group?

Suppose that we are given two left cosets [a] = aH and [b] = bH. The obvious way to try to define a multiplication in X is to set

(aH)(bH) = [a][b] = [ab] = (ab)H.

Unfortunately there is a problem with this attempt to define a multiplication. The problem is that the multiplication map is not necessarily well-defined.

To give an illustrative example of the problems that arise when defining maps on equivalence classes by choosing representatives, consider the following example. Let Y be the set of all people and let ∼ be the equivalence relation such that x ∼ y iff x and y have the same colour eyes. Then the equivalence classes are simply all possible colours of people's eyes. We denote the set of all equivalence classes by Y/∼. Consider trying to define a function, f : Y/∼ −→ R, from the equivalence classes to the real numbers. Given a colour, pick a person with that colour eyes, and send that colour to the height of that person. This is clearly absurd. Given any colour, there are lots of people with that colour eyes, and nearly everyone's height will be different, so we don't get a well-defined function this way.

In fact the problem is that we might represent a left coset in a different way. Suppose that a'H = aH and b'H = bH, so that [a'] = [a] and [b'] = [b]. Then we would also have another way to define the multiplication, that is

(a'H)(b'H) = [a'][b'] = [a'b'] = (a'b')H.


For the multiplication to be well-defined, we need [a'b'] = [ab]. In other words we need that a'b' ∈ abH. Now we do know that a' = ah and b' = bk for some h and k ∈ H. It follows that

a'b' = (ah)(bk).

We want to manipulate the right hand side until it is of the form abh', where h' ∈ H. Now for a general subgroup H this is going to fail. The point is, if this method did work, then there would be a homomorphism whose kernel is equal to H. So at the very least, we had better assume that H is normal in G.

Now we would like to move the b through the h. As H is normal in G, we have b−1Hb ⊂ H. In particular b−1hb = l ∈ H. Thus hb = bl. It follows that

a'b' = (ah)(bk) = a(hb)k = a(bl)k = (ab)(lk) = (ab)h',

where h' = lk ∈ H. Thus, almost by a miracle, if H is normal in G, then the set of left cosets of H in G becomes a group.

Theorem 9.1. Let G be a group and let H be a normal subgroup. Then the left cosets of H in G form a group, denoted G/H, called the quotient of G modulo H. The rule of multiplication in G/H is defined as

(aH)(bH) = abH.

Furthermore there is a natural surjective homomorphism φ : G −→ G/H, defined as φ(g) = gH. Moreover the kernel of φ is H.

Proof. We have already checked that this rule of multiplication is well-defined.


We check the three axioms for a group. We first check associativity. Suppose that a, b and c are in G. Then

(aH)(bH · cH) = (aH)(bcH) = (a(bc))H = ((ab)c)H = (ab)H(cH) = (aH · bH)(cH).

Thus this rule of multiplication is associative. It is easy to see that eH = H plays the role of the identity. Indeed

aH · eH = aeH = aH = eH · aH.

Finally, given a left coset aH, a−1H is easily seen to be the inverse of aH. Thus G/H certainly does form a group.

It is easy to see that φ is a surjective homomorphism. Finally the inverse image of the identity is equal to all those elements g of G such that gH = H. Almost by definition of an equivalence relation, it follows that g ∈ H, so that Ker φ = H. □

At this point it is useful to introduce a little bit of category theory:

Definition 9.2. A category C consists of two things. The first is the objects of the category. The second is, given any two objects X and Y, a collection of morphisms, denoted Hom(X, Y). Given three objects X, Y and Z and two morphisms f ∈ Hom(X, Y), g ∈ Hom(Y, Z), there is a rule of composition, g ◦ f ∈ Hom(X, Z). A category satisfies the following two axioms.
(1) Composition is associative. That is, given f ∈ Hom(X, Y), g ∈ Hom(Y, Z), h ∈ Hom(Z, W), we have
h ◦ (g ◦ f) = (h ◦ g) ◦ f.
(2) There is an element IX ∈ Hom(X, X) that acts as an identity, so that if f ∈ Hom(X, Y) and g ∈ Hom(Y, X) we have
f ◦ IX = f and IX ◦ g = g.

Given f ∈ Hom(X, Y) we say that g ∈ Hom(Y, X) is the inverse of f if f ◦ g = IY and g ◦ f = IX. In this case we say that f is an isomorphism and we say that X and Y are isomorphic.
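The construction in Theorem 9.1 is easy to carry out concretely. A minimal sketch (helper names mine): form the left cosets of the rotation subgroup H in S3, and check by exhaustion that the representative-wise product of cosets is well-defined:

```python
from itertools import permutations

def compose(p, q):
    # permutations of {0, 1, 2} as tuples; (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

G = set(permutations(range(3)))          # S3
H = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}    # normal subgroup of 3-cycles

def coset(g):
    # the left coset gH, as a frozenset so cosets can be compared
    return frozenset(compose(g, h) for h in H)

cosets = {coset(g) for g in G}
print(len(cosets))  # 2 = |G| / |H|

# well-defined: the product coset depends only on the cosets chosen,
# not on the representatives a, b picked inside them
for a in G:
    for b in G:
        for a2 in coset(a):
            for b2 in coset(b):
                assert coset(compose(a, b)) == coset(compose(a2, b2))
```

Replacing H by a non-normal subgroup such as {e, (1, 2)} makes the inner assertion fail, which is exactly the failure of well-definedness discussed above.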


To most advanced mathematics classes there is a naturally associated category. For us, the most natural category is the category of groups. The objects are the collection of all groups and the morphisms are homomorphisms. Composition of morphisms is composition of functions. It is not hard to see that all the axioms are satisfied. In fact the only thing we have not checked is that a composition of homomorphisms is a homomorphism, which is an exercise for the reader. Two groups are isomorphic iff they are isomorphic as objects of the category.

Another very natural category is the category of sets. Here a morphism is any function. Yet another category is the category of topological spaces. The objects are topological spaces and the morphisms are continuous maps. Two topological spaces are then homeomorphic iff they are isomorphic as objects of this category. The category of metric spaces is a subcategory of the category of topological spaces. The category of vector spaces is the category whose objects are vector spaces and whose morphisms are linear maps.

Definition 9.3. Let

    A --a--> B
    |        |
    b        c
    v        v
    C --d--> D

be a collection of objects and morphisms belonging to a category. We say that the diagram commutes if the two morphisms from A to D are the same, that is, c ◦ a = d ◦ b.

In a category, the focus of interest is not the objects, but the morphisms between the objects. In this sense, we would like to characterise the notion of the quotient group in a way that does not make explicit reference to the elements of G/H, but rather define everything in terms of homomorphisms between groups. Even though this is somewhat abstract, there is an obvious advantage to this approach; as a set G/H is rather complicated, its elements are left cosets, which are themselves sets. But really we only need to know what G/H is up to isomorphism.

Definition 9.4. Let G be a group and let H be a normal subgroup. The categorical quotient of G by H is a group Q together with a homomorphism u : G −→ Q, such that


• the kernel of u is H,

and which is universal amongst all such homomorphisms in the following sense. Suppose that φ : G −→ G' is any homomorphism such that the kernel of φ contains H. Then there is a unique induced homomorphism f : Q −→ G' which makes the diagram

        φ
    G -----> G'
     \      ^
    u \    / f
       v  /
        Q

commute. Note that in the definition of the categorical quotient, the most important part of the definition refers to the homomorphism u, and the universal property that it satisfies.

Theorem 9.5. The category of groups admits categorical quotients. That is to say, given a group G and a normal subgroup H, there is a categorical quotient group Q. Furthermore, Q is unique, up to a unique isomorphism.

Proof. We first prove existence. Let G/H be the quotient group and let u : G −→ G/H be the natural homomorphism. I claim that this pair forms a categorical quotient. Thus we have to prove that u is universal.

To this end, suppose that we are given a homomorphism φ : G −→ G', whose kernel contains H. Define a map f : G/H −→ G' by sending gH to φ(g). It is clear that the condition that the diagram commutes forces this definition of f. We have to check that f is well-defined. Suppose that g1H = g2H. We need to check that φ(g1) = φ(g2). As g1H = g2H, it follows that g2 = g1h, for some h ∈ H. In this case

φ(g2) = φ(g1h) = φ(g1)φ(h) = φ(g1),

where we used the fact that h is in the kernel of φ. Thus the map f is well-defined.

Now we check that f is a homomorphism. Suppose that x and y are in G/H. Then x = g1H and y = g2H, for some g1 and g2 in G. In this


case

f(xy) = f(g1g2H) = φ(g1g2) = φ(g1)φ(g2) = f(g1H)f(g2H) = f(x)f(y).

Thus f is a homomorphism. Finally we check that the diagram above commutes. Suppose that g ∈ G. Going along the top we get φ(g). Going down first, we get gH, and then going diagonally we get f(gH) = φ(g), by definition of f. It is clear that f is the only map which makes the diagram commute. Thus G/H is a categorical quotient. In particular categorical quotients exist.

Now we prove that categorical quotients are unique, up to unique isomorphism. Suppose that Q1 and Q2 are two such categorical quotients. As Q1 is a categorical quotient and there is a homomorphism u2 : G −→ Q2, it follows that there is an induced homomorphism f : Q1 −→ Q2 which makes the following diagram commute:

        u2
    G -----> Q2
     \      ^
   u1 \    / f
       v  /
        Q1

By symmetry there is a homomorphism g : Q2 −→ Q1 which makes the same diagram commute, with 1 and 2 switched. Consider the composition f ◦ g : Q2 −→ Q2. This is a homomorphism which makes the following diagram commute:

        u2
    G -----> Q2
     \      ^
   u2 \    / f ◦ g
       v  /
        Q2

However there is only one homomorphism that makes this diagram commute, namely the identity. By uniqueness, f ◦ g must be the identity. Similarly g ◦ f must be the identity. So f and g are inverses of each other, and hence isomorphisms. Note that f itself is unique, since its existence was given to us by the universal property of u1. Thus the quotient is unique, up to a unique isomorphism. □
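Here is a small numeric illustration of the universal property (all choices below are mine): take G = Z12, H = {0, 6}, and φ reduction modulo 6, whose kernel {0, 6} contains H. The induced map f(gH) = φ(g) is then forced, and the sketch checks that it really is well-defined:

```python
G = list(range(12))                 # Z12 under addition mod 12
H = {0, 6}                          # normal (Z12 is abelian), H ⊆ Ker φ
phi = lambda g: g % 6               # a homomorphism Z12 -> Z6

def coset(g):
    # the coset g + H inside Z12
    return frozenset((g + h) % 12 for h in H)

# f is well-defined exactly because phi is constant on each coset of H
for g in G:
    assert len({phi(x) for x in coset(g)}) == 1

f = {coset(g): phi(g) for g in G}   # the induced map G/H -> Z6
print(len(f))  # 6 cosets, and f(g + H) = g mod 6
```

Shrinking H to a subgroup not contained in Ker φ (say {0, 4, 8}) breaks the constancy check, which is why the hypothesis "Ker φ contains H" is needed.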



10. The isomorphism theorems

We have already seen that given any group G and a normal subgroup H, there is a natural homomorphism φ : G −→ G/H, whose kernel is H. In fact we will see that this map is not only natural, it is in some sense the only such map.

Theorem 10.1 (First Isomorphism Theorem). Let φ : G −→ G' be a homomorphism of groups. Suppose that φ is onto and let H be the kernel of φ. Then G' is isomorphic to G/H.

Proof. By the universal property of a quotient, there is a natural homomorphism f : G/H −→ G'. As f makes the diagram

        φ
    G -----> G'
     \      ^
    u \    / f
       v  /
       G/H

commute, it follows that f is surjective. It remains to prove that f is injective. Suppose that x is in the kernel of f. Then x has the form gH and, by definition of f, f(x) = φ(g). Thus g is in the kernel of φ and so g ∈ H. In this case x = H, the identity of G/H. So the kernel of f is trivial and f is injective. Hence f is an isomorphism. □

Definition 10.2. Let G be a group and let H and K be two subgroups of G. H ∨ K denotes the subgroup generated by the union of H and K.

In general, it is hard to identify H ∨ K as a set. However,

Theorem 10.3 (Second Isomorphism Theorem). Let G be a group, let H be a subgroup and let N be a normal subgroup. Then

H ∨ N = HN = { hn | h ∈ H, n ∈ N }.

Furthermore H ∩ N is a normal subgroup of H and the two groups H/(H ∩ N) and HN/N are isomorphic.

Proof. The pairwise products of the elements of H and N are certainly elements of H ∨ N. Thus the RHS of the equality above is a subset of the LHS. The RHS is clearly non-empty and so it suffices to prove that the RHS is closed under products and inverses.


Suppose that x and y are elements of the RHS. Then x = h1n1 and y = h2n2, where hi ∈ H and ni ∈ N. Now h2−1n1h2 = n3 ∈ N, as N is normal in G. So n1h2 = h2n3. In this case

xy = (h1n1)(h2n2) = h1(n1h2)n2 = h1(h2n3)n2 = (h1h2)(n3n2),

which shows that xy has the correct form. On the other hand, suppose x = hn. Then hn−1h−1 = m ∈ N, as N is normal, so that n−1h−1 = h−1m. In this case

x−1 = n−1h−1 = h−1m,

so that x−1 is of the correct form. Hence the first statement.

Let H −→ HN be the natural inclusion. As N is normal in G, it is certainly normal in HN, so that we may compose the inclusion with the natural homomorphism to get a homomorphism

H −→ HN/N.

This map sends h to hN. Suppose that x ∈ HN/N. Then x = hnN = hN, where h ∈ H. Thus the homomorphism above is clearly surjective. Suppose that h ∈ H belongs to the kernel. Then hN = N, the identity coset, so that h ∈ N. Thus h ∈ H ∩ N. The result then follows by the First Isomorphism Theorem applied to the map above. □

It is easy to prove the Third Isomorphism Theorem from the First.

Theorem 10.4 (Third Isomorphism Theorem). Let K ⊂ H be two normal subgroups of a group G. Then

G/H ≅ (G/K)/(H/K).

Proof. Consider the natural map G −→ G/H. The kernel, H, contains K. Thus, by the universal property of G/K, it follows that there is a homomorphism G/K −→ G/H. This map is clearly surjective. In fact, it sends the left coset gK to the left coset gH. Now suppose that gK is in the kernel. Then the left coset gH is the identity coset, that is, gH = H, so that g ∈ H. Thus the kernel consists of those left cosets of the form gK, for g ∈ H, that is, H/K. The result now follows by the First Isomorphism Theorem. □
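The Second Isomorphism Theorem can be checked numerically inside S3 (a sketch with my own tuple encoding of permutations): take H = {e, (1, 2)} and N the normal subgroup of 3-cycles; then HN is all of S3, H ∩ N is trivial, and both quotients have order 2:

```python
def compose(p, q):
    # permutations of {0, 1, 2} as tuples; (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

H = {(0, 1, 2), (1, 0, 2)}               # {e, (1,2)}
N = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}    # normal subgroup of 3-cycles

HN = {compose(h, n) for h in H for n in N}
meet = H & N

# |H / (H ∩ N)| should equal |HN / N|
print(len(H) // len(meet), len(HN) // len(N))  # 2 2
```

Both sides come out to 2, matching the isomorphism H/(H ∩ N) ≅ HN/N.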


Recall that a subgroup is normal if it is invariant under conjugation. Now conjugation is just a special case of an automorphism of G.

Definition 10.5. Let G be a group and let H be a subgroup. We say that H is a characteristic subgroup of G if, for every automorphism φ of G, φ(H) = H.

It turns out that most of the general normal subgroups that we have defined so far are in fact characteristic subgroups.

Lemma 10.6. Let G be a group and let Z = Z(G) be the centre. Then Z is a characteristic subgroup.

Proof. Let φ be an automorphism of G. We have to show φ(Z) = Z. Pick z ∈ Z. Then z commutes with every element of G. Pick an element x of G. As φ is a bijection, x = φ(y), for some y ∈ G. We have

xφ(z) = φ(y)φ(z) = φ(yz) = φ(zy) = φ(z)φ(y) = φ(z)x.

As x is arbitrary, it follows that φ(z) commutes with every element of G. But then φ(z) ∈ Z. Thus φ(Z) ⊂ Z. Applying the same result to the inverse ψ of φ we get φ−1(Z) = ψ(Z) ⊂ Z. But then Z ⊂ φ(Z), so that Z = φ(Z). □

Definition 10.7. Let G be a group and let x and y be two elements of G. Then x−1y−1xy is called the commutator of x and y. The commutator subgroup of G is the subgroup generated by all of the commutators.

Lemma 10.8. Let G be a group and let H be the commutator subgroup. Then H is a characteristic subgroup of G and the quotient group G/H is abelian. Moreover this quotient is universal amongst all abelian quotients in the following sense: suppose that φ : G −→ G' is any homomorphism of groups, where G' is abelian. Then there is a unique homomorphism f : G/H −→ G' such that f ◦ u = φ.

Proof. Suppose that φ is an automorphism of G and let x and y be two elements of G. Then

φ(x−1y−1xy) = φ(x)−1φ(y)−1φ(x)φ(y).


The last expression is clearly the commutator of φ(x) and φ(y). Thus φ(H) is generated by commutators, and so φ(H) = H. Thus H is a characteristic subgroup of G.

Suppose that aH and bH are two left cosets. Then

(bH)(aH) = baH = ba(a−1b−1ab)H = abH = (aH)(bH).

Thus G/H is abelian.

Suppose that φ : G −→ G' is a homomorphism, and that G' is abelian. By the universal property of a quotient, it suffices to prove that the kernel of φ must contain H. Since H is generated by the commutators, it suffices to prove that any commutator must lie in the kernel of φ. Suppose that x and y are in G. Then φ(x)φ(y) = φ(y)φ(x). It follows that φ(x)−1φ(y)−1φ(x)φ(y) is the identity in G', so that x−1y−1xy is sent to the identity; that is, the commutator of x and y lies in the kernel of φ. □

Definition-Lemma 10.9. Let G and H be any two groups. The product of G and H, denoted G × H, is the group whose elements are the ordinary elements of the cartesian product of G and H as sets, with multiplication defined as

(g1, h1)(g2, h2) = (g1g2, h1h2).

Proof. We need to check that with this law of multiplication, G × H becomes a group. This is left as an exercise for the reader. □

Definition 10.10. Let C be a category and let X and Y be two objects of C. The categorical product of X and Y, denoted X × Y, is an object together with two morphisms p : X × Y −→ X and q : X × Y −→ Y that are universal amongst all such morphisms, in the following sense. Suppose that there are morphisms f : Z −→ X and g : Z −→ Y. Then there is a unique morphism Z −→ X × Y which makes the following diagram commute:

             Z
           / | \
        f /  |  \ g
         v   v   v
    X <-p- X × Y -q-> Y


Note that, by the universal property of a categorical product, in any category the product is unique, up to unique isomorphism. The proof proceeds exactly as in the proof of the uniqueness of a categorical quotient and is left as an exercise for the reader.

Lemma 10.11. The product of groups is a categorical product. That is, given two groups G and H, the group G × H defined in (10.9) satisfies the universal property of (10.10).

Proof. First of all note that the two ordinary projection maps p : G × H −→ G and q : G × H −→ H are both homomorphisms (easy exercise left for the reader). Suppose that we are given a group K and two homomorphisms f : K −→ G and g : K −→ H. We define a map u : K −→ G × H by sending k to (f(k), g(k)). It is left as an exercise for the reader to prove that this map is a homomorphism and that it is the only such map for which the diagram commutes. □
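The product construction of (10.9) can be realized directly. The sketch below (names mine) takes G = Z4 and H = Z2, multiplies pairs componentwise, and checks that the projection onto the first factor is a homomorphism:

```python
from itertools import product

G = list(range(4))                 # Z4 under addition mod 4
H = list(range(2))                 # Z2 under addition mod 2
mulG = lambda a, b: (a + b) % 4
mulH = lambda a, b: (a + b) % 2

def mul(x, y):
    # componentwise multiplication in G x H
    return (mulG(x[0], y[0]), mulH(x[1], y[1]))

GH = list(product(G, H))
print(len(GH))  # 8 = |G| * |H|

# the projection p(g, h) = g is a homomorphism
p = lambda x: x[0]
for x in GH:
    for y in GH:
        assert p(mul(x, y)) == mulG(p(x), p(y))
```

The map u of Lemma 10.11 would send k to the pair (f(k), g(k)); the projections then recover f and g, which is exactly the commuting-diagram condition.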



11. The Alternating Groups

Consider the group S3. This group contains a normal subgroup, generated by a 3-cycle. Now the elements of S3 come in three types: the identity, the product of zero transpositions; the transpositions, the product of one transposition; and the 3-cycles, products of two transpositions. The normal subgroup above consists of all permutations that can be represented as a product of an even number of transpositions.

In general there is no canonical way to represent a permutation as a product of transpositions. But we might hope that the pattern above continues to hold in every permutation group.

Definition 11.1. Let σ ∈ Sn be a permutation. We say that σ is even if it can be represented as a product of an even number of transpositions. We say that σ is odd if it can be represented as a product of an odd number of transpositions.

The following result is much trickier to prove than it looks.

Lemma 11.2. Let σ ∈ Sn be a permutation. Then σ is not both an even and an odd permutation.

There is no entirely satisfactory proof of (11.2). Here is perhaps the simplest.

Definition 11.3. Let x1, x2, . . . , xn be indeterminates and set

f(x1, x2, . . . , xn) = ∏_{i<j} (xi − xj).

a contradiction. Thus nq = 1 and there is exactly one subgroup Q of order q. But then Q is normal in G. □

We will also need the following Proposition, whose proof we omit.


Proposition 13.7. If G is a p-group then the centre of G has order greater than one. In particular, every simple p-group has prime order.

Now consider the numbers from 1 to 60. Eliminating those that are prime, a power of a prime or the product of two primes, leaves the following cases:

n = 12, 18, 20, 24, 28, 30, 34, 36, 40, 42, 45, 48, 50, 52, 54, 56, 58.

We do some illustrative cases; the rest are left as an exercise for the reader.

Pick n = 30 = 2 · 3 · 5. Let p = 5. How many Sylow 5-subgroups are there? Suppose that there are n5. Then n5 is congruent to 1 modulo 5. In this case, n5 = 1, 6, . . . . On the other hand, n5 must divide 30, so that n5 = 1 or n5 = 6. If G is simple, then n5 ≠ 1 and so n5 = 6. Let H and K be two Sylow 5-subgroups. Then |H| = |K| = 5. On the other hand H ∩ K is a proper subgroup of H and so, by Lagrange, |H ∩ K| = 1. Since there are 6 Sylow 5-subgroups and each such subgroup contains 4 elements of order 5 that are not contained in any other subgroup, it follows that there are 24 elements of order 5.

Let n3 be the number of Sylow 3-subgroups. Then n3 is congruent to 1 modulo 3, so that n3 = 1, 4, 7, 10, . . . . As n3 divides 30 and n3 ≠ 1, it follows that n3 = 10. Arguing as above, there must therefore be 20 elements of order 3. But 24 + 20 > 30, which is impossible.

Definition 13.8. Let G be a group and let S be a set. An action of G on S is a function

G × S −→ S, denoted by (g, s) −→ g · s,

such that

e · s = s and (gh) · s = g · (h · s).

We have already seen lots of examples of actions. Dn acts on the vertices of a regular n-gon. Sn acts on the integers from 1 to n. Sn acts on the polynomials of degree n. G acts on itself by left multiplication. G acts on itself by conjugation, and so on.

In fact, an action of G on a set S is equivalent to a group homomorphism (invariably called a representation)

ρ : G −→ A(S).


Given an action G × S −→ S, define a group homomorphism ρ : G −→ A(S) by the rule

ρ(g) = σ : S −→ S, where σ(s) = g · s.

Vice-versa, given a representation (that is, a group homomorphism) ρ : G −→ A(S), define an action G × S −→ S by the rule

g · s = ρ(g)(s).
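This dictionary between actions and representations is easy to see in a small case (the choices below are mine): Z3 acts on a three-element set by cyclic rotation, and the associated ρ really is a homomorphism into the permutations of S:

```python
G = [0, 1, 2]                      # Z3 under addition mod 3
S = [0, 1, 2]

def act(g, s):
    # Z3 acting on S by "rotation"
    return (g + s) % 3

def rho(g):
    # the permutation of S determined by g, as a tuple s -> rho(g)[s]
    return tuple(act(g, s) for s in S)

def compose(p, q):
    return tuple(p[q[s]] for s in S)

# rho is a homomorphism: rho(g + h) = rho(g) ∘ rho(h)
for g in G:
    for h in G:
        assert rho((g + h) % 3) == compose(rho(g), rho(h))
print(rho(1))  # (1, 2, 0)
```

The two action axioms translate exactly into ρ(e) being the identity permutation and ρ being compatible with composition, which is what the loop verifies.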

It is left as an exercise for the reader to check all of the details. If G acts on itself by left multiplication, the corresponding representation is precisely the group homomorphism which appears in Cayley's theorem. In fact if H is any subgroup of G then G acts on the left cosets of H in G by left multiplication (exercise for the reader).

Let us deal with one of the most tricky cases. Suppose that n = 48 = 2^4 · 3. We count the number of Sylow 2-subgroups; any one of these must have order 16. Suppose that there are n2 such subgroups. Then n2 is congruent to one modulo two. The possibilities are then n2 = 1, 3, 5, . . . . On the other hand n2 is supposed to divide 48, so that the only possibilities are 1 and 3. If n2 = 1 then we are done. Otherwise there must be 3 subgroups of order sixteen.

Let S be the set of Sylow 2-subgroups. Define an action G × S −→ S by the rule g · P = gP g−1. By (13.3), given two elements P1 and P2 of S, we can always find g ∈ G such that g · P1 = gP1g−1 = P2. Let

φ : G −→ A(S) ≅ S3

be the corresponding representation. Thus we send g ∈ G to the permutation σ = φ(g), σ : S −→ S, where σ(P) = gP g−1. Consider the kernel of φ. By what we already observed, the kernel is not the whole of G. On the other hand, as G has order 48 and A(S) has order six, φ cannot be injective. Thus the kernel is a non-trivial normal subgroup.

In fact all finite simple groups have been classified. Finite simple groups come in two classes. There are those that belong to infinite series of well-understood examples. There are 15 of these series, two of


which are the cyclic groups of prime order and the alternating groups An, n ≥ 5. Then there are the sporadic groups. There are 26 sporadic groups. One such sporadic group is the monster group, which has order

2^46 · 3^20 · 5^9 · 7^6 · 11^2 · 13^3 · 17 · 19 · 23 · 29 · 31 · 41 · 47 · 59 · 71
= 808017424794512875886459904961710757005754368000000000.

It is a subgroup of the group of rotations of a space of dimension 196,883.
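The Sylow counting used for |G| = 30 earlier in this section is pure arithmetic, and can be sketched mechanically (the enumeration below and its function name are mine; the final element count assumes, as in the text, that distinct subgroups of prime order intersect trivially):

```python
def sylow_counts(order, p):
    # candidates for the number of Sylow p-subgroups:
    # n_p ≡ 1 (mod p) and n_p divides the group order
    return [n for n in range(1, order + 1)
            if n % p == 1 and order % n == 0]

order = 30
print(sylow_counts(order, 5))  # [1, 6]
print(sylow_counts(order, 3))  # [1, 10]

# if G were simple, n_5 = 6 and n_3 = 10, giving
# 6 * (5 - 1) elements of order 5 and 10 * (3 - 1) of order 3
elements = 6 * 4 + 10 * 2
print(elements, ">", order)  # 44 > 30, a contradiction
```

The same enumeration handles most of the other orders in the list n = 12, 18, . . . , 58 above, though a few (such as n = 48) need the action-on-Sylow-subgroups argument instead of a pure count.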



14. Rings

We introduce the main object of study for the second half of the course.

Definition 14.1. A ring is a set R, together with two binary operations addition and multiplication, denoted + and · respectively, which satisfy the following axioms. Firstly R is an abelian group under addition, with zero as the identity.

(1) (Associativity) For all a, b and c in R,
(a + b) + c = a + (b + c).
(2) (Zero) There is an element 0 ∈ R such that for all a in R,
a + 0 = 0 + a = a.
(3) (Additive Inverse) For all a in R, there exists b ∈ R such that
a + b = b + a = 0.
b will be denoted −a.
(4) (Commutativity) For all a and b in R,
a + b = b + a.

Secondly multiplication is also associative and there is a multiplicative identity 1.

(5) (Associativity) For all a, b and c in R,
(a · b) · c = a · (b · c).
(6) (Unit) There is an element 1 ≠ 0 in R such that for all a in R,
a · 1 = 1 · a = a.

Finally we require that addition and multiplication are compatible in an obvious sense.

(7) (Distributivity) For all a, b and c in R, we have
a · (b + c) = a · b + a · c,
(b + c) · a = b · a + c · a.

Unfortunately there is no standard definition of a ring. In particular some books do not require the existence of unity, or if they do require it, then they do not necessarily require that it is not equal to zero.

Example 14.2. The complex numbers C form a ring, with the obvious multiplication and addition.


Definition 14.3. Let R be a ring and let S be a subset. We say that S is a subring of R if S becomes a ring with the induced addition and multiplication.

Lemma 14.4. Let R be a ring and let S be a subset that contains 1. Then S is a subring iff S is closed under addition, additive inverses and multiplication.

Proof. The proof is similar to the corresponding result for groups. □

Note that we require S to contain 1. Since we don't necessarily have multiplicative inverses, the fact that S is non-empty does not force S to contain 1.

Example 14.5. The following tower of subsets

Z ⊂ Q ⊂ R ⊂ C

is in fact a tower of subrings. A more interesting example is given by taking all rational numbers of the form a/b, where a and b are integers and b is odd. This set is a subring of the rational numbers. Indeed it contains 1 and it is easy to see that it is closed under addition and multiplication. Finally consider the Gaussian integers, defined as all complex numbers of the form a + bi, where a and b are integers. It is easy to see that the Gaussian integers form a subring of the complex numbers.

Example 14.6. Let Zn denote the left cosets of nZ inside Z, or what comes to the same thing, the integers modulo n. We showed that the laws of addition and multiplication descend from Z down to Zn. With these rules of addition and multiplication, Zn becomes a ring. Indeed [0] plays the role of zero and [1] plays the role of the identity. In fact we proved that Zn is a group under addition and it is not much more work to prove that Zn is in fact a ring. Moreover we will see later that this is an example of a much more general phenomenon.

It is interesting to see what happens in a specific example. Suppose that n = 6. In this case 0 = [0] and 1 = [1]. However note that one curious feature is that

[2][3] = [2 · 3] = [6] = [0],

so that the product of two non-zero elements of a ring might in fact be zero.
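This curious feature of Z6 can be observed by brute force (a minimal sketch):

```python
n = 6
Zn = range(n)

# all pairs of non-zero classes whose product is the zero class
zero_divisors = [(a, b) for a in Zn for b in Zn
                 if a != 0 and b != 0 and (a * b) % n == 0]
print(zero_divisors)  # [(2, 3), (3, 2), (3, 4), (4, 3)]
```

Changing n to a prime produces an empty list, which foreshadows the distinction between Zp and Zn for composite n.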


Definition-Lemma 14.7. Let X be any set and let R be any ring. Then the set R^X of functions from X into R becomes a ring, with addition and multiplication defined pointwise. That is to say, given f and g in R^X, define f + g by the rule

(f + g)(x) = f(x) + g(x) ∈ R,

where x ∈ X and the addition is in R. Similarly define the product f · g of f and g by the rule

(f · g)(x) = f(x) · g(x) ∈ R.

Then the zero function f, defined by the rule f(x) = 0 ∈ R for all x ∈ X, plays the role of zero, and the function g, defined by the rule g(x) = 1 ∈ R, plays the role of 1.

Proof. Again, all of this is easy to check. We check associativity of addition and leave the rest to the reader. Suppose that f, g and h are three functions from X to R. We want to prove

(f + g) + h = f + (g + h).

Since both sides are functions from X to R, it suffices to prove that they have the same effect on any element x ∈ X. Now

((f + g) + h)(x) = (f + g)(x) + h(x)
                 = (f(x) + g(x)) + h(x)
                 = f(x) + (g(x) + h(x))
                 = f(x) + (g + h)(x)
                 = (f + (g + h))(x). □
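The pointwise operations of (14.7) are straightforward to model; the sketch below (representation and names mine) stores a function from a three-element set X into Z5 as a dict of its values:

```python
X = [0, 1, 2]          # a three-element set
n = 5                  # take R = Z5

def add(f, g):
    # pointwise sum of two functions X -> Z5
    return {x: (f[x] + g[x]) % n for x in X}

def mul(f, g):
    # pointwise product
    return {x: (f[x] * g[x]) % n for x in X}

zero = {x: 0 for x in X}   # the zero function
one = {x: 1 for x in X}    # the constant function 1

f = {0: 1, 1: 2, 2: 3}
g = {0: 4, 1: 4, 2: 4}
print(add(f, g))           # {0: 0, 1: 1, 2: 2}
print(mul(f, one) == f)    # True
print(add(f, zero) == f)   # True
```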

Here is a very interesting example of this type.

Example 14.8. Let X = [0, 1] and let R be the real numbers. Then we are looking at the collection of all functions from X into the reals. In this case there are lots of interesting subrings. For example consider C[0, 1], the set of all continuous functions from [0, 1] into R. Since the sum and product of two continuous functions are continuous, it follows that this is a subring of the ring of all functions. Similarly we could look at the space of all differentiable (or twice, thrice, up to infinitely differentiable) functions.


Definition-Lemma 14.9. Let R be a ring and let n be a positive integer. Mn(R) denotes the set of all n × n matrices with entries in R. Given two such matrices A = (aij) and B = (bij), we define A + B as (aij + bij). The product of A and B is also defined in the usual way: the ij entry of AB is the dot product of the ith row of A and the jth column of B. With this rule of addition and multiplication Mn(R) becomes a ring, with zero given as the zero matrix (every entry equal to zero) and 1 given as the matrix with ones on the main diagonal and zeroes everywhere else.

Proof. Most of this has already been proved, and that which has not is left as an exercise for the reader. □

Note that if n = 1, then M1(R) is simply a copy of R. To fix ideas, let us consider an easy example.

Example 14.10. Let R = Z6 be the ring of integers modulo six and take n = 2. Take

A = ( 3 1 )        B = ( 1 5 )
    ( 2 4 )            ( 1 2 ).

Then

AB = ( 4 5 )
     ( 0 0 ).
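The product in Example 14.10 can be verified directly (a quick sketch; no libraries needed, and all arithmetic is done modulo 6):

```python
n = 6
A = [[3, 1], [2, 4]]
B = [[1, 5], [1, 2]]

def matmul(A, B, n):
    # 2x2 matrix product with entries reduced mod n
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % n
             for j in range(2)] for i in range(2)]

print(matmul(A, B, n))  # [[4, 5], [0, 0]]
```

For instance the (2, 2) entry is 2 · 5 + 4 · 2 = 18 ≡ 0 (mod 6), so a product of two non-zero matrices can again be "partly zero" in Z6.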

Definition-Lemma 14.11. Let R be a ring and let x be an indeterminate. The polynomial ring R[x] is defined to be the set of all formal sums

a_n x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0 = Σ a_i x^i,

where each a_i ∈ R. Given two polynomials

f = a_n x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0 = Σ a_i x^i,
g = b_m x^m + b_{m−1} x^{m−1} + · · · + b_1 x + b_0 = Σ b_i x^i

in R[x], the sum f + g is defined as

f + g = (a_n + b_n) x^n + (a_{n−1} + b_{n−1}) x^{n−1} + · · · + (a_0 + b_0) = Σ (a_i + b_i) x^i

(where we have implicitly assumed that m ≤ n, and we set b_i = 0 for i > m), and the product as

f g = c_{m+n} x^{m+n} + c_{m+n−1} x^{m+n−1} + · · · + c_1 x + c_0 = Σ_i c_i x^i, where c_i = Σ_j a_j b_{i−j}.

With this rule of addition and multiplication, R[x] becomes a ring, with zero given as the polynomial with zero coefficients and 1 given as the

MIT OCW: 18.703 Modern Algebra

Prof. James McKernan

polynomial whose constant coeﬃcient is one and whose other terms are zero. Proof. A long and completely uninformative check.

D

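The sum and convolution product of Definition-Lemma 14.11 can be sketched concretely, representing a polynomial by its list of coefficients [a_0, a_1, ..., a_n]. This is an illustrative sketch with R = Z; the function names are ours.

```python
# Sketch: polynomial arithmetic in R[x] via coefficient lists.

def poly_add(f, g):
    """(f + g)_i = a_i + b_i, padding the shorter list with zeros."""
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def poly_mul(f, g):
    """c_i = sum over j of a_j * b_{i-j}: the convolution formula."""
    c = [0] * (len(f) + len(g) - 1)
    for j, a in enumerate(f):
        for k, b in enumerate(g):
            c[j + k] += a * b
    return c

# (1 + x)(1 - x) = 1 - x^2
print(poly_mul([1, 1], [1, -1]))  # [1, 0, -1]
```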
Note that a polynomial determines a function R −→ R in an obvious way. If one takes R to be the real numbers, then it is well known that a polynomial is determined by the corresponding function. In general, however, this is far from true. For example take R = Z2 (the smallest ring possible, since a ring must contain at least two elements). Then there are four functions from R to R, but there are infinitely many polynomials. Thus two different polynomials will often determine the same function.

Example 14.12. The final example is a famous and beautiful generalisation of the complex numbers. The complex numbers are obtained by adding a formal number i to the real numbers and decreeing that i^2 = −1. The quaternions are obtained from the real numbers by adding three new numbers, i, j and k. Thus the set of all quaternions is equal to the set of all formal sums

a + bi + cj + dk,

where a, b, c and d are real numbers. It is obvious how to define addition,

(a + bi + cj + dk) + (a' + b'i + c'j + d'k) = (a + a') + (b + b')i + (c + c')j + (d + d')k.

Multiplication is a little more complicated. The basic idea is to define how to multiply any two of i, j and k and from there extend using the associative and distributive laws. Thus we define

i^2 = j^2 = k^2 = −1,
ij = k,   jk = i,   ki = j,
ji = −k,  kj = −i,  ik = −j.

In this case, we define the multiplication as

(a + bi + cj + dk)(a' + b'i + c'j + d'k) = (aa' − bb' − cc' − dd')
    + (ab' + b'a + cd' − dc')i + (ac' + ca' + db' − bd')j + (ad' + da' + bc' − b'c)k.

Again it is not so hard to check that this gives us a ring. If one looks at the real numbers, then the numbers ±1 form a group under multiplication, isomorphic to Z2. Similarly the complex numbers ±1, ±i form a group under multiplication, isomorphic to Z4. It is in fact not hard to see that the quaternions ±1, ±i, ±j and ±k form a group of order eight under multiplication (if you like, think of the multiplication rule above as giving generators and relations).

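The multiplication rule of Example 14.12 can be checked mechanically by representing a quaternion a + bi + cj + dk as the tuple (a, b, c, d). This sketch is ours, not from the notes; it follows the displayed formula term by term.

```python
# Sketch: quaternion multiplication on 4-tuples (a, b, c, d).

def qmul(p, q):
    """Product of a + bi + cj + dk and e + fi + gj + hk."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,    # real part
            a*f + b*e + c*h - d*g,    # i coefficient
            a*g + c*e + d*f - b*h,    # j coefficient
            a*h + d*e + b*g - c*f)    # k coefficient

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

assert qmul(i, j) == k               # ij = k
assert qmul(j, i) == (0, 0, 0, -1)   # ji = -k: not commutative
assert qmul(i, i) == (-1, 0, 0, 0)   # i^2 = -1
```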

MIT OpenCourseWare http://ocw.mit.edu

18.703 Modern Algebra Spring 2013

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.


15. Basic Properties of Rings

We ﬁrst prove some standard results about rings.

Lemma 15.1. Let R be a ring and let a and b be elements of R. Then (1) a0 = 0a = 0. (2) a(−b) = (−a)b = −(ab). Proof. Let x = a0. We have x = a0 = a(0 + 0) = a0 + a0 = x + x. Adding −x to both sides, we get x = 0, which is (1). Let y = a(−b). We want to show that y is the additive inverse of ab, that is we want to show that y + ab = 0. We have y + ab = a(−b) + ab = a(−b + b) = a0 = 0, by (1). Hence (2).

□

Lemma 15.2. Let R be a set that satisfies all the axioms of a ring, except possibly a + b = b + a. Then R is a ring.

Proof. It suffices to prove that addition is commutative. We compute (a + b)(1 + 1) in two different ways. Distributing on the right,

(a + b)(1 + 1) = (a + b)1 + (a + b)1
              = a + b + a + b
              = a + (b + a) + b.

On the other hand, distributing this product on the left we get

(a + b)(1 + 1) = a(1 + 1) + b(1 + 1)
              = a + a + b + b.

Thus

a + (b + a) + b = (a + b)(1 + 1) = a + a + b + b.


Cancelling an a on the left and a b on the right, we get b + a = a + b, which is what we want.

□

Note the following identity.

Lemma 15.3. Let R be a ring and let a and b be any two elements of R. Then

(a + b)^2 = a^2 + ab + ba + b^2.

Proof. Easy application of the distributive laws.

□

Definition 15.4. Let R be a ring. We say that R is commutative if multiplication is commutative, that is, a · b = b · a.

Note that most of the rings introduced in the first section are not commutative. Nevertheless it turns out that there are many interesting commutative rings. Compare this with the study of groups, where abelian groups are not considered very interesting.

Definition-Lemma 15.5. Let R be a ring. We say that R is boolean if for every a ∈ R, a^2 = a. Every boolean ring is commutative.

Proof. We compute (a + b)^2. We have

a + b = (a + b)^2 = a^2 + ba + ab + b^2 = a + ba + ab + b.

Cancelling, we get ab = −ba. If we take b = 1, then a = −a for every a, so that −(ba) = (−b)a = ba. Thus ab = ba. □

Definition 15.6. Let R be a ring. We say that R is a division ring if R − {0} is a group under multiplication. If in addition R is commutative, we say that R is a field.

Note that a ring is a division ring iff every non-zero element has a multiplicative inverse. Similarly for commutative rings and fields.

Example 15.7. The following tower of subsets

Q ⊂ R ⊂ C

is in fact a tower of subfields. Note that Z is not a field however, as 2 does not have a multiplicative inverse. Further the subring of Q given

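Definition-Lemma 15.5 is easy to test on a small boolean ring. A standard example, used here purely as an illustration (not from the notes), is Z2 × Z2 with componentwise operations.

```python
# Sketch: Z2 x Z2 as a boolean ring, checked exhaustively.
from itertools import product

elements = list(product([0, 1], repeat=2))

def add(x, y):
    return tuple((a + b) % 2 for a, b in zip(x, y))

def mul(x, y):
    return tuple((a * b) % 2 for a, b in zip(x, y))

# Every element is idempotent: a^2 = a ...
assert all(mul(a, a) == a for a in elements)
# ... and, exactly as the lemma predicts, multiplication is commutative.
assert all(mul(a, b) == mul(b, a) for a in elements for b in elements)
```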

by those rational numbers with odd denominator is not a field either: again 2 does not have a multiplicative inverse.

Lemma 15.8. The quaternions are a division ring.

Proof. It suffices to prove that every non-zero quaternion has a multiplicative inverse. Let q = a + bi + cj + dk be a quaternion. Let

q̄ = a − bi − cj − dk,

the conjugate of q. Note that

qq̄ = a^2 + b^2 + c^2 + d^2.

As a, b, c and d are real numbers, this product is non-zero iff q is non-zero. Thus

p = q̄ / (a^2 + b^2 + c^2 + d^2)

is the multiplicative inverse of q. □

Here is an obvious necessary condition for division rings:

Definition-Lemma 15.9. Let R be a ring. We say that a ∈ R, a ≠ 0, is a zero-divisor if there is an element b ∈ R, b ≠ 0, such that either

ab = 0    or    ba = 0.

If a has a multiplicative inverse in R then a is not a zero-divisor.

Proof. Suppose that ba = 0 and that c is the multiplicative inverse of a. We compute bac in two different ways. On the one hand,

bac = (ba)c = 0c = 0.

On the other hand,

bac = b(ac) = b1 = b.

Thus b = bac = 0. The case ab = 0 is handled similarly. Thus a is not a zero-divisor.

□

Definition-Lemma 15.10. Let R be a ring. We say that R is a domain if R has no zero-divisors. If in addition R is commutative, then we say that R is an integral domain.

Every division ring is a domain; this is immediate from (15.9). Unfortunately the converse is not true.


Example 15.11. Z is an integral domain but not a field. In fact any subring of a division ring is clearly a domain.

Many of the examples of rings that we have given are in fact not domains.

Example 15.12. Let X be a set with more than one element and let R be any ring. Then the set of functions from X to R is not a domain. Indeed pick any partition of X into two parts, X1 and X2 (that is, suppose that X1 and X2 are disjoint, both non-empty, and that their union is the whole of X). Define f : X −→ R by

f(x) = 0 if x ∈ X1,   f(x) = 1 if x ∈ X2,

and g : X −→ R by

g(x) = 1 if x ∈ X1,   g(x) = 0 if x ∈ X2.

Then fg = 0, but neither f nor g is zero. Thus f is a zero-divisor.

Now let R be any ring, and suppose that n > 1. I claim that Mn(R) is not a domain. We will do this in the case n = 2. The general case is not much harder, just more involved notationally. Set

    A = B = ( 0 1 )
            ( 0 0 )

Then it is easy to see that

    AB = ( 0 0 )
         ( 0 0 )

Note that the deﬁnition of an integral domain involves a double negative. In other words, R is an integral domain iﬀ whenever ab = 0, where a and b are elements of R, then either a = 0 or b = 0.

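These zero-divisors are easy to exhibit in code. The sketch below (ours, not from the notes) checks the matrix A = B above and also searches Z6, the ring of Example 14.10, for all of its zero-divisors.

```python
# Sketch: concrete zero-divisors, as in Example 15.12.

def mat_mul(A, B):
    """Product of two 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
assert mat_mul(A, A) == [[0, 0], [0, 0]]  # A is a zero-divisor in M2(Z)

# Zero-divisors of Z6: non-zero a with a*b = 0 (mod 6) for some non-zero b.
zd = [a for a in range(1, 6) if any(a * b % 6 == 0 for b in range(1, 6))]
print(zd)  # [2, 3, 4]
```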



16. Ring Homomorphisms and Ideals

Definition 16.1. Let φ : R −→ S be a function between two rings. We say that φ is a ring homomorphism if for every a and b ∈ R,

φ(a + b) = φ(a) + φ(b)
φ(a · b) = φ(a) · φ(b),

and in addition φ(1) = 1.

Note that this gives us a category, the category of rings. The objects are rings and the morphisms are ring homomorphisms. Just as in the case of groups, one can define automorphisms.

Example 16.2. Let φ : C −→ C be the map that sends a complex number to its complex conjugate. Then φ is an automorphism of C. In fact φ is its own inverse.

Let φ : R[x] −→ R[x] be the map that sends f(x) to f(x + 1). Then φ is an automorphism. Indeed the inverse map sends f(x) to f(x − 1).

By analogy with groups, we have

Definition 16.3. Let φ : R −→ S be a ring homomorphism. The kernel of φ, denoted Ker φ, is the inverse image of zero.

As in the case of groups, a very natural question arises. What can we say about the kernel of a ring homomorphism? Since a ring homomorphism is automatically a group homomorphism, it follows that the kernel is a normal subgroup. However, since a ring is an abelian group under addition, all subgroups are automatically normal.

Definition-Lemma 16.4. Let R be a ring and let I be a subset of R. We say that I is an ideal of R, and write I < R, if I is an additive subgroup of R and for every a ∈ I and r ∈ R we have

ra ∈ I    and    ar ∈ I.

Let φ : R −→ S be a ring homomorphism and let I be the kernel of φ. Then I is an ideal of R.

Proof. We have already seen that I is an additive subgroup of R. Suppose that a ∈ I and r ∈ R. Then

φ(ra) = φ(r)φ(a) = φ(r)0 = 0.

Thus ra is in the kernel of φ. Similarly for ar.

□

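A concrete instance of Definition-Lemma 16.4: the reduction map Z −→ Z6 is a ring homomorphism whose kernel is the ideal of multiples of 6. The sketch below (ours) checks the absorbing property ra ∈ I on a finite window of integers.

```python
# Sketch: kernel of the reduction map Z -> Z6 as an ideal.

def phi(a):
    """Reduction modulo 6, a ring homomorphism Z -> Z6."""
    return a % 6

window = range(-20, 21)
kernel = [a for a in window if phi(a) == 0]

# The kernel is exactly the multiples of 6 in the window ...
assert kernel == [a for a in window if a % 6 == 0]
# ... and r*a stays in the kernel for every r and every a in the kernel.
assert all(phi(r * a) == 0 for a in kernel for r in range(-5, 6))
```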


As before, given an additive subgroup H of R, we let R/H denote the group of left cosets of H in R.

Proposition 16.5. Let R be a ring and let I be an ideal of R, such that I ≠ R. Then R/I is a ring. Furthermore there is a natural ring homomorphism

u : R −→ R/I

which sends r to r + I.

Proof. As I is an ideal, and addition in R is commutative, it follows that R/I is a group, with the natural definition of addition inherited from R. Further we have seen that u is a group homomorphism. It remains to define a multiplication in R/I.

Given two left cosets r + I and s + I in R/I, we define a multiplication in the obvious way,

(r + I)(s + I) = rs + I.

In fact this is forced by requiring that u is a ring homomorphism. As before the problem is to check that this is well-defined. Suppose that r' + I = r + I and s' + I = s + I. Then we may find i and j in I such that r' = r + i and s' = s + j. We have

r's' = (r + i)(s + j) = rs + is + rj + ij.

As I is an ideal, is + rj + ij ∈ I. It follows that r's' + I = rs + I and multiplication is well-defined. The rest is easy to check. □

As before, the quotient of a ring by an ideal is a categorical quotient.

Theorem 16.6. Let R be a ring and I an ideal not equal to all of R. Let u : R −→ R/I be the obvious map. Then u is universal amongst all ring homomorphisms whose kernel contains I. That is, suppose φ : R −→ S is any ring homomorphism whose kernel contains I. Then there is a unique ring homomorphism ψ : R/I −→ S which makes the following diagram commute (φ = ψ ∘ u):

           φ
      R ------> S
       \       ^
      u \     / ψ
         v   /
         R/I

Theorem 16.7. (Isomorphism Theorem) Let φ : R −→ S be a homomorphism of rings. Suppose that φ is onto and let I be the kernel of φ.


Then S is isomorphic to R/I.

Example 16.8. Let R = Z. Fix a non-zero integer n and let I consist of all multiples of n. It is easy to see that I is an ideal of Z. The quotient Z/I is Zn, the ring of integers modulo n.

Definition-Lemma 16.9. Let R be a commutative ring and let a ∈ R be an element of R. The set

I = (a) = { ra | r ∈ R }

is an ideal, and any ideal of this form is called principal.

Proof. We first show that I is an additive subgroup. Suppose that x and y are in I. Then x = ra and y = sa, where r and s are two elements of R. In this case

x + y = ra + sa = (r + s)a.

Thus I is closed under addition. Further

−x = −ra = (−r)a,

so that I is closed under inverses. It follows that I is an additive subgroup.

Now suppose that x ∈ I and that s ∈ R. Then

sx = s(ra) = (sr)a ∈ I.

It follows that I is an ideal.

□

Definition-Lemma 16.10. Let R be a ring. We say that u ∈ R is a unit if u has a multiplicative inverse.

Let I be an ideal of a ring R. If I contains a unit, then I = R.

Proof. Suppose that u ∈ I is a unit of R. Then vu = 1, for some v ∈ R. It follows that 1 = vu ∈ I. Pick a ∈ R. Then a = a · 1 ∈ I. □

Proposition 16.11. Let R be a division ring. Then the only ideals of R are the zero ideal and the whole of R. In particular if φ : R −→ S is any ring homomorphism then φ is injective.

Proof. Let I be an ideal, not equal to {0}. Pick u ∈ I, u ≠ 0. As R is a division ring, it follows that u is a unit. But then I = R.

Now let φ : R −→ S be a ring homomorphism and let I be the kernel. Then I cannot be the whole of R, so that I = {0}. But then φ is injective. □


Example 16.12. Let X be a set and let R be a ring. Let F denote the set of functions from X to R. We have already seen that F forms a ring under pointwise addition and multiplication. Let Y be a subset of X and let I be the set of those functions from X to R whose restriction to Y is zero. Then I is an ideal of F. Indeed I is clearly non-empty, as the zero function is an element of I. Given two functions f and g in F whose restriction to Y is zero, clearly the restriction of f + g to Y is zero. Finally, suppose that f ∈ I, so that f is zero on Y, and suppose that g is any function from X to R. Then gf is zero on Y. Thus I is an ideal.

Now consider F/I. I claim that this is isomorphic to the ring G of functions from Y to R. Indeed there is a natural map from F to G which sends a function to its restriction to Y,

f −→ f|Y.

It is clear that the kernel is I. Thus the result follows by the Isomorphism Theorem. As a special case, one can take X = [0, 1] and R = R. Let Y = {1/2}. Then the space of maps from Y to R is just a copy of R.

Example 16.13. Let R be the ring of Gaussian integers, that is, those complex numbers of the form a + bi, where a and b are integers. Let I be the subset of R consisting of those numbers such that 2|a and 2|b. I claim that I is an ideal of R. In fact suppose that a + bi ∈ I and c + di ∈ I. Then

(a + bi) + (c + di) = (a + c) + (b + d)i.

As a and c are even, so is a + c, and similarly as b and d are even, so is b + d. Thus I is closed under addition. Similarly I is closed under inverses. Now suppose that a + bi ∈ I and r = c + di is a Gaussian integer. Then

(c + di)(a + bi) = (ac − bd) + (ad + bc)i.

As a and b are even, so are ac − bd and ad + bc, and so I is an ideal.

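Example 16.13 can be spot-checked using Python's built-in complex numbers with integer parts as Gaussian integers. The helper name in_I is ours; this is an illustration, not a proof.

```python
# Sketch: the ideal I of Gaussian integers a + bi with a and b even.

def in_I(z):
    """True iff both the real and imaginary parts of z are even."""
    return z.real % 2 == 0 and z.imag % 2 == 0

x, y = complex(2, 4), complex(-6, 2)   # both in I
r = complex(3, 5)                       # an arbitrary Gaussian integer

assert in_I(x + y)   # closed under addition
assert in_I(-x)      # closed under additive inverses
assert in_I(r * x)   # absorbs multiplication by any Gaussian integer
```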



17. Field of fractions

The rational numbers Q are constructed from the integers Z by adding inverses. In fact a rational number is of the form a/b, where a and b are integers. Note that a rational number does not have a unique representative in this way. In fact

a/b = ka/kb.

So really a rational number is an equivalence class of pairs [a, b], where two such pairs [a, b] and [c, d] are equivalent iff ad = bc. Now given an arbitrary integral domain R, we can perform the same operation.

Definition-Lemma 17.1. Let R be any integral domain. Let N be the subset of R × R of pairs whose second coordinate is non-zero. Define a relation ∼ on N as follows:

(a, b) ∼ (c, d)    iff    ad = bc.

Then ∼ is an equivalence relation.

Proof. We have to check three things: reflexivity, symmetry and transitivity.

Suppose that (a, b) ∈ N. Then ab = ab, so that (a, b) ∼ (a, b). Hence ∼ is reflexive.

Now suppose that (a, b), (c, d) ∈ N and that (a, b) ∼ (c, d). Then ad = bc. But then cb = da, as R is commutative, and so (c, d) ∼ (a, b). Hence ∼ is symmetric.

Finally suppose that (a, b), (c, d) and (e, f) ∈ N and that (a, b) ∼ (c, d), (c, d) ∼ (e, f). Then ad = bc and cf = de. Then

(af)d = (ad)f = (bc)f = b(cf) = b(de) = (be)d.

As (c, d) ∈ N, we have d ≠ 0. Cancelling d, we get af = be. Thus (a, b) ∼ (e, f). Hence ∼ is transitive. □

Definition-Lemma 17.2. The field of fractions of R, denoted F, is the set of equivalence classes under the equivalence relation defined above. Given two elements [a, b] and [c, d], define

[a, b] + [c, d] = [ad + bc, bd]    and    [a, b] · [c, d] = [ac, bd].



With these rules of addition and multiplication F becomes a field. Moreover there is a natural injective ring homomorphism

φ : R −→ F,

so that we may identify R with a subring of F. In fact φ is universal amongst all injective ring homomorphisms whose targets are fields.

Proof. First we have to check that these rules of addition and multiplication are well-defined. Suppose that [a, b] = [a', b'] and [c, d] = [c', d']. By commutativity and an obvious induction (involving at most two steps, the only real advantage of which is to simplify the notation), we may assume c = c' and d = d'. As [a, b] = [a', b'] we have ab' = a'b. Thus

(a'd + b'c)(bd) = a'bd^2 + bb'cd = ab'd^2 + bb'cd = (ad + bc)(b'd).

Thus [a'd + b'c, b'd] = [ad + bc, bd] and the given rule of addition is well-defined. It can be shown similarly (and in fact more easily) that the given rule for multiplication is also well-defined.

We leave it as an exercise for the reader to check that F is a ring under addition and that multiplication is associative. For example, note that [0, 1] plays the role of 0 and [1, 1] plays the role of 1. Given an element [a, b] in F, where a ≠ 0, it is easy to see that [b, a] is the inverse of [a, b]. It follows that F is a field.

Define a map

φ : R −→ F

by the rule φ(a) = [a, 1]. Again it is easy to check that φ is indeed an injective ring homomorphism and that it satisfies the given universal property. □

Example 17.3. If we take R = Z, then of course the field of fractions is isomorphic to Q. If R is the ring of Gaussian integers, then F is a copy of the set of numbers a + bi, where now a and b are elements of Q. If R = K[x], where K is a field, then the field of fractions is denoted K(x). It consists of all rational functions, that is, all quotients

f(x) / g(x),

where f and g are polynomials with coefficients in K.

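The construction of Definition-Lemma 17.2 can be sketched for R = Z, representing the class [a, b] by the pair (a, b) and comparing classes with the cross-multiplication test ad = bc. The helper names are ours.

```python
# Sketch: field-of-fractions arithmetic on pairs (a, b), b != 0.

def eq(p, q):
    """(a, b) ~ (c, d) iff a*d == b*c."""
    return p[0] * q[1] == p[1] * q[0]

def add(p, q):
    """[a, b] + [c, d] = [ad + bc, bd]."""
    return (p[0] * q[1] + p[1] * q[0], p[1] * q[1])

def mul(p, q):
    """[a, b] * [c, d] = [ac, bd]."""
    return (p[0] * q[0], p[1] * q[1])

half, third = (1, 2), (1, 3)
assert eq(add(half, third), (5, 6))   # 1/2 + 1/3 = 5/6
assert eq(mul(half, third), (1, 6))   # 1/2 * 1/3 = 1/6
assert eq((2, 4), (1, 2))             # representatives are not unique
```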



18. Prime and Maximal Ideals

Let R be a ring and let I be an ideal of R, where I ≠ R. Consider the quotient ring R/I. Two very natural questions arise:
(1) When is R/I a domain?
(2) When is R/I a field?

Definition-Lemma 18.1. Let R be a ring and let I be an ideal of R. We say that I is prime if whenever ab ∈ I, then either a ∈ I or b ∈ I. Then R/I is a domain if and only if I is prime.

Proof. Suppose that I is prime. Let x and y be two elements of R/I. Then there are elements a and b of R such that x = a + I and y = b + I. Suppose that xy = 0 but that x ≠ 0, that is, suppose that a ∉ I. We have

xy = (a + I)(b + I) = ab + I = 0.

But then ab ∈ I and, as I is prime and a ∉ I, b ∈ I. But then y = b + I = I = 0. Thus R/I is a domain.

Now suppose that R/I is a domain. Let a and b be two elements of R such that ab ∈ I and suppose that a ∉ I. Let x = a + I, y = b + I. Then

xy = ab + I = 0.

As x ≠ 0 and R/I is a domain, y = 0. But then b ∈ I and so I is prime. □

Example 18.2. Let R = Z. Then every ideal in R has the form (n) = nZ. It is not hard to see that (n) is prime iff n is prime.

Definition 18.3. Let R be an integral domain and let a be a non-zero element of R. We say that a is prime if (a) is a prime ideal, not equal to the whole of R. Note that the condition that (a) is not the whole of R is equivalent to requiring that a is not a unit.

Definition 18.4. Let R be a ring. Then there is a unique ring homomorphism φ : Z −→ R. We say that the characteristic of R is n if the order of the image of φ is finite, equal to n; otherwise the characteristic is 0.

Let R be a domain of finite characteristic. Then the characteristic is prime.

Proof. Let φ : Z −→ R be a ring homomorphism. Then φ(1) = 1. Note that Z is a cyclic group under addition. Thus there is a unique map that


sends 1 to 1 and is a group homomorphism. Thus φ is certainly unique, and it is not hard to check that in fact φ is a ring homomorphism.

Now suppose that R is an integral domain. Then the image of φ is an integral domain. In particular the kernel I of φ is a prime ideal. Suppose that I = (p). Then the image of φ is isomorphic to Z/I = Zp, and so the characteristic is equal to the prime p. □

Another, obviously equivalent, way to define the characteristic n is as the minimum positive integer such that n1 = 0.

Example 18.5. The characteristic of Q is zero. Indeed the natural map Z −→ Q is an inclusion. Thus every field that contains Q has characteristic zero. On the other hand Zp is a field of characteristic p.

Definition 18.6. Let I be an ideal. We say that I is maximal if for every ideal J such that I ⊂ J, either J = I or J = R.

Proposition 18.7. Let R be a commutative ring. Then R is a field iff the only ideals are {0} and R.

Proof. We have already seen that if R is a field, then R contains no non-trivial ideals. Now suppose that R contains no non-trivial ideals and let a ∈ R. Suppose that a ≠ 0 and let I = (a). Then I ≠ {0}. Thus I = R. But then 1 ∈ I and so 1 = ba, for some b ∈ R. Thus a is a unit and, as a was arbitrary, R is a field. □

Theorem 18.8. Let R be a commutative ring. Then R/M is a field iff M is a maximal ideal.

Proof. Note that there is an obvious correspondence between the ideals of R/M and the ideals of R that contain M. The result therefore follows immediately from (18.7). □

Corollary 18.9. Let R be a commutative ring. Then every maximal ideal is prime.

Proof. Clear, as every field is an integral domain.

□

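The characteristic of Definition 18.4 can be computed directly for the rings Zn, as the least k > 0 with k · 1 = 0. The sketch below (ours, not from the notes) also illustrates the dichotomy in the lemma: Z7 is a domain and its characteristic is prime, while Z6, which is not a domain, has composite characteristic.

```python
# Sketch: characteristic of Zn as the additive order of 1.

def characteristic_mod_n(n):
    """Least k > 0 with k * 1 = 0 in Zn."""
    k, total = 1, 1 % n
    while total != 0:
        k += 1
        total = (total + 1) % n
    return k

assert characteristic_mod_n(7) == 7   # Z7 is a field: prime characteristic
assert characteristic_mod_n(6) == 6   # Z6 is not a domain: characteristic 6
```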
Example 18.10. Let R = Z and let p be a prime. Then I = (p) is not only prime, but in fact maximal. Indeed the quotient is Zp.

Example 18.11. Let X be a set, let R be a commutative ring, and let F be the set of all functions from X to R. Let x ∈ X be a point of X and let I be the ideal of all functions vanishing at x. Then F/I is isomorphic to R. Thus I is prime iff R is an integral domain and I is maximal iff R is a field. For example, take X = [0, 1] and R = R. In this case it


turns out that every maximal ideal is of the same form (that is, the set of functions vanishing at a point).

Example 18.12. Let R be the ring of Gaussian integers and let I be the ideal of all Gaussian integers a + bi where both a and b are divisible by 3. I claim that I is maximal. I will give two ways to prove this.

Method I: Suppose that I ⊂ J ⊂ R is an ideal, not equal to I. Then there is an element a + bi ∈ J, where 3 does not divide one of a or b. It follows that 3 does not divide a^2 + b^2. But

c = a^2 + b^2 = (a + bi)(a − bi) ∈ J,

as a + bi ∈ J and J is an ideal. As 3 does not divide c, we may find integers r and s such that 3r + cs = 1. As c ∈ J, cs ∈ J, and as 3 ∈ I ⊂ J, 3r ∈ J as well. But then 1 ∈ J and J = R.

Method II: Suppose that (a + bi)(c + di) ∈ I. Then

3|(ac − bd)    and    3|(ad + bc).

Suppose that a + bi ∉ I. Adding the two results above we have

3|((a + b)c + (a − b)d).

Now either 3 divides a and does not divide b, or vice-versa, or the same is true with a + b replacing a and a − b replacing b, as can be seen by an easy case-by-case analysis. Suppose that 3 divides a whilst 3 does not divide b. Then 3|bd and so 3|d, as 3 is prime. Similarly 3|c. It follows that c + di ∈ I. Similar analyses pertain in the other cases. Thus I is prime, so that the quotient R/I is an integral domain. As the quotient is finite (easy check), it follows that the quotient is a field, so that I is maximal. It turns out that R/I is a field with nine elements.

Now suppose that we replace 3 by 5 and look at the resulting ideal J. I claim that J is not maximal. Indeed consider x = 2 + i and y = 2 − i. Then

xy = (2 + i)(2 − i) = 4 + 1 = 5,

so that xy ∈ J, whilst neither x nor y is in J, so that J is not even prime.

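The computation at the end of Example 18.12 can be checked directly with Python's complex numbers: with x = 2 + i and y = 2 − i, the product xy = 5 lies in J although neither factor does. This check is ours, not from the notes.

```python
# Sketch: xy = (2 + i)(2 - i) = 5 lies in J = {a + bi : 5|a and 5|b},
# although neither factor does, so J is not prime.

def in_J(z):
    """True iff both parts of z are divisible by 5."""
    return z.real % 5 == 0 and z.imag % 5 == 0

x, y = complex(2, 1), complex(2, -1)
p = x * y

assert p == 5          # xy = 5
assert in_J(p)         # so xy is in J ...
assert not in_J(x)     # ... but x is not,
assert not in_J(y)     # and neither is y.
```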



19. Special Domains

Let R be an integral domain. Recall that a non-zero element p of R is said to be prime if the corresponding principal ideal (p) is prime and p is not a unit.

Definition 19.1. Let a and b be two elements of an integral domain. We say that a divides b, and write a|b, if there is an element q such that b = qa. We say that a and b are associates if a divides b and b divides a.

Example 19.2. Let R = Z. Then 2|6. Indeed 6 = 3 · 2. Moreover 3 and −3 are associates.

Let R be an integral domain. Note some obvious facts. Every element a of R divides 0. Indeed 0 = 0 · a. On the other hand, 0 only divides 0. Indeed if a = q · 0, then a = 0 (obvious!). Finally every unit u divides any other element a. Indeed if v ∈ R is the inverse of u, so that uv = 1, then

a = a · 1 = (av)u.

Lemma 19.3. Let R be an integral domain and let p ∈ R. Then p is prime if and only if p is not a unit and whenever p divides ab, then either p divides a or p divides b, where a and b are elements of R.

Proof. Suppose that p is prime and p divides ab. Let I = (p). Then ab ∈ I. As p is prime, I is prime by definition. Thus either a ∈ I or b ∈ I. But then either p|a or p|b. Thus if p is prime and p|ab, then either p|a or p|b. The reverse implication is just as easy. □

Lemma 19.4. Let R be an integral domain and let a and b be two non-zero elements of R. Then the following are equivalent (TFAE):
(1) a and b are associates.
(2) a = ub for some unit u.
(3) (a) = (b).

Proof. Suppose that a and b are associates. As a divides b, b = qa, and as b divides a, a = rb, for some q and r in R. Thus

b = qa = (qr)b.

As R is an integral domain, and b ≠ 0, we can cancel b to get qr = 1. But then r is a unit and a = rb. Thus (1) implies (2).

Suppose that a = qb and that c ∈ (a). Then

c = ra = (rq)b.

Thus c ∈ (b) and (a) ⊂ (b). Now suppose that a = ub, where u is a unit. Let


v be the inverse of u, so that b = va. By what we have already proved, (b) ⊂ (a). Thus (2) implies (3).

Finally suppose that (a) = (b). As a ∈ (a), it follows that a ∈ (b), so that a = rb, for some r ∈ R. Thus b divides a. By symmetry a divides b, and so a and b are associates. Thus (3) implies (1). □

Definition 19.5. Let R be an integral domain. We say that R is a unique factorisation domain (abbreviated to UFD) if every non-zero element a of R, which is not a unit, has a factorisation into a product of primes,

a = p1 p2 p3 · · · pk,

which is unique up to order and associates. The last statement is equivalent to saying that if we can find two factorisations of a,

p1 p2 p3 · · · pk = q1 q2 q3 · · · ql,

where the pi and qj are prime, then k = l and, up to re-ordering of q1, q2, ..., ql, pi and qi are associates.

Example 19.6. Of course, by the Fundamental Theorem of Arithmetic, Z is a UFD. In this case the prime elements of Z are the ordinary primes and their negatives. For example, suppose we look at the prime factorisation of 120. One possibility, the standard one, is

2^3 · 3 · 5.

However another possibility is

(−5) · 3 · (−2)^3.

The point is that in an arbitrary ring there is no standard choice of associate. On the other hand, every non-zero integer has two associates, and it is customary to favour the positive one.

Consider the problem of starting with a ring R and proving that R is a UFD. Obviously this consists of two steps. The first is to start with an element a of R and express it as a product of primes. We call this existence. The next step is to prove that this factorisation is unique. We call this uniqueness.

Let us consider the first step, that is, existence of a factorisation. How do we write any integer as a product of primes? Well, there is an obvious way to proceed. Try to factorise the integer. If you can, then work with both factors, and if you cannot, then you already have a prime. Unfortunately this approach hides one nasty subtlety.

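Returning to Example 19.6, the two factorisations of 120 can be compared in code: they give the same integer, and the factors match up to order and sign, that is, up to associates. This check is an illustration of ours, not from the notes.

```python
# Sketch: two factorisations of 120, equal up to order and associates.
import math

standard = [2, 2, 2, 3, 5]          # 2^3 * 3 * 5
variant = [-5, 3, -2, -2, -2]       # (-5) * 3 * (-2)^3

assert math.prod(standard) == math.prod(variant) == 120

# Up to associates: the multisets of absolute values agree.
assert sorted(map(abs, standard)) == sorted(map(abs, variant))
```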

Definition 19.7. Let R be a ring and let a ∈ R be an element of R. We say that a is irreducible if whenever a = bc, then either b or c is a unit.

Equivalently, a is irreducible if and only if whenever b divides a, then b is either a unit or an associate of a. Clearly every prime element of an integral domain R is automatically irreducible. The subtlety that arises is that in an arbitrary integral domain there are irreducible elements that are not prime. On the other hand, unless the ring is very pathological indeed, it is quite easy to prove that every non-zero element of a ring is a product of irreducibles, in fact using the method outlined above. The only issue is whether the natural process outlined above terminates in a finite number of steps. Before we go into this deeper, we need a basic definition concerning partially ordered sets.

Definition 19.8. Let X be a set. A partial order on X is a reflexive and transitive relation on X × X. It is customary to denote a partial order ≤. The fact that ≤ is reflexive is equivalent to x ≤ x, and the fact that ≤ is transitive is equivalent to

a ≤ b and b ≤ c implies a ≤ c.

We also require that if x ≤ y and y ≤ x then x = y.

We say that X satisfies the ascending chain condition (ACC) if every infinite increasing chain

x1 ≤ x2 ≤ x3 ≤ · · · ≤ xn ≤ · · ·

eventually stabilises, that is, there is an n0 such that xn = xm for every n and m at least n0.

Note that, in the definition of a partial order, we do not require that every two elements of X are comparable. In fact if every pair of elements is comparable, that is, for every x and y ∈ X, either x ≤ y or y ≤ x, then we say that our partial order is a total order. There is a similar notion for descending chains, known as the descending chain condition, or DCC for short.

Example 19.9. Every finite set with a partial order satisfies the ACC and the DCC, for obvious reasons. Let X be a subset of the real numbers with the obvious relation. Then X is a partially ordered set. The set

X = { 1/n | n ∈ N } = { 1, 1/2, 1/3, 1/4, ... }

satisfies the ACC, but it clearly does not satisfy the DCC.


Let Y be a set and let X be a subset of the power set of Y, so that X is a collection of subsets of Y. Define a relation ≤ by the rule

A ≤ B    if and only if    A ⊂ B.

In the case that X is the whole power set of Y, note that ≤ is not a total order, provided that Y has at least two elements a and b, since in this case A = {a} and B = {b} are incomparable.

Factorisation Algorithm: Let R be an integral domain and let a be a non-zero element of R that is not a unit. Consider the following algorithm, which produces a, possibly infinite, pair of sequences of elements a1, a2, ... and b1, b2, ... of R, where ai = ai+1 bi+1 and neither ai nor bi is a unit. Suppose that we have already produced a1, a2, ..., ak and b1, b2, ..., bk.

(1) If ak and bk are both irreducible, then stop.
(2) Otherwise, possibly switching ak and bk, we may assume that ak is not irreducible. Thus we may write ak = ak+1 bk+1, where neither ak+1 nor bk+1 is a unit.

Proposition 19.10. Let R be an integral domain. Then the following are equivalent (TFAE):

(1) The factorisation algorithm above terminates, starting with any non-zero element a of the ring R and pursuing all possible ways of factorising a. In particular, every non-zero element a of R is either a unit or a product of irreducibles.
(2) The set of principal ideals satisfies the ACC. That is, every increasing chain

(a1) ⊂ (a2) ⊂ (a3) ⊂ · · · ⊂ (an) ⊂ · · ·

eventually stabilises.

Proof. Suppose we have a strictly increasing sequence of principal ideals as in (2). We will find an a such that the factorisation algorithm does not terminate. Note that a principal ideal (a) = R if and only if a is a unit. As the sequence of ideals in (2) is increasing, no ideal can be the whole of R. Thus none of the ai are units. As ai ∈ (ai+1), we may find bi+1 such that ai = bi+1 ai+1. But bi+1 cannot be a unit, as (ai) ≠ (ai+1). Thus the factorisation algorithm, with a = a1, does not terminate. Thus (1) implies (2). The reverse implication follows similarly. □

Lemma 19.11. Let R be a ring and let

I1 ⊂ I2 ⊂ I3 ⊂ · · · ⊂ In ⊂ · · ·

MIT OCW: 18.703 Modern Algebra

Prof. James McKernan

be an ascending sequence of ideals. Then the union I of these ideals is an ideal.

Proof. We have to show that I is non-empty and closed under addition and multiplication by any element of R. I is clearly non-empty; for example it contains I1, which is non-empty. Suppose that a and b belong to I. Then there are two natural numbers m and n such that a ∈ Im and b ∈ In. Let k be the maximum of m and n. Then a and b are elements of Ik, as Im and In are subsets of Ik. It follows that a + b ∈ Ik, as Ik is an ideal, and so a + b ∈ I. Similarly −a ∈ I. Finally suppose that a ∈ I and r ∈ R. Then a ∈ In, for some n. In this case ra ∈ In ⊂ I. Thus I is an ideal. □

Definition 19.12. Let R be an integral domain. We say that R is a principal ideal domain, abbreviated to PID, if every ideal I in R is principal.

Lemma 19.13. Let R be a principal ideal domain. Then every ascending chain of ideals stabilises. In particular every non-zero element a of R, which is not a unit, has a factorisation
a = p1 p2 p3 · · · pk
into irreducible elements of R.

Proof. Suppose we have an ascending chain of ideals as in (2) of (19.10). Let I be the union of these ideals. By (19.11) I is an ideal of R. As R is assumed to be a PID, I is principal, so that I = (b), for some b ∈ R. Thus b ∈ (an), for some n. In this case b = q an, for some q. But then (b) ⊂ (an). As we have an increasing sequence of ideals, it follows that in fact (am) = (b), for all m ≥ n, that is, the sequence of ideals stabilises. Now apply (19.10). □

Thus we have finished the first step of our program. Given an integral domain R, we have found sufficient conditions for the factorisation of any element a, that is neither zero nor a unit, into irreducible elements. Now we turn to the other problem, the question of uniqueness.

Lemma 19.14. Let R be an integral domain and suppose that p divides q, where both p and q are primes. Then p and q are associates.

Proof. By assumption q = ap, for some a ∈ R. As q is prime, either q divides a or q divides p. If q divides p then p and q are associates.


Otherwise q divides a. In this case a = qb, and so q = ap = (qb)p = (pb)q. Cancelling q, we see that pb = 1, so that p is a unit, absurd. □
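Before turning to uniqueness, it may help to see the factorisation algorithm run in the most familiar integral domain, Z. A minimal Python sketch (the function name is mine, and trial division stands in for the choice of a splitting ak = ak+1 bk+1):

```python
def factor_into_irreducibles(n):
    """Run the factorisation algorithm in Z for n > 1: whenever n splits
    as a product of two non-units, recurse on both factors; otherwise n
    is irreducible (prime) and we stop."""
    a = 2
    while a * a <= n:
        if n % a == 0:  # a non-trivial splitting n = a * (n // a)
            return factor_into_irreducibles(a) + factor_into_irreducibles(n // a)
        a += 1
    return [n]  # no splitting exists: n is irreducible

print(factor_into_irreducibles(360))  # → [2, 2, 2, 3, 3, 5]
```

In Z the recursion always terminates because the absolute value of each factor strictly decreases, which is exactly the ACC for principal ideals in (19.10).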

Lemma 19.15. Let R be an integral domain and let a and b be two non-zero elements of R, neither of which is a unit. Suppose that
a = p1 p2 · · · pk and b = q1 q2 · · · ql
are factorisations of a and b into primes. Then a divides b if and only if k ≤ l and, after re-ordering the qj, pi and qi are associates, for i ≤ k. In particular there is at most one prime factorisation of every non-zero element a of R, up to associates and re-ordering.

Proof. We prove the first statement. One direction is clear. Otherwise suppose a divides b. As p1 divides a and a divides b, p1 divides b. As p1 is prime and it divides a product, it must divide one of the factors qi. Possibly re-ordering, we may assume that i = 1. By (19.14) p1 and q1 are associates. Cancelling p1 from both sides and absorbing the resulting unit into q2, we are done by induction on k.

Now suppose that a has two different prime factorisations,
p1 p2 · · · pk and q1 q2 · · · ql.
As a|a, it follows that k ≤ l and that pi and qi are associates. Using a|a again, but now the other way around, we get l ≤ k. Thus we have uniqueness of prime factorisation. □

Putting all this together, we have

Proposition 19.16. Let R be an integral domain in which every ascending chain of principal ideals stabilises. Then R is a UFD if and only if every irreducible element of R is prime.

Definition 19.17. Let R be an integral domain. Let a and b be two elements of R. We say that d is the greatest common divisor of a and b if
(1) d|a and d|b,
(2) if d'|a and d'|b then d'|d.
Note that the gcd is not unique. In fact if d is a gcd, then d' is also a gcd if and only if d and d' are associates.

Lemma 19.18. Let R be a UFD. Then every pair of elements has a gcd.


Proof. Let a and b be a pair of elements of R. If either a or b is zero, then it is easy to see that the other element is the gcd. If either element is a unit, then the gcd is 1 (or indeed any unit). So we may assume that neither a nor b is zero or a unit. Let a = p1 p2 · · · pk and b = q1 q2 · · · ql be prime factorisations of a and b. Note that we may put both factorisations into a more standard form,
a = u p1^{m1} p2^{m2} p3^{m3} · · · pk^{mk} and b = v p1^{n1} p2^{n2} p3^{n3} · · · pk^{nk},
where u and v are units, and pi and pj are associates if and only if i = j. In this case it is clear, using (19.15), that the gcd is
d = p1^{l1} p2^{l2} p3^{l3} · · · pk^{lk},
where li is the minimum of mi and ni. □

Lemma 19.19. Let R be a ring, let Ii be a collection of ideals in R and let I be their intersection. Then I is an ideal.

Proof. Easy exercise left to the reader. □
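The min-of-exponents recipe of Lemma 19.18 is directly computable when the prime factorisations are known. A Python sketch, assuming each factorisation is given as a list of primes (function name mine):

```python
from collections import Counter

def gcd_from_factorisations(a_primes, b_primes):
    """gcd of two elements from their prime factorisations: take each
    common prime p to the power min(m_p, n_p), as in Lemma 19.18."""
    ca, cb = Counter(a_primes), Counter(b_primes)
    d = 1
    for p in ca.keys() & cb.keys():
        d *= p ** min(ca[p], cb[p])
    return d

# 60 = 2^2 * 3 * 5 and 126 = 2 * 3^2 * 7 have gcd 2 * 3 = 6
print(gcd_from_factorisations([2, 2, 3, 5], [2, 3, 3, 7]))  # → 6
```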

Definition-Lemma 19.20. Let R be a ring and let S be a subset of R. The ideal generated by S, denoted (S), is the smallest ideal containing S.

Proof. Let Ii be the collection of all ideals that contain S. Then the intersection I of these ideals is an ideal by (19.19), and this is clearly the smallest ideal that contains S. □

Lemma 19.21. Let R be a ring and let S be a subset of R. Then the ideal generated by S consists of all finite combinations
r1 a1 + r2 a2 + · · · + rk ak,
where r1, r2, ..., rk ∈ R and a1, a2, ..., ak ∈ S.

Proof. It is clear that any ideal that contains S must contain all elements of this form, since any ideal is closed under addition and multiplication by elements of R. On the other hand, it is an easy exercise to check that these combinations do form an ideal. □

Lemma 19.22. Let R be a PID. Then every pair of elements a and b has a gcd d, such that
d = ra + sb,
where r and s ∈ R.

Proof. Consider the ideal I = (a, b) generated by a and b. As R is a PID, I = (d), for some d. As d ∈ I, d = ra + sb, for some r and s in R. As a ∈ I = (d), d divides a. Similarly d divides b. Suppose that d' divides a and d' divides b. Then (a, b) ⊂ (d'). But then d'|d. □


Theorem 19.23. Let R be a PID. Then R is a UFD.

Proof. We have already seen that the set of principal ideals satisfies the ACC. It remains to prove that irreducible implies prime. Let a be an irreducible element of R. Let b and c be any two elements of R and suppose that a divides the product bc, say bc = ta. Let d be the gcd of a and b. Then d divides a. As a is irreducible, there are only two possibilities; either d is an associate of a or d is a unit.

Suppose that d is an associate of a. As d divides b, then a divides b and we are done. Otherwise d is a unit, which we may take to be 1. In this case, by (19.22), we may find r and s such that
1 = ra + sb.
Multiplying by c, we have
c = rac + sbc = rac + sta = (rc + st)a,
so that a divides c. Thus a is prime and R is a UFD. □


MIT OpenCourseWare http://ocw.mit.edu

18.703 Modern Algebra Spring 2013

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

20. Euclidean Domains

Let R be an integral domain. We want to find natural conditions on R such that R is a PID. Looking at the case of the integers, it is clear that the key property is the division algorithm.

Definition 20.1. Let R be an integral domain. We say that R is Euclidean, if there is a function
d : R − {0} −→ N ∪ {0},
which satisfies, for every pair of non-zero elements a and b of R,
(1) d(a) ≤ d(ab).
(2) There are elements q and r of R such that
b = aq + r,
where either r = 0 or d(r) < d(a).

Example 20.2. The ring Z is a Euclidean domain. The function d is the absolute value.
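In Z this is ordinary division with remainder; a one-line check in Python, where divmod plays the role of the division algorithm:

```python
# Division with remainder in Z: b = a*q + r with r = 0 or |r| < |a| = d(a).
b, a = 23, 5
q, r = divmod(b, a)
assert b == a * q + r and (r == 0 or abs(r) < abs(a))
print(q, r)  # → 4 3
```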

Definition 20.3. Let R be a ring and let f ∈ R[x] be a polynomial with coefficients in R. The degree of f is the largest n such that the coefficient of x^n is non-zero.

Lemma 20.4. Let R be an integral domain and let f and g be two non-zero elements of R[x]. Then the degree of fg is the sum of the degrees of f and g. In particular R[x] is an integral domain.

Proof. Suppose that
f = am x^m + am−1 x^{m−1} + · · · + a1 x + a0 and g = bn x^n + bn−1 x^{n−1} + · · · + b1 x + b0.
Then
fg = (am bn) x^{m+n} + cm+n−1 x^{m+n−1} + · · · + c1 x + c0.
As R is an integral domain, am bn ≠ 0 and so fg has degree n + m. □

Definition-Lemma 20.5. Let k be a field and let R = k[x] be the polynomial ring. Define a function
d : R − {0} −→ N ∪ {0}
by sending f to its degree. Then R is a Euclidean domain.


Proof. The first property of d follows from (20.4). We prove the second property. Suppose that we are given two polynomials f and g. We want to divide f into g. We proceed by induction on the degree of g. If the degree of g is less than the degree of f, there is nothing to prove: take q = 0 and r = g. Suppose the result holds for all degrees less than the degree of g. We may suppose that
g = b x^n + g1 and f = a x^m + f1,
where f1 and g1 are of degree less than m and n respectively. Put q0 = c x^{n−m}, where c = b/a. Let h = g − q0 f. Then h has degree less than the degree of g. By induction then, h = q1 f + r, where r = 0 or r has degree less than the degree of f. It follows that
g = h + q0 f = (q0 + q1) f + r = q f + r,
where q = q0 + q1. □
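The inductive step of this proof (subtract q0 f to kill the leading term of g, then recurse) translates directly into code. A sketch over k = Q, using Python's Fraction for exact field arithmetic; the representation of polynomials as coefficient lists, lowest degree first, is my own choice:

```python
from fractions import Fraction

def poly_divmod(g, f):
    """Divide g by f in Q[x].  Polynomials are lists of coefficients,
    lowest degree first.  Returns (q, r) with g = q*f + r and either
    r = 0 (the empty list) or deg r < deg f."""
    g = [Fraction(c) for c in g]
    q = [Fraction(0)] * max(len(g) - len(f) + 1, 1)
    while len(g) >= len(f) and any(g):
        shift = len(g) - len(f)
        c = g[-1] / Fraction(f[-1])      # c = b/a, as in the proof
        q[shift] = c
        # h = g - q0*f kills the leading term of g
        g = [gi - c * (f[i - shift] if 0 <= i - shift < len(f) else 0)
             for i, gi in enumerate(g)]
        while g and g[-1] == 0:
            g.pop()
    return q, g

# (x^2 - 1) = (x + 1)(x - 1) + 0
q, r = poly_divmod([-1, 0, 1], [-1, 1])
print(q, r)
```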

Definition-Lemma 20.6. Let R = Z[i] be the ring of Gaussian integers. Define a function
d : R − {0} −→ N ∪ {0},
by sending a + bi to its norm, which is by definition a² + b². Then the ring of Gaussian integers is a Euclidean domain.

Proof. Note first that if z is a complex number, then the absolute value of z, defined as the square root of the product of z with its complex conjugate z̄, is closely related to the norm of z. In fact if z is a Gaussian integer x + iy, then
|z|² = z z̄ = x² + y² = d(z).
On the other hand, suppose we use polar coordinates, rather than Cartesian coordinates, to represent a complex number, z = re^{iθ}. Then r = |z|. For any pair z1 and z2 of complex numbers, we have
|z1 z2| = |z1||z2|.
Indeed this is clear if we use polar coordinates. Now suppose that both z1 and z2 are Gaussian integers. If we square both sides of the equation above, we get
d(z1 z2) = d(z1) d(z2).
As the norm of a non-zero Gaussian integer is always at least one, (1) follows easily.

To prove (2), it helps to think about this problem geometrically. First note that one may think of the Gaussian integers as being all points in the plane with integer coordinates. Fix a Gaussian integer α. To obtain all multiples of α = re^{iθ}, that is, the principal ideal (α), it suffices to take this lattice, rotate it through an angle of θ and stretch it by an amount r. With this picture, it is clear that given any other Gaussian integer β, there is a multiple of α, call it qα, such that the square of the distance between β and qα is at most r²/2. Indeed let γ = β/α. Pick a Gaussian integer q such that the square of the distance between γ and q is at most 1/2. Then the square of the distance between β = γα and qα is at most r²/2. Thus we may write
β = qα + r
(a different r, of course), such that d(r) < d(α). □

It might help to see a simple example of how this works in practice. Suppose that we take a = 1 + i and b = 4 − 5i. The first step is to construct
c = b/a.
Now a ā = 1² + 1² = 2, so that the inverse of a is
ā/2 = (1 − i)/2.
Multiplying by b we get
c = āb/2 = (1 − i)(4 − 5i)/2 = −(1 + 9i)/2 = −1/2 − (9/2)i.
Now we pick a Gaussian integer at distance less than 1 from c. For example −4i will do (indeed any one of −1 − 5i, −5i, −4i and −1 − 4i will work). This is our quotient q. The error at this point is
s = c − q = −(1/2 + i/2).
Now multiplying both sides by a, we get r = sa = b − qa, so that
b = qa + r.
Thus
r = sa = −(1 + i)²/2 = −i.
Clearly d(r) = 1 = 2 d(s) < d(a) = 2, as required.

Lemma 20.7. Every Euclidean domain is a PID. In particular every Euclidean domain is a UFD.

Proof. Let I be an ideal in a Euclidean domain. We want to show that I is principal. If I = {0} then I = (0). Otherwise pick an element a of I such that d(a) is minimal. I claim that I = (a). Suppose that b ∈ I. We may write
b = aq + r.
Note that r = b − aq ∈ I. If r ≠ 0 then d(r) < d(a), which contradicts our choice of a. Otherwise r = 0 and b ∈ (a), so that I = (a) is principal. □

Corollary 20.8. The Gaussian integers and the polynomial ring over any field are UFDs.

Of course, one reason why the division algorithm is so interesting is that it furnishes a method to construct the gcd of two natural numbers a and b, using Euclid's algorithm. Clearly the same method works in an arbitrary Euclidean domain.
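Both the worked example and Euclid's algorithm built on it can be checked mechanically. A sketch using Python's complex numbers to stand in for Z[i] (the arithmetic here is exact, since every quantity involved is an integer or half-integer; function names are mine):

```python
def gi_divmod(b, a):
    """Division with remainder in Z[i]: b = q*a + r with d(r) < d(a),
    where d(x + iy) = x^2 + y^2.  Round b/a to a nearest lattice point."""
    c = b / a
    q = complex(round(c.real), round(c.imag))
    return q, b - q * a

def gi_gcd(a, b):
    """Euclid's algorithm in Z[i]; the answer is a gcd, unique up to the
    units 1, -1, i, -i."""
    while b != 0:
        a, b = b, gi_divmod(a, b)[1]
    return a

q, r = gi_divmod(4 - 5j, 1 + 1j)
print(q, r)                    # one valid choice of quotient and remainder
print(gi_gcd(4 - 5j, 1 + 1j))  # a unit: 1 + i and 4 - 5i are coprime
```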




21. Polynomial rings

Let us now turn our attention to determining the prime elements of a polynomial ring, where the coefficient ring is a field. We already know that such a polynomial ring is a UFD. Therefore to determine the prime elements, it suffices to determine the irreducible elements. We start with some basic facts about polynomial rings.

Lemma 21.1. Let R be an integral domain. Then the units in R[x] are precisely the units in R.

Proof. One direction is clear: a unit in R is a unit in R[x]. Now suppose that f(x) is a unit in R[x], so that f(x)g(x) = 1 for some g(x). Given a polynomial g, denote by d(g) the degree of g(x) (note that we are not claiming that R[x] is a Euclidean domain). By (20.4),
0 = d(1) = d(fg) = d(f) + d(g).
Thus both f and g must have degree zero. It follows that f(x) = f0 and that f0 is a unit in R. □

Lemma 21.2. Let R be a ring. The natural inclusion
R −→ R[x],
which sends an element r ∈ R to the constant polynomial r, is a ring homomorphism.

Proof. Easy. □

The following universal property of polynomial rings is very useful.

Lemma 21.3. Let φ : R −→ S be any ring homomorphism and let s ∈ S be any element of S. Then there is a unique ring homomorphism
ψ : R[x] −→ S
such that ψ(x) = s and such that ψ ∘ f = φ, where f : R −→ R[x] denotes the natural inclusion (that is, ψ extends φ).


Proof. Note that any ring homomorphism ψ : R[x] −→ S that sends x to s and acts as φ on the coefficients must send
an x^n + an−1 x^{n−1} + · · · + a0 to φ(an) s^n + φ(an−1) s^{n−1} + · · · + φ(a0).
Thus it suffices to check that the map so defined is a ring homomorphism, which is left as an exercise to the reader. □

Definition 21.4. Let R be a ring and let α be an element of R. The natural ring homomorphism
φ : R[x] −→ R,
which acts as the identity on R and which sends x to α, is called evaluation at α and is often denoted evα. We say that α is a zero (aka root) of f(x) if f(x) is in the kernel of evα.

Lemma 21.5. Let K be a field and let α be an element of K. Then the kernel of evα is the ideal (x − α).

Proof. Denote by I the kernel of evα. Clearly x − α is in I. On the other hand, K[x] is a Euclidean domain, and so it is certainly a PID. Thus I is principal. Suppose it is generated by f, so that I = (f). Then f divides x − α. If f has degree one, then x − α must be an associate of f and the result follows. If f has degree zero, then it must be a constant. As f(α) = 0, this constant must be zero, a contradiction. □

Lemma 21.6. Let K be a field and let f(x) be a polynomial in K[x]. Then we can write f(x) = g(x)h(x), where g(x) is a linear polynomial, if and only if f(x) has a root in K.

Proof. First note that a linear polynomial always has a root in K. Indeed any linear polynomial is of the form ax + b, where a ≠ 0. Then it is easy to see that α = −b/a is a root of ax + b. On the other hand, the kernel of the evaluation map is an ideal, so that if g(x) has a root α, then so does f(x) = g(x)h(x). Thus if we can write f(x) = g(x)h(x), where g(x) is linear, then it follows that f(x) must have a root.

Now suppose that f(x) has a root at α. Consider the linear polynomial g(x) = x − α. Then the kernel of evα is equal to (x − α). As f is in the kernel, f(x) = g(x)h(x), for some h(x) ∈ K[x]. □


Lemma 21.7. Let K be a field and let f(x) be a polynomial of degree two or three. Then f(x) is irreducible if and only if it has no roots in K.

Proof. If f(x) has a root in K, then f(x) = g(x)h(x), where g(x) has degree one, by (21.6). As the degree of f is at least two, it follows that h(x) has degree at least one. Thus f(x) is not irreducible.

Now suppose that f(x) is not irreducible. Then f(x) = g(x)h(x), where neither g nor h is a unit. Thus both g and h have degree at least one. As the sum of the degrees of g and h is at most three, the degree of f, it follows that one of g and h has degree one. Now apply (21.6). □

Definition 21.8. Let p be a prime. Fp denotes the unique field with p elements.

Of course, Fp is isomorphic to Zp. However, as we will see later, it is useful to replace Zp by Fp.

Example 21.9. First consider the polynomial x² + 1. Over the real numbers this is irreducible. Indeed, if we replace x by any real number a, then a² is non-negative and so a² + 1 cannot equal zero. On the other hand ±i are roots of x² + 1, as i² + 1 = 0. Thus x² + 1 is reducible over the complex numbers. Indeed
x² + 1 = (x + i)(x − i).
Thus an irreducible polynomial might well become reducible over a larger field.

Consider the polynomial x² + x + 1. We consider this over various fields. As observed in (21.7), this is reducible iff it has a root in the given field. Suppose we work over the field F5. We need to check whether any of the five elements of F5 is a root. We have
0² + 0 + 1 = 1,  1² + 1 + 1 = 3,  2² + 2 + 1 = 2,  3² + 3 + 1 = 3,  4² + 4 + 1 = 1.
Thus x² + x + 1 is irreducible over F5.

Now consider what happens over the field with three elements, F3. Then 1 is a root of this polynomial. As neither 0 nor 2 is a root, we must have
x² + x + 1 = (x − 1)² = (x + 2)²,
which is easy to check.

Now let us determine all irreducible polynomials of degree at most four over F2. Any linear polynomial is irreducible. There are two such: x and x + 1. A general quadratic has the form
f(x) = x² + ax + b.
Then b ≠ 0, else x divides f(x). Thus b = 1. If a = 0, then f(x) = x² + 1, which has 1 as a zero. Thus f(x) = x² + x + 1 is the only irreducible quadratic.
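Hand counts like these, through degree four, are easy to confirm by brute force. A Python sketch that lists the monic polynomials over F2 and strikes out every product of two non-units (the representation and names are mine; over F2 every non-zero polynomial is monic, so monic factors suffice):

```python
from itertools import product

def irreducibles_f2(max_deg):
    """All irreducible polynomials of degree 1..max_deg over F_2, as
    coefficient tuples, lowest degree first."""
    polys = {d: [(*c, 1) for c in product((0, 1), repeat=d)]
             for d in range(1, max_deg + 1)}

    def mult(f, g):  # multiplication in F_2[x]
        h = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                h[i + j] = (h[i + j] + a * b) % 2
        return tuple(h)

    reducible = set()
    for d1 in range(1, max_deg):
        for d2 in range(d1, max_deg - d1 + 1):
            for f in polys[d1]:
                for g in polys[d2]:
                    reducible.add(mult(f, g))
    return [f for d in polys for f in polys[d] if f not in reducible]

for f in irreducibles_f2(4):
    print(f)  # 2 linear, 1 quadratic, 2 cubics, 3 quartics: 8 in all
```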


Now suppose that we have an irreducible cubic f(x) = x³ + ax² + bx + 1. This is irreducible iff f(1) ≠ 0, which is the same as to say that f has an odd number of terms. Thus the irreducible cubics are f(x) = x³ + x² + 1 and x³ + x + 1.

Finally suppose that f(x) is a quartic polynomial. The general candidate for an irreducible quartic is of the form
x⁴ + ax³ + bx² + cx + 1.
The condition f(1) ≠ 0 is the same as to say that either exactly two of a, b and c are equal to zero or they are all equal to one. Suppose that f(x) = g(x)h(x). If f(x) does not have a root, then both g and h must have degree two. If either g or h were reducible, then again f would have a linear factor, and therefore a root. Thus the only possibility is that both g and h are the unique irreducible quadratic polynomial. In this case
f(x) = (x² + x + 1)² = x⁴ + x² + 1.
Thus
x⁴ + x³ + x² + x + 1,  x⁴ + x³ + 1,  and x⁴ + x + 1
are the three irreducible quartics.

Obviously it would be nice to have some more general methods of proving that a given polynomial is irreducible. The first is rather beautiful and due to Gauss. The basic idea is as follows. Suppose we are given a polynomial with integer coefficients. Then it is natural to also consider this polynomial over the rationals. Note that it is much easier to prove that this polynomial is irreducible over the integers than it is to prove that it is irreducible over the rationals. For example it is clear that x² − 2 is irreducible over the integers. In fact it is irreducible over the rationals as well; that is, √2 is not a rational number.

First some definitions.

Definition 21.10. Let R be a commutative ring and let a1, a2, ..., ak be a sequence of elements of R. The gcd of a1, a2, ..., ak is an element d ∈ R such that
(1) d|ai, for all 1 ≤ i ≤ k.
(2) If d'|ai, for all 1 ≤ i ≤ k, then d'|d.

Lemma 21.11. Let R be a UFD. Then the gcd of any sequence a1, a2, ..., ak of non-zero elements of R exists.


Proof. There are two obvious ways to proceed. The first is to take a common factorisation of each ai into a product of powers of primes, as in the case k = 2. The second is to construct the gcd recursively, setting d1 = a1 and taking di to be the gcd of di−1 and ai. In this case d = dk will be a gcd for the whole sequence a1, a2, ..., ak. □

Definition 21.12. Let R be a UFD and let f(x) be a polynomial with coefficients in R. The content of f(x), denoted c(f), is the gcd of the coefficients of f.

Example 21.13. Let
f(x) = 24x³ − 60x + 40.
Then the content of f is 4. Thus
f(x) = 4(6x³ − 15x + 10).

Lemma 21.14. Let R be a UFD. Then every element of R[x] has a factorisation of the form
c f,
where c ∈ R and the content of f is one.

Proof. Obvious. □
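Over R = Z the content is computable in one line; a Python sketch applied to the polynomial of Example 21.13 (function name mine):

```python
from functools import reduce
from math import gcd

def content(coeffs):
    """Content of an integer polynomial: gcd of its coefficients."""
    return reduce(gcd, (abs(c) for c in coeffs))

f = [40, -60, 0, 24]           # 24x^3 - 60x + 40, lowest degree first
c = content(f)
print(c, [a // c for a in f])  # → 4 [10, -15, 0, 6], i.e. f = 4(6x^3 - 15x + 10)
```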

Here is the key result.

Proposition 21.15. Let R be a UFD. Suppose that g and h ∈ R[x] and let f(x) = g(x)h(x). Then the content of f is equal to the content of g times the content of h.

Proof. It is clear that the content of g times the content of h divides the content of f. Therefore we may assume that the contents of g and h are both one, and we only have to prove that the same is true for f. Suppose not. As R is a UFD, it follows that there is a prime p that divides the content of f. We may write
g(x) = an x^n + an−1 x^{n−1} + · · · + a0 and h(x) = bm x^m + bm−1 x^{m−1} + · · · + b0.
As the content of g is one, at least one coefficient of g is not divisible by p. Let i be the first such, so that p divides ak, for k < i, whilst p does not divide ai. Similarly pick j so that p divides bk, for k < j, whilst p does not divide bj. Consider the coefficient of x^{i+j} in f. This is equal to
a0 b_{i+j} + a1 b_{i+j−1} + · · · + a_{i−1} b_{j+1} + ai bj + a_{i+1} b_{j−1} + · · · + a_{i+j} b0.
Note that p divides every term of this sum except the middle one, ai bj. Thus p does not divide the coefficient of x^{i+j} in f. But this contradicts the definition of the content. □


Theorem 21.16. (Gauss' Lemma) Let R be a UFD and let f(x) ∈ R[x]. Let F be the field of fractions of R. Suppose that the content of f is one and that we may write f(x) = u1(x)v1(x), where u1(x) and v1(x) are in F[x]. Then we may find u(x) and v(x) in R[x], such that
f(x) = u(x)v(x),
where the degree of u is the same as the degree of u1 and the degree of v is the same as the degree of v1. In particular if f is irreducible in R[x] then it is irreducible in F[x].

Proof. We have f(x) = u1(x)v1(x). Now clear denominators. That is, multiply through by the product c of all the denominators in u1(x) and v1(x). In this way we get an expression of the form
c f(x) = u2(x)v2(x),
where now u2 and v2 belong to R[x]. Now write
u2(x) = a u(x) and v2(x) = b v(x),
where u and v have content one. We get
c f(x) = ab u(x)v(x).
By (21.15), comparing contents, c divides ab; write ab = cα, where α ∈ R. Therefore, cancelling c and replacing u(x) by αu(x), we have
f(x) = u(x)v(x). □

Corollary 21.17. Let R be a UFD. Then R[x] is a UFD.

Proof. It is clear that the factorisation algorithm terminates, by induction on the degree. Therefore it suffices to prove that irreducible implies prime. Suppose that f(x) ∈ R[x] is irreducible. If f has degree zero, then it is an irreducible element of R and hence a prime element of R, and there is nothing to prove. Otherwise we may assume that the content of f is one. By Gauss' Lemma, f is not only irreducible in R[x] but also in F[x]. But then f is a prime element of F[x], as F[x] is a UFD.

Now suppose that f divides gh, where g and h ∈ R[x]. As f is prime in F[x], f divides g or h in F[x]. Suppose it divides g. Then we may write
g = f k,


some k ∈ F[x]. As in the proof of Gauss' Lemma, this means we may write
g = f k',
some k' ∈ R[x]. But then f(x) divides g in R[x]. □

Corollary 21.18. Z[x] is a UFD.

Definition 21.19. Let R be a commutative ring and let x1, x2, ..., xn be indeterminates. A monomial in x1, x2, ..., xn is a product of powers of x1, x2, ..., xn. If I = (d1, d2, ..., dn), then let
x^I = x1^{d1} x2^{d2} · · · xn^{dn}.
The degree of a monomial is the sum of the degrees of the individual terms, Σ di. The polynomial ring R[x1, x2, ..., xn] is equal to the set of all finite formal sums
Σ_I a_I x^I,
with the obvious addition and multiplication. The degree of a polynomial is the maximum degree of a monomial term that appears with non-zero coefficient.

Example 21.20. Let x and y be indeterminates. A typical element of Q[x, y] might be
x² + y² − 1.
This has degree 2. Note that xy also has degree two. A more complicated example might be
(2/3)x³ − 7xy + y⁵,
a polynomial of degree 5.

Lemma 21.21. Let R be a commutative ring and let x1, x2, ..., xn be indeterminates. Let S = R[x1, x2, ..., xn−1]. Then there is a natural isomorphism
R[x1, x2, ..., xn] ≅ S[xn].

Proof. Clear. □

To illustrate how this proceeds, it will probably help to give an example. Consider the polynomial
(2/3)x³ − 7xy + y⁵.


Consider this as a polynomial in y, whose coefficients lie in the ring Q[x]. That is,
y⁵ + (−7x)y + (2/3)x³ ∈ Q[x][y].

Corollary 21.22. Let R be a UFD. Then R[x1, x2, ..., xn] is a UFD.

Proof. By induction on n. The case n = 1 is (21.17). Set S = R[x1, x2, ..., xn−1]. By induction S is a UFD. But then S[xn] ≅ R[x1, x2, ..., xn] is a UFD. □

Now we give a way to prove that polynomials with integer coefficients are irreducible.

Lemma 21.23. Let φ : R −→ S be a ring homomorphism. Then there is a unique ring homomorphism
ψ : R[x] −→ S[x]
which sends x to x and which acts as φ on constant polynomials; that is, ψ ∘ iR = iS ∘ φ, where iR and iS denote the natural inclusions of R in R[x] and of S in S[x].

Proof. Let f : R −→ S[x] be the composition of φ with the natural inclusion of S into S[x]. By the universal property of R[x], there is a unique ring homomorphism ψ : R[x] −→ S[x] which sends x to x and restricts to f on R. The rest is clear. □

Theorem 21.24. (Eisenstein's Criterion) Let
f(x) = an x^n + an−1 x^{n−1} + · · · + a0
be a polynomial with integer coefficients. Suppose that there is a prime p such that p divides ai, for all i ≤ n − 1, p does not divide an and p² does not divide a0. Then f(x) is irreducible in Q[x].


Proof. There is no harm in assuming that the content of f is one, so that by Gauss' Lemma, it suffices to prove that f is irreducible over Z. Suppose not. Then we may find two polynomials g(x) and h(x), of positive degree, with integral coefficients, such that
f(x) = g(x)h(x).
Suppose that
f(x) = an x^n + an−1 x^{n−1} + · · · + a0,
g(x) = bd x^d + bd−1 x^{d−1} + · · · + b0,
h(x) = ce x^e + ce−1 x^{e−1} + · · · + c0,
where d ≥ 1 and e ≥ 1. As an = bd ce and an is not divisible by p, neither is bd nor ce. Consider the natural ring homomorphism
Z −→ Fp.
This induces a ring homomorphism
Z[x] −→ Fp[x].
It is convenient to denote the image of a polynomial g(x) as ḡ(x). As we have a ring homomorphism,
f̄(x) = ḡ(x)h̄(x).
Since the leading coefficient of f is not divisible by p, f̄(x) has the same degree as f(x), and the same holds for g(x) and h(x). On the other hand, every other coefficient of f(x) is divisible by p, and so
f̄(x) = ān x^n.
Since Fp is a field, Fp[x] is a UFD, and so
ḡ(x) = b̄d x^d and h̄(x) = c̄e x^e.
It follows that every coefficient of g(x) and h(x), other than the leading one, is divisible by p. In particular b0 and c0 are both divisible by p, and so, as a0 = b0 c0, a0 must be divisible by p², a contradiction. □

Example 21.25. Let
f(x) = 2x⁷ − 15x⁶ + 60x⁵ − 18x⁴ − 9x³ + 45x² − 3x + 6.
Then f(x) is irreducible over Q. We apply Eisenstein with p = 3. The top coefficient 2 is not divisible by 3, the others are, and the constant coefficient 6 is not divisible by 9 = 3².

Lemma 21.26. Let p be a prime. Then
f(x) = x^{p−1} + x^{p−2} + · · · + x + 1
is irreducible over Q.


Proof. By Gauss' Lemma, it suffices to prove that f(x) is irreducible over Z. First note that
f(x) = (x^p − 1)/(x − 1),
as can be easily checked. Consider the change of variable y = x − 1. As this induces an automorphism of Z[x], by sending x to x − 1, this will not alter whether or not f is irreducible. In this case
f(y) = ((y + 1)^p − 1)/y
     = y^{p−1} + C(p, 1) y^{p−2} + C(p, 2) y^{p−3} + · · · + C(p, p−1)
     = y^{p−1} + p y^{p−2} + · · · + p,
where C(p, i) denotes the binomial coefficient. Note that C(p, i) is divisible by p, for all 1 ≤ i < p, so that we can apply Eisenstein to f(y), using the prime p. □

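Checking the hypotheses of Eisenstein's criterion is purely mechanical. A Python sketch, run on the polynomial of Example 21.25 (function name mine):

```python
def eisenstein_applies(coeffs, p):
    """Eisenstein's criterion at the prime p, for an integer polynomial
    given as coefficients a_0, ..., a_n (lowest degree first):
    p | a_i for all i < n, p does not divide a_n, p^2 does not divide a_0."""
    a0, an = coeffs[0], coeffs[-1]
    return (all(c % p == 0 for c in coeffs[:-1])
            and an % p != 0
            and a0 % (p * p) != 0)

# Example 21.25: f(x) = 2x^7 - 15x^6 + 60x^5 - 18x^4 - 9x^3 + 45x^2 - 3x + 6
f = [6, -3, 45, -9, -18, 60, -15, 2]
print(eisenstein_applies(f, 3))  # → True, so f is irreducible over Q
```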



22. A quick primality test

Prime numbers are one of the most basic objects in mathematics, and one of the most basic questions is to decide which numbers are prime (a clearly related problem is to find the prime factorisation of a number). Given a number n, one would like a quick way to decide if n is prime, say using a computer. Quick can be given a precise meaning. First one measures the complexity by the number of digits, or what comes to the same thing, d = ⌈log n⌉ (we will work base 2, so that this is the same as the number of bits). The larger d, the longer it will take to decide if n is prime, in general. We would like an algorithm that runs in polynomial time, that is, an algorithm guaranteed to finish in a time given by a function that is polynomial in d; in essence we are looking for an upper bound on the running time of the form c · d^m, where c is some constant and m is an integer.

This very famous problem was solved in 2002 by Manindra Agrawal, Neeraj Kayal and Nitin Saxena, the last two of whom were graduate students in computer science, from India. The basis of their algorithm is the following simple idea. If n is a prime number then
a^n ≡ a mod n,
for every integer 1 ≤ a ≤ n − 1. This at least gives a way to test if a number is not prime.

Definition 22.1. A natural number n is called a Carmichael number if
a^n ≡ a mod n,
for every integer 1 ≤ a ≤ n − 1, and yet n is not prime.

Unfortunately Carmichael numbers exist; the first such number is
561 = 3 · 11 · 17.
To remedy this, the next idea is that one can test primality if one works with polynomials, that is, if one works in Zn[x] and not just Zn.

Lemma 22.2. Let n ∈ N be a natural number, n ≥ 2. Assume that a and n are coprime. Then n is prime if and only if
(x + a)^n = x^n + a ∈ Zn[x].

Proof. If n is prime then the map
φ : Zn[x] −→ Zn[x]
given by
φ(f) = f^n
is a ring homomorphism. In particular
(x + a)^n = φ(x + a) = φ(x) + φ(a) = x^n + a^n = x^n + a ∈ Zn[x].


Now suppose that n is composite. Pick a prime p that divides n and write n = p^k m, where m is coprime to p. The coefficient of x^p in (x + a)^n is
c = C(n, p) a^{n−p},
where C(n, p) is the binomial coefficient. Now p does not divide a^{n−p}, and p^k does not divide C(n, p), so that p^k does not divide c. Therefore the coefficient of x^p in
(x + a)^n − x^n − a
is not zero in Zn. □
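The congruence of Lemma 22.2 can be tested by repeated squaring; the same code also covers the quotient ring Zn[x]/(x^r − 1) used below, since taking r larger than n recovers the computation in Zn[x]. A Python sketch (function names mine; this is only the core computation, not the full algorithm):

```python
def polymul_mod(f, g, r, n):
    """Multiply f and g in Z_n[x]/(x^r - 1): exponents wrap around mod r."""
    h = [0] * r
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                h[(i + j) % r] = (h[(i + j) % r] + a * b) % n
    return h

def polypow_mod(f, e, r, n):
    """Compute f^e in Z_n[x]/(x^r - 1) by repeated squaring."""
    result = [1] + [0] * (r - 1)
    while e:
        if e & 1:
            result = polymul_mod(result, f, r, n)
        f = polymul_mod(f, f, r, n)
        e >>= 1
    return result

# For prime n = 13, r = 5, a = 2: (x + 2)^13 = x^13 + 2 = x^3 + 2 in R
n, r, a = 13, 5, 2
lhs = polypow_mod([a, 1] + [0] * (r - 2), n, r, n)
rhs = [0] * r
rhs[0], rhs[n % r] = a % n, 1
print(lhs == rhs)  # → True
```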

So now we have a way to test if n is prime. Pick an integer 1 < a < n. If a and n are not coprime then n is composite. Otherwise compute
(x + a)^n − x^n − a ∈ Zn[x].
The problem is that this takes way too long. There are n coefficients to compute, and n is exponential in d, not polynomial.

To remedy this, pick a small (for example, something that is polynomial in d) natural number r. Let
I = (x^r − 1, n) < Z[x],
the ideal generated by x^r − 1 and n inside the ring Z[x], and let
R = Z[x]/I = Zn[x]/(x^r − 1),
the quotient ring. Computations in R proceed much faster than in Zn[x], since we only need to compute r coefficients and not n. Obviously if n is a prime number then
(x + a)^n = x^n + a ∈ R.
The problem is that even if n is composite, there might be a handful of a and r such that
(x + a)^n = x^n + a ∈ R.
The trick is to find bounds on a and r which only depend on d.

Recall that ϕ(r) denotes the cardinality of Ur, that is, the number of integers between 1 and r − 1 coprime to r. We will use the notation g(d) ∼ O(f(d)) if there is a constant c such that g(d) ≤ c · f(d) for all d ∈ N. In terms of running time, note that it takes O(d) time to add, multiply or divide two numbers with d digits (aka bits); more generally it takes O(d · m) time to add, multiply or divide two polynomials of degree m with coefficients with at most d digits.

We will need a simple result in number theory, whose proof we omit:

Lemma 22.3. The product of all prime numbers r between N and 2N is greater than 2^N, for all N ≥ 1.

Here is the algorithm:

Algorithm 1: Primality testing

Require: integer n > 1 1: if n = ab for a ∈ N and b > 1 then n is composite. 2: Find the smallest integer r such that the order of n in Zr is greater than d2 . 3: if the gcd of a and n lies between 1 and n, for some 1 ≤ a ≤ r then n is composite. 4: if n ≤ r then n is prime. n 5: for a = 1 to l ϕ(r)dJ do 6: if (x + a)n = xn + a ∈ R then n is composite. 7: n is prime. By (22.7) r ≤ d5 so that step 4 is relevant only if n ≤ 5, 690, 034. Let us check that this algorithm works. There are two issues. How long will it take this algorithm to run and why does it give the right answer? Theorem 22.4. Algorithm (1) takes no longer than O(d21/2 ) to run. Theorem 22.5. Algorithm (1) returns composite if n is composite and algorithm (1) returns prime if n is prime. Half of (22.5) is easy: Lemma 22.6. If n is prime then algorithm (1) returns prime. Proof. If n is prime then the conditions in steps (1), (3) and (5) will never be satisﬁed; so eventually steps (4) or (6) will tell us n is prime. D To prove (22.5) it remains to show that if the algorithm returns prime then in fact n is prime. If the condition in step 4 is satisﬁed then it is clear that n is prime, since if we got to step 4 then every integer a less than n is coprime to n. So to check (22.5) we may assume that we get to step 6. We will need a result about the size of r for both results: 3


Lemma 22.7. There is a prime d^5 ≤ r ≤ 2d^5 for which the order of n in U_r is greater than d^2.

Proof. Suppose not. Then the order of n in U_r is at most d^2 for every prime r between N = d^5 and 2N. In particular the order of n in U_r is i, for some 1 ≤ i ≤ d^2, so that r divides n^i − 1. Therefore the product of the primes r between N and 2N divides the product of the numbers n^i − 1, for i between 1 and d^2. By (22.3),

2^N < ∏_{N ≤ r ≤ 2N} r ≤ ∏_{1 ≤ i ≤ d^2} (n^i − 1) < n^{d^2(d^2+1)/2} < n^{d^4} ≤ 2^{d^5} = 2^N,

a contradiction. □
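To make the arithmetic in R = Z_n[x]/(x^r − 1) concrete, here is a sketch (the helper names are my own, not from the text): multiplication is ordinary convolution except that exponents wrap around modulo r, so only r coefficients are ever stored, and (x + a)^n is computed by repeated squaring.

```python
def mulmod_cyclic(f, g, n, r):
    """Multiply f and g in R = Z_n[x]/(x^r - 1): exponents wrap modulo r."""
    h = [0] * r
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = (i + j) % r
            h[k] = (h[k] + fi * gj) % n
    return h

def aks_power(a, n, r):
    """Compute (x + a)^n in R by repeated squaring; length-r coefficient list."""
    result = [1 % n] + [0] * (r - 1)
    base = [0] * r
    base[0], base[1] = a % n, 1 % n
    e = n
    while e:
        if e & 1:
            result = mulmod_cyclic(result, base, n, r)
        base = mulmod_cyclic(base, base, n, r)
        e >>= 1
    return result

def x_n_plus_a(a, n, r):
    """Coefficients of x^n + a in R, i.e. x^(n mod r) + a."""
    t = [0] * r
    t[0] = a % n
    t[n % r] = (t[n % r] + 1) % n
    return t

# For prime n the two sides agree (freshman's dream survives the quotient);
# for composite n a suitable choice of a exposes the difference.
assert aks_power(1, 7, 5) == x_n_plus_a(1, 7, 5)
assert aks_power(2, 9, 5) != x_n_plus_a(2, 9, 5)
```

Each multiplication costs O(r^2) coefficient operations instead of O(n^2), which is the whole point of passing to the quotient ring.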

Note a useful trick to compute powers quickly. Suppose we want to compute 3^16. First we compute 3^2 = 3 · 3. Then we compute 3^4 = 3^2 · 3^2, then 3^8 = 3^4 · 3^4 and then 3^16 = 3^8 · 3^8. This involves four multiplications, rather than 15. To compute a^n, write down n in binary,

n = a_d 2^d + a_{d−1} 2^{d−1} + · · · + a_0.

Now compute the powers a^{2^i}, 1 ≤ i ≤ d, and take the product over those i such that a_i = 1. For example, to compute 5^13, note that

13 = 2^3 + 2^2 + 1,

so that we compute

5^13 = 5^8 · 5^4 · 5.
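The trick above is the usual square-and-multiply (binary exponentiation) method; a minimal sketch:

```python
def fast_pow(a, n, mod=None):
    """Compute a^n with O(log n) multiplications by repeated squaring.

    Scanning n from its least significant bit, base holds a^(2^i) and
    result accumulates the product over those bits of n that are set."""
    result, base = 1, a
    while n:
        if n & 1:
            result = result * base
            if mod is not None:
                result %= mod
        base = base * base
        if mod is not None:
            base %= mod
        n >>= 1
    return result
```

So fast_pow(5, 13) multiplies exactly the powers 5^8 · 5^4 · 5 of the example. Python's built-in three-argument pow(a, n, m) does the same thing.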

Proof of (22.4). In the first step the number of digits of a is no more than d/b. Given a and b it takes no more than d^2 computations to calculate a^b. So the first step takes at most O(d^3) time to run.

In the second step we need to find an integer r so that the order of n in U_r is greater than d^2. Given r we just need to compute n^i modulo r for every i ≤ d^2. This will take at most O(d^2 · log r) computations. By (22.7) we need only check O(d^5) different values of r. So step 2 takes no longer than O(d^7).

The third step involves computing the gcd of two numbers; each gcd takes only time O(d). The third step takes O(d^6). Step 4 takes time O(d).

Step 5 involves ⌊√ϕ(r) · d⌋ iterations. Each iteration in step 6 involves checking an equation involving polynomials of degree r with coefficients of size d, where we multiply d polynomials; each equation therefore takes O(r d^2) time to check. Step 5 then takes

O(r √ϕ(r) d^3) = O(r^{3/2} d^3) = O(d^{21/2}).

It is clear that step 5 dominates all other steps and so this is the time it takes for the algorithm to run. □

It is interesting to observe that in practice Algorithm (1) isn't the right way to check if n is prime. There is a probabilistic algorithm which runs much faster. We will need the following useful way to characterise prime numbers:

Lemma 22.8. Let n ∈ N be an odd natural number, n ≥ 3, which is not a prime power. Then n is prime if and only if 1 has precisely two square roots in the ring R = Z_n. Moreover if n is composite we can find at least four square roots, all of which are coprime to n.

Proof. Note that square roots of 1 correspond to roots of the polynomial x^2 − 1 ∈ R[x]. Note also that ±1 (which are distinct, as n > 2) are two roots of x^2 − 1.

Suppose that n is prime. Then F_n = Z_n is a field and the quadratic polynomial x^2 − 1 has at most two roots. Therefore it has exactly two roots.

Now suppose that n is composite. Then n = kl, where k and l are coprime and not equal to one. By the Chinese remainder theorem,

Z_n = Z_{kl} ≅ Z_k ⊕ Z_l.

But then (±1, ±1) are at least four different roots of x^2 − 1. □
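Lemma 22.8 is the engine of the probabilistic test described next: keep halving the exponent and watch for a square root of 1 other than ±1. A sketch (a variant of the well-known Miller–Rabin test; the function name and round count are my choices, not from the text):

```python
import math
import random

def probably_prime(n, rounds=40):
    """Return False only if a witness proves n composite; True means n is
    prime except with probability at most 2^-rounds."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if math.gcd(a, n) != 1:
            return False                  # shared factor: composite
        if pow(a, n - 1, n) != 1:
            return False                  # Fermat witness
        m = n - 1
        while m % 2 == 0:
            m //= 2
            b = pow(a, m, n)
            if b == n - 1:                # hit -1: consistent with n prime
                break
            if b != 1:                    # square root of 1 that is not +-1:
                return False              # a is a witness, n is composite
    return True
```

Each round costs only a few modular exponentiations, so this runs in time polynomial in d with no dependence on the heavy polynomial arithmetic of the AKS test.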

So here is the probabilistic algorithm. Suppose we are given an odd natural number n > 2. Pick an integer 1 ≤ a ≤ n at random. If a and n aren't coprime then n is composite; we can check this very quickly. Otherwise compute

a^{n−1} mod n.

If this is not 1 then n is not prime, by Fermat. As n is odd,

m = (n − 1)/2

is an integer. Let b = a^m ∈ R. We can compute b quickly. Also

b^2 = a^{2m} = a^{n−1} = 1 ∈ R,


so that b is a square root of 1. If b ≠ ±1 then n is composite. If m is even and b = 1 then let

l = m/2,

and compute c = a^l ∈ R. Then c^2 = b = 1, so that if c ≠ ±1 then n is composite. We can keep going until what remains is odd. If at any stage we find a square root of 1 which is not ±1 then we call a a witness. It is not hard to see that at least half of the numbers 1 ≤ a ≤ n coprime to n are witnesses. So if we pick a hundred numbers a at random and none of them are witnesses, then the probability that n is composite is at most 1/2^100. Since it is now more likely that the sun will explode in five minutes (in which case we won't care if n is prime), or more practically that the computer will make an error, than that n is composite, we can safely assume that n is prime.

Of course this probabilistic algorithm is a little unsatisfying. The AKS test runs slower, but at least we then know that n is prime. However, note that if we find a witness then we know for certain that n is not prime. One can adapt the AKS test to give a fast probabilistic algorithm with the property that if n is prime then the algorithm will tell us n is prime with probability 1/2 (let's say). So now keep alternating between the two probabilistic algorithms. Except with vanishingly small probability, one of the two algorithms will eventually tell us whether n is prime or composite.

It is perhaps instructive to consider how both algorithms work in a concrete example. In terms of cryptography the most interesting examples are when n is a product of two different primes. Let's take p = 19 and q = 53. Then n = 1007. We check if n is prime.

First the probabilistic algorithm. We pick a random integer 1 ≤ a ≤ n. Let's take the random integer a = 2. Now

1007 = 2^9 + 2^8 + 2^7 + 2^6 + 2^5 + 2^3 + 2^2 + 2 + 1

(google is your friend for this). In particular d = 9. We compute b = 2^m, where m = (n − 1)/2 = 503. Repeatedly squaring modulo 1007,

2^2 = 4, 2^4 = 16, 2^8 = 256, 2^16 = 81, 2^32 = 519, 2^64 = 492, 2^128 = 384, 2^256 = 434, 2^512 = 47.

Since 503 = 2^8 + 2^7 + 2^6 + 2^5 + 2^4 + 2^2 + 2 + 1, modulo 1007 we get

2^503 = 434 · 384 · 492 · 519 · 81 · 16 · 4 · 2 = 124,
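All of the arithmetic in this example is easy to reproduce with Python's built-in three-argument pow (the values in the comments below are machine-checked, not taken on trust):

```python
n = 1007
# the repeated-squaring table 2^(2^i) mod 1007 for i = 1, ..., 9
print([pow(2, 2 ** i, n) for i in range(1, 10)])
# -> [4, 16, 256, 81, 519, 492, 384, 434, 47]

# b = 2^((n-1)/2) mod n; for a prime n this would have to be 1 or n - 1
print(pow(2, 503, n))  # -> 124, so 1007 is composite

# the Carmichael number 561: 2^280 = 1, but 2^140 is a square root of 1
# other than +-1, so 2 is a witness
print(pow(2, 280, 561), pow(2, 140, 561))  # -> 1 67
```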


which is not 1; as it is not −1 = 1006 either, and a^{(n−1)/2} must be ±1 when n is prime, n is composite. If 2 didn't work, then we would try 3, etc. In the case of the first Carmichael number, 561, one can check that 2^280 = 1 but 2^140 = 67 modulo 561; 67 is a square root of 1 which is not ±1, so that 2 is a witness.

For the AKS-test, we first test that n is not a pure power. The square root of 1007 is less than 32, and it is easy to see that n is not a square. The cube of 10 is 1000, and so it is easy to see that n is not a cube. If n is a fourth power it is a square. The fifth root of 1007 is less than 5; 4^5 = 1024 ≠ n. The seventh root of n is less than 3, so that n is not a pure power.

Next we find a prime r such that the order of n in U_r is greater than d^2. 83 is a prime bigger than 81 = 9^2. If the order of n = 1007 in U_83, the non-zero elements of the field F_83, is not 82 = 2 · 41, it must be 1, 2 or 41. Now 1007 = 11 modulo 83, and 1007^2 = 38, so that the order of 1007 in U_83 is not 1 or 2. But it is 41.

Let's try 89. 89 − 1 = 88 = 8 · 11, and 1007 modulo 89 is 28. Now

28^8 = 39 and 28^11 = 37 modulo 89,

so that 1007 has order 88 > 81 in U_89, the non-zero elements of the field F_89. As r = 89 is prime, ϕ(r) = r − 1 = 88.

Next we check that n is coprime to every positive integer a ≤ r. It isn't (19 divides 1007), so we are done. If we ignore this and assume we get to the next step, we are supposed to compute

(x + a)^n − x^n − a ∈ Z_1007[x]/(x^89 − 1),

for every 1 ≤ a ≤ ⌊√88 · 9⌋ = 84. Exercise for the reader.

Let us now turn to the proof of (22.5). By (22.7) we may assume that r is prime. We want to prove:

Proposition 22.9. Let n > 1 be an odd natural number which is not a pure power, such that n has no prime factors less than a prime number r, the order of n in U_r is greater than d^2, where d = ⌈log n⌉, and

(x + a)^n = x^n + a ∈ R

for every integer 1 ≤ a ≤ ⌊√ϕ(r) · d⌋ = A, where R is the ring

R = Z[x]/(n, x^r − 1).


Then n is prime.

We will assume that n is composite and derive a contradiction. Suppose that p is a prime dividing n. Then we have

(x + a)^n = x^n + a

in the smaller ring

F_p[x]/(x^r − 1) = Z[x]/(p, x^r − 1).

Now F_p[x] is a UFD, so the polynomial x^r − 1 factors into irreducible polynomials, which are prime. Let h(x) ∈ F_p[x] be a prime polynomial dividing x^r − 1. Then

F = Z[x]/(p, h(x)) = F_p[x]/(h(x))

is an integral domain, since the ideal (h(x)) is prime. On the other hand, it is easy to see that F has finitely many elements, so that F is a finite field.

Definition 22.10. Let G be a group. The exponent of G is the least common multiple of the orders of the elements of G.

Lemma 22.11. Let G be a finite abelian group of order n. Then the exponent m of G is the smallest positive integer r such that g^r = e for every g ∈ G. In particular m = n iff G is cyclic.

Proof. By the classification of finitely generated abelian groups, we may find integers m_1, m_2, ..., m_k such that

G ≅ Z_{m_1} × Z_{m_2} × · · · × Z_{m_k},

where m_i divides m_{i+1}. In this case it is clear that m = m_k.

□

Lemma 22.12. Let G be a finite subgroup of the multiplicative group of a field F. Then G is cyclic.

Proof. Let m be the exponent of G and let n be the order of G. Now G is abelian, as F is a field. Thus m ≤ n, and for every element α of G, α^m = 1, so that every element of G is a root of the polynomial x^m − 1 ∈ F[x]. But a polynomial of degree m has at most m roots, and so n ≤ m. But then m = n and G is cyclic, by (22.11). □


Thus F^* is a cyclic group. Let G be the subgroup of F^* generated by x, x + 1, x + 2, ..., x + A. The trick is to give lower and upper bounds for the size of G; it is then a relatively easy matter to check these bounds are incompatible. Note that G has lots of generators and relatively few relations; this is because the order of n ∈ U_r is relatively large. It is therefore not too hard to give lower bounds for the size of G. To give upper bounds on the size of G is more complicated; the idea is to generate lots of identities, deduced from knowing that

(x + a)^n = x^n + a ∈ F.
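Lemma 22.12 can be seen in action for the prime fields F_p: a brute-force search (the helper names below are my own) finds a generator of the cyclic group U_p, and reproduces the multiplicative orders used in the worked example above.

```python
def multiplicative_order(g, p):
    """Order of g in U_p, for g coprime to p."""
    k, x = 1, g % p
    while x != 1:
        x = (x * g) % p
        k += 1
    return k

def generator(p):
    """A generator of the cyclic group U_p for a prime p (Lemma 22.12)."""
    for g in range(2, p):
        if multiplicative_order(g, p) == p - 1:
            return g

print(generator(7))                    # smallest generator of U_7
print(multiplicative_order(11, 83))    # order of 1007 = 11 mod 83
print(multiplicative_order(28, 89))    # order of 1007 = 28 mod 89
```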


MIT OpenCourseWare http://ocw.mit.edu

18.703 Modern Algebra Spring 2013

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.


23. Group actions and automorphisms

Recall the definition of an action:

Definition 23.1. Let G be a group and let S be a set. An action of G on S is a function

G × S −→ S, denoted by (g, s) −→ g · s,

such that

e · s = s and (gh) · s = g · (h · s),

for all g, h ∈ G and s ∈ S.

In fact, an action of G on a set S is equivalent to a group homomorphism (invariably called a representation) ρ : G −→ A(S). Given an action G × S −→ S, define a group homomorphism ρ : G −→ A(S)

by the rule

ρ(g) = σ : S −→ S,

where σ(s) = g · s. Vice-versa, given a representation (that is, a group homomorphism) ρ : G −→ A(S), define an action G × S −→ S

by the rule

g · s = ρ(g)(s).

It is left as an exercise for the reader to check all of the details. The only sensible way to understand any group is to let it act on something.

Definition-Lemma 23.2. Suppose the group G acts on the set S. Define an equivalence relation ∼ on S by the rule

s ∼ t if and only if g · s = t for some g ∈ G.

The equivalence classes of this relation are called orbits. The action is said to be transitive if there is only one orbit (necessarily the whole of S).

Proof. Given s ∈ S, note that e · s = s, so that s ∼ s and ∼ is reflexive. If s and t ∈ S and s ∼ t, then we may find g ∈ G such that t = g · s. But then s = g^{−1} · t, so that t ∼ s and ∼ is symmetric. If r, s and t ∈ S and r ∼ s, s ∼ t, then we may find g and h ∈ G such that s = g · r and t = h · s. In this case

t = h · s = h · (g · r) = (hg) · r,

so that r ∼ t and ∼ is transitive. □
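Orbits can be computed mechanically: starting from s, repeatedly apply the generators until nothing new appears. (On a finite set this forward closure is the full orbit, since each generator acts as a permutation of it.) A sketch, with an invented act callback:

```python
def orbit(s, generators, act):
    """Orbit of s under the group generated by `generators`;
    act(g, t) applies the group element g to the point t."""
    seen, frontier = {s}, [s]
    while frontier:
        t = frontier.pop()
        for g in generators:
            u = act(g, t)
            if u not in seen:
                seen.add(u)
                frontier.append(u)
    return seen

# Rotations acting on the vertices {0, 1, 2, 3} of a square:
rotate = lambda g, t: (t + g) % 4
print(orbit(0, [1], rotate))  # a single orbit: the action is transitive
print(orbit(0, [2], rotate))  # the half-turn alone has two orbits
```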


Definition-Lemma 23.3. Suppose the group G acts on the set S. Given s ∈ S, the subset

H = { g ∈ G | g · s = s }

is called the stabiliser of s ∈ S. H is a subgroup of G.

Proof. H is non-empty, as it contains the identity. Suppose that g and h ∈ H. Then

(gh) · s = g · (h · s) = g · s = s,

so that gh ∈ H and H is closed under multiplication. Moreover, if g ∈ H then acting by g^{−1} on both sides of g · s = s gives g^{−1} · s = s, so that g^{−1} ∈ H. Thus H is a subgroup of G. □

Example 23.4. Let G be a group and let H be a subgroup. Let S be the set of all left cosets of H in G. Define an action of G on S,

G × S −→ S,

as follows. Given gH ∈ S and g′ ∈ G, set

g′ · (gH) = (g′g)H.

It is easy to check that this action is well-defined. Clearly there is only one orbit, and the stabiliser of the trivial left coset H is H itself.

Lemma 23.5. Let G be a group acting transitively on a set S and let H be the stabiliser of a point s ∈ S. Let L be the set of left cosets of H in G. Then there is an isomorphism of actions (where isomorphism is defined in the obvious way) of G acting on S and G acting on L, as in (23.4). In particular

|S| = |G|/|H|.

Proof. Define a map

f : L −→ S

by sending the left coset gH to the element g · s. We first have to check that f is well-defined. Suppose that gH = g′H. Then g′ = gh, for some h ∈ H. But then

g′ · s = (gh) · s = g · (h · s) = g · s.

Thus f is indeed well-defined. f is clearly surjective, as the action of G is transitive. Suppose that f(gH) = f(g′H). Then g · s = g′ · s. In this case h = g^{−1}g′ stabilises s, so that g^{−1}g′ ∈ H. But then g and g′ are in the same left coset and gH = g′H. Thus f is injective as well as surjective, and the result follows. □

Given a group G and an element g ∈ G, recall that the centraliser of g in G is

C_g = { h ∈ G | hg = gh }.

The centre of G is then

Z(G) = { h ∈ G | gh = hg for all g ∈ G },

the set of elements which commute with everything; the centre is the intersection of the centralisers.

Lemma 23.6 (The class equation). Let G be a group. The cardinality of the conjugacy class containing g ∈ G is the index of the centraliser, [G : C_g]. Further

|G| = |Z(G)| + Σ [G : C_g],

where the sum runs over one representative g of each conjugacy class with more than one element.

Proof. Let G act on itself by conjugation. Then the orbits are the conjugacy classes. If g ∈ G then the stabiliser of g is nothing more than the centraliser. Thus the cardinality of the conjugacy class containing g is [G : C_g], by (23.5). If g ∈ G is in the centre of G then the conjugacy class containing g has only one element, and vice-versa. As G is a disjoint union of its conjugacy classes, we get the second equation. □

Lemma 23.7. If G is a p-group then the centre of G is a non-trivial subgroup of G. In particular G is simple if and only if the order of G is p.

Proof. Consider the class equation

|G| = |Z(G)| + Σ [G : C_g],

where the sum runs over the conjugacy classes with more than one element. The left hand side and every term of the sum are divisible by p, and so the order of the centre of G is divisible by p. In particular the centre is a non-trivial subgroup. If G is not abelian then the centre is a proper normal subgroup and G is not simple. If G is abelian then G is simple if and only if its order is p. □

Theorem 23.8. Let G be a finite group whose order is divisible by a prime p. Then G contains at least one Sylow p-subgroup.


Proof. Suppose that n = p^k m, where m is coprime to p. Let S be the set of subsets of G of cardinality p^k. Then the cardinality of S is given by a binomial coefficient:

\binom{n}{p^k} = [p^k m(p^k m − 1)(p^k m − 2) · · · (p^k m − p^k + 1)] / [p^k(p^k − 1) · · · 1].

For every term in the numerator that is divisible by a power of p, we can match it with the term in the denominator that is divisible by the same power of p (indeed p^k m − j and p^k − j have the same p-adic valuation for 0 < j < p^k). In particular the cardinality of S is coprime to p.

Now let G act on S by left translation,

G × S −→ S, where (g, P) −→ gP.

Then S breaks up into orbits. As the cardinality of S is coprime to p, it follows that there is an orbit whose cardinality is coprime to p. Suppose that X belongs to this orbit. Pick g ∈ X and let P = g^{−1}X. Then P contains the identity. Let H be the stabiliser of P. Then H ⊂ P, since h = h · e ∈ hP = P for every h ∈ H. On the other hand, the cardinality [G : H] of the orbit is coprime to p, so that the order of H is divisible by p^k. As H ⊂ P and |P| = p^k, it follows that H = P. But then P is a Sylow p-subgroup. □

Question 23.9. What is the automorphism group of S_n?

Definition-Lemma 23.10. Let G be a group. If a ∈ G then conjugation by a is an automorphism σ_a of G, called an inner automorphism of G. The group G′ of all inner automorphisms is isomorphic to G/Z, where Z is the centre. G′ is a normal subgroup of Aut(G), the group of all automorphisms, and the quotient is called the outer automorphism group of G.

Proof. There is a natural map

ρ : G −→ Aut(G), a −→ σ_a,

whose image is G′. The kernel is the centre Z, and so G′ ≅ G/Z, by the first Isomorphism theorem. It follows that G′ ⊂ Aut(G) is a subgroup.

Suppose that φ : G −→ G is any automorphism of G. I claim that

φσ_aφ^{−1} = σ_{φ(a)}.

Since both sides are functions from G to G, it suffices to check they do the same thing to any element g ∈ G:

φσ_aφ^{−1}(g) = φ(aφ^{−1}(g)a^{−1}) = φ(a)gφ(a)^{−1} = σ_{φ(a)}(g).

Thus G′ is normal in Aut(G).

□
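The class equation (23.6) is easy to verify by brute force for a small group. The sketch below (my own encoding of S_3 as tuples, not from the text) computes the conjugacy classes as the orbits of the conjugation action and checks the equation.

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))          # the group S_3

# conjugacy classes = orbits of G acting on itself by conjugation
classes, seen = [], set()
for g in G:
    if g not in seen:
        cls = {compose(compose(h, g), inverse(h)) for h in G}
        seen |= cls
        classes.append(cls)

centre = [g for g in G if all(compose(g, h) == compose(h, g) for h in G)]

# |G| = |Z(G)| + sum of the sizes of the classes with more than one element
assert len(G) == len(centre) + sum(len(c) for c in classes if len(c) > 1)
print(sorted(len(c) for c in classes))    # class sizes of S_3
```

For S_3 the class sizes are 1 (the identity), 2 (the 3-cycles) and 3 (the transpositions), and the centre is trivial, in line with the next lemma.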

Lemma 23.11. The centre of S_n is trivial unless n = 2.

Proof. Easy check. □

Theorem 23.12. The outer automorphism group of S_n is trivial unless n = 6, when it is isomorphic to Z_2.

Lemma 23.13. If φ : S_n −→ S_n is an automorphism of S_n which sends a transposition to a transposition then φ is an inner automorphism.

Proof. Since any automorphism permutes the conjugacy classes, φ sends transpositions to transpositions. Suppose that φ(1, 2) = (i, j). Let a = (1, i)(2, j). Then σ_a(i, j) = (1, 2) and so σ_aφ fixes (1, 2). It is obviously enough to show that σ_aφ is an inner automorphism. Replacing φ by σ_aφ, we may assume φ fixes (1, 2).

Now consider τ = φ(2, 3). By assumption τ is a transposition. Since (1, 2) and (2, 3) both move 2, τ must move either 1 or 2. Suppose it moves 1. Let a = (1, 2). Then σ_aφ still fixes (1, 2) and σ_aτ moves 2. Replacing φ by σ_aφ, we may assume τ = (2, i), for some i. Let a = (3, i). Then σ_aφ fixes (1, 2) and (2, 3). Replacing φ by σ_aφ, we may assume φ fixes (1, 2) and (2, 3).

Continuing in this way, we reduce to the case when φ fixes (1, 2), (2, 3), ..., and (n − 1, n). As these transpositions generate S_n, φ is then the identity, which is an inner automorphism. □

Lemma 23.14. Let σ ∈ S_n be a permutation. If
(1) σ has order 2,
(2) σ is not a transposition, and
(3) the conjugacy class containing σ has cardinality \binom{n}{2},
then n = 6 and σ is a product of three disjoint transpositions.

Proof. As σ has order two it must be a product of k ≥ 2 disjoint transpositions. The number of such permutations is

(1/k!) \binom{n}{2} \binom{n−2}{2} · · · \binom{n−2k+2}{2}.

For this to be equal to the number of transpositions we must have

(1/k!) \binom{n}{2} \binom{n−2}{2} · · · \binom{n−2k+2}{2} = \binom{n}{2},

that is,

n!/((n − 2k)! k! 2^k) = \binom{n}{2}.

It is not hard to check that the only solution with k ≥ 2 is k = 3 and n = 6. □

Note that if there is an outer automorphism of S_6, it must switch transpositions with products of three disjoint transpositions. So the outer automorphism group is no bigger than Z_2. The final thing is to actually write down an outer automorphism. This is harder than it might first appear.

Consider the complete graph K_5 on 5 vertices. There are six ways to colour the edges with two colours, red and blue say, so that we get two 5-cycles. Call these colourings magic. S_5 acts on the vertices of K_5 and this induces an action on the six magic colourings. The induced representation is a group homomorphism

i : S_5 −→ S_6,

which it is easy to see is injective. One can check that the transposition (1, 2) is sent to a product of three disjoint transpositions. But then S_6 acts on the left cosets of i(S_5) in S_6, so that we get a representation

φ : S_6 −→ S_6,

which is an outer automorphism.
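The counting step in (23.14) is easy to confirm by machine: the sketch below (helper name my own) searches a modest range for solutions of n!/((n − 2k)! k! 2^k) = C(n, 2) with k ≥ 2.

```python
from math import comb, factorial

def products_of_k_disjoint_transpositions(n, k):
    """Number of permutations in S_n that are products of k disjoint
    transpositions."""
    return factorial(n) // (factorial(n - 2 * k) * factorial(k) * 2 ** k)

solutions = [(n, k)
             for n in range(4, 40)
             for k in range(2, n // 2 + 1)
             if products_of_k_disjoint_transpositions(n, k) == comb(n, 2)]
print(solutions)  # only (6, 3) in this range: 15 such products, and C(6, 2) = 15
```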

