From root systems to Dynkin diagrams

Heiko Dietrich

Abstract. We describe root systems and their associated Dynkin diagrams; these notes follow closely the book of Erdmann & Wildon (Introduction to Lie Algebras, 2006) and lecture notes of Willem de Graaf (Italy). We briefly describe how root systems arise from Lie algebras.

1. Root systems

1.1. Euclidean spaces. Let $V$ be a finite dimensional Euclidean space, that is, a finite dimensional $\mathbb{R}$-space with inner product $(\cdot,\cdot)\colon V\times V\to\mathbb{R}$, which is bilinear, symmetric, and positive definite. The length of $v\in V$ is $\|v\|=\sqrt{(v,v)}$; the angle $\alpha$ between two non-zero $v,w\in V$ is defined by $\cos\alpha=\frac{(v,w)}{\|v\|\,\|w\|}$. If $v\in V$ is non-zero, then the hyperplane perpendicular to $v$ is $H_v=\{w\in V\mid (w,v)=0\}$. The reflection in $H_v$ is the linear map $s_v\colon V\to V$ which maps $v$ to $-v$ and fixes every $w\in H_v$; recall that $V=H_v\oplus\mathrm{Span}_{\mathbb{R}}(v)$, hence
$$s_v\colon V\to V,\quad w\mapsto w-\frac{2(w,v)}{(v,v)}\,v.$$
In the following, for $v,w\in V$ we write
$$\langle w,v\rangle=\frac{2(w,v)}{(v,v)};$$
note that $\langle\cdot,\cdot\rangle$ is linear only in the first component. We use this notation throughout these notes. An important observation is that each $s_u$ leaves the inner product invariant, that is, if $v,w\in V$, then $(s_u(v),s_u(w))=(v,w)$.

1.2. Abstract root systems.

Definition 1.1. A finite subset $\Phi\subseteq V$ is a root system for $V$ if the following hold:
(R1) $0\notin\Phi$ and $\mathrm{Span}_{\mathbb{R}}(\Phi)=V$,
(R2) if $\alpha\in\Phi$ and $\lambda\alpha\in\Phi$ with $\lambda\in\mathbb{R}$, then $\lambda\in\{\pm 1\}$,
(R3) $s_\alpha(\beta)\in\Phi$ for all $\alpha,\beta\in\Phi$,
(R4) $\langle\alpha,\beta\rangle\in\mathbb{Z}$ for all $\alpha,\beta\in\Phi$.
The rank of $\Phi$ is $\dim(V)$. Note that each $s_\alpha$ permutes $\Phi$, and if $\alpha\in\Phi$, then $-\alpha\in\Phi$.

Lemma 1.2. If $\alpha,\beta\in\Phi$ with $\alpha\neq\pm\beta$, then $\langle\alpha,\beta\rangle\langle\beta,\alpha\rangle\in\{0,1,2,3\}$.

Proof. By (R4), the product in question is an integer. If $v,w\in V\setminus\{0\}$, then $(v,w)^2=\|v\|^2\|w\|^2\cos^2(\theta)$ where $\theta$ is the angle between $v$ and $w$, thus $\langle\alpha,\beta\rangle\langle\beta,\alpha\rangle=4\cos^2(\theta)\leq 4$. If $\cos^2(\theta)=1$, then $\theta$ is a multiple of $\pi$, so $\alpha$ and $\beta$ are linearly dependent, a contradiction.
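To make the two basic operations concrete, the following Python sketch (our own illustration; the function names are not from the text) implements $\langle\cdot,\cdot\rangle$ and $s_v$, and checks axiom (R4) and the invariance of the inner product on the root system of type $A_2$.

```python
import numpy as np

def pairing(w, v):
    """<w, v> = 2(w, v)/(v, v); note this is linear in the first argument only."""
    return 2 * np.dot(w, v) / np.dot(v, v)

def reflect(w, v):
    """The reflection s_v applied to w: s_v(w) = w - <w, v> v."""
    return w - pairing(w, v) * v

# Root system of type A2: two simple roots of equal length at angle 2*pi/3.
alpha = np.array([1.0, 0.0])
beta = np.array([-0.5, np.sqrt(3) / 2])
Phi = [alpha, beta, alpha + beta, -alpha, -beta, -(alpha + beta)]

# (R4): all pairings <a, b> are integers.
for a in Phi:
    for b in Phi:
        assert abs(pairing(a, b) - round(pairing(a, b))) < 1e-9

# Each s_u leaves the inner product invariant: (s_u(v), s_u(w)) = (v, w).
v, w = alpha + beta, beta
assert abs(np.dot(reflect(v, alpha), reflect(w, alpha)) - np.dot(v, w)) < 1e-9
```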
If $\alpha,\beta\in\Phi$ with $\alpha\neq\pm\beta$ and $\|\beta\|\geq\|\alpha\|$, then $|\langle\beta,\alpha\rangle|\geq|\langle\alpha,\beta\rangle|$; by the previous lemma, all possibilities are listed in Table 1.

$\langle\alpha,\beta\rangle$ | $\langle\beta,\alpha\rangle$ | $\theta$ | $(\beta,\beta)/(\alpha,\alpha)$
0 | 0 | $\pi/2$ | (not determined)
1 | 1 | $\pi/3$ | 1
-1 | -1 | $2\pi/3$ | 1
1 | 2 | $\pi/4$ | 2
-1 | -2 | $3\pi/4$ | 2
1 | 3 | $\pi/6$ | 3
-1 | -3 | $5\pi/6$ | 3

Table 1. Angles between root vectors.

Let $\alpha,\beta\in\Phi$ with $(\alpha,\beta)\neq 0$ and $(\beta,\beta)\geq(\alpha,\alpha)$. Recall that $s_\beta(\alpha)=\alpha-\langle\alpha,\beta\rangle\beta\in\Phi$, and $\langle\alpha,\beta\rangle=-1$ or $1$, depending on whether the angle between $\alpha$ and $\beta$ is obtuse or acute, see Table 1. Thus, if the angle is $>\pi/2$, then $\alpha+\beta\in\Phi$; if the angle is $<\pi/2$, then $\alpha-\beta\in\Phi$.

Example 1.3. We construct all root systems $\Phi$ of $\mathbb{R}^2$. Suppose $\alpha\in\Phi$ is of shortest length and choose $\beta\in\Phi$ such that the angle $\theta\in\{\pi/2,2\pi/3,3\pi/4,5\pi/6\}$ between $\alpha$ and $\beta$ is as large as possible. This gives root systems of type $A_1\times A_1$, $A_2$, $B_2$, and $G_2$, respectively.

(Figure: the four rank-2 root systems $A_1\times A_1$, $A_2$, $B_2$, and $G_2$, each drawn in the plane with simple roots $\alpha$ and $\beta$.)

Definition 1.4. A root system $\Phi$ is irreducible if it cannot be partitioned into two non-empty subsets $\Phi_1$ and $\Phi_2$ such that $(\alpha,\beta)=0$ for all $\alpha\in\Phi_1$ and $\beta\in\Phi_2$.

Lemma 1.5. If $\Phi$ is a root system, then $\Phi=\Phi_1\cup\ldots\cup\Phi_k$, where each $\Phi_i$ is an irreducible root system for the space $V_i=\mathrm{Span}_{\mathbb{R}}(\Phi_i)\leq V$; in particular, $V=V_1\oplus\ldots\oplus V_k$.

Proof. For $\alpha,\beta\in\Phi$ write $\alpha\sim\beta$ if and only if there exist $\gamma_1,\ldots,\gamma_s\in\Phi$ with $\alpha=\gamma_1$, $\beta=\gamma_s$, and $(\gamma_i,\gamma_{i+1})\neq 0$ for $1\leq i<s$; then $\sim$ is an equivalence relation on $\Phi$. Let $\Phi_1,\ldots,\Phi_k$ be the equivalence classes of this relation. Clearly, (R1), (R2), and (R4) are satisfied for each $\Phi_i$ and $V_i=\mathrm{Span}_{\mathbb{R}}(\Phi_i)$. To prove (R3), consider $\alpha,\beta\in\Phi_i$; if $(\alpha,\beta)=0$, then $s_\alpha(\beta)=\beta\in\Phi_i$. If $(\alpha,\beta)\neq 0$, then $(\alpha,s_\alpha(\beta))\neq 0$ since $s_\alpha$ leaves the inner product invariant; thus, $s_\alpha(\beta)\sim\alpha$, and $s_\alpha(\beta)\in\Phi_i$. In particular, $\Phi_i$ is an irreducible root system of $V_i$. Clearly, every root appears in some $V_i$, and the sum of the $V_i$ spans $V$. Note that $(V_i,V_j)=0$ for $i\neq j$, since roots in different classes are orthogonal. Hence, if $v_1+\ldots+v_k=0$ with each $v_i\in V_i$, then $0=(v_1+\ldots+v_k,v_j)=(v_j,v_j)$ for all $j$, that is, $v_j=0$ for all $j$.
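The equivalence relation in the proof of Lemma 1.5 is straightforward to compute with. The following Python sketch (our own illustration, not part of the original notes) partitions a root system, given as a list of coordinate vectors, into its irreducible components by building the classes of $\sim$ with a union-find structure on the non-orthogonality graph.

```python
import numpy as np

def irreducible_components(roots, tol=1e-9):
    """Partition roots into the equivalence classes of Lemma 1.5:
    alpha ~ beta iff they are linked by a chain of non-orthogonal roots."""
    n = len(roots)
    parent = list(range(n))

    def find(i):
        # Find the class representative of root i (with path compression).
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if abs(np.dot(roots[i], roots[j])) > tol:  # (alpha, beta) != 0
                parent[find(i)] = find(j)

    classes = {}
    for i in range(n):
        classes.setdefault(find(i), []).append(roots[i])
    return list(classes.values())

# The root system A1 x A1 consists of two orthogonal pairs, so k = 2.
Phi = [np.array(v, dtype=float) for v in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
print(len(irreducible_components(Phi)))  # prints 2
```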
1.3. Bases of root systems. Let $\Phi$ be a root system for $V$.

Definition 1.6. A subset $\Pi\subseteq\Phi$ is a base (or root basis) for $\Phi$ if the following hold:
(B1) $\Pi$ is a vector space basis for $V$,
(B2) every $\alpha\in\Phi$ can be written as $\alpha=\sum_{\beta\in\Pi}k_\beta\beta$ with either all $k_\beta\in\mathbb{N}$ or all $-k_\beta\in\mathbb{N}$.
A root $\alpha\in\Phi$ is positive with respect to $\Pi$ if the coefficients in (B2) are non-negative; otherwise $\alpha$ is negative. The roots in $\Pi$ are called simple roots; the reflections $s_\beta$ with $\beta\in\Pi$ are simple reflections.

We need the notion of root orders to prove that every root system has a base.

Definition 1.7. A root order is a partial order $>$ on $V$ such that every $\alpha\in\Phi$ satisfies $\alpha>0$ or $-\alpha>0$, and $>$ is compatible with addition and with multiplication by positive scalars.

Lemma 1.8. Let $\Phi$ be a root system of $V$.
a) Let $\{v_1,\ldots,v_l\}$ be a basis of $V$ and write $v>0$ if and only if $v=\sum_{i=1}^l k_iv_i$ and the first non-zero $k_i$ is positive; define $v>w$ if $v-w>0$. Then $>$ is a root order, the lexicographic root order with respect to the ordered basis $\{v_1,\ldots,v_l\}$.
b) Choose $v_0\in V$ outside the (finitely many) hyperplanes $H_\alpha$, $\alpha\in\Phi$. For $u,v\in V$ write $u>v$ if and only if $(u,v_0)>(v,v_0)$. Then $>$ is a root order, the root order defined by $v_0$.

Let $>$ be any root order; call $\alpha\in\Phi$ positive if $\alpha>0$, and negative otherwise; $\alpha$ is simple if $\alpha>0$ and it cannot be written as a sum of two positive roots. The proof of the following theorem shows that the set $\Pi$ of all simple roots is a base for $\Phi$:

Theorem 1.9. Every root system has a base.

Proof. Let $>$ be a root order with set of simple roots $\Pi$; we show that $\Pi$ is a base.

First, let $\alpha,\beta\in\Pi$ with $\alpha\neq\beta$: If $\alpha-\beta$ is a positive root, then $\alpha=(\alpha-\beta)+\beta$, and $\alpha$ is not simple; if $\alpha-\beta$ is a negative root, then $\beta=(\beta-\alpha)+\alpha$ is not simple; thus, $\alpha-\beta\notin\Phi$. This implies that the angle between $\alpha$ and $\beta$ is at least $\pi/2$, hence $(\alpha,\beta)\leq 0$, as seen in Table 1.

Second, $\Pi$ is linearly independent: if not, then there exist pairwise distinct $\alpha_1,\ldots,\alpha_k\in\Pi$ with $\alpha_1=\sum_{i=2}^k k_i\alpha_i=\beta_++\beta_-$, where $\beta_+$ and $\beta_-$ are the sums of all $k_i\alpha_i$ with $k_i$ positive and negative, respectively. By construction, $(\beta_+,\beta_-)\geq 0$ since each $(\alpha_i,\alpha_j)\leq 0$. Note that $\beta_+\neq 0$ since $\alpha_1>0$, thus $(\beta_+,\beta_+)>0$ and $(\alpha_1,\beta_+)=(\beta_+,\beta_+)+(\beta_-,\beta_+)>0$. On the other hand, $(\alpha_1,\alpha_j)\leq 0$ and the definition of $\beta_+$ imply that $(\alpha_1,\beta_+)\leq 0$, a contradiction.

Finally, we show that every positive root $\alpha\in\Phi$ is a linear combination of $\Pi$ with coefficients in $\mathbb{N}$. Clearly, this holds if $\alpha\in\Pi$. If $\alpha\notin\Pi$, then $\alpha=\beta+\gamma$ for positive roots $\beta,\gamma\in\Phi$ with $\alpha>\beta,\gamma$. By induction (there are only finitely many roots), $\beta$ and $\gamma$ are linear combinations of $\Pi$ with coefficients in $\mathbb{N}$.

Corollary 1.10. Let $\Pi=\{\alpha_1,\ldots,\alpha_l\}$ be a root basis of $\Phi$.
a) If $>$ is the lexicographic order on $V$ with respect to the basis $\Pi$ of $V$, then $\Pi$ is the set of simple roots with respect to $>$; thus, every base is defined by some root order.
b) If $v_0\in V$ with $(v_0,\alpha)>0$ for all $\alpha\in\Pi$, then $\Pi$ is the set of simple roots with respect to the root order defined by $v_0$.
Proof. a) This is obvious. b) Denote by $>$ the root order defined by $v_0$. Let $\alpha_j\in\Pi$; clearly, $\alpha_j>0$. Suppose $\alpha_j=\beta+\gamma$ for some $\beta,\gamma\in\Phi$ with $\beta,\gamma>0$. Write $\beta=\sum_{i=1}^l k_i\alpha_i$ and $\gamma=\sum_{i=1}^l h_i\alpha_i$, thus $k_j+h_j=1$ and $k_i=-h_i$ if $i\neq j$. Recall that either $k_1,\ldots,k_l\geq 0$ or $k_1,\ldots,k_l\leq 0$; by definition, each $(v_0,\alpha_i)>0$, thus $(v_0,\beta)>0$ implies $k_1,\ldots,k_l\geq 0$. Analogously, $(v_0,\gamma)>0$ forces $h_1,\ldots,h_l\geq 0$. Thus, if $i\neq j$, then $h_i=-k_i$ implies $h_i=k_i=0$. Now $\beta=k_j\alpha_j$ and $\gamma=h_j\alpha_j$ yield $k_j,h_j\in\{\pm 1\}$ by (R2). But $h_j+k_j=1$, which is not possible. Thus $\alpha_j$ must be simple.

The proof of Theorem 1.9 also implies the following.

Corollary 1.11. If $\alpha,\beta\in\Phi$ are distinct simple roots, then $(\alpha,\beta)\leq 0$.

If $\Pi$ is a root base of $\Phi$, then $\alpha\in\Phi$ is positive with respect to $\Pi$ if $\alpha$ is positive with respect to the root order which defines $\Pi$; write $\Phi^+$ and $\Phi^-$ for the set of positive and negative roots, respectively. Note that $\Phi^-=-\Phi^+$ and $\Phi=\Phi^+\cup\Phi^-$.

We remark that root bases can also be constructed geometrically: Fix a hyperplane in $V$ which intersects $\Phi$ trivially; label the roots on one side of the hyperplane positive, the others negative. Define $\Pi$ to be the set of positive roots which are nearest to the hyperplane.

1.4. Weyl group. Let $\Phi$ be a root system with ordered root base $\Pi=\{\alpha_1,\ldots,\alpha_l\}$.

Definition 1.12. The Weyl group of $\Phi$ is the subgroup of linear transformations of $V$ generated by all reflections $s_\alpha$ with $\alpha\in\Phi$, that is, $W=W(\Phi)=\langle s_\alpha\mid\alpha\in\Phi\rangle$.

Lemma 1.13. The Weyl group $W$ of $\Phi$ is finite.

Proof. By (R3), there is a group homomorphism $\varphi\colon W\to\mathrm{Sym}(\Phi)$. Since $\Phi$ contains a basis of $V$, the kernel of $\varphi$ is trivial, hence $W\cong\varphi(W)\leq\mathrm{Sym}(\Phi)$ is finite.

Theorem 1.14. Let $W_0$ be the subgroup of $W$ generated by the simple reflections $s_{\alpha_1},\ldots,s_{\alpha_l}$.
a) Each $s_{\alpha_i}$ permutes the positive roots other than $\alpha_i$.
b) If $\beta\in\Phi$, then $\beta=g(\alpha)$ for some $g\in W_0$ and $\alpha\in\Pi$.
c) We have $W=W_0$.

Proof. a) Let $\beta\in\Phi^+$ with $\beta\neq\alpha_i$; write $\beta=\sum_{j=1}^l k_j\alpha_j$ with all $k_j\geq 0$. Since $\beta\neq\alpha_i$, there must be $k_j>0$ for some $j\neq i$. The coefficient of $\alpha_j$ in $s_{\alpha_i}(\beta)=\beta-\langle\beta,\alpha_i\rangle\alpha_i$ still is $k_j>0$, hence $s_{\alpha_i}(\beta)$ is positive; moreover, $s_{\alpha_i}(\beta)\neq\alpha_i$, since otherwise $\beta=s_{\alpha_i}(\alpha_i)=-\alpha_i$ would be negative.

b) We first consider $\beta\in\Phi^+$ and show that $\beta=g(\alpha)$ for some $g\in W_0$ and $\alpha\in\Pi$. The assertion follows by induction on the height of $\beta$, defined as $\mathrm{ht}(\beta)=\sum_{\gamma\in\Pi}k_\gamma$ where $\beta=\sum_{\gamma\in\Pi}k_\gamma\gamma$. If $\mathrm{ht}(\beta)=1$, then choose $g=1$ and $\alpha=\beta\in\Pi$; if $\mathrm{ht}(\beta)\geq 2$, then, by (R2), at least two $k_\gamma$ must be strictly positive. Suppose $(\beta,\gamma)\leq 0$ for all $\gamma\in\Pi$. Then $(\beta,\beta)=\sum_{\gamma\in\Pi}k_\gamma(\beta,\gamma)\leq 0$, a contradiction to $\beta\neq 0$. Thus, there is $\gamma\in\Pi$ with $(\beta,\gamma)>0$, and so $\mathrm{ht}(s_\gamma(\beta))=\mathrm{ht}(\beta)-\langle\beta,\gamma\rangle<\mathrm{ht}(\beta)$.
Recall that $s_\gamma(\beta)$ is positive by a), since $\beta\neq\gamma$; by the induction hypothesis, $s_\gamma(\beta)=g'(\alpha)$ for some $g'\in W_0$ and $\alpha\in\Pi$, hence $\beta=g(\alpha)$ with $g=s_\gamma g'\in W_0$. Negative roots are dealt with analogously.

c) We have to show that $s_\beta\in W_0$ for every $\beta\in\Phi$. Part b) shows that $\beta=g(\alpha)$ for some $g\in W_0$ and $\alpha\in\Pi$, and one can show that $s_\beta=gs_\alpha g^{-1}$, which lies in $W_0$.

Corollary 1.15. The root system $\Phi$ is completely determined by a base $\Pi$.

Theorem 1.16. If $\Pi$ and $\Pi'$ are two root bases of $\Phi$, then $g(\Pi)=\Pi'$ for some $g\in W$.

Proof. Consider $\Pi$ and $\Pi'$ as the simple roots with respect to root orders defined by $v_0\in V$ and $v_0'\in V$, respectively. The Weyl vector with respect to $\Pi'$ is $\rho'=\frac{1}{2}\sum_\beta\beta$ where $\beta$ runs over all roots which are positive with respect to $\Pi'$; similarly, $\rho$ is defined with respect to $\Pi$. Since $s_\alpha$ with $\alpha\in\Pi'$ permutes the positive roots other than $\alpha$, we have $s_\alpha(\rho')=\rho'-\alpha$. Since $W(\Phi)$ is finite, we can choose $w\in W(\Phi)$ such that $(w(v_0),\rho')$ is maximal. Now, if $\alpha\in\Pi'$, then
$(w(v_0),\rho')\geq(s_\alpha(w(v_0)),\rho')$ (by the choice of $w$)
$=(w(v_0),s_\alpha(\rho'))$ (since $s_\alpha^2=1$ and $s_\alpha$ preserves the inner product)
$=(w(v_0),\rho'-\alpha)=(w(v_0),\rho')-(w(v_0),\alpha)$.
Thus, $(w(v_0),\alpha)\geq 0$ for all $\alpha\in\Pi'$. If $(w(v_0),\alpha)=0$, then $(v_0,w^{-1}(\alpha))=0$, which is impossible as $(v_0,\beta)\neq 0$ for all $\beta\in\Phi$ by the definition of $v_0$. It follows from Corollary 1.10 that $\Pi'$ is the set of simple roots with respect to the root order defined by $w(v_0)$. If $\beta\in\Pi$, then $(w(\beta),w(v_0))=(\beta,v_0)>0$. Thus, both $w(\Pi)$ and $\Pi'$ are root bases with respect to the root order defined by $w(v_0)$. It follows that $w(\Pi)=\Pi'$.

1.5. Cartan matrices. Let $\Phi$ be a root system with ordered base $\Pi=\{\alpha_1,\ldots,\alpha_l\}$.

Definition 1.17. The Cartan matrix of $\Phi$ with respect to $\Pi$ is the $l\times l$ matrix $C=(C_{i,j})_{1\leq i,j\leq l}$ where $C_{i,j}=\langle\alpha_i,\alpha_j\rangle$.

Note that each diagonal entry of a Cartan matrix is $2$ and, by Corollary 1.11 and Lemma 1.2, each off-diagonal entry lies in $\{0,-1,-2,-3\}$. Since $(s_\alpha(u),s_\alpha(v))=(u,v)$ for all $\alpha\in\Phi$, Theorem 1.16 shows that the Cartan matrix of $\Phi$ depends only on the ordering adopted for the chosen base $\Pi$, and not on the base itself. If $C$ and $C'$ are two Cartan matrices of a root system $\Phi$, then they are equivalent (and we write $C\sim C'$) if and only if there is a permutation $\sigma\in\mathrm{Sym}(l)$ with $C_{i,j}=C'_{\sigma(i),\sigma(j)}$ for all $1\leq i,j\leq l$. We show that a Cartan matrix essentially determines the root system; we first need more notation.

Definition 1.18. Let $\Phi$ and $\Phi'$ be root systems of $V$ and $V'$, respectively. Then $\Phi$ and $\Phi'$ are isomorphic (and we write $\Phi\cong\Phi'$) if there is a vector space isomorphism $\varphi\colon V\to V'$ such that $\varphi(\Phi)=\Phi'$ and $\langle\alpha,\beta\rangle=\langle\varphi(\alpha),\varphi(\beta)\rangle$ for all $\alpha,\beta\in\Phi$.

An isomorphism of root systems preserves angles between root vectors; it does not necessarily preserve lengths, as the map $v\mapsto\lambda v$ induces an isomorphism between $\Phi$ and $\{\lambda\alpha\mid\alpha\in\Phi\}$.

Lemma 1.19. Let $\Phi$ and $\Phi'$ be root systems with Cartan matrices $C$ and $C'$, respectively. Then $\Phi\cong\Phi'$ if and only if $C\sim C'$.
Proof. Let $\Pi$ and $\Pi'$ be root bases which define $C$ and $C'$, respectively. First, suppose there is an isomorphism $\varphi\colon\Phi\to\Phi'$ of root systems. Since $\varphi(\Pi)$ is a base of $\Phi'$, there is $w\in W(\Phi')$ with $\varphi(\Pi)=w(\Pi')$, see Theorem 1.16. Clearly, $\Pi$ and $\varphi(\Pi)$ define the same Cartan matrix $C$, and the Cartan matrix of $w(\Pi')$ is equivalent to the Cartan matrix $C'$ of $\Pi'$, thus $C\sim C'$.

Second, suppose $C\sim C'$. Up to reordering the simple roots, we can assume that $C=C'$, defined by $\Pi=\{\alpha_1,\ldots,\alpha_l\}$ and $\Pi'=\{\alpha_1',\ldots,\alpha_l'\}$; thus, $\langle\alpha_i,\alpha_j\rangle=\langle\alpha_i',\alpha_j'\rangle$ for all $i,j$. Let $\varphi\colon V\to V'$ be the linear map defined by $\varphi(\alpha_i)=\alpha_i'$ for all $i$. By definition, this is a vector space isomorphism which satisfies $\varphi(\Pi)=\Pi'$ and $\langle\alpha,\beta\rangle=\langle\varphi(\alpha),\varphi(\beta)\rangle$ for all $\alpha,\beta\in\Pi$. It remains to show that $\varphi(\Phi)=\Phi'$. If $v\in V$ and $\alpha_i\in\Pi$, then $\langle v,\alpha_i\rangle=\langle\varphi(v),\alpha_i'\rangle$ follows from the definition of $\varphi$ and the fact that $\langle\cdot,\cdot\rangle$ is linear in the first component. This implies
$$\varphi(s_{\alpha_i}(v))=\varphi(v)-\langle v,\alpha_i\rangle\alpha_i'=s_{\alpha_i'}(\varphi(v)).$$
Thus, the image under $\varphi$ of the orbit of $v\in V$ under the Weyl group $W(\Phi)$ is contained in the orbit of $\varphi(v)$ under $W(\Phi')$. Since $\Phi=\{w(\alpha)\mid w\in W_0,\ \alpha\in\Pi\}$, see Theorem 1.14b), and $\varphi(\Pi)=\Pi'$, it follows that $\varphi(\Phi)\subseteq\Phi'$. The same argument applied to $\varphi^{-1}$ shows $\varphi^{-1}(\Phi')\subseteq\Phi$, hence $\varphi(\Phi)=\Phi'$. In conclusion, $\Phi\cong\Phi'$.

1.6. Dynkin diagrams. Let $\Phi$ be a root system with ordered base $\Pi=\{\alpha_1,\ldots,\alpha_l\}$.

Definition 1.20. The Dynkin diagram of $\Phi$ (with respect to $\Pi$) has vertices $\alpha_1,\ldots,\alpha_l$, and there are $d_{ij}=\langle\alpha_i,\alpha_j\rangle\langle\alpha_j,\alpha_i\rangle\in\{0,1,2,3\}$ edges between $\alpha_i$ and $\alpha_j$ with $i\neq j$; if $\|\alpha_j\|>\|\alpha_i\|$, then these edges are directed, pointing to the shorter root $\alpha_i$. The same graph, but without directions, is the Coxeter graph of $\Phi$.

If there is a single edge between $\alpha$ and $\beta$, then $\|\alpha\|=\|\beta\|$ and the edge is undirected, see Table 1; if there are multiple edges between $\alpha$ and $\beta$, then $\|\alpha\|\neq\|\beta\|$ and the edges are directed.

Theorem 1.21. Two root systems are isomorphic if and only if their Dynkin diagrams are the same (up to relabeling the vertices).

Proof. By Lemma 1.19, isomorphic root systems have equivalent Cartan matrices, and the entries of a Cartan matrix determine the Dynkin diagram. Thus, up to relabeling the simple roots, the Dynkin diagrams are the same. Conversely, from a Dynkin diagram one can recover the values $\langle\alpha_i,\alpha_j\rangle$ for all $1\leq i,j\leq l$: recall that $\langle\alpha_i,\alpha_j\rangle\leq 0$ for $i\neq j$, and Table 1 determines the angle between $\alpha_i$ and $\alpha_j$ and the ratio of their lengths. In particular, the Cartan matrix is determined. Together with Lemma 1.19, it follows that identical Dynkin diagrams define equivalent Cartan matrices, which define isomorphic root systems.

Theorem 1.22. A root system $\Phi$ is irreducible if and only if its Dynkin diagram is connected.

Example 1.23. Consider the root system of type $B_2$ from Example 1.3. We have $\Phi=\{\pm\alpha,\pm\beta,\pm(\alpha+\beta),\pm(2\alpha+\beta)\}$ with base $\Pi=\{\alpha,\beta\}$. The angle between $\alpha$ and $\beta$ is $3\pi/4$, and $\|\beta\|>\|\alpha\|$. Table 1 shows that $\langle\alpha,\beta\rangle=-1$ and $\langle\beta,\alpha\rangle=-2$. Thus, the associated Cartan matrix and Dynkin diagram are
$$C=\begin{pmatrix}2&-1\\-2&2\end{pmatrix}\quad\text{and}\quad B_2\colon\ \beta\Rightarrow\alpha,$$
where the double edge is directed towards the shorter root $\alpha$. Conversely, from such a diagram we read off that $\|\beta\|>\|\alpha\|$ and $\langle\alpha,\beta\rangle\langle\beta,\alpha\rangle=2$; Table 1 shows $\langle\alpha,\beta\rangle=-1$ and $\langle\beta,\alpha\rangle=-2$; recall that both values must be negative by Corollary 1.11. In particular, the angle between $\alpha$ and $\beta$ is $3\pi/4$, and $\|\beta\|=\sqrt{2}\,\|\alpha\|$. Note that we have recovered the Cartan matrix and, using Corollary 1.15, we can recover the root system by constructing the closure of $\{\pm\alpha,\pm\beta\}$ under simple reflections; the latter can be translated into an efficient algorithm (for arbitrary Dynkin diagrams); see the sketch at the end of these notes. In conclusion, we have seen how a root system is (up to isomorphism) uniquely determined by its Dynkin diagram.

2. Irreducible root systems

By Lemma 1.5, it suffices to study irreducible root systems; the associated Dynkin diagrams are classified in the following theorem.

Theorem 2.1. The Dynkin diagram of an irreducible root system is either a member of one of the four families $A_n$ ($n\geq 1$), $B_n$ ($n\geq 2$), $C_n$ ($n\geq 3$), $D_n$ ($n\geq 4$) as shown in Table 2, where each diagram has $n$ vertices, or one of the five exceptional diagrams $E_6$, $E_7$, $E_8$, $G_2$, $F_4$ as shown in Table 3. Each of the diagrams listed in Tables 2 and 3 occurs as the Dynkin diagram of some irreducible root system.

(Table 2: $A_n$ is a line of $n$ vertices with single edges; $B_n$ is the same line whose last edge is a double edge directed towards the final, short, root; $C_n$ is the same line with the double edge directed the other way; $D_n$ is a line of $n-2$ vertices with two further vertices attached to its last vertex.)

Table 2. Four infinite families of connected Dynkin diagrams.

(Table 3: $G_2$ consists of two vertices joined by a triple edge; $F_4$ is a line of four vertices whose middle edge is a double edge; $E_6$, $E_7$, $E_8$ are lines of $5$, $6$, and $7$ vertices, respectively, with one further vertex attached to the third vertex of the line.)

Table 3. Five exceptional Dynkin diagrams.

Sketch of Proof. Recall that the Coxeter graph of an irreducible root system is the (connected) Dynkin diagram with all edges considered as undirected; the first step of the proof is to classify all connected Coxeter graphs. For this, we consider admissible sets of
a Euclidean space $V$ with inner product $(\cdot,\cdot)$, that is, sets $A=\{v_1,\ldots,v_n\}$ of linearly independent unit vectors with $(v_i,v_j)\leq 0$ and $4(v_i,v_j)^2\in\{0,1,2,3\}$ if $i\neq j$. The associated graph $\Gamma_A$ has vertices $v_1,\ldots,v_n$, and $d_{ij}=4(v_i,v_j)^2$ edges between $v_i$ and $v_j$ if $i\neq j$. Every Coxeter graph is the graph $\Gamma_A$ for some admissible set $A$ (normalise the simple roots to unit length). We determine the structure of $\Gamma_A$ for an admissible set $A$; we assume that $\Gamma_A$ is connected and proceed as follows:

a) The number of pairs of vertices in $\Gamma_A$ joined by at least one edge is at most $|A|-1$: the vector $v=v_1+\ldots+v_n\neq 0$ satisfies $(v,v)=n+2\sum_{i<j}(v_i,v_j)>0$, and so $n>\sum_{i<j}-2(v_i,v_j)=\sum_{i<j}\sqrt{d_{ij}}\geq N$, where $N$ is the number of pairs $\{v_i,v_j\}$ such that $d_{ij}\geq 1$.

b) The graph $\Gamma_A$ contains no cycles: the vectors in a cycle of $\Gamma_A$ form an admissible set $A'$ which contradicts a).

c) No vertex in $\Gamma_A$ lies on more than three edges (counted with multiplicity): Let $w$ be a vertex of $\Gamma_A$ with adjacent vertices $w_1,\ldots,w_k$. Since there are no cycles, $(w_i,w_j)=0$ for $i\neq j$. Let $U=\mathrm{Span}_{\mathbb{R}}(w_1,\ldots,w_k,w)$, and extend $\{w_1,\ldots,w_k\}$ to an orthonormal basis of $U$, say by adjoining $w_0$. Clearly, $(w,w_0)\neq 0$ and $w=\sum_{i=0}^k(w,w_i)w_i$. By assumption, $w$ is a unit vector, so $1=(w,w)=\sum_{i=0}^k(w,w_i)^2$. Since $(w,w_0)^2>0$, this shows that $\sum_{i=1}^k(w,w_i)^2<1$. Hence the number of edges at $w$, which equals $\sum_{i=1}^k 4(w,w_i)^2<4$, is at most three; in particular, $k\leq 3$.

d) If $\Gamma_A$ has a triple edge, then $\Gamma_A$ is the Coxeter graph of type $G_2$: This follows from c) and the fact that $\Gamma_A$ is assumed to be connected.

e) Suppose $\Gamma_A$ has a subgraph which is a line along $w_1,\ldots,w_k$ with single edges between $w_i$ and $w_{i+1}$; let $A'=(A\setminus\{w_1,\ldots,w_k\})\cup\{w\}$ where $w=w_1+\ldots+w_k$. Then $A'$ is admissible and the graph $\Gamma_{A'}$ is obtained from $\Gamma_A$ by shrinking the line to a single vertex: Clearly, $A'$ is linearly independent, so we only need to verify the conditions on the inner products. By assumption, $2(w_i,w_{i+1})=-1$ for $1\leq i\leq k-1$ and $(w_i,w_j)=0$ for other $i\neq j$, thus $(w,w)=k+2\sum_{i=1}^{k-1}(w_i,w_{i+1})=k-(k-1)=1$, so $w$ is a unit vector. Suppose $v\in A'$ with $v\neq w_i$ for $1\leq i\leq k$; since there are no cycles, $v$ is joined to at most one $w_i$. Thus, either $(v,w)=0$ or $(v,w)=(v,w_i)$ for that $w_i$, and then $4(v,w)^2\in\{0,1,2,3\}$, so $A'$ is an admissible set; also $\Gamma_{A'}$ is as described.

f) A branch vertex is a vertex which is adjacent to three or more edges; by c), a branch vertex lies on exactly three edges. The graph $\Gamma_A$ has no more than one double edge, not both a double edge and a branch vertex, and no more than one branch vertex: If $\Gamma_A$ contains two or more double edges, then it has a subgraph which is a line along $w_1,\ldots,w_k$ with single edges between $w_2,\ldots,w_{k-1}$, and double edges between $w_1$ and $w_2$, and between $w_{k-1}$ and $w_k$. By e), shrinking the inner line to a single vertex $v$, we obtain an admissible set $\{w_1,v,w_k\}$ with two edges between $w_1$ and $v$, and two edges between $v$ and $w_k$, contradicting c). The other two parts of the claim are proved in a similar way.

g) If $\Gamma_A$ has a subgraph which is a line along $w_1,\ldots,w_k$ with single edges, then $(w,w)=k(k+1)/2$ where $w=w_1+2w_2+\ldots+kw_k$: The shape of the subgraph implies $2(w_i,w_{i+1})=-1$ for $1\leq i\leq k-1$, and $(w_i,w_j)=0$ for other $i\neq j$; the claim follows since $(w,w)=\sum_{i=1}^k i^2-\sum_{i=1}^{k-1}i(i+1)=k^2-\frac{k(k-1)}{2}=\frac{k(k+1)}{2}$.
h) If $\Gamma_A$ has a double edge, then $\Gamma_A$ is a Coxeter graph of type $B_n$ or $F_4$: Such a $\Gamma_A$ is a line along $w_1,\ldots,w_p,u_q,u_{q-1},\ldots,u_1$ with single edges and one double edge between $w_p$ and $u_q$. By g), $(w,w)=p(p+1)/2$ and $(u,u)=q(q+1)/2$ for $w=\sum_{i=1}^p iw_i$ and $u=\sum_{i=1}^q iu_i$. From the graph, $4(w_p,u_q)^2=2$ and $(w_i,u_j)=0$ otherwise, hence $(w,u)^2=(pw_p,qu_q)^2=p^2q^2/2$. As $w$ and $u$ are linearly independent, the Cauchy-Schwarz inequality implies $(w,u)^2<(w,w)(u,u)$, which yields $2pq<(p+1)(q+1)$, hence $(p-1)(q-1)=pq-p-q+1<2$. So either $p=1$ or $q=1$ (type $B_n$), or $p=q=2$ (type $F_4$).

i) If $\Gamma_A$ has a branch vertex, then $\Gamma_A$ is of type $D_n$ for some $n\geq 4$, or $E_6$, $E_7$, or $E_8$: Such a graph consists of three lines $v_1,\ldots,v_p,z$ and $w_1,\ldots,w_q,z$ and $x_1,\ldots,x_r,z$, connected at the branch vertex $z$; we can assume $p\geq q\geq r$. We have to show that either $q=r=1$, or $q=2$, $r=1$, and $p\leq 4$. Let $v=\sum_{i=1}^p iv_i$, $w=\sum_{i=1}^q iw_i$, and $x=\sum_{i=1}^r ix_i$. Note that $v$, $w$, $x$ are pairwise orthogonal and $U=\mathrm{Span}_{\mathbb{R}}(v,w,x,z)$ has orthonormal basis $\{\hat v,\hat w,\hat x,z_0\}$ for a suitable $z_0$, where $\hat u=u/\|u\|$. Write $z=(z,\hat v)\hat v+(z,\hat w)\hat w+(z,\hat x)\hat x+(z,z_0)z_0$; as $z$ is a unit vector and $(z,z_0)\neq 0$, we get $(z,\hat v)^2+(z,\hat w)^2+(z,\hat x)^2<1$. By g), the lengths of $v$, $w$, and $x$ are known. Also, $(z,v)^2=(z,pv_p)^2=p^2/4$, and similarly $(z,w)^2=q^2/4$ and $(z,x)^2=r^2/4$. Substituting these in the previous inequality gives
$$\frac{p}{2(p+1)}+\frac{q}{2(q+1)}+\frac{r}{2(r+1)}<1,$$
which is equivalent to $(p+1)^{-1}+(q+1)^{-1}+(r+1)^{-1}>1$. Since $(p+1)^{-1}\leq(q+1)^{-1}\leq(r+1)^{-1}\leq 1/2$, we have $1<3/(r+1)$, and hence $r<2$, so $r=1$. Repeating this argument gives $q<3$, so $q=1$ or $q=2$. If $q=2$, then $p<5$. On the other hand, if $q=1$, then there is no restriction on $p$.

We have proved that the Coxeter graph of an irreducible root system is a Coxeter graph of type $A_n$, $B_n$, $C_n$, $D_n$, $E_6$, $E_7$, $E_8$, $F_4$, or $G_2$ (note that $B_n$ and $C_n$ have the same Coxeter graph and differ only in the direction of the double edge); reinstating the edge directions, this proves that every connected Dynkin diagram occurs in Tables 2 and 3. That every diagram in Tables 2 and 3 indeed occurs as the Dynkin diagram of some root system follows from a direct construction. We omit the proof here; see Section 3 for more details.

3. Root systems of Lie algebras

In this section we give a very brief description of how the finite dimensional simple Lie algebras over $\mathbb{C}$ can be classified by Dynkin diagrams. A Lie algebra over the complex numbers is a $\mathbb{C}$-vector space $\mathfrak{g}$ together with a multiplication (Lie bracket) $[\cdot,\cdot]\colon\mathfrak{g}\times\mathfrak{g}\to\mathfrak{g}$, $(g,h)\mapsto[g,h]$, which is bilinear and for all $g,h,k\in\mathfrak{g}$ satisfies $[g,g]=0$ and the Jacobi identity $[g,[h,k]]+[h,[k,g]]+[k,[g,h]]=0$. The Lie algebra $\mathfrak{g}$ is simple if it is non-abelian and its only ideals are $\{0\}$ and $\mathfrak{g}$. Every $g\in\mathfrak{g}$ acts on $\mathfrak{g}$ via the linear transformation $\mathrm{ad}(g)\colon\mathfrak{g}\to\mathfrak{g}$, $h\mapsto[g,h]$; call $g\in\mathfrak{g}$ semisimple if $\mathrm{ad}(g)$ is diagonalisable. A subalgebra $\mathfrak{h}\leq\mathfrak{g}$ is a Cartan subalgebra if it is abelian, consists of semisimple elements of $\mathfrak{g}$, and is maximal with these properties; up to conjugacy, $\mathfrak{g}$ has a unique Cartan subalgebra.
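To see $\mathrm{ad}$ and semisimple elements in the smallest example, the following Python sketch (our own illustration, using the standard basis of $\mathfrak{sl}_2(\mathbb{C})$; it is not part of the original notes) computes the matrix of $\mathrm{ad}(h)$ for the diagonal element $h$ and exhibits its eigenvalues $0$, $2$, $-2$; this anticipates the root space decomposition defined in the next paragraph.

```python
import numpy as np

# Standard basis of sl_2(C): h = diag(1,-1), e, f with
# [h,e] = 2e, [h,f] = -2f, [e,f] = h.
h = np.array([[1, 0], [0, -1]], dtype=complex)
e = np.array([[0, 1], [0, 0]], dtype=complex)
f = np.array([[0, 0], [1, 0]], dtype=complex)
basis = [h, e, f]

def bracket(x, y):
    """Lie bracket [x, y] = xy - yx of matrices."""
    return x @ y - y @ x

def coords(x):
    """Coordinates of a traceless 2x2 matrix x in the basis (h, e, f)."""
    return np.array([x[0, 0], x[0, 1], x[1, 0]])

# Matrix of ad(h): its columns are the coordinates of [h, b], b in basis.
ad_h = np.column_stack([coords(bracket(h, b)) for b in basis])
print(np.real(ad_h))  # diagonal matrix with entries 0, 2, -2

# Hence ad(h) is diagonalisable, i.e. h is semisimple, and span(h) is a
# Cartan subalgebra; the eigenvalues +-2 on span(e) and span(f) are the
# values alpha(h) of the two roots +-alpha.
```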
Let $\mathfrak{h}\leq\mathfrak{g}$ be a Cartan subalgebra of a finite dimensional Lie algebra $\mathfrak{g}$ over the complex numbers. Since the maps $\mathrm{ad}(h)$, $h\in\mathfrak{h}$, are pairwise commuting diagonalisable endomorphisms of $\mathfrak{g}$, there exists a $\mathbb{C}$-basis of $\mathfrak{g}$ such that, with respect to this basis, every $\mathrm{ad}(h)$, $h\in\mathfrak{h}$, is a diagonal matrix. Denote by $\mathfrak{h}^*$ the dual space of $\mathfrak{h}$, that is, the space of linear maps $\mathfrak{h}\to\mathbb{C}$. The root space decomposition of $\mathfrak{g}$ with respect to $\mathfrak{h}$ is
$$\mathfrak{g}=\mathfrak{h}\oplus\bigoplus\nolimits_{\alpha\in\Phi}\mathfrak{g}_\alpha,\quad\text{where}\quad\mathfrak{g}_\alpha=\{x\in\mathfrak{g}\mid\forall h\in\mathfrak{h}\colon[h,x]=\alpha(h)x\}$$
and $\Phi\subseteq\mathfrak{h}^*$ is the set of all non-zero $\alpha\in\mathfrak{h}^*$ with $\mathfrak{g}_\alpha\neq\{0\}$. Each such $\mathfrak{g}_\alpha$ is 1-dimensional, spanned by a common eigenvector for each $\mathrm{ad}(h)$, $h\in\mathfrak{h}$. It turns out that $V=\mathrm{Span}_{\mathbb{R}}(\Phi)$ can be furnished with an inner product $(\cdot,\cdot)$ such that $\Phi$ is a root system of $V$. (We omit the technical details here¹; proving that $\Phi$ satisfies the axioms of a root system requires significant effort.) This root system is irreducible if and only if $\mathfrak{g}$ is simple. Moreover, there is a one-to-one correspondence between the isomorphism types of finite dimensional simple Lie algebras over the complex numbers and the isomorphism types of irreducible root systems. Thus, such Lie algebras can be classified up to isomorphism by the different types of connected Dynkin diagrams. In particular, it turns out that for each of the Dynkin diagrams in Tables 2 and 3 there exists a Lie algebra whose root system has this Dynkin diagram. This result completes the proof of Theorem 2.1. The Dynkin diagrams of type $A_n$, $B_n$, $C_n$, $D_n$ correspond to the classical Lie algebras $\mathfrak{sl}_{n+1}(\mathbb{C})$, $\mathfrak{so}_{2n+1}(\mathbb{C})$, $\mathfrak{sp}_{2n}(\mathbb{C})$, $\mathfrak{so}_{2n}(\mathbb{C})$.

4. More general: Coxeter groups

A Coxeter group is a group generated by finitely many involutions (elements of order 2), satisfying specific relations. More precisely, a Coxeter group is a group satisfying a presentation
$$\langle w_1,\ldots,w_k\mid(w_iw_j)^{n_{ij}}=1\rangle$$
where $n_{ii}=1$ for all $i$ and $n_{ji}=n_{ij}\geq 2$ if $i\neq j$. The corresponding Coxeter matrix is the symmetric $k\times k$ matrix with integer entries $n_{ij}$, $1\leq i,j\leq k$. The associated Coxeter diagram has vertices $w_1,\ldots,w_k$ and, if $n_{ij}\geq 3$, an edge between $w_i$ and $w_j$; if $n_{ij}\geq 4$, then this edge is labelled $n_{ij}$. Finite Coxeter groups can, up to isomorphism, be classified by their Coxeter diagrams; the list of possible Coxeter diagrams contains the Coxeter diagrams of the Dynkin diagrams in Tables 2 and 3. The Weyl group of a root system is a so-called reflection group (a group generated by hyperplane reflections of a Euclidean space), which is a special type of Coxeter group.

¹The Killing form of $\mathfrak{g}$ is defined by $\kappa(g,h)=\mathrm{tr}(\mathrm{ad}(g)\circ\mathrm{ad}(h))$ where $g,h\in\mathfrak{g}$; it is a bilinear symmetric form, and non-degenerate if and only if $\mathfrak{g}$ is semisimple. Also the restriction of $\kappa$ to $\mathfrak{h}\times\mathfrak{h}$ is non-degenerate, hence it defines an isomorphism $\varphi\colon\mathfrak{h}\to\mathfrak{h}^*$, $h\mapsto\kappa(h,\cdot)$. For $\alpha\in\Phi\subseteq\mathfrak{h}^*$ let $t_\alpha\in\mathfrak{h}$ with $\kappa(t_\alpha,\cdot)=\alpha(\cdot)$. Now, if $\alpha,\beta\in\Phi$, then $(\alpha,\beta)=\kappa(t_\alpha,t_\beta)=\alpha(t_\beta)\in\mathbb{Q}$ defines a real-valued inner product on $V=\mathrm{Span}_{\mathbb{R}}(\Phi)$; note that if $x_\beta\in\mathfrak{g}_\beta$, then $\mathrm{ad}(t_\theta)(x_\beta)=\beta(t_\theta)x_\beta$, which implies that $(\theta,\theta)=\kappa(t_\theta,t_\theta)=\sum_{\beta\in\Phi}\beta(t_\theta)^2=\sum_{\beta\in\Phi}(\beta,\theta)^2\geq 0$ is real. If $(\theta,\theta)=0$, then $\beta(t_\theta)=0$ for all roots $\beta$, hence $t_\theta=0$ and $\theta=0$.
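Finally, we return to the algorithm alluded to in Example 1.23: by Corollary 1.15 and Theorem 1.14b), the whole root system can be recovered from its Cartan matrix by closing $\{\pm\alpha\mid\alpha\in\Pi\}$ under simple reflections. The following Python sketch (our own illustration) works with integer coefficient vectors relative to the simple roots; it uses that, for $\beta=\sum_i k_i\alpha_i$, the pairing is $\langle\beta,\alpha_j\rangle=\sum_i k_iC_{i,j}$, so $s_{\alpha_j}$ changes only the $j$-th coefficient.

```python
def all_roots(C):
    """All roots of the root system with Cartan matrix C, where
    C[i][j] = <alpha_i, alpha_j> as in Definition 1.17, returned as
    integer coefficient vectors with respect to the simple roots."""
    l = len(C)
    simple = [tuple(1 if j == i else 0 for j in range(l)) for i in range(l)]
    roots = set(simple) | {tuple(-k for k in r) for r in simple}
    frontier = set(roots)
    while frontier:  # closure under the simple reflections s_1, ..., s_l
        new = set()
        for beta in frontier:
            for j in range(l):
                # <beta, alpha_j> = sum_i k_i * C[i][j]; the reflection
                # s_j subtracts this value from the j-th coefficient.
                pairing = sum(beta[i] * C[i][j] for i in range(l))
                image = list(beta)
                image[j] -= pairing
                new.add(tuple(image))
        frontier = new - roots
        roots |= new
    return roots

# B2 with ordered base (alpha, beta) as in Example 1.23:
C_B2 = [[2, -1], [-2, 2]]
print(sorted(all_roots(C_B2)))
# 8 roots: +-(1,0), +-(0,1), +-(1,1), +-(2,1), that is,
# +-alpha, +-beta, +-(alpha+beta), +-(2alpha+beta).
```

The loop terminates because every vector produced is again a root by (R3), and a root system is finite; by Theorem 1.14b) every root is reached.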