My name is Hanzhang Yin, and I have developed this website as a resource to facilitate the review of key concepts in abstract algebra. The site is still a work in progress. Feel free to email me at hanyin@ku.edu if you find any errors or have suggestions for improvement.
Example. Let \( V = \mathbb{R}^3 \) with the standard basis \( B = \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\} \) and let \( W = \mathbb{R}^2 \) with the standard basis \( \mathcal{E} = \{(1, 0), (0, 1)\} \). Let \( \varphi \) be the linear transformation \( \varphi(x, y, z) = (x + 2y, x + y + z) \). Since \( \varphi(1, 0, 0) = (1, 1) \), \( \varphi(0, 1, 0) = (2, 1) \), \( \varphi(0, 0, 1) = (0, 1) \), the matrix \( A = M_{\mathcal{E}, B}(\varphi) \) is the matrix \[ \begin{pmatrix} 1 & 2 & 0 \\ 1 & 1 & 1 \end{pmatrix}. \]
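As a quick SageMath check (a small sketch reusing the matrix \( A \) from this example), multiplying \( A \) by the standard basis vectors recovers the images \( \varphi(1,0,0) \), \( \varphi(0,1,0) \), \( \varphi(0,0,1) \):
sage: A = matrix(QQ, [[1, 2, 0], [1, 1, 1]])
sage: A * vector(QQ, [1, 0, 0])
(1, 1)
sage: A * vector(QQ, [0, 1, 0])
(2, 1)
sage: A * vector(QQ, [0, 0, 1])
(0, 1)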
Definition 5.16 Let \( V \) be a finite-dimensional vector space over a field \( \mathbb{F} \). The dual space of \( V \), denoted by \( V' \), is \( \mathcal{L}(V, \mathbb{F}) \), that is, the vector space of all linear transformations from \( V \) to \( \mathbb{F} \), the latter regarded as a vector space of dimension one. Elements of \( V' \) are called linear functionals.
Definition Let \( A \) be a \( K \times K \) matrix. Let \( \lambda_k \) be one of the eigenvalues of \( A \) and denote its associated eigenspace by \( E_k \). The dimension of \( E_k \) is called the geometric multiplicity of the eigenvalue \( \lambda_k \).
Remark. For each eigenvalue \(\lambda\), compute \(\ker(A - \lambda \cdot I)\). This is the \(\lambda\)-eigenspace; its nonzero vectors are the \(\lambda\)-eigenvectors.
Remark. Hence, the geometric multiplicity of an eigenvalue \(\lambda\) is \(\dim(\ker(A - \lambda \cdot I))\).
The algebraic multiplicity of the eigenvalue is its multiplicity as a root of the characteristic polynomial.
Proposition Let \( A \) be a \( K \times K \) matrix. Let \( \lambda_k \) be one of the eigenvalues of \( A \). Then, the geometric multiplicity of \( \lambda_k \) is less than or equal to its algebraic multiplicity.
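A small SageMath illustration (the \( 3 \times 3 \) matrix below is my own example): the characteristic polynomial is \( (x-2)^2(x-3) \), so the algebraic multiplicity of \( \lambda = 2 \) is \( 2 \), while its geometric multiplicity \( \dim(\ker(A - 2I)) \) is only \( 1 \), consistent with the proposition.
sage: A = matrix(QQ, [[2, 1, 0], [0, 2, 0], [0, 0, 3]])
sage: A.charpoly()
x^3 - 7*x^2 + 16*x - 12
sage: (A - 2*identity_matrix(3)).right_kernel().dimension()
1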
Definition. Suppose \( V \) is finite-dimensional and \( T \in \mathcal{L}(V) \). Then the minimal polynomial of \( T \) is the unique monic polynomial \( p \in \mathcal{P}(F) \) of smallest degree such that \( p(T) = 0 \).
"Linear Algebra Done Right (Forth Edition)", Page 145
Definition (Order Ideal). Let \( T \in \mathcal{L}(V, V) \) and \( v \in V \). The order ideal of \( v \) with respect to \( T \), denoted by \(\text{Ann}(T, v)\), is the set of all polynomials \( f(x) \) such that \( v \in \text{Ker}(f(T)) \), that is, \( f(T)(v) = 0 \): \[ \text{Ann}(T, v) = \{ f(x) \in \mathbb{F}[x] \mid f(T)(v) = 0 \}. \]
"Advanced Linear Algebra (2nd Edition)", Page 106
Note that \(\text{Ann}(T, v)\) contains a monic polynomial \(\mu(x)\) such that every polynomial in \(\text{Ann}(T, v)\) is a multiple of \(\mu(x)\). Recall such a polynomial is called a generator of \(\text{Ann}(T, v)\).
"Advanced Linear Algebra (2nd Edition)", Page 107
Definition(Minimal polynomial of \( T \) with respect to \( v \)). Let \( V \) be a finite-dimensional vector space, \( T \) an operator on \( V \), and \( v \) a vector in \( V \). The unique monic generator of \(\text{Ann}(T, v)\) is called the minimal polynomial of \( T \) with respect to \( v \). It is also sometimes referred to as the order of \( v \) with respect to \( T \). It is denoted here by \( \mu_{T, v}(x) \).
"Advanced Linear Algebra (2nd Edition)", Page 107
Remark. Suppose \( g(x) \in \mathbb{F}[x] \) and \( g(T)(v) = 0 \). Then \( \mu_{T,v}(x) \) divides \( g(x) \).
"Advanced Linear Algebra (2nd Edition)", Page 107
Definition (\(\langle T, v\rangle\)). Let \( V \) be a finite-dimensional vector space, \( T \) an operator on \( V \), and \( v \) a vector from \( V \). Then the \( T \)-cyclic subspace generated by \( v \) is \(\{ f(T)(v) \mid f(x) \in \mathbb{F}[x] \} \). We will denote this by \( \langle T, v \rangle \). By the order of the \( T \)-cyclic subspace \( \langle T, v \rangle \) generated by \( v \), we will mean the polynomial \( \mu_{T,v}(x) \).
"Advanced Linear Algebra (2nd Edition)", Page 109
Definition. Let \( V \) be a finite-dimensional vector space, \( T \) an operator on \( V \). Then the annihilator ideal of \( T \) on \( V \), denoted by \(\text{Ann}(T, V)\) or just \(\text{Ann}(T)\), consists of all polynomials \( f(x) \) such that \( f(T) \) is the zero operator: \[ \text{Ann}(T) = \{ f(x) \in \mathbb{F}[x] \mid f(T)(v) = 0, \forall v \in V \} \]
"Advanced Linear Algebra (2nd Edition)", Page 111
Definition (Minimal Polynomial). Let \( V \) be a finite-dimensional vector space and \( T \) a linear operator on \( V \). The unique monic polynomial of least degree in \(\text{Ann}(T,V)\) is called the minimal polynomial of \( T \). This polynomial is denoted by \(\mu_T(x)\).
"Advanced Linear Algebra (2nd Edition)", Page 111
Theorem. Let \( E/F \) be a field extension.
"POTENTIAL DIAGONALIZABILITY", Keith Conrad
Theorem. Let \( A: V \rightarrow V \) be a linear operator. Then \( A \) is diagonalizable if and only if its minimal polynomial in \( F[T] \) splits in \( F[T] \) and has distinct roots.
"THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS," Keith Conrad
Definition. Let \( V \) be a finite-dimensional vector space, \( T \) an operator on \( V \), and \( U \) a \( T \)-invariant subspace. By a \( T \)-complement to \( U \) in \( V \) we shall mean a \( T \)-invariant subspace \( W \) such that \( V = U \oplus W \).
"Advanced Linear Algebra (2nd Edition)", Page 123
Definition. Let \( V \) be a finite-dimensional vector space and \( T \) an operator on \( V \). \( T \) is said to be a cyclic operator if there is a vector \( v \in V \) such that \( V = \langle T, v \rangle \).
"Advanced Linear Algebra (2nd Edition)", Page 114
Definition. Let \( V \) be a finite-dimensional vector space and \( T \) an operator on \( V \). \( T \) is said to be an indecomposable operator if no non-trivial \( T \)-invariant subspace has a \( T \)-invariant complement. In the contrary situation, where there exist non-trivial \( T \)-invariant subspaces \( U \) and \( W \) such that \( V = U \oplus W \), we say \( T \) is decomposable.
"Advanced Linear Algebra (2nd Edition)", Page 123
Definition. Let \( V \) be a non-zero finite-dimensional vector space and \( T \) an operator on \( V \). \( T \) is said to be an irreducible operator if the only \( T \)-invariant subspaces are \( V \) and \( \{0\} \).
"Advanced Linear Algebra (2nd Edition)", Page 124
Theorem. Let \( V \) be an \( n \)-dimensional vector space and \( T \) an operator on \( V \). Then \( T \) is irreducible if and only if \( T \) is cyclic and \( \mu_T(x) \) is an irreducible polynomial.
"Advanced Linear Algebra (2nd Edition)", Page 124
Theorem. Let \( V \) be a finite-dimensional vector space and \( T \) an operator on \( V \). Assume \( T \) is cyclic and \( \mu_T(x) = p(x)^m \), where \( p(x) \) is an irreducible polynomial and \( m \) is a natural number. Then \( T \) is indecomposable.
"Advanced Linear Algebra (2nd Edition)", Page 125
Theorem. Let \( V \) be a finite-dimensional vector space, \( T \) an operator on \( V \), and assume the minimal polynomial of \( T \) is \( \mu_T(x) = p_1(x)^{e_1} \cdots p_t(x)^{e_t} \), where the polynomials \( p_i(x) \) are irreducible and distinct. For each \( i \), let \[ V_i = V(p_i) = \{ v \in V \mid p_i(T)^{e_i}(v) = 0 \} = \text{Ker}(p_i(T)^{e_i}). \] Then each of the spaces \( V_i \) is \( T \)-invariant and \[ V = V_1 \oplus V_2 \oplus \cdots \oplus V_t. \]
"Advanced Linear Algebra (2nd Edition)", Page 132
Theorem. Let \( V \) be a finite-dimensional vector space and \( T \) a linear operator on \( V \) with minimal polynomial \( \mu_T(x) \). Let \( v \) be a vector such that \( \mu_{T,v}(x) = \mu_T(x) \). Then \( \langle T, v \rangle \) has a \( T \)-invariant complement in \( V \).
"Advanced Linear Algebra (2nd Edition)", Page 134
Theorem. Let \( V \) be a finite-dimensional vector space and \( T \) a linear operator on \( V \). Then there are vectors \( \mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_r \) such that the following hold:
i. \( V = \langle T, \mathbf{w}_1 \rangle \oplus \cdots \oplus \langle T, \mathbf{w}_r \rangle \).
ii. If \( d_i(x) = \mu_{T, \mathbf{w}_i}(x) \) then \( d_r(x) \mid d_{r-1}(x) \mid \cdots \mid d_1(x) = \mu_T(x) \).
"Advanced Linear Algebra (2nd Edition)", Page 134
Definition. The polynomials \( d_1(x), d_2(x), \ldots, d_r(x) \) are called the invariant factors of \( T \).
"Advanced Linear Algebra (2nd Edition)", Page 135
Definition. Let \( V \) be an \( n \)-dimensional vector space and \( T \) be a linear operator on \( V \). The polynomial (of degree \( n \)) obtained by multiplying the invariant factors of \( T \) is called the characteristic polynomial of \( T \). It is denoted by \( \chi_T(x) \).
"Advanced Linear Algebra (2nd Edition)", Page 135
Theorem. Any eigenvalue of a linear operator is a root of its minimal polynomial in \( F[T] \), so the minimal polynomial and characteristic polynomial have the same roots.
"THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS", Keith Conrad
Theorem. Let \( V \) be a finite-dimensional vector space and \( T \) a linear operator on \( V \). Then \( T \) is diagonalizable if and only if \( T \) is completely reducible and \( \mu_T(x) \) factors into linear factors.
"Advanced Linear Algebra (2nd Edition)", Page 136
Theorem. A matrix in \( M_n(F) \) is diagonalizable over some extension field of \( F \) if and only if its minimal polynomial in \( F[T] \) is separable.
Proposition. Let \( K_A \) be the column space of an \( m \times n \) matrix \( A \) over \( \mathbb{Z} \) and let \( B = PAQ \), where \( P \) and \( Q \) are invertible matrices over \( \mathbb{Z} \). Then \( K_A \cong K_B \). Moreover, \(\mathbb{Z}^m / K_A \cong \mathbb{Z}^m / K_B \).
Proof. Let \(K_A\) be the column space of the \(m \times n\) matrix \(A\) over \(\mathbb{Z}\). If \(y\in K_A\), then \(y = Ax\) for some \(x\in \mathbb{Z}^n\). Since \(Q\) is invertible over \(\mathbb{Z}\), we have \(y = Ax = A(QQ^{-1})x = (AQ)(Q^{-1}x)\) with \(Q^{-1}x\in \mathbb{Z}^n\), so \(y\in K_{AQ}\) and \(K_A \subseteq K_{AQ}\). Conversely, if \(y\in K_{AQ}\), then \(y = (AQ)x = A(Qx)\) for some \(x\in \mathbb{Z}^n\), and since \(Qx\in \mathbb{Z}^n\) we get \(y\in K_A\), so \(K_{AQ} \subseteq K_A\). Hence \(K_A = K_{AQ}\). Next we show that \(K_A \cong K_{PA}\). Define \( \phi: K_A \to K_{PA} \) by \( \phi(x) = Px \). Then \(\phi\) is additive, and since \(P\) is invertible it is a bijection, so \(\phi\) is an isomorphism of abelian groups and \( K_A \cong K_{PA} \). Therefore \(K_A\cong K_{PA} = K_{PAQ} = K_B\). Finally, the map \(x\mapsto Px\) is an automorphism of \(\mathbb{Z}^m\) that carries \(K_A\) onto \(PK_A = K_{PA} = K_B\), so it induces an isomorphism \(\mathbb{Z}^m/K_A \cong \mathbb{Z}^m/K_B\). \(\blacksquare\)
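In SageMath this can be made concrete with the Smith normal form (the integer matrix below is my own example): `smith_form()` returns \( D, U, V \) with \( D = UAV \) and \( U, V \) invertible over \( \mathbb{Z} \), so \( \mathbb{Z}^2/K_A \cong \mathbb{Z}^2/K_D \cong \mathbb{Z}/2 \oplus \mathbb{Z}/4 \).
sage: A = matrix(ZZ, [[2, 4], [6, 8]])
sage: D, U, V = A.smith_form()
sage: D
[2 0]
[0 4]
sage: D == U * A * V
True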
\(\textbf{Definition. }\) Let \(G\) be a group and \(H\) be a subgroup of \(G\). We say that \(H\) is a \(\textit{normal subgroup}\) of \(G\) if \(gH = Hg\) for all \(g\in G\).
\(\textbf{Proposition. }\) Let \(\phi : G_1 \to G_2\) be a group homomorphism, with kernel \(K\). Then \(K\) is a normal subgroup of \(G_1\). Conversely, any normal subgroup of \(G_1\) is the kernel of a group homomorphism whose domain is \(G_1\). Thus, normal subgroups are exactly kernels of group homomorphisms.
\(\textbf{Theorem. }\)Let \(\phi : G_1 \to G_2\) be a surjective group homomorphism with kernel \(K\). Then there is a one-to-one, and onto correspondence between the subgroups of \(G_1\) containing \(K\) and the subgroups of \(G_2\) given by \(H \mapsto \phi(H)\), for \(H \subseteq G_1\) containing \(K\), and \(L \mapsto \phi^{-1}(L)\), for \(L \subseteq G_2\). Under this correspondence, \(\phi(H)\) is normal in \(G_2\), if \(H\) is normal in \(G_1\) and \(\phi^{-1}(L)\) is normal in \(G_1\), if \(L\) is normal in \(G_2\).
\(\textbf{Corollary. }\) Let \( G \) be a group and \( N \) a normal subgroup. Then there is a one-to-one, onto correspondence between the subgroups of \( G \) containing \( N \) and the subgroups of \( G/N \). Under this correspondence, the normal subgroups of \( G \) containing \( N \) correspond to the normal subgroups of \( G/N \).
\(\textbf{Theorem. }\) Let \(G\) be a group and \(H \subseteq G\) a subgroup. Assume that \([G : H]\) is the smallest prime dividing \(|G|\). Then \(H\) is normal in \(G\).
\(\textbf{Lagrange's Theorem.}\) Let \(G\) be a finite group and \(H \subseteq G\) a subgroup. Then \[ |G| = |H| \cdot (\text{number of distinct left cosets of } H)\\ = |H| \cdot (\text{number of distinct right cosets of } H). \]
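A quick SageMath sanity check (my own example, with \( G = S_4 \) and \( H \) the cyclic subgroup generated by a \( 4 \)-cycle): the quotient \( |G|/|H| \) counts the cosets of \( H \).
sage: S4 = SymmetricGroup(4)
sage: H = S4.subgroup([S4((1,2,3,4))])
sage: S4.order()
24
sage: H.order()
4
sage: S4.order() / H.order()
6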
\(\textbf{Definition. }\) Let \(G_1\) and \(G_2\) be groups. A bijective function \(\phi: G_1\to G_2\) is called an isomorphism if for all \(a, b\in G_1\), we have \[ \phi(ab) = \phi(a)\phi(b). \]
\(\textbf{First Isomorphism Theorem. }\)Let \(\phi : G_1 \to G_2\) be a surjective group homomorphism with kernel \(K\). Then \(G_1/K \cong G_2\).
\(\textbf{Second Isomorphism Theorem. }\) Let \(K \subseteq N \subseteq G\) be groups such that \(K\) and \(N\) are normal in \(G\). Then \(N/K\) is a normal subgroup of \(G/K\) and \((G/K)/(N/K) \cong G/N\).
Now, we try to define some subgroups in SageMath:
For example: \(S_3, \mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_2\).
# Define the symmetric group S3 using SymmetricGroup
sage: S3 = SymmetricGroup(3)
# Define the group Z2 x Z2 x Z2 using AbelianGroup
sage: Z2xZ2xZ2 = AbelianGroup([2, 2, 2])
\(\textbf{Definition. }\)Let \(G\) be a group. The set of all elements of \(G\) that commute with every element of \(G\), \[ Z(G) = \{g\in G\mid gx=xg\text{ for all }x\in G\}, \] is called the center of \(G\).
\(\textbf{Definition. }\)Let \(A\) be any subset of a group \(G\). The subset of \(G\) given by \[ C_G(A) = \{g\in G\mid gag^{-1}=a\text{ for all }a\in A\} \] is called the centralizer of \(A\) in \(G\).
\(\textbf{Proposition. }\)Let \(A\) be any subset of a group \(G\). Then \(C_G(A)\) is a subgroup of \(G\).
\(\textbf{Proof. }\)We prove this using the subgroup criterion. First, \(e\in C_G(A)\) since \(eae^{-1}=a\) for all \(a\in A\), so \(C_G(A)\) is nonempty. Let \(x, y\in C_G(A)\) and \(a\in A\). Then we have \[ yay^{-1}=a\in A. \] Hence, \[ \begin{align} y^{-1}a(y^{-1})^{-1} = y^{-1}ay &= y^{-1}(yay^{-1})y = a\in A. \end{align} \] Thus, we have \(y^{-1}\in C_G(A)\). Moreover, we have \[ \begin{align} (xy^{-1})a(xy^{-1})^{-1} &= x(y^{-1}ay)x^{-1}\\ &= xax^{-1} \\ &= a\in A. \end{align} \] Hence, we have \(xy^{-1}\in C_G(A)\). Therefore, \(C_G(A)\) is a subgroup of \(G\). \[ \tag*{\(\square\)} \]
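These subgroups can be computed directly in SageMath (a small sketch; the group \( S_3 \) and the \( 3 \)-cycle are my own choices). The center of \( S_3 \) is trivial, and the centralizer of a \( 3 \)-cycle is the cyclic subgroup it generates.
sage: S3 = SymmetricGroup(3)
sage: S3.center().order()
1
sage: g = S3((1,2,3))
sage: S3.centralizer(g).order()
3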
\(\textbf{Definition. }\) Let \(A\) be any subset of a group \(G\). The subset of \(G\) given by \[ N_G(A) = \{g\in G\mid gAg^{-1}=A\} \] is called the normalizer of \(A\) in \(G\).
\(\textbf{Definition. }\)Let \(G\) be a group acting on a set \(X\) and let \(x\in X\). The subset of \(G\) given by \[ G_x = \{g\in G\mid gx=x\} \] is called the stabilizer of \(x\) in \(G\).
\(\textbf{Definition. }\)Let \(G\) be a group acting on a set \(X\). The subset of \(G\) given by \[ G_X = \{g\in G\mid gx=x\text{ for all }x\in X\} \] is called the stabilizer of \(X\) in \(G\).
\(\textbf{Definition. }\) Let \(G\) be a group acting on a set \(X\). The orbit of an element \(x\in X\) is the set \[ Gx = \{gx\mid g\in G\}. \]
\(\textbf{Lemma. }\)Let \(G\) be a group of order \(p^t\), with \(p\) prime, and assume \(G\) acts on the finite set \(X\). If \(r\) denotes the number of orbits with just one element, then \(|X| \equiv r \pmod{p}\).
\(\textbf{Proposition.}\) Assume the group \( G \) acts on the set \( X \). Fix \( x \in X \). Then there is a 1-1, onto set map between \(\text{orb}(x)\) and the set of distinct left cosets of \( G_x \) given by \( g \cdot x \mapsto gG_x \). In particular, if \(\text{orb}(x)\) or \([G : G_x]\) is finite, then \(\left|\text{orb}(x)\right| = [G : G_x]\), and it follows that \(\left|\text{orb}(x)\right|\) divides \(\left|G\right|\), if \( G \) is finite.
Orbit Stabilizer Theorem. Let \( G \) be a group which acts on a finite set \( X \). Let \( x \in X \). Let \( \text{Orb}(x) \) denote the orbit of \( x \). Let \( \text{Stab}(x) \) denote the stabilizer of \( x \) by \( G \). Let \( [G : \text{Stab}(x)] \) denote the index of \( \text{Stab}(x) \) in \( G \). Then: \[ |\text{Orb}(x)| = [G : \text{Stab}(x)] = \frac{|G|}{|\text{Stab}(x)|} \]
\(\textbf{Class Equation. }\) Let \( G \) be a finite group. Then: \[ |G| = |Z(G)| + \sum_{i=1}^r |c(x_i)| = |Z(G)| + \sum_{i=1}^r [G : C_G(x_i)], \] where the sum is taken over the distinct conjugacy classes with more than one element. Here \( Z(G) \) denotes the center of \( G \), where \( Z(G) := \{g \in G \mid gx = xg, \text{ for all } x \in G\} \).
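For example, in SageMath (with \( S_3 \), my own choice) the conjugacy classes have sizes \( 1, 2, 3 \), the center is trivial, and the class equation reads \( 6 = 1 + 2 + 3 \).
sage: S3 = SymmetricGroup(3)
sage: S3.center().order()
1
sage: sorted(c.cardinality() for c in S3.conjugacy_classes())
[1, 2, 3]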
\(\textbf{Definition(\(p\)-group). }\)A group \(G\) is called a \(p\)-group if \(|G|=p^n\) (order of \(G\) is \(p^n\)) for some prime \(p\) and some integer \(n\geq 0\).
\(\textbf{Definition(\(p\)-subgroup). }\)A subgroup \(H\) of a group \(G\) is called a \(p\)-subgroup if \(|H|=p^n\) for some prime \(p\) and some integer \(n\geq 0\).
\(\textbf{Definition. }\)Let \(G\) be a group and \(p\) be a prime. If \(G\) is a group of order \(p^nm\) where \(p\not\mid m\), then a subgroup \(H\) of \(G\) such that \(|H|=p^n\) is called a Sylow \(p\)-subgroup of \(G\).
\(\textbf{Definition. }\)Let \(G\) be a group and \(p\) be a prime. The set of all Sylow \(p\)-subgroups of \(G\) is denoted by \(Syl_p(G)\).
\(\textbf{Definition. }\)Let \(G\) be a group and \(p\) be a prime. The number of Sylow \(p\)-subgroups of \(G\) is denoted by \(n_p(G)\).
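SageMath can produce Sylow \( p \)-subgroups directly (my own example with \( G = S_4 \), where \( |S_4| = 24 = 2^3 \cdot 3 \)).
sage: S4 = SymmetricGroup(4)
sage: S4.sylow_subgroup(2).order()
8
sage: S4.sylow_subgroup(3).order()
3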
\(\textbf{Sylow's Theorem. }\) Let \( G \) be a group of order \( p^{\alpha} m \), where \( p \) is a prime not dividing \( m \).
Proposition. Let \( H \) and \( K \) be subgroups of the group \( G \). The number of distinct ways of writing each element of the set \( HK \) in the form \( hk \), for some \( h \in H \) and \( k \in K \), is \( |H \cap K| \). In particular, if \( H \cap K = 1 \), then each element of \( HK \) can be written uniquely as a product \( hk \), for some \( h \in H \) and \( k \in K \).
Theorem. Suppose \( G \) is a group with subgroups \( H \) and \( K \) such that
Then we have \(HK \cong H\times K\).
"Abstract Algebra", Dummit & Foote, Third Edition, Page 171
\(\textbf{Definition. }\) A group \(G\) is \(\textit{simple}\) if \(G\) is nontrivial and the only normal subgroups of \(G\) are \(\{e\}\) and \(G\).
\(\textbf{Theorem. }\) Let \(G\) be a simple group of order \(60\). Then \(G\) is isomorphic to \(A_5\).
\(\textbf{Theorem. }\) The alternating group \(A_n\) is a simple group for \(n \geq 5\). In other words, there are no proper normal subgroups of \(A_n\), for \(n \geq 5\).
Definition. A finite group \( G \) is solvable if there are subgroups \[ \{e\} = G_n \subseteq G_{n-1} \subseteq \cdots \subseteq G_1 \subseteq G_0 = G \] such that for \( i = 1, \ldots, n \) we have:
\(\textbf{Theorem. }\) Let \( G \) be a finite group with \( |G| = p^n \), with \( p \) prime and \( n \geq 1 \). Then:
Theorem. Let \(G\) be a group and \(H\) a normal subgroup. Then \(G\) is solvable if and only if \(H\) and \(G/H\) are solvable.
Proof. First we show that if \(G\) is solvable, then \(H\) and \(G/H\) are solvable. Since \(G\) is solvable, there is a chain \[ \{e\} = G_0\subset G_1\subset \dots \subset G_r = G, \] where each \(G_i\) is normal in \(G_{i+1}\) and each quotient \(G_{i+1}/G_i\) is abelian. Define \(H_{i} = H\cap G_i\). If \(h'\in H_i\) and \(h\in H_{i+1}\), then \(h h' h^{-1}\in H\) since \(h, h', h^{-1}\in H\), and \(h h' h^{-1}\in G_i\) since \(h'\in G_i\), \(h\in G_{i+1}\), and \(G_i\) is normal in \(G_{i+1}\). Thus \(h h' h^{-1}\in H\cap G_i = H_i\), so \(H_i\) is normal in \(H_{i+1}\). Now define a map \(\varphi: H_{i+1}/H_i\to G_{i+1}/G_i\) by \[ \varphi(g\cdot H_i) = g\cdot G_i. \] First, \(\varphi\) is well-defined: if \(g\cdot H_i = h\cdot H_i\) with \(g, h\in H_{i+1}\), then \(h^{-1}g\in H_i\subset G_i\), so \(g\cdot G_i = h\cdot G_i\), i.e. \(\varphi(g\cdot H_i) = \varphi(h\cdot H_i)\). Next, \(\varphi\) is a homomorphism: for \(g, h\in H_{i+1}\), \[ \varphi(gH_i)\cdot \varphi(hH_i) = (gG_i)(hG_i) = ghG_i = \varphi(ghH_i) = \varphi(gH_i\cdot hH_i). \] Finally, \(\varphi\) is injective: if \(\varphi(gH_i) = \varphi(hH_i)\), then \(h^{-1}g\in G_i\), and since \(g, h\in H_{i+1}\subset H\) we also have \(h^{-1}g\in H\), so \(h^{-1}g\in H\cap G_i = H_i\) and \(gH_i = hH_i\). Hence \(H_{i+1}/H_i\) is isomorphic to the subgroup \(\varphi(H_{i+1}/H_i)\) of the abelian group \(G_{i+1}/G_i\), so \(H_{i+1}/H_i\) is abelian. Thus \[ \{e\} = H_0\subset H_1\subset \dots \subset H_r = H\cap G = H, \] where each \(H_i\) is normal in \(H_{i+1}\) and each \(H_{i+1}/H_i\) is abelian; therefore \(H\) is solvable. Next we show that \(G/H\) is solvable. Let \(\pi: G\to G/H\) be the quotient map and set \(K_i = \pi(G_i) = G_iH/H\) (we take the image of \(G_i\) rather than \(G_i/H\), since \(H\) need not be contained in \(G_i\)). For \(g\in G_{i+1}\) and \(g'\in G_i\) we have \(\pi(g)\pi(g')\pi(g)^{-1} = \pi(gg'g^{-1})\in \pi(G_i) = K_i\), so \(K_i\) is normal in \(K_{i+1}\). Moreover, the composite \(G_{i+1}\to K_{i+1}\to K_{i+1}/K_i\) is a surjective homomorphism whose kernel contains \(G_i\), so it factors through \(G_{i+1}/G_i\); hence \(K_{i+1}/K_i\) is a quotient of the abelian group \(G_{i+1}/G_i\) and is therefore abelian. Thus \[ \{\bar e\} = K_0\subset K_1\subset \dots \subset K_r = G/H \] exhibits \(G/H\) as solvable. Now suppose conversely that \(H\) is a normal subgroup of \(G\) and that \(H\) and \(G/H\) are solvable. We want to show that \(G\) is solvable. Since \(H\) is solvable, we have \[ \{e\} = H_0\subset H_1\subset \dots \subset H_n = H, \] where each \(H_i\) is normal in \(H_{i+1}\) and \(H_{i+1}/H_i\) is abelian. Since \(G/H\) is solvable, we have \[ \{\bar e\} = H/H \subset G_1/H\subset \dots \subset G_m/H = G/H, \] where, by the correspondence theorem, every subgroup of \(G/H\) has the form \(G_i/H\) with \(H\subset G_i\subset G\); here each \(G_i/H\) is normal in \(G_{i+1}/H\) and \((G_{i+1}/H)/(G_i/H)\) is abelian. We claim \(G_i\) is normal in \(G_{i+1}\): for \(g\in G_{i+1}\) and \(g'\in G_i\), \[ (gH)(g'H)(gH)^{-1} = g g'g^{-1}H\in G_i/H, \] so \(g g'g^{-1}\in G_i\) (using \(H\subset G_i\)). Moreover, by the Third Isomorphism Theorem, \[ G_{i+1}/G_i\cong (G_{i+1}/H)/(G_i/H), \] which is abelian by assumption. Finally, \(H = H_n\) is normal in \(G_1\) since \(H\) is normal in \(G\), and \(G_1/H\cong (G_1/H)/(H/H)\) is abelian, being the first quotient in the chain for \(G/H\). Therefore \[ \{e\} = H_0\subset H_1\subset \dots \subset H_n = H\subset G_1\subset \dots \subset G_m = G \] is a chain in which each subgroup is normal in the next and each successive quotient is abelian, so \(G\) is solvable. \(\blacksquare\)
"Algebra", Page 19
\(\textbf{Definition. }\) A division ring is a ring \(R\) with identity \(1\) such that every nonzero element of \(R\) is a unit (i.e. every nonzero element of \(R\) has a multiplicative inverse).
\(\textbf{Definition. }\) A field is a commutative ring \(R\) with identity \(1\) such that every nonzero element of \(R\) is a unit (i.e. commutative division ring).
\(\textbf{Proposition(1). }\) Let \(R\) be a ring with identity \(1\) and let \(I\) be an ideal of \(R\). Then \(R = I\) if and only if \(I\) contains a unit.
\(\textbf{Proof. }\)First, suppose that \(R = I\). Since \(R\) is a ring with identity, \(1\) is a unit and \(1\in I\), so \(I\) contains a unit. For the other direction, suppose that \(I\) contains a unit \(u\). Then there exists \(u^{-1}\in R\) such that \(u^{-1}u = 1\). Hence, for any \(r\in R\), \[ r = r(1) = r(u^{-1}u) = (ru^{-1})(u)\in I, \] since \(u\in I\) and \(I\) is an ideal. Therefore \(R\subset I\), and since \(I\subset R\), we have \(R = I\). \(\blacksquare\)
Proposition 5.1 (Correspondence Theorem for Rings). If \( I \) is a proper ideal in a commutative ring \( R \), then there is an inclusion-preserving bijection \(\varphi\) from the set of all ideals \( J \) in \( R \) containing \( I \) to the set of all ideals in \( R/I \), given by \[ \varphi : J \mapsto J/I = \{ a + I \mid a \in J \} \].
\(\textbf{Theorem. }\) Let \(R\) be a commutative ring and let \(p\) be a nonzero element of \(R\). Then \(p\) is prime if and only if \((p)\) is a prime ideal.
\(\textbf{Proof. }\) We first show that if \(p\) is prime then \((p)\) is a prime ideal. Suppose that \(p\) is prime. Then \(p\neq 0\) and \(p\) is not a unit, so \((p)\) is a proper ideal. Suppose \(ab\in (p)\) for some \(a, b\in R\). Then \(p\mid ab\), and since \(p\) is prime, \(p\mid a\) or \(p\mid b\); that is, \(a\in (p)\) or \(b\in (p)\). Therefore \((p)\) is a prime ideal. For the other direction, suppose that \((p)\) is a prime ideal. Then \((p)\neq R\), so \(p\) is not a unit, and \(p\neq 0\) by assumption. If \(p\mid ab\) for some \(a, b\in R\), then \(ab\in (p)\), so \(a\in (p)\) or \(b\in (p)\); that is, \(p\mid a\) or \(p\mid b\). Therefore \(p\) is prime. \(\blacksquare\)
\(\textbf{Corollary. }\) If \(R\) is a field then any nonzero ring homomorphism \(\varphi: R\to S\) is injective.
\(\textbf{Proof. }\) Given that \(R\) is a field, the only ideals of \(R\) are \(\{0\}\) and \(R\). Since \(\ker(\varphi)\) is an ideal of \(R\), either \(\ker(\varphi) = \{0\}\) or \(\ker(\varphi) = R\). If \(\ker(\varphi) = R\), then \(\varphi\) is the zero homomorphism, contrary to assumption. Suppose instead that there is some \(a\in \ker(\varphi)\) with \(a\neq 0\). Since \(R\) is a field, \(a^{-1}\in R\), and then \[ \varphi(1) = \varphi(aa^{-1}) = \varphi(a)\varphi(a^{-1}) = 0\cdot\varphi(a^{-1}) = 0. \] But then \(\varphi(r) = \varphi(r\cdot 1) = \varphi(r)\varphi(1) = 0\) for every \(r\in R\), so \(\varphi\) is the zero homomorphism, a contradiction. Thus \(\ker(\varphi) = \{0\}\), and \(\varphi\) is injective. \(\blacksquare\)
\(\textbf{Definition. }\) An integral domain is a commutative ring \(R\) with identity \(1\neq 0\) such that \[ ab = 0 \implies a = 0 \text{ or } b = 0 \] (i.e. there are no zero divisors in \(R\)).
\(\textbf{Statements of principal ideals. }\)For an integral domain \( R \):
\(\textbf{Proof (2).}\) The main point is to show that there are no zero divisors in \(\mathbb{Z}[i]\). Suppose, for contradiction, that \((a + b i)(c + d i) = 0\) with \(a + bi\neq 0\) and \(c + di\neq 0\), where \(a, b, c, d\in \mathbb{Z}\). Expanding, \[ (a + bi)(c + di) = ac - bd + (ad + bc)i = 0, \] so comparing real and imaginary parts gives \[ \begin{align} ac &= bd,\\ ad &= -bc.\\ \end{align} \] Multiplying the first equation by \(d\) and the second by \(c\) gives \(acd = bd^2\) and \(acd = -bc^2\), hence \(bd^2 = -bc^2\), i.e. \(b(c^2 + d^2) = 0\). Since \(\mathbb{Z}\) is an integral domain, either \(b = 0\) or \(c^2 + d^2 = 0\). If \(c^2 + d^2 = 0\), then \(c = d = 0\), contradicting \(c + di\neq 0\). If \(b = 0\), then \(a\neq 0\) (because \(a + bi\neq 0\)), and the equations \(ac = bd = 0\) and \(ad = -bc = 0\) force \(c = d = 0\), again a contradiction. Hence \(\mathbb{Z}[i]\) has no zero divisors, so it is an integral domain. \(\blacksquare\)
\(\textbf{Proof. }\) Suppose that \(2 = (a + b\sqrt{-5})(c + d\sqrt{-5})\) for some \(a, b, c, d\in \mathbb{Z}\). Taking complex conjugates gives \(2 = (a - b\sqrt{-5})(c - d\sqrt{-5})\), and multiplying the two equations yields \[ \begin{align} 4 &= (a + b\sqrt{-5})(c + d\sqrt{-5})(a - b\sqrt{-5})(c - d\sqrt{-5})\\ &= (a^2 + 5b^2)(c^2 + 5d^2).\\ \end{align} \] If \(b\neq 0\), then \(a^2 + 5b^2 \geq 5\), and if \(d\neq 0\), then \(c^2 + 5d^2 \geq 5\); neither is possible, since both factors divide \(4\). Hence \(b = d = 0\), and \(2 = ac\) with \(a, c\in \mathbb{Z}\). Since \(2\) is a prime integer, one of \(a\) and \(c\) must be \(\pm 1\), which is a unit in \(\mathbb{Z}[\sqrt{-5}]\). Thus in every factorization of \(2\) one of the factors is a unit, and since \(2\) is neither zero nor a unit, \(2\) is irreducible in \(\mathbb{Z}[\sqrt{-5}]\). \( \blacksquare\)
\(\textbf{Proof. }\) To show that \(2\) is not a prime, we only need to exhibit a counterexample. Firstly, we can see that \(6 = 1 + 5 = (1 + \sqrt{-5})(1 - \sqrt{-5})\). Then we have \(2\mid 6\). However, \(2\nmid (1 + \sqrt{-5})\) and \(2\nmid (1 - \sqrt{-5})\). Therefore, \(2\) is not a prime in \(\mathbb{Z}[\sqrt{-5}]\). \(\blacksquare\)
\(\textbf{Proof. }\) To show that \(3\) is not a prime, we exhibit a counterexample. Firstly, we can see that \(3\cdot 3 = 9 = (2 + \sqrt{-5})(2 - \sqrt{-5})\). Then we have \(3\mid 9\). However, \(3\nmid (2 + \sqrt{-5})\) and \(3\nmid (2 - \sqrt{-5})\). Therefore, \(3\) is not a prime in \(\mathbb{Z}[\sqrt{-5}]\). \(\blacksquare\)
\(\textbf{Proof. }\)We prove it by contradiction. Suppose that \(2+\sqrt{-5}\) is a prime. We know that \(3\cdot 3 = 9 = (2 + \sqrt{-5})(2 - \sqrt{-5})\), so \(2+\sqrt{-5}\mid 9 = 3\cdot 3\), and since \(2+\sqrt{-5}\) is prime, \(2 + \sqrt{-5}\mid 3\). We know that \(3\) is irreducible in \(\mathbb{Z}[\sqrt{-5}]\). Then either \(2 + \sqrt{-5}\) is a unit, which is not true, or \(2 + \sqrt{-5}\) multiplied by a unit equals \(3\). The only units in \(\mathbb{Z}[\sqrt{-5}]\) are \(\pm 1\). However, \((2 + \sqrt{-5})\cdot 1 \neq 3\) and \((2 + \sqrt{-5})\cdot (-1) \neq 3\). This is a contradiction. Therefore, \(2 + \sqrt{-5}\) is not a prime in \(\mathbb{Z}[\sqrt{-5}]\). \(\blacksquare\)
\(\textbf{Theorem. }\) Let \(R\) be an integral domain with identity \(1\). Then every prime element of \(R\) is irreducible.
\(\textbf{Proof. }\)Suppose that \(p\) is a prime and \(p = ab\) for some \(a, b\in R\). Then \(p\mid ab\), and since \(p\) is prime, \(p\mid a\) or \(p\mid b\). Without loss of generality, assume \(p\mid a\), so \(a = pc\) for some \(c\in R\). Then \[ p = a\cdot b = (pc)\cdot b = p(cb), \] so \(p(1 - cb) = 0\). Since \(R\) is an integral domain and \(p\neq 0\), we get \(1 - cb = 0\), i.e. \(cb = 1\), so \(b\) is a unit. Since \(p\) is prime it is neither zero nor a unit, and every factorization of \(p\) has a unit factor; hence \(p\) is irreducible. \(\blacksquare\)
\(\textbf{Definition.}\) A ring \(R\) is said to satisfy the \(\textit{ascending chain condition}\) on ideals if, for every chain of ideals \(I_1 \subset I_2 \subset \dots\), there exists \(n_0\) such that for all \(n\geq n_0\), \(I_{n_0} =I_n\).
\(\textbf{Proposition.}\) The following conditions are equivalent for the commutative ring \( R \):
\(\textbf{Theorem. }\) Let \(R\) be a principal ideal domain. Then every nonzero prime ideal in \(R\) is maximal.
\(\textbf{Proof. }\)Let \(R\) be a principal ideal domain and let \((p)\) be a nonzero prime ideal in \(R\). Let \((m)\) be any ideal of \(R\) containing \((p)\). Then \(p\in (m)\), so \(p = mr\) for some \(r\in R\), and hence \(mr\in (p)\). Since \((p)\) is a prime ideal, \(m\in (p)\) or \(r\in (p)\). If \(m\in (p)\), then \((m)\subset (p)\), and since \((p)\subset (m)\), we have \((p) = (m)\). If \(r\in (p)\), then \(r = ps\) for some \(s\in R\), so \(p = mr = psm\); since \(p\neq 0\) and \(R\) is an integral domain, we may cancel \(p\) to get \(1 = sm\). Hence \(m\) is a unit and \((m) = R\). Therefore the only ideals containing \((p)\) are \((p)\) and \(R\), so \((p)\) is maximal. \(\blacksquare\)
\(\textbf{Proposition. }\) Let \(R\) be a principal ideal domain. Then every irreducible element in \(R\) is prime.
\(\textbf{Proof. }\) Suppose that \(p\) is irreducible in \(R\). If \( M \) is any ideal containing \( \langle p\rangle \) then by hypothesis \( M = \langle m\rangle \) is a principal ideal. Since \( p \in \langle m\rangle \), \( p = rm \) for some \( r \in R\). Since \(p\) is irreducible, either \( r \) or \( m \) is a unit. If \(m\) is a unit, we have \(\langle m\rangle = R\). If \(r\) is a unit, we have \(\langle p\rangle = \langle m\rangle \). Thus the only ideals of \(R\) containing \( \langle p\rangle \) are \( \langle p\rangle \) or \( R \). Thus, we can know that \(\langle p\rangle \) is a maximal ideal. Since maximal ideals are prime ideals, we know that \(p\) is prime. \(\blacksquare\)
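For instance, in the principal ideal domain \( \mathbb{Q}[x] \), an irreducible polynomial generates a maximal ideal, so the quotient ring is a field. A small hedged SageMath check (the polynomial \( x^2 + 1 \) is my own choice):
sage: R.<x> = QQ[]
sage: p = x^2 + 1
sage: p.is_irreducible()
True
sage: R.quotient(p).is_field()
True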
\(\textbf{Proposition. }\)Any euclidean domain is a principal ideal domain.
\(\textbf{Note. }\)If \(R\) is a Euclidean Domain, it means that \(R\) has some form of division algorithm.
\(\textbf{Proof. }\) Let \(I\) be an ideal in a Euclidean domain \(R\). If \(I = \{0\}\), then \(I = (0)\) is principal, so assume \(I \neq \{0\}\). By the well-ordering principle, we can choose \(a \in I\) such that \(a \neq 0\) and \(a\) has the smallest norm among all nonzero elements of \(I\). We claim that \(I = (a)\). Let \(x \in I\). By the division algorithm we can write \(x = qa + r\), where \(q, r \in R\) and either \(r = 0\) or \(N(r) \lt N(a)\). Since \(x \in I\) and \(qa \in I\), we have \(r = x - qa \in I\). If \(r \neq 0\), then \(N(r) \lt N(a)\), which contradicts the choice of \(a\). Therefore \(r = 0\) and \(x = qa \in (a)\). Hence every element of \(I\) is a multiple of \(a\), so \(I = (a)\). Therefore every ideal in a Euclidean domain is generated by a single element, and hence a Euclidean domain is a principal ideal domain. \(\blacksquare\)
\(\textbf{Proposition.}\) Let \( R \) be a commutative ring.
\(\textbf{Definition. }\) A unique factorization domain is an integral domain \(R\) such that every nonzero non-unit element of \(R\) can be written as a product of irreducible elements of \(R\) in a unique way up to order and units.
\(\textbf{Example 1. }\) \(\mathbb{Z}\) is a unique factorization domain.
\(\textbf{Example 2. }\) \(\mathbb{Z}[i]\) is a unique factorization domain.
\(\textbf{Example 3. }\) \(\mathbb{Z}[\sqrt{D}]\) is not a unique factorization domain in general. For instance, \(\mathbb{Z}[\sqrt{-5}]\) is not a UFD: \(6 = 2\cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5})\) gives two factorizations into irreducibles that do not differ merely by units (equivalently, \(2\) is irreducible but not prime there, which cannot happen in a UFD).
\(\textbf{Example 4. }\) \(\mathbb{Z}[x]\) is a unique factorization domain.
\(\textbf{Example 5. }\) \(\mathbb{F}[x]\) is a unique factorization domain, where \(\mathbb{F}\) is a field.
\(\textbf{Proposition. }\) Let \( a \) and \( b \) be two nonzero elements of the Unique Factorization Domain \( R \) and suppose \[ a = up_1^{e_1} p_2^{e_2} \cdots p_n^{e_n} \quad \text{and} \quad b = vp_1^{f_1} p_2^{f_2} \cdots p_n^{f_n} \] are prime factorizations for \( a \) and \( b \), where \( u \) and \( v \) are units, the primes \( p_1, p_2, \ldots, p_n \) are distinct and the exponents \( e_i \) and \( f_i \) are \(\geq 0\). Then the element \[ d = p_1^{\min(e_1, f_1)} p_2^{\min(e_2, f_2)} \cdots p_n^{\min(e_n, f_n)} \] (where \( d = 1 \) if all the exponents are 0) is a greatest common divisor of \( a \) and \( b \) (unique up to multiplication by a unit).
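A small SageMath illustration in the UFD \( \mathbb{Q}[x] \) (the two polynomials are my own example): the gcd collects the common irreducible factors, each with the minimum of the two exponents, here \( (x-1)(x+2) \).
sage: R.<x> = QQ[]
sage: a = (x - 1)^2 * (x + 2)
sage: b = (x - 1) * (x + 2)^3
sage: gcd(a, b)
x^2 + x - 2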
\(\textbf{Remark. }\) A ring satisfying Bézout's identity is called a Bézout domain. Not every unique factorization domain is a Bézout domain (for example, \(\mathbb{Z}[x]\) is a UFD but not a Bézout domain), and not every Bézout domain is a unique factorization domain; an integral domain that is both is exactly a principal ideal domain.
\(\textbf{Theorem. }\) Let \(F\) be a field. The polynomial ring \(F[x]\) is a \(\textit{Euclidean Domain}\). Specifically, if \(a(x)\) and \(b(x)\) are two polynomials in \(F[x]\) with \(b(x)\) nonzero, then there are unique \(q(x)\) and \(r(x)\) in \(F[x]\) such that \[ a(x) =q(x)b(x) +r(x)\qquad \text{with}\qquad r(x) = 0 \text{ or } \deg(r(x)) \lt \deg(b(x)). \]
\(\textbf{Theorem. }\) Let \(F\) be a field and \( f \in F[x_1, \ldots, x_n] \) be non-constant. Then \(F[x_1, \ldots, x_n]\) is a Unique Factorization Domain. Specifically, there are irreducible polynomials \( g_1, \ldots, g_r \in F[x_1, \ldots, x_n] \) such that \[ f = g_1 \cdots g_r. \] Furthermore, if there is a second factorization of \( f \) into irreducible polynomials \[ f = h_1 \cdots h_s, \] then \( r = s \) and the \( h_i \)'s can be permuted so that each \( h_i \) is a constant multiple of \( g_i \).
\(\textbf{Theorem. }\) Let \(F\) be a field and \( f, g \in F[x] \). Assume that \( g \) is nonzero. Then there are polynomials \( q, r \in F[x] \) such that \[ f = qg + r, \quad \text{where } r = 0 \text{ or } \deg(r) \lt \deg(g). \] Furthermore, \( q \) and \( r \) are unique.
\(\textbf{Corollary. }\)Let \(F\) be a field. We have \( \alpha \in F \) is a root of a polynomial \( f \in F[x] \) if and only if \( x - \alpha \) is a factor of \( f \) in \( F[x] \)
\(\textbf{Definition. }\)To say that a field \( L \) contains \(\textit{all}\) roots of \( f \) means that \( f \) factors as \[ f = a_0(x - \alpha_1) \cdots (x - \alpha_n), \] where \( \alpha_1, \ldots, \alpha_n \in L \). When this happens, we say that \( f \) splits completely over \( L \).
\(\textbf{Theorem. }\) If \( F \) is a field and \( f \in F[x] \) is non-constant, then the following are equivalent:
\(\textbf{Definition (Primitive). }\) Let \(R\) be a unique factorization domain. A polynomial \( f \in R[x] \) is \(\textit{primitive}\) if the greatest common divisor of its coefficients is \(1\) (i.e. for all prime elements \(p \in R\), \(p \not\mid f(x)\) in \(R[x]\)).
\(\textbf{Proposition.}\) Every nonzero ideal of \(F[x]\) can be written uniquely as \((f)\) where \(f\) is monic.
\(\textbf{Proposition A. }\)For a UFD \(R\), if \(p \in R\) is a prime element, then \(p\) is also a prime element in \(R[x]\).
\(\textbf{Gauss's lemma.}\) Let \(R\) be a UFD. Then the product of primitive polynomials is primitive.
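A quick SageMath check of Gauss's lemma over \( R = \mathbb{Z} \) (the two primitive polynomials below are my own example); the content of a polynomial is the gcd of its coefficients, and it stays \( 1 \) for the product.
sage: R.<x> = ZZ[]
sage: f = 2*x^2 + 3*x + 5
sage: g = 3*x + 2
sage: gcd(f.coefficients())
1
sage: gcd((f*g).coefficients())
1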
\(\textbf{Proposition B. }\)Suppose \(R\) is a UFD with quotient field \(K\) and \(f(x) \in R[x]\) is primitive. Then \(f(x)\) is irreducible in \(R[x]\) if and only if it is irreducible in \(K[x]\).
\(\textbf{Proposition C. }\)Suppose \(R\) is a UFD and \(f(x) \in R[x]\) is primitive and irreducible. Then \(f(x)\) is a prime element.
Theorem. Let \( f, g \in F[x] \), and assume that \( g \) is nonzero. Then there are polynomials \( q, r \in F[x] \) such that \[ f = qg + r, \] where \( r = 0 \) or \( \deg(r) < \deg(g) \). Furthermore, \( q \) and \( r \) are unique.
"Galois Theory", Second Edition, David A. Cox, Page 522
Corollary. Let \( f \in F[x] \) be non-constant. Then \( f \) has at most \(\deg(f)\) roots in the field \( F \).
"Galois Theory", Second Edition, David A. Cox, Page 522
sage: # Define the polynomial ring over QQ
sage: R.<x> = QQ['x']
sage:
sage: # Define the polynomials
sage: f = x^3 + 2*x^2 + 3*x + 4
sage: g = x + 1
sage:
sage: # Perform the division to get quotient and remainder
sage: quotient, remainder = f.quo_rem(g)
sage:
sage: # Display the results
sage: print("Quotient:", quotient)
Quotient: x^2 + x + 2
sage: print("Remainder:", remainder)
Remainder: 2
\(\textbf{Definition.}\) The \(\textit{polynomial ring}\) in the variables \( x_1, x_2, \ldots, x_n \) with coefficients in \( R \), denoted \( R[x_1, x_2, \ldots, x_n] \), is defined inductively by \[ R[x_1, x_2, \ldots, x_n] = R[x_1, x_2, \ldots, x_{n-1}][x_n]. \]
\(\textbf{Adjoining Elements.}\) We next show how to describe some interesting subrings and subfields of a given extension \( F \subset L \). Given \( \alpha_1, \ldots, \alpha_n \in L \), we define \[ F[\alpha_1, \ldots, \alpha_n] = \{ h(\alpha_1, \ldots, \alpha_n) \mid h \in F[x_1, \ldots, x_n] \}. \] Hence \( F[\alpha_1, \ldots, \alpha_n] \) consists of all polynomial expressions in \( L \) that can be formed using \( \alpha_1, \ldots, \alpha_n \) with coefficients in \( F \).
\(\textbf{Note. }\)Keep in mind that \(F[\alpha_1, \ldots, \alpha_n]\) is a ring, not necessarily a field.
For example, let \(F = \mathbb{Q}\) and take \(\alpha = \pi\), which is transcendental over \(\mathbb{Q}\). Then \(\mathbb{Q}[\pi]\) is not a field: its elements are polynomial expressions in \(\pi\) with rational coefficients, and \(\frac{1}{\pi}\) is not of this form. (By contrast, when \(\alpha\) is algebraic over \(F\), the ring \(F[\alpha]\) is automatically a field; see the proposition on \(F[\alpha] = F(\alpha)\) below.)
Let \[ F(\alpha_1, \ldots, \alpha_n) = \left\{ \frac{\alpha}{\beta} \mid \alpha, \beta \in F[\alpha_1, \ldots, \alpha_n], \beta \neq 0 \right\}. \] Thus \( F(\alpha_1, \ldots, \alpha_n) \) is the set of all rational expressions in the \( \alpha_i \) with coefficients in \( F \).
\(\textbf{Lemma. }\)\( F(\alpha_1, \ldots, \alpha_n) \) is the smallest subfield of the field \( L \) containing \( F \) and \( \alpha_1, \ldots, \alpha_n \).
\(\textbf{Proof. }\) We must show two things: that \( F(\alpha_1, \ldots, \alpha_n) \) is a subfield of \( L \) containing \( F \) and \( \alpha_1, \ldots, \alpha_n \), and that it is contained in every such subfield \( K \); this is what "smallest" means in the statement of the lemma. For the first point, \( F(\alpha_1, \ldots, \alpha_n) \) clearly contains \( F \) and each \( \alpha_i \), and sums, products, and quotients (by nonzero elements) of rational expressions in the \( \alpha_i \) are again such expressions, so \( F(\alpha_1, \ldots, \alpha_n) \) is a subfield of \( L \). For the second point, suppose that \( K \subseteq L \) is a subfield containing \( F \) and \( \alpha_1, \ldots, \alpha_n \). Since \( K \) is closed under multiplication and addition, it follows that \( p(\alpha_1, \ldots, \alpha_n) \in K \) for any polynomial \( p \in F[x_1, \ldots, x_n] \). This shows that \( F[\alpha_1, \ldots, \alpha_n] \subseteq K \). Then \( F(\alpha_1, \ldots, \alpha_n) \subseteq K \) follows immediately, since \( K \) is a field and is therefore closed under quotients by nonzero elements. Since \( F(\alpha_1, \ldots, \alpha_n) \) is a subfield of \( L \) containing \( F \), we get extensions \[ F \subseteq F(\alpha_1, \ldots, \alpha_n) \subseteq L. \] We say that \( F(\alpha_1, \ldots, \alpha_n) \) is obtained from \( F \) by adjoining \( \alpha_1, \ldots, \alpha_n \in L \). \(\blacksquare\)
Theorem. For a prime \( p \) and a monic irreducible \( \pi(x) \) in \( \mathbb{F}_p[x] \) of degree \( n \), the ring \( \mathbb{F}_p[x]/(\pi(x)) \) is a field of order \( p^n \).
Proof. The cosets mod \( \pi(x) \) are represented by remainders \[ c_0 + c_1 x + \cdots + c_{n-1} x^{n-1}, \quad c_i \in \mathbb{F}_p, \] and there are \( p^n \) of these. Since the modulus \( \pi(x) \) is irreducible, the ring \( \mathbb{F}_p[x]/(\pi(x)) \) is a field using the same proof that \( \mathbb{Z}/(m) \) is a field when \( m \) is prime. \(\blacksquare\)
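For example, \( x^3 + x + 1 \) is irreducible over \( \mathbb{F}_2 \), so \( \mathbb{F}_2[x]/(x^3 + x + 1) \) is a field with \( 2^3 = 8 \) elements. A hedged SageMath check (the cubic is my own choice):
sage: R.<x> = GF(2)[]
sage: pi = x^3 + x + 1
sage: pi.is_irreducible()
True
sage: F.<a> = GF(2).extension(pi)
sage: F.cardinality()
8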
\(\textbf{Definition 5.1.1}\) Let \( f \in F[x] \) have degree \( n > 0 \). Then an extension \( F \subset L \) is a splitting field of \( f \) over \( F \) if
\(\textbf{Definition.}\) Let \( K \) be a field and let \( f(x) \) be a polynomial in \( K[x] \). We say that \( f(x) \) \(\textit{splits}\) in \( K \) if there are elements \( \alpha_1, \alpha_2, \ldots, \alpha_n \) of \( K \) such that \[ f(x) = \lambda(x - \alpha_1)(x - \alpha_2)\cdots(x - \alpha_n). \] We say that a field extension \( L/K \) is a \(\textit{splitting field}\) if \( f(x) \) splits in \( L \) and there is no proper intermediate field \( M \) in which \( f(x) \) splits.
\(\textbf{Definition. }\) For a finite extension of fields \( F \subseteq K \), \( \alpha \in K \) is a \(\textit{primitive element}\) if \( K = F(\alpha) \).
\(\textbf{Primitive Element Theorem. }\) Suppose \( F \subseteq K \) is an extension of fields satisfying \( [K : F] \lt \infty \). If \( \mathbb{Q} \subseteq F \), then there exists a primitive element \( \alpha \in K \) such that \( K = F(\alpha) \).
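For example, \( \sqrt{2} + \sqrt{3} \) is a primitive element for \( \mathbb{Q} \subseteq \mathbb{Q}(\sqrt{2}, \sqrt{3}) \): its minimal polynomial has degree \( 4 = [\mathbb{Q}(\sqrt{2},\sqrt{3}) : \mathbb{Q}] \), so \( \mathbb{Q}(\sqrt{2}+\sqrt{3}) = \mathbb{Q}(\sqrt{2},\sqrt{3}) \). A quick SageMath check:
sage: alpha = sqrt(2) + sqrt(3)
sage: alpha.minpoly()
x^4 - 10*x^2 + 1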
\(\textbf{Proposition. }\) Let \( 0 \neq f(x) \in F[x] \) be a non-constant polynomial. The following are equivalent:
\(\textbf{Corollary. }\) If \( F \) is a field containing \( \mathbb{Q} \) and \( p(x) \in F[x] \) is irreducible, then \( p(x) \) has distinct roots in \( K \), the splitting field of \( p(x) \) over \( F \).
\(\textbf{Proposition.}\) Let \(\alpha \in L\) be algebraic over \(F\), and let \(p \in F[x]\) be its minimal polynomial. If \(f \in F[x]\) is a non-constant monic polynomial, then \[ f = p \Leftrightarrow f \text{ is a polynomial of minimal degree satisfying } f(\alpha) = 0 \Leftrightarrow f \text{ is irreducible over } F \text{ and } f(\alpha) = 0. \]
Let us briefly talk about how to use SageMath to find the minimal polynomial of an element. Suppose that your \(\alpha = \sqrt{2}\).
sage: var('a')
sage: a = sqrt(2); a
sqrt(2)
sage: p = a.minpoly(); p
x^2 - 2
Proposition. A splitting field of a polynomial of degree \( n \) over \( F \) is of degree at most \( n! \) over \( F \).
"Abstract Algebra", Page 538
\(\textbf{Definition. }\) Suppose that \(F\subset K\) and \([K: F]\lt\infty\). We say that \(\alpha\in K\) is a \(\textit{primitive element}\) for \(F\subset K\) if \(K = F(\alpha)\).
\(\textbf{Lemma. }\) An extension \(F \subset L\) has degree \([L: F] = 1\) if and only if \(F = L\).
\(\textbf{Proposition. }\) Let \(F \subset K\) and \([K: F]\lt\infty, |F| = \infty\). Then there exists a primitive element for \(F\subset K\) if and only if there are only finitely many intermediate fields between \(F\) and \(K\).
\(\textbf{Corollary. }\) Suppose that \(\mathbb{Q}\subset F\) and \([K: F]\lt\infty\). Then there are only finitely many intermediate fields between \(F\) and \(K\).
\(\textbf{Proposition.}\) Suppose that \([K: F]\lt\infty\). Then every \(\alpha \in K\) is algebraic over \(F\).
\(\textbf{Theorem.}\) Suppose that \(F\subset E\subset K\), and \(E\) is algebraic over \(F\) and \(K\) is algebraic over \(E\). Then \(K\) is algebraic over \(F\).
\(\textbf{Lemma.}\) Assume that \(F \subset L\) is a field extension, and let \(\alpha \in L\) be algebraic over \(F\) with minimal polynomial \(p \in F[x]\). Then there is a unique ring isomorphism \[ F[\alpha] \cong F[x]/(p) \] that is the identity on \(F\) and maps \(\alpha\) to the coset \(x + (p)\).
\(\textbf{Proposition.}\) Assume that \(F \subset L\) is a field extension, and let \(\alpha \in L\). Then \(\alpha\) is algebraic over \(F\) if and only if \(F[\alpha] = F(\alpha)\).
\(\textbf{Theorem.}\) Let \( F \subseteq K \) be fields such that \( K \) is the splitting field of \( f(x) \) over \( F \). If the irreducible polynomial \( p(x) \in F[x] \) has a root in \( K \), then it splits over \( K \).
\(\textbf{Crucial Proposition. }\) Let \( F_1 \subseteq K_1 \), \( F_2 \subseteq K_2 \) be fields, \( p_1(x) \in F_1[x] \), \( p_2(x) \in F_2[x] \) be monic irreducible polynomials of degree \( d \), and \( \alpha_1 \in K_1 \), \( \alpha_2 \in K_2 \) roots of \( p_1(x) \) and \( p_2(x) \), respectively. Suppose \( \sigma : F_1 \to F_2 \) is an isomorphism such that \( p_2(x) = p_1(x)^\sigma \). Then there exists an isomorphism \( \tilde{\sigma} : F_1(\alpha_1) \to F_2(\alpha_2) \) extending \( \sigma \) such that \( \tilde{\sigma}(\alpha_1) = \alpha_2 \). Here \( p_1(x)^\sigma \) denotes the polynomial in \( F_2[x] \) obtained by applying \( \sigma \) to the coefficients of \( p_1(x) \).
\(\textbf{Definition. }\)A polynomial \(f\in F [x]\) is separable if it is non-constant and its roots in a splitting field are all simple.
\(\textbf{Theorem. }\)A nonzero polynomial in \(K[X]\) is separable if and only if it is relatively prime to its derivative in \(K[X]\).
\(\textbf{Proof. }\)We first show that if a nonzero polynomial \(f(x)\in K[X]\) is separable, then it is relatively prime to its derivative. Let \(L\) be a splitting field of \(f\) over \(K\), and let \(\alpha\in L\) be any root of \(f\), so that \(f(x) = (x - \alpha)h(x)\) with \(h\in L[X]\). Since \(\alpha\) is a simple root, \((x - \alpha)\) does not divide \(h\), i.e. \(h(\alpha)\neq 0\). Then \[ \begin{align} f'(x) &= h(x) + (x - \alpha)h'(x), \\ f'(\alpha) &= h(\alpha) + (\alpha - \alpha)h'(\alpha) = h(\alpha) \neq 0. \\ \end{align} \] Hence \(f\) and \(f'\) have no common root in \(L\). If \(f\) and \(f'\) had a common divisor \(d\in K[X]\) of positive degree, then \(d\) would divide \(f\); since \(f\) splits into linear factors over \(L\), so does \(d\), and any root of \(d\) in \(L\) would be a common root of \(f\) and \(f'\), which is impossible. Hence \(f\) and \(f'\) are relatively prime in \(K[X]\). For the other direction, we argue by contrapositive. Suppose that \(f(x)\) is not separable. Then \(f(x)\) has a repeated root \(\alpha\) in a splitting field, so \(f(x) = (x - \alpha)^2h(x)\) there. Hence \[ \begin{align} f'(x) &= 2(x - \alpha)h(x) + (x - \alpha)^2h'(x),\\ f'(\alpha) &= 2(\alpha - \alpha)h(\alpha) + (\alpha - \alpha)^2h'(\alpha) = 0. \\ \end{align} \] Thus \((x - \alpha)\) divides both \(f\) and \(f'\) over the splitting field, so they are not relatively prime there. If they were relatively prime in \(K[X]\), a relation \(uf + vf' = 1\) with \(u, v\in K[X]\) would also hold over the splitting field, forcing them to be relatively prime there as well. Therefore \(f\) and \(f'\) are not relatively prime in \(K[X]\). \(\blacksquare\)
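The criterion is easy to check in SageMath (the polynomials below are my own examples): \( x^3 - 2 \) is separable over \( \mathbb{Q} \), while \( (x-1)^2(x+1) \) shares the factor \( x - 1 \) with its derivative.
sage: R.<x> = QQ[]
sage: f = x^3 - 2
sage: gcd(f, f.derivative())
1
sage: g = (x - 1)^2 * (x + 1)
sage: gcd(g, g.derivative())
x - 1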
\(\textbf{Definition.}\) Let \(F\) be a field and \(K\) be a subfield of \(F\). An element \(\alpha \in F\) is said to be \(\textbf{algebraic over}\) \(K\) if there exists a non-zero polynomial \(f(x) \in K[x]\) such that \(f(\alpha) = 0\). In other words, \(\alpha\) is a root of a non-zero polynomial with coefficients in \(K\).
\(\textbf{Definition.}\) An \(\textbf{algebraic extension}\) \(L\) of a field \(K\) is a field extension such that every element of \(L\) is algebraic over \(K\). In other words, if \(L\) is an algebraic extension of \(K\), then for every \(\alpha \in L\), there exists a non-zero polynomial \(f(x) \in K[x]\) such that \(f(\alpha) = 0\).
\(\textbf{Definition.}\) A field \(\mathbb{F}\) is algebraically closed if every non-constant polynomial in \(\mathbb{F}[x]\) has a root in \(\mathbb{F}\).
\(\textbf{Definition.}\) Let \(K\) be a subfield of a field \(F\). The set of all elements in \(F\) that are algebraic over \(K\) is denoted by \(\overline{K}\) and is called the \(\textbf{algebraic closure}\) of \(K\) in \(F\).
\(\textbf{Alternate Definition.}\) An algebraic closure of a field \(F\) is an algebraic extension \(K\) of \(F\) such that every polynomial in \(F[x]\) splits in \(K\).
\(\textbf{Theorem.}\) Let \( F \) be a field. Then there exists an algebraic closure of \( F \), i.e., a field \( \overline{F} \supseteq F \) such that \( \overline{F} \) is algebraically closed and algebraic over \( F \).
\(\textbf{Proposition. }\) Let \( F \subset L \) be a finite extension and let \( \sigma \in \text{Gal}(L/F) \). Then:
Definition. Let \( F \subseteq L \) be a finite extension. Then \( \text{Gal}(L/F) \) is the set \[ \{\sigma: L \rightarrow L \mid \sigma \text{ is an automorphism, } \sigma(a) = a \text{ for all } a \in F\}. \] In other words, \( \text{Gal}(L/F) \) consists of all automorphisms of \( L \) that are the identity on \( F \). The basic structure of \( \text{Gal}(L/F) \) is as follows.
Definition. Let \( K / F \) be a finite extension. Then \( K \) is said to be Galois over \( F \) and \( K / F \) is a Galois extension if \( | \text{Aut}(K / F) | = [K : F] \). If \( K / F \) is Galois the group of automorphisms \( \text{Aut}(K / F) \) is called the Galois group of \( K / F \), denoted \( \text{Gal}(K / F) \).
Corollary. If \( K \) is the splitting field over \( F \) of a separable polynomial \( f(x) \) then \( K / F \) is Galois.
"Abstract Algebra", Page 562
Definition. If \( f(x) \) is a separable polynomial over \( F \), then the Galois group of \( f(x) \) over \( F \) is the Galois group of the splitting field of \( f(x) \) over \( F \).
"Abstract Algebra", Page 563
\(\textbf{Theorem. }\) If \( L \) is the splitting field of a separable polynomial \( f \in F[x] \), then the Galois group of \( f \) over \( F \) has order \( \left| \text{Gal}(L/F) \right| = [L : F] \).
\(\textbf{Proposition. }\) Let \( F \subseteq L \) be a finite extension. Then \( \text{Gal}(L/F) \) is a group under composition of functions.
\(\textbf{Proof.}\) We need to check the following properties:
We firstly show that \( \text{Gal}(L/F) \) is closed under composition of functions.
sage: R.<x> = QQ[]
sage: K.<a> = NumberField(x^3 + x^2 + 2)
sage: G = K.galois_group(); G
Galois group 3T2 (S3) with order 6 of x^3 + x^2 + 2
sage: G.order()
6
sage: G.list()
[(),
(1,2,3)(4,5,6),
(1,3,2)(4,6,5),
(1,4)(2,6)(3,5),
(1,5)(2,4)(3,6),
(1,6)(2,5)(3,4)]
\(\textbf{K-State 2022(1)}\) Let \( H \) be a subgroup of a group \( G \). Consider the subgroup \[ L = \{ (h, h) \mid h \in H \} \] of \( H \times G \). Prove that \( L \) is a normal subgroup in \( H \times G \) if and only if \( H \) is contained in the center of \( G \).
\(\textbf{Proof. }\)Firstly, we show that \( L \) is a normal subgroup in \( H \times G \) if \( H \) is contained in the center of \( G \) (i.e. \(Z(G)\)). Suppose that \(H\subset Z(G)\). Let \((h,g)\in H\times G\) and \((h', h')\in L\). Then \[ \begin{align} (h, g)(h', h')(h, g)^{-1} &= (h, g)(h', h')(h^{-1}, g^{-1})\\ &= (h h' h^{-1}, g h'g^{-1}).\\ \end{align} \] Since \(h'\in H\subset Z(G)\), we have \(h h' h^{-1}=h'\) and \(g h' g^{-1}=h'\). Thus, \[ (h, g)(h', h')(h, g)^{-1} = (h', h')\in L. \] Hence, \(L\) is a normal subgroup in \(H\times G\). Now we show that if \(L\) is a normal subgroup in \(H\times G\), then \(H\subset Z(G)\). Suppose that \(L\) is a normal subgroup in \(H\times G\), and let \(h_1\in H\) and \(g\in G\). Since \((e, g)\in H\times G\) and \(L\) is normal, the element \[ \begin{align} (e, g)(h_1, h_1)(e, g)^{-1} &= (e, g)(h_1, h_1)(e, g^{-1})\\ &= (eh_1e, gh_{1}g^{-1})\\ &= (h_1, gh_1g^{-1})\\ \end{align} \] lies in \(L\), so it equals \((h_2, h_2)\) for some \(h_2\in H\). Comparing first coordinates gives \(h_2 = h_1\), and comparing second coordinates gives \(gh_1g^{-1}=h_1\). Since \(g\in G\) was arbitrary, \(h_1\) commutes with every element of \(G\), so \(h_1\in Z(G)\). Therefore, \(H\subset Z(G)\). \(\blacksquare\)
Definition. Let \( A \) be a ring (commutative, as always). An \( A \)-module is an abelian group \( M \) (written additively) on which \( A \) acts linearly: more precisely, it is a pair \( (M, \mu) \), where \( M \) is an abelian group and \( \mu \) is a mapping of \( A \times M \) into \( M \) such that, if we write \( ax \) for \( \mu(a, x) \) (\( a \in A, x \in M \)), the following axioms are satisfied: \[ \begin{aligned} a(x + y) &= ax + ay, \\ (a + b)x &= ax + bx, \\ (ab)x &= a(bx), \\ 1x &= x \quad (a, b \in A; \, x, y \in M). \end{aligned} \] (Equivalently, \( M \) is an abelian group together with a ring homomorphism \( A \to E(M) \), where \( E(M) \) is the ring of endomorphisms of the abelian group \( M \).)
Examples.
M. F. Atiyah, I.G. Macdonald, Introduction to Commutative Algebra, Addison-Wesley. Page 17.
We will explore why an \(A\)-module is the same as a \(k\)-vector space equipped with a linear operator when \(A = k[x]\), where \(k\) is a field.
Let \( V \) be a \( \mathbb{K} \)-vector space, and let \( T : V \rightarrow V \) be \(\textbf{any}\) linear operator. Then, \( V \) gets a structure of \( \mathbb{K}[x] \)-module if we define \[ \forall v \in V, \quad x \cdot v := Tv \] and then we extend this action in the obvious way, meaning: \[ \sum_{j=0}^{m} k_j x^j \cdot v := \sum_{j=0}^{m} k_j T^j v, \quad k_j \in \mathbb{K} \]
\(K[x]\)-modules are \(K\)-vector spaces with a linear transformation
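As a small SageMath sketch of this action (the matrix \( T \) and the vector \( v \) are my own choices), applying a polynomial \( f(x) \) to a vector means evaluating \( f \) at the matrix of \( T \) and then multiplying:
sage: R.<x> = QQ[]
sage: T = matrix(QQ, [[0, -1], [1, 0]])
sage: f = x^2 + 1
sage: f(T)                    # T^2 + I is the zero matrix, so f acts as 0
[0 0]
[0 0]
sage: v = vector(QQ, [1, 2])
sage: (x^2)(T) * v            # the action of x^2 on v is T^2(v)
(-1, -2)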
Definition. (Free modules) Given a set \( T \), we denote by \( A^{(T)} \) the module \[ \bigoplus_{t \in T} M_t, \] where each \( M_t = A \). For each \( t \in T \), we denote the canonical injection \( A = M_t \to A^{(T)} \) by \( j_t \), and we denote by \( e_t \) the element \( j_t(1) \). Let \( \phi: T \to A^{(T)} \) denote the mapping \( t \mapsto e_t \).
Purdue Math Department Lecture 5 Note
Definition. A free generator set is a set of elements in the module such that:
Definition. The rank of a free module \(M\) over an arbitrary ring \(R\) is defined as the number of its free generators.
A free \( A \)-module is one which is isomorphic to an \( A \)-module of the form \( \bigoplus_{i \in I} M_i \), where each \( M_i \cong A \) (as an \( A \)-module). The notation \( A^{(I)} \) is sometimes used. A finitely generated free \( A \)-module is therefore isomorphic to \( A \oplus \cdots \oplus A \) (\( n \) summands), which is denoted by \( A^n \). (Conventionally, \( A^0 \) is the zero module, denoted by 0.)
M. F. Atiyah, I.G. Macdonald, Introduction to Commutative Algebra, Addison-Wesley. Page 21.
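A minimal SageMath illustration (my own example): \( \mathbb{Z}^3 \) is a finitely generated free \( \mathbb{Z} \)-module of rank \( 3 \).
sage: M = FreeModule(ZZ, 3)
sage: M == ZZ^3
True
sage: M.rank()
3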
Examples. A polynomial ring \(R[x]\) is finitely generated by \(\{1, x\}\) as a ring, but not as a module.
Definition. (IBN property) A ring \( A \) is said to have the IBN (invariant basis number) property if whenever \( A^m \cong A^n \), with \( m, n \in \mathbb{N} \), we have \( m = n \).