
KU 2022 (January) ALGEBRA QUALIFYING EXAM

My name is Hanzhang Yin, and I have developed this website as a resource to facilitate the review of key concepts in abstract algebra. It is not yet complete, so feel free to email me at hanyin@ku.edu if there are any errors or suggestions for improvement.

\(\textbf{Problem 1.}\) A square matrix \( A \) is called \(\textit{aperiodic}\) if \( A^m = A^n \) for some integers \( m > n \geq 0 \) (by convention \( A^0 = I \)).

  • Prove that a \( 2 \times 2 \) matrix \( A \) over the real numbers is aperiodic if and only if: \( A^2 = \pm A \) or \( A^m = I \) for some \( m > 0 \).
  • Give an example of a \( 2 \times 2 \) matrix \( A \) over the real numbers such that \( A^{2022} = I \) but \( A^m \neq I \) for any positive integer \( m \lt 2022 \).

Proof (1). Let \( A \) be a \( 2 \times 2 \) matrix over the real numbers. If \( A^2 = \pm A \), then \( A^4 = A^2 A^2 = (\pm A)(\pm A) = A^2 \), so \(A\) is aperiodic with \(m = 4 > n = 2\). If \( A^m = I \) for some \( m > 0 \), then \( A^{m+1} = A^m\cdot A = I\cdot A = A \), where \(m+1 > 1\), so \(A\) is again aperiodic. For the other direction, suppose \(A^m = A^n\) where \(m\gt n \geq 0\). If \(A\) is invertible, then multiplying by \(A^{-n}\) gives \(A^{m-n} = I\) where \(m-n > 0\). Now suppose \(A\) is non-invertible. Firstly, we show that one of the eigenvalues of \(A\) is \(0\). Since \(A\) is non-invertible, we have \(\text{det}(A) = 0\). The characteristic polynomial of \(A\) is \(\chi_A(x) = \text{det}(A - xI)\), and plugging in \(x = 0\) gives \(\chi_A(0) = \text{det}(A - 0I) = \text{det}(A) = 0\). Hence \(0\) is an eigenvalue of \(A\). Let \(\lambda\) denote the other eigenvalue, so that \(\chi_A(x) = (x - 0)(x - \lambda) = x(x - \lambda)\). According to the Cayley-Hamilton theorem, \(\chi_A(A) = A(A - \lambda I) = A^2 - \lambda A = 0\), so \(A^2 = \lambda A\). If \(\lambda = 0\), then \(A^2 = 0\) and \(A\) is nilpotent; this case is handled below. If \(\lambda \neq 0\), then \(A = \frac{1}{\lambda}A^2\), and from \(A^m = A^n\) we get \[ \begin{align} A^n &= \left(\frac{1}{\lambda}A^2\right)^{n} = \frac{1}{\lambda^n}A^{2n},\\ \lambda^n A^m &= \lambda^n A^n = A^{2n} = A^n A^n = A^m A^m = A^{2m}. \end{align} \] At the same time, applying \(A^2 = \lambda A\) repeatedly gives \(A^{2m} = \lambda^m A^m\). Then we have \(\lambda^n A^m = \lambda^m A^m\), which implies \[ (\lambda^m - \lambda^n)A^m = 0. \] Hence \(A^m = 0\) or \(\lambda^m = \lambda^n\). If \(A^m = 0\), then \(A\) is nilpotent. A nilpotent matrix has \(0\) as its only eigenvalue: if \(Av = \mu v\) with \(v \neq 0\), then \(0 = A^m v = \mu^m v\), so \(\mu = 0\). In that case \(\lambda = 0\), and the Cayley-Hamilton theorem gives \(A^2 = 0\). If \(\lambda^m = \lambda^n\), then \(\lambda^n(\lambda^{m-n} - 1) = 0\). If \(\lambda \neq 0\), then \(\lambda^{m-n} = 1\) where \(\lambda\in \mathbb{R}\), so \(\lambda = \pm 1\). In that case, we have \(A^2 = \lambda A = \pm A\). \(\blacksquare\)
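The key step above is the relation \(A^2 = \lambda A\) for a singular \(2\times 2\) matrix. As a quick sanity check (not part of the proof), here is a minimal SymPy sketch, assuming SymPy is available; the singular matrix below is an arbitrary choice, and \(\lambda = \operatorname{tr}(A)\) by Cayley-Hamilton:

```python
# Sanity check: a singular 2x2 matrix satisfies A^2 = (tr A) * A, since its
# characteristic polynomial is x^2 - (tr A) x + det A = x(x - tr A).
from sympy import Matrix

A = Matrix([[2, 4],
            [1, 2]])          # arbitrary choice with det(A) = 0
lam = A.trace()               # the remaining eigenvalue, lambda = tr(A)
assert A.det() == 0
assert A**2 == lam * A        # the relation A^2 = lambda * A used in the proof
print(f"A^2 = {lam} * A")
```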

\(\textbf{Solution (2). }\) Let \( A = \begin{pmatrix} \cos(\frac{2\pi}{2022}) & -\sin(\frac{2\pi}{2022}) \\ \sin(\frac{2\pi}{2022}) & \cos(\frac{2\pi}{2022}) \end{pmatrix} \). Since \(A\) is the rotation by \(\frac{2\pi}{2022}\), the power \(A^m\) is the rotation by \(\frac{2\pi m}{2022}\), which equals \(I\) if and only if \(2022 \mid m\). Hence \( A^{2022} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I \) and \( A^m \neq I \) for any positive integer \( m \lt 2022 \).
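Here is a short numerical check of this example, assuming NumPy is available; it is an approximate floating-point illustration, not a proof:

```python
# Numerically verify that the rotation by 2*pi/2022 has order exactly 2022.
import numpy as np

theta = 2 * np.pi / 2022
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

P = np.eye(2)
for m in range(1, 2022):
    P = P @ A                                 # P = A^m
    assert not np.allclose(P, np.eye(2)), f"A^{m} should not be I"
assert np.allclose(P @ A, np.eye(2))          # A^2022 = I up to round-off
print("A^m != I for 0 < m < 2022, and A^2022 = I")
```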


Question 2. Recall that a square complex matrix \(A\) is called normal if \(AA^* = A^*A\).

a). Prove that \(A\) is normal if and only if it is unitarily equivalent to a diagonal matrix. (You can quote the fact that any square complex matrix is unitarily equivalent to an upper triangular one)

b). Prove that a \(2 \times 2\) normal matrix with real entries must be symmetric or have the form \(rB\), where \(B\) is a rotation matrix and \(r\) is a real number.

Solution a). Suppose first that \(A = UDU^*\) for some unitary \(U\) and diagonal \(D\). Then \(AA^* = UDU^*UD^*U^* = UDD^*U^* = UD^*DU^* = A^*A\), since diagonal matrices commute, so \(A\) is normal. Conversely, suppose that \(A\) is normal, so \(AA^* = A^*A\). Since any square complex matrix is unitarily equivalent to an upper triangular matrix, write \(A = URU^*\) where \(U\) is unitary and \(R\) is upper triangular. Then \(A^* = UR^*U^*\), where \(R^*\), the conjugate transpose of \(R\), is lower triangular. Then we have \[ \begin{align} URU^*UR^*U^* &= UR^*U^*URU^* \\ RR^* &= R^*R. \end{align} \] Denote \(R = (r_{ij})\), so \(r_{ij} = 0\) when \(i \gt j\). Comparing the \((1,1)\) entries of \(RR^*\) and \(R^*R\): \[ (RR^*)_{11} = \sum_{k=1}^n r_{1k}\overline{r_{1k}} = \sum_{k=1}^n |r_{1k}|^2, \qquad (R^*R)_{11} = |r_{11}|^2, \] so \(\sum_{k=2}^n |r_{1k}|^2 = 0\) and \(r_{1k} = 0\) for all \(k \geq 2\). Next, comparing the \((2,2)\) entries and using \(r_{12} = 0\): \[ (RR^*)_{22} = \sum_{k=2}^n |r_{2k}|^2, \qquad (R^*R)_{22} = |r_{12}|^2 + |r_{22}|^2 = |r_{22}|^2, \] so \(r_{2k} = 0\) for all \(k \geq 3\). Continuing inductively down the diagonal, every off-diagonal entry of \(R\) vanishes. Hence \(R\) is diagonal and \(A\) is unitarily equivalent to a diagonal matrix. \(\blacksquare\)
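The mechanism of this proof can be seen numerically: the complex Schur decomposition supplies the unitary triangularization quoted above, and for a normal matrix the triangular factor comes out diagonal. A minimal sketch, assuming NumPy and SciPy are available (the rotation below is a normal matrix that is not symmetric):

```python
# For normal A, the complex Schur form A = U T U* has T diagonal.
import numpy as np
from scipy.linalg import schur

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])              # rotation by pi/2: normal, not symmetric
T, U = schur(A, output='complex')        # A = U @ T @ U.conj().T, T upper triangular
off_diag = T - np.diag(np.diag(T))
print(np.allclose(off_diag, 0))          # True: T is (numerically) diagonal
```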

Proof b). Suppose that \(A\) is a normal real \(2\times 2\) matrix, say \[ A = \begin{pmatrix} a & b\\ c & d \end{pmatrix}. \] Then we have \(AA^T = A^TA\), which implies that \[ \begin{align} \begin{pmatrix} a & b\\ c & d \end{pmatrix} \begin{pmatrix} a & c\\ b & d \end{pmatrix} &= \begin{pmatrix} a & c\\ b & d \end{pmatrix} \begin{pmatrix} a & b\\ c & d \end{pmatrix}\\ \begin{pmatrix} a^2 + b^2 & ac + bd\\ ac + bd & c^2 + d^2 \end{pmatrix} &= \begin{pmatrix} a^2 + c^2 & ab + cd\\ ab + cd & b^2 + d^2 \end{pmatrix}. \end{align} \] Then we have \(a^2 + b^2 = a^2 + c^2\), which implies \(b^2 = c^2\), so \( b = \pm c\). If \(b = c\), then \(A\) is symmetric. If \(b = -c\), then \(A\) has the form \[ A = \begin{pmatrix} a & b\\ -b & d \end{pmatrix}. \] Then we get \[ \begin{pmatrix} a^2 + b^2 & -ab + bd\\ -ab + bd & b^2 + d^2 \end{pmatrix} = \begin{pmatrix} a^2 + b^2 & ab - bd\\ ab - bd & b^2 + d^2 \end{pmatrix}. \] Hence \(-ab + bd = ab - bd\), which implies \(ab = bd\), i.e. \(b(a - d) = 0\), so \(b = 0\) or \(a = d\). If \(b = 0\), then again \(b = c = 0\) and \(A\) is symmetric. If \(b\neq 0\) and \(a = d\), then \[ A = \begin{pmatrix} a & b \\ -b & a \end{pmatrix}. \] Let \(r = \sqrt{a^2 + b^2}\). Since \(b\neq 0\), we have \(0\leq \frac{a^2}{r^2}\lt 1\), hence \(-1\lt \frac{a}{r} \lt 1\). Therefore there exists \(\theta\) such that \(\cos(\theta) = \frac{a}{r}\), i.e. \(a = r\cos(\theta)\). From \(a^2 + b^2 = r^2\) we get \[ \begin{align} r^2\cos^2(\theta) + b^2 &= r^2\\ b^2 &= r^2(1 - \cos^2(\theta)) = r^2\sin^2(\theta). \end{align} \] Hence \(b\) could be \(r\sin(\theta)\) or \(-r\sin(\theta)\). Therefore, we have \[ A = \begin{pmatrix} r\cos(\theta) & r\sin(\theta)\\ -r\sin(\theta) & r\cos(\theta) \end{pmatrix} = r\begin{pmatrix} \cos(\theta) & \sin(\theta)\\ -\sin(\theta) & \cos(\theta) \end{pmatrix} \] or \[ A = \begin{pmatrix} r\cos(\theta) & -r\sin(\theta)\\ r\sin(\theta) & r\cos(\theta) \end{pmatrix} = r\begin{pmatrix} \cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta) \end{pmatrix}; \] in either case \(A = rB\) with \(B\) a rotation matrix. \(\blacksquare\)
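As a symbolic cross-check of the case analysis, assuming SymPy is available, the family \(\begin{pmatrix} a & b\\ -b & a \end{pmatrix}\) found above is normal for all real \(a, b\):

```python
# Symbolic check: [[a, b], [-b, a]] commutes with its transpose for all real a, b.
from sympy import symbols, Matrix

a, b = symbols('a b', real=True)
A = Matrix([[a, b], [-b, a]])
assert A * A.T - A.T * A == Matrix.zeros(2, 2)
print("[[a, b], [-b, a]] is normal for all real a, b")
```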

\(\textbf{Problem 3. }\) Find the Jordan canonical form and a Jordan basis for \( A = \begin{pmatrix} 2 & 2 & -1 \\ 0 & 3 & 0 \\ 1 & -2 & 4 \end{pmatrix}. \)

\(\textbf{Solution. }\) Firstly, we calculate the characteristic polynomial \[ \begin{align} \chi_A(x) &= \text{det}(xI - A) = \text{det} \begin{pmatrix} x - 2 & -2 & 1\\ 0 & x-3 & 0\\ -1 & 2 & x - 4 \end{pmatrix}\\ &= (x-3)\big((x-2)(x-4) + 1\big)\\ &= (x-3)(x^2 - 6x + 9)\\ &= (x - 3)^3. \end{align} \] Now we try to identify the minimal polynomial. Since \(A \neq 3I\), it cannot be \(x - 3\), which leaves only two options: \((x - 3)^2\) or \((x - 3)^3\). When we plug \(A\) into \((x - 3)^2\), we get \((A - 3I)^2 = 0\). Hence, the minimal polynomial is \((x - 3)^2\), and the invariant factors are \((x - 3)\) and \((x - 3)^2\). Hence, we have the rational canonical form and Jordan canonical form \[ R = \begin{pmatrix} 0 & -9 & 0\\ 1 & 6 & 0\\ 0 & 0 & 3\\ \end{pmatrix} \qquad J = \begin{pmatrix} 3 & 0 & 0\\ 1 & 3 & 0 \\ 0 & 0 & 3\\ \end{pmatrix} \] Since the minimal polynomial is \((x - 3)^2\), we want to find a vector \(v\not\in \ker(A - 3I)\) as a cyclic (maximal) vector. The vector \(v = (1, 0, 0)^T\) works, and the eigenvectors are \((1, 0, -1)^T, (0, 1, 2)^T\). We calculate \(A\cdot v = (2, 0, 1)^T\). To build a basis, we choose \(v, Av\), and for the last vector we use an eigenvector linearly independent of \(v, Av\), namely \((0, 1, 2)^T\). Thus, the change-of-basis matrix for the rational canonical form is \[ P_R = \begin{pmatrix} 1 & 2 & 0\\ 0 & 0 & 1\\ 0 & 1 & 2\\ \end{pmatrix} \] Similarly, taking the Jordan basis \(v,\ (A - 3I)v = (-1, 0, 1)^T,\ (0, 1, 2)^T\), we find \[ P_J = \begin{pmatrix} 1 & -1 & 0\\ 0 & 0 & 1\\ 0 & 1 & 2\\ \end{pmatrix} \]
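One can verify this computation with SymPy, assuming it is available. Note that SymPy's jordan_form() places the \(1\)'s above the diagonal (the transpose of the convention used for \(J\) above), so we check \(A = PJP^{-1}\) against SymPy's own output:

```python
# Verify the characteristic/minimal polynomials and the Jordan form of A.
from sympy import Matrix

A = Matrix([[2, 2, -1],
            [0, 3,  0],
            [1, -2, 4]])

print(A.charpoly().as_expr().factor())                     # (lambda - 3)**3
assert (A - 3 * Matrix.eye(3))**2 == Matrix.zeros(3, 3)    # minimal poly (x - 3)^2

P, J = A.jordan_form()            # SymPy convention: A = P * J * P**(-1)
print(J)                          # blocks [[3, 1], [0, 3]] and [[3]]
assert A == P * J * P.inv()
```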

Question 4.

(a) Prove that any subgroup of \( \mathbb{Z}^2 \) can be generated by at most 2 elements. (You can use the fact that such a subgroup is finitely generated)

(b) Let \( G \) be the subgroup of \( \mathbb{Z}^2 \) generated by \( \{(a, b); (c, d)\} \). Suppose that \( \gcd(a, c) = 1 \). Prove that the quotient \( \mathbb{Z}^2 / G \) is isomorphic to \( \mathbb{Z} / \delta \mathbb{Z} \) where \( \delta = |ad - bc| \).

Proof (a). Suppose that \(G\subset \mathbb{Z}^2\) is a subgroup of \( \mathbb{Z}^2 \) and it is finitely generated by \(n\) elements. Then we have \(G = \langle g_1, g_2, \ldots, g_n \rangle\), where \(g_i = \begin{bmatrix} a_i \\ b_i \end{bmatrix}\). If every \(a_i = 0\), then \(G \subseteq \{0\}\oplus\mathbb{Z}\) is generated by the single element \(\begin{bmatrix} 0 \\ \gcd(b_1, \ldots, b_n) \end{bmatrix}\), so assume some \(a_i \neq 0\) and let \(d = \gcd(a_1, a_2, \ldots, a_n)\). Then we have \(d\mid a_i\) for all \(i\). Hence, we can find integers \(c_i\) such that \[ d = \sum_{i=1}^n c_ia_i. \] Fix the \(c_i\) and denote \(b = \sum_{i=1}^n c_ib_i\). Since \(d\) is a divisor of each \(a_i\), we have \(a_i = r_id\) for some \(r_i\in\mathbb{Z}\). Then we have \[ \begin{bmatrix} a_i \\ b_i \end{bmatrix} = r_i\begin{bmatrix} d \\ b \end{bmatrix} + \begin{bmatrix} 0 \\ b_i - r_ib \end{bmatrix}. \] If every \(b_i - r_ib = 0\), then \(G\) is generated by \(\begin{bmatrix} d \\ b \end{bmatrix}\) alone; otherwise, let \(d' = \gcd(b_1 - r_1b, b_2 - r_2b, \ldots, b_n - r_nb)\), so that \(s_i = \frac{b_i - r_ib}{d'}\in \mathbb{Z}\) for all \(i\). Hence, we have \[ \begin{bmatrix} a_i \\ b_i \end{bmatrix} = r_i\begin{bmatrix} d \\ b \end{bmatrix} + s_i\begin{bmatrix} 0 \\ d' \end{bmatrix}. \] Suppose that \(\begin{bmatrix} x \\ y \end{bmatrix} \in G\), then we have \[ \begin{align} \begin{bmatrix} x \\ y \end{bmatrix} &= k_1 \begin{bmatrix} a_1 \\ b_1 \end{bmatrix} + k_2 \begin{bmatrix} a_2 \\ b_2 \end{bmatrix} + \cdots + k_n \begin{bmatrix} a_n \\ b_n \end{bmatrix}\\ &= k_1\left(r_1\begin{bmatrix} d \\ b \end{bmatrix} + s_1\begin{bmatrix} 0 \\ d' \end{bmatrix}\right) + k_2\left(r_2\begin{bmatrix} d \\ b \end{bmatrix} + s_2\begin{bmatrix} 0 \\ d' \end{bmatrix}\right) + \cdots + k_n\left(r_n\begin{bmatrix} d \\ b \end{bmatrix} + s_n\begin{bmatrix} 0 \\ d' \end{bmatrix}\right)\\ &= (k_1r_1 + k_2r_2 + \cdots + k_nr_n) \begin{bmatrix} d \\ b \end{bmatrix} + (k_1s_1 + k_2s_2 + \cdots + k_ns_n) \begin{bmatrix} 0 \\ d' \end{bmatrix}. \end{align} \] Thus every element of \(G\) is a \(\mathbb{Z}\)-combination of \( \begin{bmatrix} d \\ b \end{bmatrix} \) and \( \begin{bmatrix} 0 \\ d' \end{bmatrix} \). Conversely, both vectors lie in \(G\): we have \(\begin{bmatrix} d \\ b \end{bmatrix} = \sum_{i=1}^n c_ig_i \in G\), and each \(\begin{bmatrix} 0 \\ b_i - r_ib \end{bmatrix} = g_i - r_i\begin{bmatrix} d \\ b \end{bmatrix} \in G\), so \(\begin{bmatrix} 0 \\ d' \end{bmatrix}\), being a \(\mathbb{Z}\)-combination of these, is also in \(G\). Therefore \(G\) is generated by these two vectors, which implies that \(G\) is generated by at most 2 elements. \(\blacksquare\)
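A computational illustration of this compression, assuming a recent SymPy (which provides hermite_normal_form in sympy.matrices.normalforms): column operations over \(\mathbb{Z}\) reduce any finite list of generators of a subgroup of \(\mathbb{Z}^2\) to at most two columns. The four generators below are arbitrary sample values:

```python
# Columns of G generate a subgroup of Z^2; the Hermite normal form spans the
# same subgroup over Z using at most 2 columns.
from sympy import Matrix
from sympy.matrices.normalforms import hermite_normal_form

G = Matrix([[4, 6, 10, 0],
            [2, 8,  5, 7]])    # four generators, written as columns
H = hermite_normal_form(G)
print(H)                       # a generating set with at most 2 columns
```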

Proof (b). Firstly, form the matrix \[ A = \begin{bmatrix} a & c \\ b & d \end{bmatrix}, \] whose columns generate \(G\). Given that \(\gcd(a, c) = 1\), we can find \(r, s\in \mathbb{Z}\) such that \(ra + sc = 1\) by \(\textbf{Bézout's identity}\). Then we have \[ \begin{align} \begin{bmatrix} a & c \\ b & d \end{bmatrix}\cdot \begin{bmatrix} r & -c \\ s & a \end{bmatrix} &= \begin{bmatrix} 1 & 0 \\ rb + sd & ad - bc \end{bmatrix},\\ \begin{bmatrix} 1 & 0 \\ -(rb + sd) & 1\\ \end{bmatrix}\cdot \begin{bmatrix} 1 & 0 \\ rb + sd & ad - bc \end{bmatrix} &= \begin{bmatrix} 1 & 0 \\ 0 & ad - bc \end{bmatrix}. \end{align} \] Hence, we have \[ \begin{bmatrix} 1 & 0 \\ -(rb + sd) & 1 \end{bmatrix} \cdot \begin{bmatrix} a & c \\ b & d \\ \end{bmatrix} \cdot \begin{bmatrix} r & -c \\ s & a \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & ad - bc \end{bmatrix}. \] The outer matrices \(P = \begin{bmatrix} 1 & 0 \\ -(rb + sd) & 1 \end{bmatrix}\) and \(Q = \begin{bmatrix} r & -c \\ s & a \end{bmatrix}\) have determinants \(1\) and \(ra + sc = 1\) respectively, so both are invertible over \(\mathbb{Z}\), and \(PAQ = \begin{bmatrix} 1 & 0 \\ 0 & ad - bc\end{bmatrix}\) is the Smith normal form of \(A\). Invertible row and column operations over \(\mathbb{Z}\) change neither \(\mathbb{Z}^2\) nor the subgroup generated by the columns, up to automorphisms of \(\mathbb{Z}^2\); hence they do not change the isomorphism type of the quotient. Therefore, \[ \mathbb{Z}^2 / G = \mathbb{Z}^2 / A\mathbb{Z}^2 \cong \mathbb{Z}^2 / (PAQ)\mathbb{Z}^2 \cong \mathbb{Z}/1\mathbb{Z} \oplus \mathbb{Z}/(ad - bc)\mathbb{Z} \cong \mathbb{Z}/\delta\mathbb{Z}, \] where \(\delta = |ad - bc|\), since \(\mathbb{Z}/(ad - bc)\mathbb{Z} = \mathbb{Z}/\delta\mathbb{Z}\). \(\blacksquare\)
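This is exactly a Smith normal form computation, and it can be sanity-checked with SymPy, assuming it is available; the values of \(a, b, c, d\) below are arbitrary with \(\gcd(a, c) = 1\):

```python
# With gcd(a, c) = 1, the Smith normal form of [[a, c], [b, d]] is
# diag(1, |ad - bc|), so Z^2/G = Z/(ad - bc)Z.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

a, b, c, d = 3, 5, 7, 4            # gcd(3, 7) = 1, ad - bc = 12 - 35 = -23
A = Matrix([[a, c], [b, d]])
print(smith_normal_form(A, domain=ZZ))   # Matrix([[1, 0], [0, 23]])
```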

Question 5. Let \( R = \mathbb{Z}^2 \) equipped with the following addition and multiplication rules: \[ (a, b) + (c, d) = (a + c, b + d) \quad \text{and} \quad (a, b) \times (c, d) = (ac - bd, ad + bc). \] Prove that \( R \) is a PID.

Proof. Firstly, we want to show that \((R, +)\) forms an abelian group. Since \(\mathbb{Z}\) is an abelian group, we have \[ (a, b) + (c, d) = (a + c, b + d) = (c + a, d + b) = (c, d) + (a, b). \] The element \((0, 0)\in \mathbb{Z}^2\) is the additive identity, since \[ (a, b) + (0, 0) = (a + 0, b + 0) = (a, b). \] For any \((a, b)\in \mathbb{Z}^2\), we have \((-a, -b)\in \mathbb{Z}^2\) such that \[ (a, b) + (-a, -b) = (a - a, b - b) = (0, 0). \] Thus, there exists an additive inverse for each element of \(\mathbb{Z}^2\). Now, given \((a, b), (c, d), (e, f)\in \mathbb{Z}^2\), we have \[ (a, b) + ((c, d) + (e, f)) = (a + c + e, b + d + f) = ((a, b) + (c, d)) + (e, f). \] Thus, \((R, +)\) forms an abelian group. Then, we want to show that \(R\) forms a ring. We know that \((1, 0)\in \mathbb{Z}^2\) is the multiplicative identity, since for any \((a, b)\in \mathbb{Z}^2\), we have \[ \begin{align} (a, b)\times (1, 0) &= (a\cdot 1 - b\cdot 0, a\cdot 0 + b\cdot 1) = (a, b), \\ (1, 0)\times (a, b) &= (1\cdot a - 0\cdot b, 1\cdot b + 0\cdot a) = (a, b). \end{align} \] Now, we want to show that multiplication is associative. Given \((a, b), (c, d), (e, f)\in \mathbb{Z}^2\), we have \[ \begin{align} (a, b)\times ((c, d)\times (e, f)) &= (a, b)\times (ce - df, cf + de)\\ &= (ace - adf - bcf - bde,\; acf + ade + bce - bdf)\\ &= (ac - bd, ad + bc)\times (e, f)\\ &= ((a, b)\times (c, d))\times (e, f). \end{align} \] Next, we check that multiplication is distributive. Given \((a, b), (c, d), (e, f)\in \mathbb{Z}^2\), we have \[ \begin{align} (a, b)\times ((c, d) + (e, f)) &= (a, b)\times (c + e, d + f)\\ &= (ac + ae - bd - bf,\; ad + af + bc + be)\\ &= (ac - bd, ad + bc) + (ae - bf, af + be)\\ &= (a, b)\times (c, d) + (a, b)\times (e, f). \end{align} \] Since the multiplication is visibly commutative, the other distributive law follows, and \(R\) is a commutative ring. Now, we want to show that \(R\) is an integral domain. Suppose \((a, b)\times (c, d) = (0, 0)\) with \((c, d)\neq (0, 0)\). Then \[ ac - bd = 0 \qquad\text{and}\qquad ad + bc = 0. \] Multiplying the second equation by \(c\), the first by \(d\), and subtracting: \[ c(ad + bc) - d(ac - bd) = bc^2 + bd^2 = b(c^2 + d^2) = 0. \] Since \(c\) and \(d\) are integers not both zero, \(c^2 + d^2 \neq 0\); as \(\mathbb{Z}\) is an integral domain, \(b = 0\). Without loss of generality, assume \(c\neq 0\). Then \(ac - bd = ac = 0\) forces \(a = 0\), so \((a, b) = (0, 0)\). Hence, \(R\) is an integral domain. Finally, we show that \(R\) is Euclidean, hence a PID. The map \((a, b)\mapsto a + bi\) is a ring isomorphism \(R\cong \mathbb{Z}[i]\); define the norm \(N(a, b) = a^2 + b^2\), which is multiplicative. Given \(x, y\in R\) with \(y\neq 0\), compute \(x/y = u + vi\) with \(u, v\in \mathbb{Q}\), and choose integers \(m, n\) with \(|u - m|\leq \frac{1}{2}\) and \(|v - n|\leq \frac{1}{2}\). Set \(q = (m, n)\) and \(r = x - qy\). Then \[ N(r) = N(y)\big((u - m)^2 + (v - n)^2\big) \leq N(y)\left(\tfrac{1}{4} + \tfrac{1}{4}\right) = \tfrac{1}{2}N(y) \lt N(y), \] so \(R\) has a division algorithm. Now let \(I\subseteq R\) be a nonzero ideal and pick \(y\in I\setminus\{0\}\) of minimal norm. For any \(x\in I\), write \(x = qy + r\) with \(N(r) \lt N(y)\); since \(r = x - qy\in I\), minimality forces \(r = 0\), so \(x\in (y)\). Hence \(I = (y)\) is principal, and \(R\) is a PID. \(\blacksquare\)
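Here is a small Python sketch of the division step used above, assuming the identification of \(R\) with \(\mathbb{Z}[i]\); Python's built-in complex type stands in for \(\mathbb{Q}[i]\), which is accurate enough for small integer inputs (a careful implementation would use exact rationals):

```python
# Division algorithm in Z[i]: given x, y with y != 0, produce q, r with
# x = q*y + r and N(r) < N(y), by rounding x/y to the nearest Gaussian integer.
def norm(z: complex) -> int:
    # N(a + bi) = a^2 + b^2
    return int(z.real) ** 2 + int(z.imag) ** 2

def div(x: complex, y: complex) -> tuple[complex, complex]:
    w = x / y                                    # quotient in Q[i] (as floats)
    q = complex(round(w.real), round(w.imag))    # nearest Gaussian integer
    r = x - q * y
    return q, r

x, y = complex(7, 9), complex(3, -2)             # arbitrary sample elements
q, r = div(x, y)
assert x == q * y + r and norm(r) < norm(y)
print(f"{x} = ({q})*({y}) + {r},  N(r) = {norm(r)} < N(y) = {norm(y)}")
```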

Question 6. Let \( E/F \) be an algebraic field extension, assuming the characteristic of \( F \) is \( p > 0 \). Let \( E' \) be the collection of elements \( x \in E \) such that \( x^q \in F \) for some power \( q = p^n \) (\(q\) may depend on \( x \)).

(a) Prove that \( E' \) is a field.

(b) Prove that the extension \( E/E' \) is separable.

Proof (a). Firstly, we can see that \(0\in E'\) since \(0\in E\) and \(0^q = 0\in F\) for any \(q = p^n\). For the same reason, \(1\in E'\) since \(1^q = 1\in F\) for any \(q = p^n\). Next, we check closure under addition and multiplication. Suppose \(x, y\in E'\) with \(x^{p^m}\in F\) and \(y^{p^n}\in F\), and let \(N = \max(m, n)\), so that \(x^{p^N}, y^{p^N}\in F\). Since \(E\) has characteristic \(p\), the Frobenius map \(z\mapsto z^p\) is a ring homomorphism, so \[ (x + y)^{p^N} = x^{p^N} + y^{p^N}\in F \qquad\text{and}\qquad (xy)^{p^N} = x^{p^N}y^{p^N}\in F, \] hence \(x + y,\ xy\in E'\). Now, suppose that \(x\in E'\), so \(x^q\in F\) for some \(q = p^m\). If \(p = 2\), then \(x + x = 2x = 0\), which implies \(x = -x\); hence the additive inverse exists in \(E'\) for all \(x\in E'\) when \(p = 2\). If \(p\neq 2\), then \(p\) is an odd prime, so \(q = p^m\) is odd and \((-x)^q = (-1)^qx^q = -x^q \in F\). Thus, the additive inverse exists in \(E'\) for all \(x\in E'\). Finally, we show that \(x^{-1}\in E'\) for \(x\neq 0\). Since \(E\) is a field, \(x^{-1}\in E\), and \((x^{-1})^{p^m} = \frac{1}{x^{p^m}}\). Since \(x^{p^m}\in F\), \(x^{p^m}\neq 0\) (as \(x\neq 0\)), and \(F\) is a field, we have \(\frac{1}{x^{p^m}}\in F\), so \(x^{-1}\in E'\). Being a subset of \(E\) containing \(0\) and \(1\) and closed under addition, multiplication, negation, and inversion of nonzero elements, \(E'\) is a subfield of \(E\); in particular, \(E'\) is a field. \(\blacksquare\)
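The Frobenius identity \((x + y)^p = x^p + y^p\) used above can be checked symbolically, assuming SymPy is available; it holds because the binomial coefficients \(\binom{p}{k}\) with \(0 \lt k \lt p\) all vanish mod \(p\):

```python
# "Freshman's dream" in characteristic p: expand (x + y)^p with coefficients mod p.
from sympy import symbols, expand

x, y = symbols('x y')
p = 5
print(expand((x + y) ** p, modulus=p))   # x**5 + y**5
```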

Proof (b). Firstly, if \(x\in F\), then \(x\in E'\) since \(x^{p^0} = x\in F\); this shows that \(F\subseteq E'\). Since \(F\) has characteristic \(p\), we know that \(E'\) has characteristic \(p\) as well. We will use the following lemma:

Lemma 5.3.5 Let \( f \in F[x] \) be an irreducible polynomial of degree \( n \). Then \( f \) is separable if either of the following conditions is satisfied:

(a) \( F \) has characteristic \(0\), or (b) \( F \) has characteristic \( p > 0 \) and \( p \nmid n \).

Suppose that \(\alpha\in E\) and let \(f(x)\in E'[x]\) be the minimal polynomial of \(\alpha\) over \(E'\). Suppose that \(f(x)\) has degree \(n\) and \(p\mid n\).