
KU 2017 (August) ALGEBRA QUALIFYING EXAM

My name is Hanzhang Yin, and I have developed this website as a resource to facilitate the review of key concepts in abstract algebra. It is not yet complete; feel free to email me at hanyin@ku.edu if you find any errors or have suggestions for improvement.

\(\textbf{Problem 2. }\) Let \( R \) be a unique factorization domain with quotient field \( K \). Write \( R[x] \) and \( K[x] \), respectively, for the polynomial rings over \( R \) and \( K \). Fix \( 0 \ne f(x) \in R[x] \). Let \( I \) denote the principal ideal \( f(x)R[x] \) and \( J \) denote the ideal \( f(x)K[x] \cap R[x] \). Prove that there exists \( 0 \ne a \in R \) such that \( I = aJ \), where \( aJ := \{ aj \mid j \in J \} \).

Proof. First, we show that \(I\subset J\). If \(p(x)\in I\), then \(p(x) = f(x)r(x)\) for some \(r(x)\in R[x]\). Since \(K\) is the quotient field of \(R\), we have \(R[x]\subset K[x]\), so \(p(x)\in f(x)K[x]\); as \(p(x)\in R[x]\) as well, \(p(x)\in J\). Hence \(I\subset J\).

Since \(R\) is a UFD, we may write \(f(x) = c\,f_1(x)\), where \(c\in R\) is the content of \(f\) and \(f_1(x)\in R[x]\) is primitive. We claim that \(J = f_1(x)R[x]\). For one inclusion, note that \(f_1(x) = f(x)\cdot \tfrac{1}{c}\in f(x)K[x]\) and \(f_1(x)\in R[x]\), so \(f_1(x)\in J\); since \(J\) is an ideal of \(R[x]\), it follows that \(f_1(x)R[x]\subset J\). For the other inclusion, let \(h(x)\in J\), say \(h(x) = f(x)q(x)\) with \(q(x)\in K[x]\). Then \(h(x) = f_1(x)q'(x)\), where \(q'(x) = c\,q(x)\in K[x]\). Choose \(0\neq d\in R\) clearing the denominators of \(q'(x)\), so that \(d\,q'(x)\in R[x]\). Taking contents in \(d\,h(x) = f_1(x)\bigl(d\,q'(x)\bigr)\) and using Gauss's Lemma (the content is multiplicative) together with \(\operatorname{cont}(f_1) = 1\), we get \(\operatorname{cont}\bigl(d\,q'(x)\bigr) = d\,\operatorname{cont}(h)\) up to a unit. Hence \(d\) divides every coefficient of \(d\,q'(x)\), so \(q'(x)\in R[x]\), and therefore \(h(x)\in f_1(x)R[x]\). This proves \(J = f_1(x)R[x]\).

Finally, \[ I = f(x)R[x] = c\,f_1(x)R[x] = c\,J. \] Since \(f(x)\neq 0\), we have \(c\neq 0\), so \(a = c\) satisfies \(I = aJ\). \(\blacksquare\)
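The element \(a\) produced by this argument is the content of \(f\). As a concrete sanity check in the case \(R = \mathbb{Z}\), \(K = \mathbb{Q}\), the content/primitive-part factorization can be computed with SymPy; the polynomial \(f\) below is a hypothetical example chosen only for illustration.

```python
from sympy import Poly, symbols

x = symbols('x')

# Hypothetical example with R = ZZ (so K = QQ): f(x) = 6x^2 + 4x + 2.
f = Poly(6*x**2 + 4*x + 2, x, domain='ZZ')

# Gauss's Lemma: f = c * f1 with c = cont(f) in R and f1 primitive in R[x].
c, f1 = f.primitive()

# The proof gives J = f1*R[x] and I = f*R[x] = c*J, so a = c here.
assert c * f1 == f            # f factors as content times primitive part
assert f1.content() == 1      # f1 is primitive

print(c, f1.as_expr())
```

Here `f.primitive()` returns the pair \((\operatorname{cont}(f), f_1)\), so for this example \(a = 2\) and \(J\) is generated by the primitive part \(3x^2 + 2x + 1\).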

\(\textbf{Problem 6. }\) Consider the matrix \( A = \begin{bmatrix} 0 & 1 & 0 & 1 \\ -1 & 1 & 0 & 0 \\ -2 & 0 & -1 & -2 \\ 1 & -1 & 0 & 0 \\ \end{bmatrix} \), with entries in \( \mathbb{C} \). Find \( J \), the Jordan canonical form for \( A \) and an invertible matrix \( P \) such that \( J = P^{-1}AP \).

\(\textbf{Solution. }\)We first compute the characteristic polynomial of \(A\): \[ \chi_A(x) = x^4 - x^2 = x^2(x-1)(x+1). \] The minimal polynomial has the same roots, which leaves two candidates: \(x(x-1)(x+1)\) and \(x^2(x-1)(x+1)\). Substituting \(A\) into \(x(x-1)(x+1)\) does not give the zero matrix, so the minimal polynomial equals the characteristic polynomial. Hence the Jordan canonical form of \(A\) has one \(2\times 2\) block for the eigenvalue \(0\) and \(1\times 1\) blocks for the eigenvalues \(1\) and \(-1\): \[ J = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ \end{bmatrix}. \] Next we compute the eigenvectors of \(A\): \(v_1 = (0, 1, 1, -1)^T\) for the eigenvalue \(1\), \(v_2 = (0, 0, 1, 0)^T\) for the eigenvalue \(-1\), and \(v_3 = (1, 1, 0, -1)^T\) for the eigenvalue \(0\). For the \(2\times 2\) block we also need a generalized eigenvector \(w = (a, b, c, d)^T\) such that \[ A \begin{bmatrix} a \\ b \\ c \\ d \\ \end{bmatrix} = v_3 = \begin{bmatrix} 1 \\ 1 \\ 0 \\ -1 \\ \end{bmatrix}. \] Solving this system gives \(w = (1, 2, 0, -1)^T\). Taking the columns of \(P\) in the order \(v_3, w, v_1, v_2\), matching the order of the blocks in \(J\), we set \[ P = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ -1 & -1 & -1 & 0 \\ \end{bmatrix}. \] Since \(v_3\) and \(w\) are linearly independent and \(v_1, v_2\) come from the distinct eigenvalues \(1\) and \(-1\), the columns of \(P\) are linearly independent (one can also check directly that \(\det P \neq 0\)), so \(P\) is invertible. A direct computation confirms \(AP = PJ\), that is, \(J = P^{-1}AP\).
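This kind of computation can be sanity-checked with SymPy (assuming it is available). The pair \(P, J\) below is one valid choice of Jordan basis, with the \(2\times 2\) block for the eigenvalue \(0\) first and \(1\)'s on the superdiagonal; other orderings of the blocks and other bases are equally correct, and SymPy's own `jordan_form` may pick a different one.

```python
from sympy import Matrix, eye, symbols, zeros

x = symbols('x')
A = Matrix([
    [ 0,  1,  0,  1],
    [-1,  1,  0,  0],
    [-2,  0, -1, -2],
    [ 1, -1,  0,  0],
])

# Characteristic polynomial: x^4 - x^2 = x^2*(x - 1)*(x + 1).
assert A.charpoly(x).as_expr() == x**4 - x**2

# x*(x - 1)*(x + 1) does not annihilate A, so the minimal polynomial
# equals the characteristic polynomial and 0 gets a 2x2 Jordan block.
assert A*(A - eye(4))*(A + eye(4)) != zeros(4, 4)

# One valid Jordan pair: columns of P are v3, w, v1, v2.
P = Matrix([
    [ 1,  1,  0,  0],
    [ 1,  2,  1,  0],
    [ 0,  0,  1,  1],
    [-1, -1, -1,  0],
])
J = Matrix([
    [0, 1, 0,  0],
    [0, 0, 0,  0],
    [0, 0, 1,  0],
    [0, 0, 0, -1],
])
assert P.inv()*A*P == J

# SymPy's jordan_form returns (P2, J2) with A = P2*J2*P2^{-1}; its basis
# and block order may differ, but the block structure is the same.
P2, J2 = A.jordan_form()
assert A == P2*J2*P2.inv()
```

Running the block silently (all assertions pass) confirms the characteristic polynomial, the minimal polynomial, and the similarity \(J = P^{-1}AP\).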