Algebra (Artin) Ch.6 Problem 6.10

Problem statement: Let M be a matrix made up of two diagonal blocks: M = \begin{bmatrix} A & 0 \\ 0 & D \end{bmatrix} . Then M is diagonalizable if and only if A and D are diagonalizable.

Solution: Let A, D be n \times n matrices, so that M is a 2n \times 2n matrix. Let T be the linear operator on the vector space V = F^{2n} whose matrix with respect to the basis (u_1,..,u_n,u'_1,..,u'_n) is M. Denote Sp(u_1,..,u_n) by W and Sp(u'_1,..,u'_n) by W'. Then W and W' are T-invariant subspaces, and V is the direct sum of W and W'.

The only-if part (M diagonalizable \Rightarrow A and D diagonalizable):

Let M be diagonalizable. Then there is a basis of eigenvectors of the vector space F^{2n}. Let the basis be (v_1,..,v_{2n}). Then v_i = w_i + w'_i for some w_i \in W and w'_i \in W'. Then T(v_i)=\lambda_i v_i = \lambda_i w_i + \lambda_i w'_i = T(w_i) + T(w'_i).

Then (T(w_i)-\lambda_i w_i) + (T(w'_i)-\lambda_i w'_i) = 0. Since W and W' are T-invariant, T(w_i)-\lambda_i w_i \in W and T(w'_i)-\lambda_i w'_i \in W', and since V is the direct sum of W and W', both terms must be zero: T(w_i)=\lambda_i w_i and T(w'_i)=\lambda_i w'_i. So the nonzero w_i's and w'_i's are eigenvectors lying in W and W' respectively. Also, since (v_1,..,v_{2n}) is a basis of V and V = W \oplus W', the vectors (w_1,..,w_{2n}) span W and (w'_1,..,w'_{2n}) span W'. Hence we can obtain a basis of eigenvectors for W and for W' by dropping the dependent vectors from (w_1,..,w_{2n}) and (w'_1,..,w'_{2n}). This means precisely that A and D are diagonalizable.
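This direction (M diagonalizable \Rightarrow A and D diagonalizable) can be observed numerically. The matrices below are my own illustrative choices, not from the text: for each eigenvector v of the block-diagonal M, the components w \in W and w' \in W' are checked to be eigenvectors of A and D whenever they are nonzero.

```python
import numpy as np

# A and D are sample 2x2 blocks of my choosing; M = diag(A, D).
A = np.diag([1.0, 2.0])
D = np.array([[0.0, 1.0], [1.0, 0.0]])
M = np.block([[A, np.zeros((2, 2))], [np.zeros((2, 2)), D]])

vals, vecs = np.linalg.eig(M)
ok = True
for lam, v in zip(vals, vecs.T):
    w, wp = v[:2], v[2:]          # components in W and W'
    if np.linalg.norm(w) > 1e-9:  # nonzero pieces are eigenvectors of A
        ok = ok and np.allclose(A @ w, lam * w)
    if np.linalg.norm(wp) > 1e-9: # and of D, with the same eigenvalue
        ok = ok and np.allclose(D @ wp, lam * wp)
print(ok)  # True
```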

The if part (A and D diagonalizable \Rightarrow M diagonalizable):

As A, D are diagonalizable, let P^{-1}AP and Q^{-1}DQ be diagonal matrices, where P, Q are invertible. Then R = \begin{bmatrix} P & 0 \\ 0 & Q \end{bmatrix} is invertible, and R^{-1}MR = \begin{bmatrix} P^{-1}AP & 0 \\ 0 & Q^{-1}DQ \end{bmatrix} is diagonal, so R diagonalizes M.
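A quick numerical sketch of this direction (A, D diagonalizable \Rightarrow M diagonalizable), with sample matrices of my own choosing:

```python
import numpy as np

# A and D each have distinct eigenvalues, hence are diagonalizable.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
D = np.array([[1.0, 4.0], [1.0, 1.0]])

# Columns of P and Q are eigenvector bases for A and D.
_, P = np.linalg.eig(A)
_, Q = np.linalg.eig(D)

# Block-diagonal M and the block-diagonal change of basis diag(P, Q).
M = np.block([[A, np.zeros((2, 2))], [np.zeros((2, 2)), D]])
R = np.block([[P, np.zeros((2, 2))], [np.zeros((2, 2)), Q]])

Mdiag = np.linalg.inv(R) @ M @ R
# Off-diagonal entries of R^{-1} M R vanish (up to rounding).
off = Mdiag - np.diag(np.diag(Mdiag))
print(np.allclose(off, 0))  # True
```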


Algebra (Artin) Ch.4 Exercise 6.8

Problem statement: A linear operator is nilpotent if some positive power T^k is zero. Prove that T is nilpotent if and only if there is a basis of V such that the matrix of T is upper triangular, with diagonal entries zero.

Solution: (assume that V is finite-dimensional) Let the matrix of T be A .

The only-if part (T nilpotent \Rightarrow such a basis exists):

Let k be the smallest positive integer such that T^k = 0. Then ker T \neq \{0\}: otherwise dim V = dim(Im T), so A would be invertible, and then A^k would be invertible as well, contradicting T^k = 0.

Let us denote ker T^i by K_i , to save time.

If T^i(v)=0, then T^{i+1}(v)=0 , so K_i \subset K_{i+1} . We note that K_k=V .

Then a basis for K_1 can be extended to a basis for K_2, which can be extended to a basis for K_3, and so on; eventually we obtain a basis for K_k, i.e. for V. Since k is the smallest integer such that T^k=0, the inclusions are strict: K_i \subsetneq K_{i+1} for 1 \le i < k (if K_i = K_{i+1} for some i < k, the chain would stabilize at K_i, forcing K_i = K_k = V and hence T^i = 0). Write this basis as (u_{11},..,u_{1r_1},u_{21},..,u_{2r_2},...,u_{k1},..,u_{kr_k}), where the vectors up through u_{ir_i} form a basis for K_i.
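The strictly increasing chain of kernels can be checked numerically on a sample nilpotent operator (the matrix below is my own example), using dim K_i = n - rank(T^i):

```python
import numpy as np

# A nilpotent "shift" matrix: T^4 = 0 but T^3 != 0.
T = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])
n = T.shape[0]

# dim ker T^i = n - rank(T^i): the chain K_1 < K_2 < ... < K_k = V.
dims = [n - np.linalg.matrix_rank(np.linalg.matrix_power(T, i))
        for i in range(1, n + 1)]
print(dims)  # [1, 2, 3, 4]: strictly increasing up to dim V
```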

Let M be the matrix of T with respect to this basis. Then the first r_1 columns of M are zero, the next r_2 columns have their last n-r_1 entries zero, the next r_3 columns have their last n-(r_1+r_2) entries zero, and so on. This is because T(K_1)=\{0\} and T(K_{i+1}) \subset K_i. So M is upper triangular with zero diagonal entries: the jth column is zero from the jth position downward. We would get a lower triangular matrix if we wrote the basis vectors in the opposite order.

Only if part:

Let A be upper triangular with zero diagonal entries, and let (v_1,..,v_n) be the basis with respect to which the matrix of T is A. Then T(v_1)=0 and T(v_2)=a_{12} v_1, so T^2(v_2)=0; proceeding this way, T(v_i) \in Sp(v_1,..,v_{i-1}), so T^i(v_i)=0 for i=1,2,..,n, and therefore T^n=0.
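This direction can be checked numerically: any upper triangular matrix with zero diagonal satisfies A^n = 0 (the particular matrix below is an arbitrary example of mine).

```python
import numpy as np

# A strictly upper triangular 4x4 matrix: zero diagonal, arbitrary
# entries above it.
n = 4
A = np.triu(np.arange(1.0, n * n + 1).reshape(n, n), k=1)

power = np.linalg.matrix_power(A, n)
print(np.allclose(power, 0))  # True: A^4 = 0
# The power n is sharp here: A^3 is still nonzero for this example.
print(np.allclose(np.linalg.matrix_power(A, n - 1), 0))  # False
```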

Michael Artin’s Algebra Ch.3 M.3(c)

Problem Statement: Prove that every pair x(t), y(t) of real polynomials satisfies some real polynomial relation f(x,y)=0.

Solution sketch:

Let, without loss of generality, x=a_0 + a_{1} t + a_{2} t^2+...+a_{n} t^n, a_{n} \neq 0, y = b_0 + b_{1} t +...+ b_{m} t^m, b_{m} \neq 0, m \le n.

Then t^{m} can be written as a linear combination of 1, t, t^{2},.., t^{m-1} and y: indeed t^{m} = \frac{1}{b_m}\left(y - (b_0 + b_{1} t +...+ b_{m-1} t^{m-1})\right). So, in x(t), we write t^{k} as t^{m}t^{k-m} when m \le k, and replace t^{m} by this expression. By repeating this process as long as there are powers of t greater than or equal to m in x(t), we shall eventually find an equation in x, y and t where the highest power of t is no greater than m-1. Now let the highest power of t in this equation be r. Then in a similar fashion, we can solve for t^{r} and use it to eliminate t^{r}, obtaining an equation where the power of t does not exceed r - 1. By repeating this process, we shall eventually obtain an equation in which t no longer appears, and that is our desired f(x,y)=0.
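A concrete instance of such a relation (my own example, not from the text): for x(t) = t^2 and y(t) = t^3, the polynomial f(x,y) = x^3 - y^2 vanishes identically, since (t^2)^3 - (t^3)^2 = 0 for every t.

```python
# x(t) = t^2 and y(t) = t^3 satisfy the polynomial relation
# f(x, y) = x^3 - y^2 = 0.
def x(t): return t ** 2
def y(t): return t ** 3
def f(u, v): return u ** 3 - v ** 2

# f(x(t), y(t)) vanishes at every sample point (and hence identically,
# since a nonzero polynomial has finitely many roots).
print(all(f(x(t), y(t)) == 0 for t in range(-10, 11)))  # True
```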

Michael Artin’s Algebra Ch.1 M.1

(I have started reading Michael Artin’s Algebra and I think uploading solutions of some problems, especially some of the starred ones, might be useful to others like me who are currently self-studying mathematics. If I have obtained the solution from some source, I will mention the source with the solution. Pointing out errors is always very welcome.)

Problem Statement: M= \begin{bmatrix} A & B \\ C & D \end{bmatrix} where each block is an n \times n matrix. Suppose that A is invertible and that AC=CA . Use block multiplication to prove that det M=det(AD-CB) .

Solution:

We note that G=\begin{bmatrix} I_n & 0 \\ -C & A \end{bmatrix} is invertible, since A is; and by repeatedly expanding the determinant along the first row, we see that det G = det A.

Then GM= \begin{bmatrix} A & B \\ -CA+AC & -CB+AD \end{bmatrix} = \begin{bmatrix} A & B \\ 0 & AD-CB \end{bmatrix}, using AC=CA.

Take H=\begin{bmatrix} A^{-1} & 0 \\ 0 & I_n \end{bmatrix}.

Then HGM = \begin{bmatrix} I_n & A^{-1}B \\ 0 & AD-CB \end{bmatrix}.

Since det H = det(A^{-1}) = (det A)^{-1} and det G = det A, we have det(HG) = 1, so det M = det(HGM). Expanding HGM repeatedly along its first column, we get det(HGM) = det(AD - CB).
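The identity det M = det(AD - CB) can be checked numerically. The matrices below are illustrative choices of mine: A is invertible, and C = 3A - 2I is a polynomial in A, which guarantees AC = CA.

```python
import numpy as np

# Sample blocks: A invertible, C a polynomial in A (so AC = CA),
# B and D arbitrary.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
C = 3 * A - 2 * np.eye(2)
B = np.array([[1.0, 2.0], [3.0, 4.0]])
D = np.array([[0.0, 1.0], [1.0, 0.0]])

# Assemble the 4x4 block matrix M and compare the two determinants.
M = np.block([[A, B], [C, D]])
lhs = np.linalg.det(M)
rhs = np.linalg.det(A @ D - C @ B)
print(round(lhs), round(rhs))  # 18 18
```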