$\newcommand{\nnn}{\mathbb N}$ $\newcommand{\zzz}{\mathbb Z}$ $\newcommand{\rrr}{\mathbb R}$ $\newcommand{\ccc}{\mathbb C}$ $\newcommand{\eee}{\mathrm e}$ $\newcommand{\vf}{\varphi}$ $\newcommand{\fib}{\mathrm{Fib}}$ $\DeclareMathOperator{\div}{\mathrm Div}$ $\DeclareMathOperator{\gcd}{\mathrm gcd}$ $\DeclareMathOperator{\lcm}{\mathrm lcm}$ $\DeclareMathOperator{\var}{\mathrm Var}$ $\DeclareMathOperator{\period}{\mathrm per}$ $\DeclareMathOperator{\rank}{\mathrm rk}$ $\DeclareMathOperator{\trace}{\mathrm Tr}$ $\DeclareMathOperator{\Li}{\mathrm Li}$ $\DeclareMathOperator{\span}{\mathrm Span}$ $\DeclareMathOperator{\hom}{Hom}$ $\DeclareMathOperator{\sgn}{sgn}$ $\DeclareMathOperator{\inv}{Inv}$ $\DeclareMathOperator{\im}{Im}$ $\DeclareMathOperator{\id}{id}$ $\DeclareMathOperator{\diag}{diag}$

CMSC 37110-1: Discrete Mathematics

Autumn 2012

Current Homework assignments


Course home | Policy on collaboration | Old Homework | #12 | #13 | #14 | #15 | #16 | #17

PRESS "REFRESH" to find the latest version!

Last updated Thu, Dec 6, 3:30pm



The notation "LN 4.1.15" refers to the Instructor's Lecture Notes, Chapter 4, problem 4.1.15.

The notation "MN" refers to the Matoušek--Nešetříl text (second edition). The first edition is also entirely adequate for this class. Note that a new Chapter 2 has been inserted in the second edition so for $i\ge 3$ if you can find something in Chapter $i$ in the second edition, it will appear in Chapter $i-1$ in the first edition.

The notation "LIN" refers to the instructor's Linear Algebra notes, currently in progress.

The notation "PZ" refers to the puzzle problem set from the instructor's 2011 REU course.

"DO" problems are strongly recommended to check your understanding of the concepts; do not hand them in.

Problems labeled "HW" must be handed in at the beginning of the next class unless expressly stated otherwise.

BONUS PROBLEMS are not mandatory but the same deadline applies as to ordinary homework.

CHALLENGE PROBLEMS have no point value (but they earn you the instructor's attention) and no deadline (except they expire when discussed in class). Make sure you hand them in on a separate sheet, state your name, the fact that this is a challenge problem solution, and indicate the problem being solved. One further request: if you hand in a solution to a challenge problem, send email to the instructor giving the problem number and a brief indication of the problem (copied from the problem announcement: like "n+1 numbers" or "another limit related to the def of "e"" -- no detailed description needed, this suffices to remind me which problem it was). Such email will create an easy record and help avoid mistakes in bookkeeping.

REWARD PROBLEMS are challenge problems that you should NOT hand in. The solution is its own reward. No deadline. Of course you are welcome to discuss your thoughts with the TA, the instructor, and your peers.

If you hand in material before its due date, make sure you put it on a separate sheet since solutions are graded in batches by their due dates.



Homework set #17 (due Tue Dec 11) (posted Dec 6 noon). These are the problems and results stated in class on Dec 6.

17.1 DO: Prove the Division Theorem for polynomials over the field $F$ ($F =\rrr$ or $\ccc$):
         $(\forall f,g\in F[x])($ if $g\neq 0$ then $(\exists q,r\in F[x])(f = qg+r$ and $\deg r < \deg g))$.

17.2 DO: Define the gcd of polynomials in $F[x]$. Prove that it is unique up to a nonzero scalar factor. Define Euclid's algorithm for polynomials; prove that the gcd of polynomials exists and can be written as a linear combination (where the coefficients themselves are polynomials). (Hint: Use the Division Theorem; copy the proof learned in Number Theory. This will be a good exercise to review your Number Theory.)
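
A minimal computational sketch of Euclid's algorithm for polynomials (no substitute for the proofs asked above), assuming Python with exact rational arithmetic; the coefficient-list representation (constant term first) and the helper names are ad hoc:

    from fractions import Fraction

    def poly_divmod(f, g):
        """Division Theorem: return (q, r) with f = q*g + r, deg r < deg g.
        Polynomials are coefficient lists, constant term first."""
        f = [Fraction(c) for c in f]
        g = [Fraction(c) for c in g]
        q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
        while len(f) >= len(g) and any(f):
            shift = len(f) - len(g)
            c = f[-1] / g[-1]
            q[shift] = c
            for i, gc in enumerate(g):
                f[i + shift] -= c * gc
            while f and f[-1] == 0:   # drop the cancelled leading term
                f.pop()
        return q, (f if f else [Fraction(0)])

    def poly_gcd(f, g):
        """Euclid's algorithm; result normalized to leading coefficient 1."""
        while any(g):
            _, r = poly_divmod(f, g)
            f, g = g, r
        return [c / f[-1] for c in f]

    # gcd(x^2 - 1, x^2 - 2x + 1) = x - 1, i.e. coefficients [-1, 1]
    print(poly_gcd([-1, 0, 1], [1, -2, 1]))   # printed as Fractions

Carrying the quotients along (the extended Euclidean algorithm) would also produce the linear combination asked for above.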

17.3 DO: Recall that the minimal polynomial $m_A$ of the matrix $A\in M_n(F)$ is the lowest-degree polynomial $g\in F[x]$ with leading coefficient 1 such that $g(A)=0$. Prove: $$(\forall f\in F[x])(f(A)=0 \Leftrightarrow m_A \mid f).$$ Hint: Use the Division Theorem.

17.4 DO: Prove that $\deg m_A \le n^2$. Your proof should not use any major theorems, just the notion of linear independence.

17.5 DO: Let $D=\diag(\lambda_1,\dots,\lambda_n)$ denote the $n\times n$ diagonal matrix of which the diagonal entries are $\lambda_1,\dots, \lambda_n$. Let $f\in F[x]$. Prove: $f(D) = \diag(f(\lambda_1),\dots,f(\lambda_n))$.
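
A quick numerical sanity check, assuming Python with numpy, with $f(t)=t^2-3t+2$:

    import numpy as np

    D = np.diag([1.0, 2.0, 5.0])
    f = lambda M: M @ M - 3 * M + 2 * np.eye(3)   # f(t) = t^2 - 3t + 2, applied to a matrix
    print(f(D))   # the diagonal matrix diag(f(1), f(2), f(5)) = diag(0, 0, 12)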

17.6 DO: Prove: for all $A\in M_n(F)$ and for all invertible matrices $S\in M_n(F)$ and for all polynomials $f\in F[x]$ we have $f(S^{-1}AS) = S^{-1}f(A)S$.

17.7 DO: Prove: if $A\sim B$ (similar matrices) then $m_A = m_B$.

17.8 DO*: Prove that the diagonalizable matrices are dense in $M_n(\ccc)$, i.e., $(\forall \epsilon > 0)(\forall A\in M_n(\ccc))(\exists B\in M_n(\ccc))$ such that $B$ is diagonalizable and every entry of $B-A$ has absolute value $\le \epsilon$.

17.9 DO: Recall the Cayley - Hamilton Theorem: if $f_A$ is the characteristic polynomial of $A\in M_n(\ccc)$ then $f_A(A)=0$. (a) Prove that this is equivalent to saying that $m_A \mid f_A$. (b) Infer that $\deg m_A \le n$.

17.10 DO: Prove: $\lambda\in \ccc$ is an eigenvalue of $A\in M_n(\ccc)$ if and only if $m_A(\lambda)=0$. Use the Cayley - Hamilton Theorem for the "if" part.

17.11 DO: (a) Determine the minimal polynomial of a diagonal matrix. Observe that it has no multiple roots. (b) Prove that the minimal polynomial of a diagonalizable matrix has no multiple roots. (c) CHALLENGE: Prove: a matrix $A\in M_n(\ccc)$ is diagonalizable if and only if $m_A$ has no multiple roots.

17.12 DO: The polynomial $f\in \ccc[x]$ has no multiple roots in $\ccc$ if and only if $\gcd(f,f')=1$.
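
The criterion in action, assuming Python with the sympy library:

    from sympy import symbols, gcd, diff

    x = symbols('x')
    f = (x - 1)**2 * (x + 2)    # multiple root at x = 1
    g = x**3 - 2                # three distinct complex roots
    print(gcd(f, diff(f, x)))   # x - 1: a nontrivial gcd detects the multiple root
    print(gcd(g, diff(g, x)))   # 1: no multiple roots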

17.13 DO: Prove the Cayley - Hamilton Theorem (a) for diagonal matrices (b) for diagonalizable matrices (c) for all complex matrices. (Hint: for (a) use problem 17.5; for (b) use 17.6, 17.7; for (c) use 17.8.)

17.14 DO: Let $T$ be the transition matrix of a finite Markov Chain and $G$ the digraph of possible transitions. Assume the MC is irreducible, i.e., $G$ is strongly connected. Let $d$ be the period of $G$. Let $\omega$ be a complex $d$-th root of unity, i.e., $\omega^d=1$. Prove: if $\lambda\in \ccc$ is an eigenvalue of $T$ then $\lambda\omega$ is also an eigenvalue of $T$.

17.15. Recall the Perron - Frobenius Theorem on nonnegative matrices. Let $A\in M_n(\rrr)$, $A=(\alpha_{ij})$, such that $(\forall i,j)(\alpha_{ij}\ge 0)$. Let $G$ be the corresponding directed graph with vertex set $[n]$; we have an $i\to j$ adjacency if $\alpha_{ij}>0$. Below, items (a), (b), (c) constitute parts of the Perron - Frobenius Theorem, and (a1), (b1), (b2), (c1) are related exercises you can solve without using the Perron - Frobenius Theorem.
(a) Then $A$ has a nonnegative real eigenvector. Let $r$ denote the corresponding eigenvalue. (Prove: $r\in\rrr$ and $r\ge 0.$) Moreover, this eigenvector can be chosen so that for all eigenvalues $\lambda\in\ccc$ of $A$ we have $|\lambda|\le r$.
(a1) Infer from part (a) that every finite Markov Chain has a stationary distribution. What is $r$ for a stochastic matrix? (A numerical sketch follows this problem, after part (c1).)
(b) Assume $G$ is strongly connected. (In this case, the matrix $A$ is called "irreducible.") Then the algebraic multiplicity of the eigenvalue $r$ is 1 and the algebraic multiplicity of all eigenvalues with absolute value $r$ is also 1.
(b1) Prove that if $A$ is irreducible and $x$ is a nonnegative eigenvector then all coordinates of $x$ are positive.
(b2) Prove that $x$ is unique (up to scaling). (Use (b1) to prove (b2).) This will prove that the eigenvalue $r$ has geometric multiplicity 1, which is weaker than what (b) states.
(c) Moreover, if $\lambda\in \ccc$ and $|\lambda|=r$ then $\lambda$ is an eigenvalue if and only if $(\lambda/r)^d=1$ where $d$ is the period of $G$.
(c1) Prove: if $G$ is strongly connected and $\lambda$ is an eigenvalue of $A$ then $\lambda\omega$ is also an eigenvalue of $A$ for all $d$-th roots of unity $\omega$ where $d$ is the period of $G$. (This proves the easy direction of (c).)
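
A numerical sketch for part (a1), assuming Python with numpy; the $3\times 3$ transition matrix is an ad hoc example. A stationary distribution is a left eigenvector of $T$ to eigenvalue 1, scaled to sum to 1:

    import numpy as np

    # Transition matrix of a small irreducible Markov Chain (rows sum to 1).
    T = np.array([[0.5, 0.5, 0.0],
                  [0.2, 0.3, 0.5],
                  [0.4, 0.0, 0.6]])

    # A left eigenvector of T to eigenvalue 1 is a right eigenvector of T^T.
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi = pi / pi.sum()          # scale to a probability distribution
    print(pi, pi @ T)           # the two printed vectors agree: pi T = pi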

17.16. Infer from the Perron-Frobenius Theorem: Let $T$ be a stochastic matrix with eigenvalues $\lambda_1,\dots,\lambda_n\in \ccc$ where $\lambda_1=1$. If the Markov Chain defined by the transition matrix $T$ is ergodic then $(\forall i\ge 2)(|\lambda_i| < 1)$.

17.17*. Prove: every matrix $A\in M_n(\ccc)$ is similar to a triangular matrix.

17.18*. Prove: if $T$ is the transition matrix of an ergodic Markov Chain then $\lim_{t\to\infty} T^t$ exists. (Hint. Use 17.16 and 17.17.)

17.19. A matrix $A$ is doubly stochastic if both $A$ and its transpose are stochastic, i.e., $A$ is a nonnegative matrix such that its row sums as well as its column sums are 1. Let $T$ be the transition matrix of a Markov Chain. Prove: the uniform distribution is stationary if and only if $T$ is doubly stochastic.

17.20. Prove: If $x$ is an eigenvector of the matrix $A\in M_n(\ccc)$ to eigenvalue $\lambda$ then $x$ is an eigenvector of the matrix $A^k$ to eigenvalue $\lambda^k$ and more generally, if $f\in \ccc[t]$ is a polynomial then $x$ is an eigenvector of the matrix $f(A)$ to eigenvalue $f(\lambda)$.

17.21. Let $A=(\alpha_{ij})\in\ccc^{k\times n}$. Prove: $(\forall i,j)(||A||\ge |\alpha_{ij}|)$.

17.22. Consider a Markov Chain with a symmetric transition matrix $T$. (For instance, the naive random walk on a regular graph.) Let $G$ be the corresponding transition graph. (So this graph is necessarily undirected; loops are permitted.) Let $\lambda_1\ge \lambda_2 \ge \dots \ge \lambda_n$ be the eigenvalues of $T$. Prove:
(a) $\lambda_1 =1.$  
(b) $\lambda_n \ge -1.$  
(c) $\lambda_2 < 1$ if and only if $G$ is connected.  
(d) Assume $G$ is connected. Then $\lambda_n = -1$ if and only if $G$ is bipartite.

17.23 (Rapid convergence). In class we proved the following theorem. Consider a Markov Chain with a symmetric transition matrix $T$. Using the notation of the preceding exercise, let $$\lambda = \max_{2\le i\le n} |\lambda_i|.$$ (So by the preceding exercise, $\lambda<1$ if and only if the Markov Chain is ergodic.) Let $T^t = \left(p_{ij}^{(t)}\right)$. So $p_{ij}^{(t)} = P(X_{s+t}=j \mid X_s = i)$ ($t$-step transition probabilities).
Theorem. $$\left|p_{ij}^{(t)}-\frac{1}{n}\right| \le \lambda^t.$$

In fact, we proved the stronger statement that
        $||T^t - (1/n)J|| \le \lambda^t$,
where $J$ is the $n\times n$ all-ones matrix. (Why does the Theorem follow from this? Check the exercises above.) To prove this statement, we need to prove that none of the eigenvalues of $T^t - (1/n)J$ exceed $\lambda^t$ in absolute value.

Let $e_1 = (1/\sqrt{n})(1,1,1,\dots,1)^T$ (the all-ones column vector scaled to norm 1). This is an eigenvector to eigenvalue 1 for $T$. Let us extend this to an orthonormal eigenbasis of $T$:   $e_1,\dots, e_n$. We claim that $e_1,\dots, e_n$ form an eigenbasis of $T^t-(1/n)J$ with corresponding eigenvalues $0, \lambda_2^t,\lambda_3^t,\dots,\lambda_n^t$. This will complete the proof of the Theorem. For the proof of the claim we note that $e_2,\dots,e_n$ are orthogonal to the all-ones vector and therefore $Je_i=0$ for $i=2,\dots,n$. The rest now follows from exercise 17.20. (Verify!)
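
A numerical check of the Theorem, assuming Python with numpy, on the naive random walk on the complete graph $K_4$ (here $\lambda = 1/3$):

    import numpy as np

    # Naive random walk on K_4: a symmetric stochastic transition matrix.
    T = (np.ones((4, 4)) - np.eye(4)) / 3.0
    n = T.shape[0]
    eig = np.linalg.eigvalsh(T)             # increasing order; eig[-1] = 1
    lam = max(abs(eig[0]), abs(eig[-2]))    # max |lambda_i| over i >= 2

    J = np.ones((n, n))
    for t in (1, 2, 5, 10):
        Tt = np.linalg.matrix_power(T, t)
        print(t, np.abs(Tt - J / n).max(), lam ** t)   # first value <= second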



Homework set #16 (due Thu Dec 6) (posted Dec 4 noon)

16.1 DO: Study LN Chapter 8 (Finite Markov Chains)

16.2 DO: Solve exercises LN 8.1.1 - 8.1.35 and 8.2.1 - 8.2.2. Some of these will be highlighted below.

16.3 DO: Find a weakly connected finite Markov Chain with more than one stationary distribution. Your example should have as few states as possible.

16.4 DO: Prove that for any square matrix $A$ and any scalar $\lambda$, the left and right geometric multiplicities of $\lambda$ are equal.

16.5 DO: Prove: If $\lambda$ is a real eigenvalue of a stochastic matrix then $\lambda \le 1$.

16.6 DO: Let $G=(V,E)$ be a digraph. A directed cut is a partition of $V$ into $V = A \dot\cup B$ such that $A, B$ are not empty and there is no edge from $B$ to $A$. Prove: a digraph is strongly connected if and only if it does not have a directed cut.

16.7 DO*: Recall: A finite Markov Chain is irreducible if the digraph of possible transitions is strongly connected. Prove: An irreducible Markov Chain has a unique stationary distribution. (Prove uniqueness only.) Your proof should be direct; do not rely on consequences of the Perron-Frobenius Theorem. Hint: prove the stronger statement that the eigenvalue 1 has (left) geometric multiplicity 1. By exercise 16.4, it suffices to prove the same about the right geometric multiplicity. Now review the proof of Ex. 16.5 and see what you can say about the coordinates of a right eigenvector to eigenvalue $\lambda =1$; show all coordinates must be equal.

16.8 DO: Prove: there exists a reducible Markov Chain which has a unique stationary distribution.

16.9 DO: Let $T$ be a stochastic matrix and suppose the limit $L = \lim_{k\to\infty} T^k$ exists. Prove: each row of $L$ is a stationary distribution.

16.10 DO: Suppose $\pi=(\pi_1,\dots,\pi_n)$ is a stationary distribution for an irreducible Markov Chain. Prove: $(\forall i)(\pi_i > 0)$.

16.11 DO: Let $T$ be a stochastic matrix. Prove: $T$ is irreducible if and only if for all sufficiently large $k$ the matrix $(T+I)^k$ is positive (i.e., all of its entries are positive). ($I$ denotes the identity matrix.)
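
The criterion turned into a test, assuming Python with numpy; the exponent $k=n-1$ is already large enough, and the two example matrices are ad hoc:

    import numpy as np

    def irreducible(T):
        """16.11: T is irreducible iff (T+I)^(n-1) has all entries positive."""
        n = T.shape[0]
        M = np.linalg.matrix_power(T + np.eye(n), n - 1)
        return bool((M > 0).all())

    cycle = np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])   # directed 3-cycle
    print(irreducible(cycle))                       # True: strongly connected
    print(irreducible(np.triu(np.ones((3, 3)))))    # False: no edges lead back down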

16.12 DO: Let $a_1,\dots,a_m$ be nonnegative integers. Prove: if $\gcd(a_1,\dots,a_m)=1$ then all sufficiently large integers can be written in the form $a_1x_1+\dots+a_mx_m$ where all the $x_i$ are nonnegative integers.
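
A brute-force illustration, assuming Python; for $a=(6,10,15)$ we have $\gcd(6,10,15)=1$, and the largest integer that cannot be so written turns out to be 29:

    from math import gcd
    from functools import reduce

    def representable(a, limit):
        """ok[x] = True iff x = a_1 x_1 + ... + a_m x_m with all x_i >= 0 integers."""
        ok = [False] * (limit + 1)
        ok[0] = True
        for x in range(1, limit + 1):
            ok[x] = any(x >= ai and ok[x - ai] for ai in a)
        return ok

    a = [6, 10, 15]
    assert reduce(gcd, a) == 1
    ok = representable(a, 60)
    print(max(x for x in range(61) if not ok[x]))   # 29: every integer >= 30 works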

16.13 DO: LN 8.1.17. (Prove that the powers of a certain 2 × 2 stochastic matrix do not converge; find a stationary distribution.)

16.14 DO: Let $T$ be a stochastic matrix. Prove: $T$ is ergodic (irreducible and aperiodic) if and only if for all sufficiently large $k$ the matrix $T^k$ is positive.

16.15 DO (convergence of a finite Markov Chain): LN 8.1.8. (Prove the convergence of the powers of a certain 2 × 2 stochastic matrix.) State the limit matrix. -- Your proof should be short and direct (without reference to consequences of Perron - Frobenius). Elegance counts.

16.16 DO: Prove: if a digraph is strongly connected then its period is equal to the gcd of the lengths of its cycles.

16.17 DO: Let $A=(\alpha_{ij})$ be an $n \times n$ complex matrix and let $G = ([n], E)$ be the associated digraph ($i\to j$ exactly if $\alpha_{ij}\neq 0$). Suppose $G$ is strongly connected and has period $d$. Prove: if $\lambda$ is an eigenvalue of $A$ and $\omega$ is a complex $d$-th root of unity (i.e., $\omega^d = 1$) then $\lambda\omega$ is also an eigenvalue.



Homework set #15: (first batch posted 12-1 4pm, the rest posted 12-1 6:45pm). (REFRESH!) Problems due December 4. Please also remember that all problems from problem set #14 are now due. (Note: all problems left over from Problem set #14 are "DO" exercises, so you are expected to solve them but do not turn them in.)

Errors corrected on 12-3, 12:15pm:

There were two sets of problems numbered 15.21 and 15.22. The first pair were "DO" exercises, the second "HW" problems. The two affected "DO" exercises are now numbered 15.20.1 and 15.20.2. They are followed by the "HW" problems 15.21 and 15.22, as originally numbered.

HW problem 15.22 included the equation $f(x)=0$. This has been corrected to $f(x) = 1$.
(END ERROR LIST)

15.1 DO: Redo the problems of the Third Quiz.

15.2 HW: Problem 7 from Quiz-3 (evaluation of an $n\times n$ determinant).   (8 points)

15.3 HW: Problem 8 from Quiz-3 (eigenvalues of the adjacency matrix of a bipartite graph)   (6 points)

15.4 HW: Problem 9 from Quiz-3 (left vs right eigenvector)   (3 points)

15.5 CHALLENGE: Problem 10 from Quiz-3 (pairwise independence vs linear independence of random variables)

15.6 CHALLENGE: Problem 11 from Quiz-3 (rank of elementwise square of a matrix)

In exercises 15.7 - 15.12, we shall consider vector spaces and matrices over the field $F$, where $F=\rrr$ or $F=\ccc$. ($F$ is the set of scalars.)

15.7 DO (kernel, image) Let $\vf\,:\,V\to W$ be a linear map. Recall that $\ker\vf= \{x\in V \mid \vf(x)=0\}$ and $\im\vf= \{\vf(x) \mid x\in V\}$. In other words, $\ker\vf$ is the inverse image of $0_W$ and $\im\vf$ is the range of $\vf$. Prove: $\ker\vf \le V$ and $\im\vf\le W$.

15.8 DO (Rank-Nullity Theorem): Let $\vf\,:\,V\to W$ be a linear map. Prove: $\dim\ker\vf +\dim\im\vf = \dim V$.
Terminology: $\dim\im\vf$ is called the rank of $\vf$ and $\dim\ker\vf$ is the nullity of $\vf$.

15.9 DO: Let $A\in F^{k\times n}$ (a $k\times n$ matrix over the field $F$). Let $x=(x_1,\dots,x_n)^T\in F^n$ be an unknown column vector. Recall that the equation $Ax=0$ represents a system of $k$ homogeneous linear equations in $n$ unknowns. Let $U=\{x\in F^n\mid Ax=0\}$; this is the set of solutions to $Ax=0$. Prove that $U\le F^n$ and $\dim U = n - \rank(A)$. Hint: Rank-Nullity Theorem.
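
A numerical companion, assuming Python with numpy; a basis of the solution space is read off the singular value decomposition:

    import numpy as np

    # Three homogeneous equations in four unknowns; the second row is
    # twice the first, so rank(A) = 2 and dim U = 4 - 2 = 2.
    A = np.array([[1., 2., 3., 4.],
                  [2., 4., 6., 8.],
                  [0., 1., 0., 1.]])
    k, n = A.shape
    r = np.linalg.matrix_rank(A)

    _, _, Vt = np.linalg.svd(A)      # the last n - r rows of Vt span U
    kernel = Vt[r:]
    print(r, kernel.shape[0], np.allclose(A @ kernel.T, 0))   # 2 2 True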

15.10 DO: Let $\vf\,:\,V\to V$ be a linear transformation. For $\lambda\in F$, let $U_{\lambda}=\ker(\lambda \id - \vf)$ where $\id$ is the identity transformation of $V$, i.e., $(\forall x\in V)(\id(x)=x)$. Note that $U_{\lambda}$ consists of all eigenvectors of $\vf$ to the eigenvalue $\lambda$ and the zero vector. $U_{\lambda}$ is called the eigensubspace to eigenvalue $\lambda$.   (a) Prove that $U_{\lambda}\le V$.   (b) We say that $\dim U_{\lambda}$ is the geometric multiplicity of eigenvalue $\lambda$. Prove that $\lambda\in F$ is an eigenvalue if and only if its geometric multiplicity is not zero.   (c) Prove: the geometric multiplicity of $\lambda$ is $n - \rank(\lambda\id - \vf)$ where $n = \dim V$. (d) Rephrase and prove all the above for a matrix $A\in M_n(F)$ (an $n\times n$ matrix over $F$) in the place of $\vf$.

15.11 DO (algebraic vs geometric multiplicity) Let $A\in M_n(\ccc)$. We say that the algebraic multiplicity of the eigenvalue $\lambda$ is its multiplicity in the characteristic polynomial. (So for example if the characteristic polynomial is $f_A(t)=(t-1)^2(t-5)^3$ then the algebraic multiplicity of $\lambda=5$ is 3.) Prove:
(a) $(\forall\lambda\in\ccc)($alg.mult.$(\lambda)\ge$ geom.mult.$(\lambda))$.
(b) $(\forall\lambda\in\ccc)($alg.mult.$(\lambda) =$ geom.mult.$(\lambda))$ if and only if $A$ is diagonalizable.

15.12 HW: Determine the algebraic and the geometric multiplicities of each eigenvalue of the triangular matrix $$ \left(\matrix{7 & -2 & 4 & 1 & 7 \\ 0 & -2 & 1 & 0 & 2 \\ 0 & 0 & 7 & 1 & 0\\ 0 & 0 & 0 & -2 & 0 \\ 0 & 0 & 0 & 0 & 7} \right).$$ Show your work!   (6 points)

For the problems below, the field is always the real numbers: $F=\rrr$.

15.13 DO: Recall the standard dot product over $\rrr^n$: if $x = (x_1,\dots,x_n)^T\in \rrr^n$ and $y=(y_1,\dots,y_n)^T\in\rrr^n$ then $x\cdot y = x^T y = \sum_{i=1}^n x_iy_i$. The dot product is positive definite: $(\forall x)(x\cdot x \ge 0)$ and $x\cdot x = 0$ if and only if $x=0$. The euclidean norm of the vector $x\in \rrr^n$ is $||x|| = \sqrt{x\cdot x}$. Prove:
(a) (Cauchy-Schwarz inequality) $|x\cdot y| \le ||x||\cdot||y||$.
(b) (Triangle inequality) $||x+y||\le ||x|| + ||y||$.
(c) (Angles) Define the angle $\theta$ between the nonzero vectors $x,y\in\rrr^n$ such that $x\cdot y = ||x||\cdot ||y||\cdot\cos \theta$. (Why does such a $\theta$ always exist?)

15.14 HW (orthogonality vs linear independence) Recall that the vectors $x,y\in\rrr^n$ are orthogonal if $x\cdot y = 0$. We say that a list of vectors is orthogonal if the vectors in the list are pairwise orthogonal. Prove: if $v_1,\dots,v_k$ are orthogonal and none of them is the zero vector then $v_1,\dots,v_k$ are linearly independent.   (5 points)

15.15 DO: Consider the sequence $1, \cos t, \sin t, \cos 2t, \sin 2t, \cos 3t, \sin 3t, \dots$ of real functions. Prove that they are linearly independent. Hint: Prove that these functions are orthogonal if we replace the "dot product" by the following positive definite "inner product" in the definition of orthogonality:    $f\cdot g = \int_0^{2\pi} f(t)g(t)dt.$
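
The inner product above, approximated numerically (assuming Python with numpy); for these functions an equispaced Riemann sum is essentially exact:

    import numpy as np

    t = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
    dt = t[1] - t[0]
    inner = lambda f, g: float(np.sum(f(t) * g(t)) * dt)   # ~ integral over [0, 2pi]

    one = lambda x: np.ones_like(x)
    fs = [one, np.cos, np.sin, lambda x: np.cos(2 * x), lambda x: np.sin(2 * x)]
    gram = [[round(inner(f, g), 8) for g in fs] for f in fs]
    for row in gram:
        print(row)   # diagonal: 2*pi, pi, pi, pi, pi; off-diagonal entries: 0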

15.16 DO (orthonormal basis): A list of vectors is orthonormal if they are orthogonal and each of them has unit norm: $||v_i||=1$. Note that every list of nonzero orthogonal vectors can be turned into an orthonormal list by scaling each vector.
Prove that every subspace of $\rrr^n$ has an orthonormal basis.
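
The standard construction behind this exercise is the Gram-Schmidt process; here is a minimal sketch, assuming Python with numpy and linearly independent input vectors:

    import numpy as np

    def gram_schmidt(vectors):
        """Orthonormalize linearly independent vectors in R^n."""
        basis = []
        for v in vectors:
            w = v - sum((v @ e) * e for e in basis)   # remove projections onto basis
            basis.append(w / np.linalg.norm(w))
        return basis

    e1, e2 = gram_schmidt([np.array([1., 1., 0.]), np.array([1., 0., 1.])])
    print(e1 @ e2, np.linalg.norm(e1), np.linalg.norm(e2))   # ~0, 1.0, 1.0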

15.17 DO: Prove that every 2-dimensional subspace of $\rrr^n$ is isometric to the 2-dimensional geometric space. (An isometry is a distance-preserving bijection. The distance between vectors $x$ and $y$ is $||x-y||$.) Hint: Use the preceding exercise.

15.18 DO: Consider the triangle with vertices $x,y,z\in\rrr^n$. Find the side lengths and angles of this triangle. Prove that these data satisfy the law of sines and the law of cosines. (Hint: use the preceding exercise.)

15.19: The Spectral Theorem: If $A$ is a symmetric real matrix then $A$ has an orthonormal eigenbasis. ($A\in M_n(\rrr)$ is symmetric if $A=A^T$.) In particular, all eigenvalues of such a matrix are real (the characteristic polynomial factors into linear terms) and the matrix is diagonalizable over the reals.

HW 15.20. Find the eigenvalues and an eigenbasis of the symmetric real matrix $$ \left(\matrix{ 0 & 1 \\ 1 & 1 }\right).$$ Note that the eigenvalues are real. Verify that the eigenvectors are orthogonal. (Show your work. Find exact values, not decimal approximations. Do not use web sources.)   (6 points)

15.20.1 DO: Let $A=(\alpha_{ij})\in M_n(\rrr)$ and $x=(x_1,\dots,x_n)^T\in\rrr^n$.   (a) Show that $x^TAx = \sum_{i=1}^n\sum_{j=1}^n \alpha_{ij}x_ix_j$. This expression is called a quadratic form (homogeneous quadratic function) of the real variables $x_1,\dots,x_n$.
(b) Prove that for every $A\in M_n(\rrr)$ there is a unique symmetric matrix $B\in M_n(\rrr)$ such that $(\forall x)(x^TAx=x^TBx)$.

15.20.2 DO: Let $A\in M_n(\rrr)$ be a symmetric real matrix. Let $u_1,\dots,u_n$ be an orthonormal eigenbasis, and $Au_i =\lambda_i u_i$. Let the coordinates of the vector $x\in\rrr^n$ with respect to the basis ${\underline u}=(u_1,\dots, u_n)$ be $\zeta_1,\dots,\zeta_n$. Prove: $$x^TAx = \sum_{i=1}^n \lambda_i\zeta_i^2 .$$

HW 15.21: A real symmetric matrix $A\in M_n(\rrr)$ is positive semidefinite if $(\forall x\in\rrr^n)(x^TAx \ge 0).$ Prove:
(a) The real symmetric matrix $A$ is positive semidefinite if and only if all of its eigenvalues are nonnegative. (Hint: use the preceding exercise.)
(b) Prove that for any matrix $B\in\rrr^{k\times n}$, the matrix $A = B^TB$ is positive semidefinite. (Hint: use the definition of positive semidefiniteness directly, no eigenvalues needed for this exercise.)   (6+6 points)

HW 15.22: Consider the function $f(x_1,x_2,x_3)= x_1^2 + 5x_2^2 + 9 x_3^2 - 3x_1x_2 - 5 x_1x_3 + 8x_2x_3$. This is a quadratic form in three variables.
(a) Write $f(x)$ as $x^TAx$ for a symmetric matrix $A\in M_3(\rrr)$ (where $x=(x_1,x_2,x_3)^T$). Show the matrix $A$.
(b) Determine the eigenvalues of $A$ to three decimal digits of accuracy using an internet source (google "online eigenvalue calculator").
(c) Show that the points satisfying the equation $f(x)=1$ form an ellipsoid in $\rrr^3$ with principal half-axes $\sqrt{1/\lambda_1}, \sqrt{1/\lambda_2}, \sqrt{1/\lambda_3}.$ (Note: this is why the Spectral Theorem is also referred to as the "Principal Axis Theorem.") Note: an ellipsoid with half-axes $\alpha_1, \alpha_2, \alpha_3$ is described by the equation $\sum \zeta_i^2/\alpha_i^2 = 1$ where $\zeta_1,\zeta_2,\zeta_3$ are the coordinates of a point with respect to some orthonormal basis.
(d) Prove: for a symmetric real matrix $A\in M_3(\rrr)$, the equation $x^TAx = 1$ describes an ellipsoid if and only if all eigenvalues of $A$ are positive. (Hint for the "only if" part: show that there is no solution if all eigenvalues are negative; and if some of them are positive and some of them are negative then the set of solutions is unbounded (solutions can have arbitrarily large norm) and therefore cannot be an ellipsoid.)   (2+2+3+5 points)

15.23 HW (operator norm): For $A\in \rrr^{k\times n}$ we define the norm $||A||$ of $A$ as $$ ||A|| = \max_{\substack{x\in\rrr^n\\ x\neq 0}} \frac{||Ax||}{||x||}.$$ (The maximum is over all nonzero vectors.)
(a) Prove: If $A\in M_n(\rrr)$ is a symmetric real matrix with eigenvalues $\lambda_1\ge \lambda_2\ge\dots\ge \lambda_n$ then $||A||=\max_i |\lambda_i|$.
(b) If $B \in \rrr^{k\times n}$ then let $A=B^TB$. By Ex. 15.21 this matrix is symmetric and all its eigenvalues are nonnegative. Let $\lambda_1\ge\dots\ge\lambda_n\ge 0$ be the eigenvalues of $A$. Prove: $||B|| = \sqrt{\lambda_1}$.   (7+5 points)

15.24 DO (Rayleigh Quotient): Let $A\in M_n(\rrr)$ be symmetric. For $x\in\rrr^n$ let $$R_A(x) = \frac{x^TAx}{x^Tx}.$$ This is called the Rayleigh quotient associated with $A$.
Let $\lambda_1\ge \dots \ge \lambda_n$ be the eigenvalues of $A$. Prove:
       $\lambda_1 = \max_x R_A(x)$    and    $\lambda_n = \min_x R_A(x)$,
where the maximum and minimum are taken over all nonzero vectors $x\in\rrr^n$.
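
An experimental view, assuming Python with numpy: sampled Rayleigh quotients of a random symmetric matrix stay between $\lambda_n$ and $\lambda_1$, and eigenvectors attain the extremes.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    A = (A + A.T) / 2                        # symmetrize

    R = lambda x: (x @ A @ x) / (x @ x)      # Rayleigh quotient
    w, V = np.linalg.eigh(A)                 # eigenvalues in increasing order

    samples = [R(rng.standard_normal(5)) for _ in range(10000)]
    print(w[0], min(samples), max(samples), w[-1])   # w[0] <= all samples <= w[-1]
    print(R(V[:, -1]) - w[-1])                       # ~0: the maximum is attained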

15.25 DO: Let $A$ be the adjacency matrix of a graph with $n$ vertices. (So $A$ is a symmetric real $n\times n$ matrix.) Let the eigenvalues of $A$ be $\lambda_1\ge\dots\ge \lambda_n$. Prove:
(a) $(\forall i)(|\lambda_i|\le \deg_{\max})$ (the maximum degree)
(b) $\lambda_1\ge $ the average degree. (Note: the average degree is $2m/n$ where $m$ is the number of edges.) (Hint: Rayleigh's theorem.)
(c) $\sum_{i=1}^n \lambda_i^2 = 2m.$

15.26 CHALLENGE (Courant-Fischer): Let $A$ be a symmetric $n\times n$ real matrix with eigenvalues $\lambda_1\ge\dots\ge\lambda_n$. Prove: $$ \lambda_i = \max_{\substack{U\le \rrr^n \\ \dim U = i}} \min_{\substack{x\in U \\ x\neq 0}} R_A(x)$$ where $R_A(x)$ is the Rayleigh quotient.

15.27 CHALLENGE (Interlacing Theorem): Let $A$ be a symmetric $n\times n$ real matrix with eigenvalues $\lambda_1\ge\dots\ge\lambda_n$. Let us remove the last row and the last column of $A$; the resulting $(n-1)\times (n-1)$ matrix $B$ is again symmetric. Let the eigenvalues of $B$ be $\mu_1\ge\dots\ge \mu_{n-1}$. Prove: $$ \lambda_1\ge\mu_1\ge\lambda_2\ge\mu_2\ge\dots\ge\mu_{n-1}\ge\lambda_n.$$ (Hint: Courant-Fischer.) (Note: the same is true if we delete the $i$-th row and the $i$-th column.)

15.28 DO: Find the eigenvalues and an eigenbasis for the $n\times n$ all-ones matrix $J$ (all entries are 1). What is the rank and nullity of this matrix? (Hint: the vectors in the kernel are eigenvectors -- to what eigenvalue?)

15.29 DO: A matrix $A=(\alpha_{ij})\in M_n(\rrr)$ is stochastic if $(\forall i,j)(\alpha_{ij}\ge 0)$ and $(\forall i)(\sum_{j=1}^n \alpha_{ij}=1)$ (the matrix is nonnegative and each row sum is equal to 1).
Prove: if $\lambda\in\ccc$ is an eigenvalue of a stochastic matrix then $|\lambda|\le 1$.



Homework set #14. Due November 29/Dec 4 in the following sense. Please understand all statements by November 29. Solve those not marked by + or * by November 29. Solve those marked by + by December 4. Those marked by * are difficult, you don't need to solve them, but you still need to be aware of them. (Posted Nov 27, 11pm)

14.1 DO: Let $V$, $W$ be vector spaces over $\rrr$. Recall that a function $\vf\,:\, V\to W$ is a linear map if $(\forall x,y\in V)(\vf(x+y)=\vf(x)+\vf(y))$ and $(\forall x\in V)(\forall \alpha\in\rrr)(\vf(\alpha x)=\alpha \vf(x))$. (a) Prove: $\vf(0)=0$. (b) Prove: if $v_1,\dots,v_r$ are linearly dependent then $\vf(v_1),\dots,\vf(v_r)$ are also linearly dependent. (c) The converse of (b) is false (disprove it!).

14.2 DO: Prove: if the linear map $\vf\,:\,V\to W$ is a bijection then $\vf^{-1}$ is also linear. Such a linear map is called an isomorphism. We say that $V$ and $W$ are isomorphic if there exists a $V\to W$ isomorphism. This circumstance is denoted by $V\cong W$. Prove that isomorphism is an equivalence relation among vector spaces.

14.3 DO: Prove that two vector spaces (over $\rrr$) are isomorphic if and only if they have the same dimension.

14.4 DO: Let $e_1,\dots,e_n$ be a basis of the vector space $V$ and $w_1,\dots,w_n$ be arbitrary vectors in $W$. Prove: there exists a unique linear map $\vf\,:\,V\to W$ such that $(\forall i)(\vf(e_i)=w_i)$.

14.5 DO: Let $\hom(V,W)$ denote the set of all $V\to W$ linear maps. Prove that $\hom(V,W)$ is a vector space under the natural operations. Prove: $\dim \hom(V,W) = k\ell$ where $k=\dim V$ and $\ell = \dim W$.

14.6 DO: (a) Let $D$ denote the $d/dx$ operator on $\rrr[x]$, i.e., if $f\in\rrr[x]$ then $Df=f'$. Let $M$ be the $f\mapsto xf$ operator ($Mf=xf$). Observe that both of these operators are linear transformations of $\rrr[x]$. Prove: $DM-MD=I$ where $I$ denotes the identity operator ($If=f$).   (b+) Prove: this cannot happen in a finite-dimensional space: if $A, B\in M_n(\rrr)$ then $AB-BA\neq I$. ($M_n(\rrr)$ denotes the set of $n\times n$ matrices with real entries.)
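
Part (a) checked on one concrete polynomial, assuming Python, with polynomials represented as ad hoc coefficient lists (constant term first):

    def D(f):
        """Derivative of a0 + a1 x + a2 x^2 + ..."""
        return [i * c for i, c in enumerate(f)][1:] or [0]

    def M(f):
        """Multiplication by x."""
        return [0] + list(f)

    f = [5, 0, 3, 2]                        # 5 + 3x^2 + 2x^3
    DM, MD = D(M(f)), M(D(f))               # (xf)' and x f'
    print([a - b for a, b in zip(DM, MD)])  # [5, 0, 3, 2]: (DM - MD) applied to f gives f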

14.7 DO: Recall that the coordinates of $v\in V$ with respect to the basis ${\underline b}=(b_1,\dots,b_n)$ are the unique coefficients $\alpha_1,\dots,\alpha_n$ such that $v=\sum_{i=1}^n \alpha_i b_i$. We form a column vector from the coordinates: $[v]_{\underline b} = [\alpha_1,\dots,\alpha_n]^T$. Note that the map $v\mapsto [v]_{\underline b}$ is a $V\to \rrr^n$ isomorphism.

14.8 DO (matrix of a linear map): Let $\vf\,:\,V\to W$ be a linear map. Let ${\underline e}=(e_1,\dots,e_n)$ be a basis of $V$, and ${\underline f}=(f_1,\dots,f_k)$ a basis of $W$. Recall that we define the matrix $A=[\vf]_{\underline{e},\underline{f}}$ by making the vector $[\vf(e_j)]_{\underline{f}}$ the $j$-th column of $A$. Prove: for all $v\in V$ we have $[\vf(v)]_{\underline f}= [\vf]_{\underline{e},\underline{f}}[v]_{\underline{e}}$.

14.9+ DO: Let $V,W,Z$ be vector spaces with respective bases $\underline e$, $\underline f$, $\underline g$. Let $\vf \,:\, V\to W$ and $\psi\,:\, W\to Z$ be linear maps. Prove: $[\psi\vf]_{\underline e,\underline g}= [\psi]_{\underline f,\underline g}[\vf]_{\underline e,\underline f}.$

14.10 DO (rotation matrix): Let $R_{\theta}$ denote the rotation of the plane about the origin by $\theta$. (This is a linear transformation of the plane.)
(a) Let $e_1$ and $e_2$ be two perpendicular unit vectors in the plane. Consider the matrix of $R_{\theta}$ with respect to the basis $(e_1,e_2)$. Prove: $$[R_{\theta}]_{(e_1,e_2)} = \left(\matrix{\cos \theta & -\sin \theta \\ \sin \theta & \cos \theta}\right).$$ (b) Consider the identity $R_{\alpha}R_{\beta}=R_{\alpha+\beta}$. Apply the matrix identity of the preceding exercise to this product of linear transformations; infer the basic trigonometric identities (expansion of $\cos(\alpha+\beta)$ and $\sin(\alpha+\beta)$).

14.11+ DO (effect of basis change on coordinates of vector): Let ${\underline e} = (e_1,\dots,e_n)$ be the "old" basis of $V$ and let ${\underline e'} = (e_1',\dots,e_n')$ be the "new" basis. Consider the "basis change matrix" $S$ of which the $j$-th column is $[e_j']_{\underline e}$. Prove: $[v]_{\underline e'}=S^{-1} [v]_{\underline e}$.

14.12+ DO (effect of basis change on coordinates of linear map): Prove: if ${\underline e'}=(e_1',\dots,e_n')$ is another basis of $V$ and ${\underline f'}=(f_1',\dots,f_k')$ is another basis of $W$ (the "new bases") then $[\vf]_{\underline{e'},\underline{f'}}= T^{-1} [\vf]_{\underline{e},\underline{f}}S$ where $S$ is the $\underline{e}\mapsto \underline{e'}$ basis change matrix and $T$ the $\underline{f}\mapsto \underline{f'}$ basis change matrix.

14.13 DO: Let $R_{\theta}$ denote the rotation of the plane by $\theta$ about the origin. Compute the matrices of $R_{\theta}$ (a) with respect to a pair of perpendicular unit vectors $e_1,e_2$;   (b) with respect to the basis consisting of a unit vector $f_1$ and $f_2=R_{\theta}(f_1)$. (c+) Verify the result stated in the preceding problem for this case by direct calculation.   (d) Verify that the two matrices have the same trace and the same determinant.

14.14 DO: (a) Recall that the $n\times n$ matrices $A,B$ are similar (notation: $A \sim B$) if there exists a matrix $S$ such that $B=S^{-1}AS$. Observe that this means that $A$ and $B$ describe the same linear transformation with respect to different bases.
(b) $A$ is diagonalizable if $A$ is similar to a diagonal matrix. Prove: $A$ is diagonalizable if and only if $A$ has an eigenbasis (i.e., $\rrr^n$ has a basis consisting of eigenvectors of $A$).

HW 14.15: Prove: (a) the matrix $A = \left(\matrix{1 & 1 \\ 0 & 1}\right)$ is not diagonalizable.   (b) $B = \left(\matrix{1 & 1 \\ 0 & 2}\right)$ is diagonalizable.   Your proofs should avoid any calculation. You may use the results stated in the "DO" exercises or previous homework. In particular (hint!) use the fact that the eigenvalues of a triangular matrix are the numbers appearing in the diagonal (Exercise 14.32 below).     (8+8 points)

14.16 DO (nonsingular matrices): Prove that for the matrix $A\in M_n(\rrr)$ the following are equivalent: (a) $\rank(A)=n$   (b) the columns of $A$ are linearly independent   (c) the rows of $A$ are linearly independent   (d) the columns span $\rrr^n$   (e) the rows span $\rrr^n$   (f) the system $Ax=0$ of homogeneous linear equations ($n$ equations for the $n$ unknowns combined in the vector $x\in \rrr^n$) has no nontrivial solution   (g) the system $Ax=b$ of linear equations has a solution for every $b\in \rrr^n$   (h) $A^{-1}$ exists   (i) $\det(A)\neq 0$. (For the equivalence of (i) with the rest, see below.) -- If an $n\times n$ matrix has any of these properties (and therefore it has all of them), we call the matrix nonsingular.

14.17 DO: Recall the definition of the determinant function $\det\,:\,M_n(\rrr)\to \rrr$: for $A=(\alpha_{ij})\in M_n(\rrr)$ we let $$\det(A) = \sum_{\sigma\in S_n} \sgn(\sigma)\prod_{i=1}^n \alpha_{i,\sigma(i)},$$ where $S_n$ is the set of all permutations of $[n]=\{1,2,\dots,n\}$ (i.e., bijections $[n]\to[n]$) and $\sgn(\sigma)=(-1)^{\inv(\sigma)}$ where $\inv(\sigma)$ is the number of inversions of $\sigma$, i.e., the number of pairs $\{i, j\}$ such that $i < j$ but $\sigma(i) > \sigma(j)$.
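
This definition transcribed directly into code, assuming Python; it takes $\Theta(n!\cdot n)$ steps, so it is usable only for very small $n$:

    from itertools import permutations
    from math import prod

    def sgn(sigma):
        """(-1)^inv(sigma): inv counts the pairs i < j with sigma(i) > sigma(j)."""
        n = len(sigma)
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if sigma[i] > sigma[j])
        return -1 if inv % 2 else 1

    def det(A):
        """Permutation-sum definition of the determinant."""
        n = len(A)
        return sum(sgn(s) * prod(A[i][s[i]] for i in range(n))
                   for s in permutations(range(n)))

    print(det([[1, 2], [3, 4]]))                    # 1*4 - 2*3 = -2
    print(det([[2, 1, 0], [0, 3, 1], [0, 0, 4]]))   # triangular: 2*3*4 = 24 (cf. 14.25)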

14.18 DO: Prove: if a column of $A$ is zero then $\det(A)=0$.

14.19+ DO: Prove: $\inv(\sigma^{-1})=\inv(\sigma)$. Infer from this that $\det(A^T)=\det(A)$.

14.20 DO: Prove: the determinant is a linear function of its $j$-th column. This means the following. Let $f(x)$ denote the determinant of the matrix $A$ with the $j$-th column replaced by $x\in \rrr^n$. (We fix all other columns but vary $x$.) Then $f(x+y)=f(x)+f(y)$ and $f(\alpha x)=\alpha f(x)$. We say that the determinant is an $n$-linear form (linear in each of its $n$ columns).

14.21+ DO: Prove: if we swap two columns then the determinant changes sign. Combined with the previous property, we say that the determinant is an alternating $n$-linear form.

14.22+ DO (Uniqueness of determinant): Prove: If $f\,:\,M_n(\rrr)\to\rrr$ is an $n$-linear alternating form and $f(I)=1$ then $f =\det$.

14.23+ DO: Prove: elementary column operations do not change the determinant. (An elementary column operation adds a scalar multiple of a column to another column. Only the other column changes.) Infer that if the columns are linearly dependent then the determinant is zero.

14.24+ DO: Suppose the columns of $A$ are linearly independent. Use a sequence of elementary column operations to obtain a matrix which has exactly one nonzero element in each row and in each column. Prove that the determinant of such a matrix is not zero. This will complete the proof that $\det(A)\neq 0$ is equivalent to the other properties defining nonsingular matrices.

14.25 DO: Prove: the determinant of a triangular matrix is the product of its diagonal entries.

14.26 DO*: Prove: $\det(AB)=\det(A)\det(B)$.

14.27 DO: Prove: (a) $\det(A^{-1}) =\frac{1}{\det(A)}$.   (b) If $A\sim B$ then $\det(A) =\det(B)$.

14.28 DO: Review linear algebra, replacing $\rrr$ with $\ccc$ (complex numbers) in the role of scalars. Nothing we said so far changes.

14.29 DO (characteristic polynomial): For $A\in M_n(\ccc)$ (complex $n\times n$ matrix), the characteristic polynomial is defined as $f_A(t)=\det(tI-A)$.   (a) Verify that this is a polynomial of degree $n$ in the variable $t$.   (b+) Verify that the coefficient of $t^n$ is 1, the coefficient of $t^{n-1}$ is $-\trace(A)$, and the constant term is $(-1)^n\det(A)$.

14.30 DO: Prove: the complex number $\lambda$ is an eigenvalue of $A$ if and only if $f_A(\lambda)=0$.

14.31+ DO: The Fundamental Theorem of Algebra provides that $f_A(t)$ can be written as $\prod_{j=1}^n(t-\lambda_j)$. Here some of the $\lambda_j$ may be equal; the number of times $\lambda_j$ occurs is the multiplicity of the eigenvalue $\lambda_j$. Prove: $\sum_{j=1}^n\lambda_j =\trace(A)$ and $\prod_{j=1}^n \lambda_j = \det(A)$.
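
A numerical check, assuming Python with numpy, on a random $4\times 4$ matrix:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    lam = np.linalg.eigvals(A)                              # complex in general

    print(np.isclose(lam.sum().real, np.trace(A)))          # True: sum = trace
    print(np.isclose(np.prod(lam).real, np.linalg.det(A)))  # True: product = det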

14.32 DO: Prove: if $A =(\alpha_{ij})\in M_n(\ccc)$ is a triangular matrix then $f_A(t)=\prod_{i=1}^n (t-\alpha_{ii})$. In particular, the eigenvalues of a triangular matrix are the numbers in the diagonal.

14.33 DO (characteristic polynomial of a linear transformation): Prove: if $A\sim B$ then $f_A(t)=f_B(t)$. (Similar matrices have identical characteristic polynomials.) Use this to define the characteristic polynomial of a linear transformation. Why does the polynomial you define not depend on the choice of the basis?

14.34 HW: Find two matrices $A,B\in M_2(\rrr)$ such that $f_A(t)=f_B(t)$ (they have the same characteristic polynomial) but they are not similar. (Hint: look for triangular matrices. Your proof of each property required should be no more than one line, with reference to facts listed above.)    (8 points)

14.35 HW: Consider the rotation matrix $A_{\theta}=[R_{\theta}]_{(e_1,e_2)}$ defined in Exercise 14.10. View it as a matrix over the complex numbers: $A_{\theta}\in M_2(\ccc)$. Find its (complex) eigenvalues and an eigenbasis. Make the eigenbasis as simple as you can.    (10 points)



Homework set #13. (Due November 27) (posted Nov 20, 3pm)

13.1 DO: Look up the abstract definition of vector spaces. Verify that for any set $\Omega$, the space $\rrr^{\Omega}$ of functions $f\,:\,\Omega\to\rrr$ is a vector space.

13.2 DO: Review the definition of matrix multiplication. Verify: if the columns of the matrix $B$ are $b_1,\dots,b_m$ and $A$ is a matrix then the columns of $AB$ are $Ab_1,\dots,Ab_m$. (When writing the matrix product $AB$, we assume that the number of columns of $A$ is the same as the number of rows of $B$ so the matrix product $AB$ is defined.)

13.3 DO: Verify: $(AB)^T=B^TA^T$. (Here $A,B$ are matrices and $A^T$ denotes the transpose of $A$.)

13.4 DO: Let $A=(\alpha_{i,j})$ be an $n\times n$ matrix. The trace of $A$ is the sum of the diagonal entries: $\trace(A)=\sum_{i=1}^n \alpha_{i,i}$. Prove: if $A$ is a $k\times n$ matrix and $B$ is an $n\times k$ matrix then $\trace(AB)=\trace(BA)$.

13.5 DO: Review the definition of a subspace; notation: $U\le V$ denotes the statement that $U$ is a subspace of the vector space $V$. Note that the $0$ vector belongs to every subspace.

13.6 DO: Let $U_1,U_2$ be subspaces of the vector space $V$. Prove: (a) $U_1\cap U_2$ is a subspace.   (b) $U_1\cup U_2$ is a subspace if and only if $U_1\subseteq U_2$ or $U_2\subseteq U_1$.

13.7 DO: Recall that the span of a set of vectors is the set of their linear combinations. Prove: the span of a set of vectors is always a subspace. This is true even of the empty set: $\span(\emptyset)=\{0\}$.

13.8 DO: Let $\nnn$ denote the set of nonnegative integers; so $\rrr^{\nnn}$ is the set of infinite sequences of reals. This is a vector space. Let us say that a sequence $a=(\alpha_0,\alpha_1,\dots)$ is of Fibonacci type if $(\forall i)(\alpha_{i+2}=\alpha_{i+1}+\alpha_i)$. Let $\fib$ denote the set of Fibonacci-type sequences. Prove: $\fib \le \rrr^{\nnn}$, i.e., $\fib$ is a subspace of $\rrr^{\nnn}$.

13.9 DO: Review the definition of linear independence. Prove that the columns of the identity matrix are linearly independent.

13.10 HW: Let $A=(\alpha_{i,j})$ be an $n\times n$ upper triangular matrix (i.e., if $i > j$ then $\alpha_{i,j} = 0$). Assume that none of the diagonal elements are zero: $(\forall i)(\alpha_{i,i}\neq 0)$. Prove that the columns of $A$ are linearly independent.    (6 points)

13.11 HW: Assume $\alpha_1,\dots, \alpha_n$ are distinct real numbers. Consider the polynomial $g(x)=\prod_{i=1}^n (x-\alpha_i)$ and the polynomials $f_j(x)=g(x)/(x-\alpha_j)$. Prove: $f_1,\dots,f_n$ are linearly independent.    (9 points)

13.12 DO: Study Gaussian elimination. Use it to prove the "Second Miracle of Linear Algebra:" the row-rank of a matrix is equal to its column-rank.

13.13 DO: Prove: the rows of an $n\times n$ matrix are linearly independent if and only if its columns are linearly independent.

13.14 DO: The degree of a polynomial is the largest $k$ such that the coefficient of $x^k$ is not zero. The zero polynomial has degree $-\infty$. Prove: if a polynomial $f(x)$ has degree $n\ge 0$ then $f$ has at most $n$ zeros. (A "zero of $f$" is a number $\alpha$ such that $f(\alpha)=0$.)

13.15 HW: Let $\alpha_1,\dots,\alpha_n\in\rrr$. The $n\times n$ Vandermonde matrix $V(\alpha_1,\dots,\alpha_n)=(\beta_{i,j})$ is defined by $\beta_{i,j} = \alpha_j^{i-1}$. Prove that the columns of this matrix are linearly independent if and only if all the $\alpha_i$ are distinct. (Hint to the "if" part: Prove that the rows are linearly independent. Use the preceding exercise.)    (8 points)

13.16 DO: Recall that a basis of a vector space is a list of vectors that are linearly independent and span the space. Prove: a list $b_1,\dots,b_n\in V$ of vectors is a basis if and only if every vector can be written uniquely as a linear combination of the $b_i$, i.e., $(\forall v\in V)(\exists ! \alpha_1,\dots,\alpha_n\in \rrr)(v=\sum_{i=1}^n \alpha_i b_i)$.

13.17 DO: Prove: every maximal linearly independent set is a basis. (A linearly independent set is maximal if no vector can be added to it while maintaining linear independence.)

13.18 DO: Prove: every linearly independent list of vectors can be extended to a basis. (Either assume there is a finite upper bound on the cardinalities of all linearly independent sets (the space is "finite dimensional") or use Zorn's lemma.)

13.19 DO: Prove: every set of generators of $V$ includes a basis. (Same hint as in the previous exercise.)

13.20 DO: Recall the "First Miracle of Linear Algebra": if $a_1,\dots,a_k$ are linearly independent and all of them belong to the span of $b_1,\dots,b_m$ then $k\le m$. Use this to prove that all bases have the same cardinality. This common value is called the dimension of the space. Prove: $\dim(\rrr^n)=n$. Show that the columns of the $n\times n$ identity matrix form a basis of $\rrr^n$. This is called the "standard basis."

13.21 DO: Find the dimension of $\fib$. Find a basis. Your basis should be very simple, "natural."

13.22 DO: Let $U_1, U_2$ be subspaces. Define $U_1+U_2$ as the set of vectors $\{u_1+u_2\mid u_i\in U_i\}$. (a) Prove that $U_1+U_2$ is a subspace. (b) Prove: $$ \dim(U_1\cap U_2) + \dim (U_1+U_2) = \dim (U_1)+\dim(U_2).$$

13.23 DO: Recall that the column space of a matrix is the span of its columns. Prove: the dimension of the column space is equal to the column-rank of the matrix.

13.24 DO: Prove: $\rank(A+B)\le \rank(A)+\rank(B)$.

13.25 DO: Let $A$ be an $n\times n$ matrix. Recall that $v\in \rrr^n$ is an eigenvector of $A$ to eigenvalue $\lambda$ if $v\neq 0$ and $Av=\lambda v$. Suppose $D$ is a diagonal matrix (all off-diagonal entries are zero). Determine the eigenvectors and the corresponding eigenvalues of $D$.

13.26 HW: Assume $v_1,\dots,v_k$ are eigenvectors corresponding to distinct eigenvalues of the $n\times n$ matrix $A$. Prove: $v_1,\dots,v_k$ are linearly independent.    (8 points)

13.27 DO: Let $S$ denote the left shift operator on $\rrr^{\nnn}$: if $a=(\alpha_0,\alpha_1,\alpha_2,\dots)$ then $S(a) = (\alpha_1,\alpha_2,\alpha_3,\dots)$. Prove that this is a linear transformation of the space $\rrr^{\nnn}$, i.e., $S(a+b)=S(a)+S(b)$ and $S(\beta a)=\beta S(a)$ for all sequences $a,b$ and all scalars $\beta\in\rrr$.

13.28 HW: Find all eigenvectors and the corresponding eigenvalues of the left shift operator on $\rrr^{\nnn}$. (So you are looking for nonzero sequences $a$ and real numbers $\lambda$ such that $S(a) = \lambda a$.)    (5 points)

13.29 DO: Prove: $\fib$ is invariant under the shift, i.e., if $b\in\fib$ then $S(b)\in \fib$.

13.30 HW: Find a basis of $\fib$ that consists of eigenvectors of the shift operator. Determine the corresponding eigenvalues.    (6 points)

13.31 CHALLENGE: Let $A$ be a $k\times n$ $(0,1)$-matrix (all entries are 0 or 1). Suppose no two columns are equal. Prove: $\rank(A)\ge \log_2 n$.

13.32 CHALLENGE: Prove: given a nonzero polynomial $f$ there exists a nonzero polynomial $g$ such that all exponents occurring in $fg$ are prime numbers. (E.g., the polynomial $x^{13}+8x^7-20 x^3+\sqrt{5}x^2$ has only prime exponents.)



Homework set #12. (Due November 20) (posted Nov 15, 5am)

12.1 DO: Solve the Midterm problems, including the bonus problems.

12.2 DO: Study directed graphs (digraphs) from LN Chap 6.2, especially the terminology defined on pages 56 and 58.

12.3 DO: LN Exercises 6.2.1 - 6.2.5

12.4 DO: LN Ex. 6.2.14 (min no edges in weakly/strongly conn digraph)

12.5 HW: LN Ex. 6.2.15 (if indegree = outdegree and digraph weakly connected then strongly connected)   (10 points)

12.6 DO: LN 6.2.16 and 6.2.17 (Hamiltonicity of tournaments)

12.7 DO: LN 6.2.18 (which tournament is a DAG)

12.8 DO: LN 6.2.19 - 6.2.20 (topological sort)

12.9 DO: LN 6.2.21 (transpose of adjacency matrix)

12.10 HW: LN Ex. 6.2.22 (counting walks) (Hint: induction on $k$)   (10 points)

12.11 DO: LN 6.2.24 (find strong components in picture)

12.12 HW: LN Ex. 6.2.27 (periods in strong component are equal) (the "period" of a vertex is defined in Def. 6.2.26)   (10 points)

12.13 HW: Prove: if every vertex in a digraph $G$ has outdegree $\le k$ then $\chi(G)\le 2k+1$. (Legal colorings of a digraph are defined ignoring orientation: if there is an edge between two vertices, they must have different colors.)   (10 points)

12.14 DO: Prove that the $2k+1$ bound in the previous exercise is best possible: for every $k$ there exists a digraph such that each outdegree is $\le k$ and the chromatic number is $2k+1$.



If you hand in solutions to CHALLENGE PROBLEMS, please put them on a separate sheet clearly marked "CHALLENGE."
