Quasi-Newton methods in optimization

Consider the unconstrained minimization problem $latex \min_x f(x)$, where $latex f$ is twice differentiable and $latex x \in \mathbb{R}^n$. Newton’s method is a second-order descent method for finding the minimum. Starting at some initial point, at the $latex t$th iteration we update the candidate solution with the formula

$latex x_{t+1} = x_t - \eta_t \left[ \nabla^2 f(x_t) \right]^{-1} \nabla f(x_t),$

where $latex \nabla f(x_t)$ and $latex \nabla^2 f(x_t)$ are the gradient and Hessian of $latex f$ respectively, and $latex \eta_t$ is a step size chosen appropriately […]
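To make the update rule concrete, here is a minimal NumPy sketch of this damped Newton iteration; the toy objective, function names, and fixed step size are illustrative choices, not from the original post.

```python
import numpy as np

def newton_minimize(grad, hess, x0, eta=1.0, tol=1e-8, max_iter=100):
    """Damped Newton iteration: x_{t+1} = x_t - eta * [hess(x_t)]^{-1} grad(x_t)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stop once the gradient is (numerically) zero
            break
        # Solve H d = g instead of forming the inverse Hessian explicitly.
        x = x - eta * np.linalg.solve(hess(x), g)
    return x

# Toy example: f(x) = (x1 - 1)^2 + 10 (x2 + 2)^2, minimized at (1, -2).
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
hess = lambda x: np.diag([2.0, 20.0])
print(newton_minimize(grad, hess, x0=[0.0, 0.0]))  # ~ [ 1. -2.]
```

Since the toy objective is quadratic, a single full Newton step ($latex \eta_t = 1$) lands exactly on the minimizer; quasi-Newton methods replace the exact Hessian in this update with a cheaper approximation.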


Rank and trace are equal for a real symmetric idempotent matrix

Mathematical Odds & Ends

Proposition. Let $latex \mathbf{X} \in \mathbb{R}^{n \times n}$ be a matrix that is symmetric ($latex \mathbf{X}^\top = \mathbf{X}$) and idempotent ($latex \mathbf{X}^2 = \mathbf{X}$). Then the rank of $latex \mathbf{X}$ is equal to the trace of $latex \mathbf{X}$. In fact, they are both equal to the sum of the eigenvalues of $latex \mathbf{X}$.
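As a quick numerical sanity check (not part of the original post), the proposition can be verified on an orthogonal projection matrix, a standard example of a real symmetric idempotent matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
# Orthogonal projection onto the column space of A: symmetric and idempotent.
X = A @ np.linalg.solve(A.T @ A, A.T)

assert np.allclose(X, X.T)          # symmetric
assert np.allclose(X @ X, X)        # idempotent
print(np.linalg.matrix_rank(X))     # 3
print(np.trace(X))                  # 3.0 (up to rounding)
print(np.linalg.eigvalsh(X).sum())  # also 3.0
```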

The proof is relatively straightforward. Since $latex \mathbf{X}$ is real and symmetric, it is orthogonally diagonalizable, i.e. there is an orthogonal matrix $latex \mathbf{U}$ ($latex \mathbf{U}^\top \mathbf{U} = \mathbf{I}$) and a diagonal matrix $latex \mathbf{D}$ such that $latex \mathbf{D} = \mathbf{U}\mathbf{X}\mathbf{U}^\top$ (see here for proof).
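For what it’s worth, NumPy’s np.linalg.eigh carries out exactly this orthogonal diagonalization for a real symmetric matrix; the short sketch below (using an illustrative projection matrix as the test input, not from the original post) matches the convention $latex \mathbf{D} = \mathbf{U}\mathbf{X}\mathbf{U}^\top$ above:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))
X = A @ np.linalg.solve(A.T @ A, A.T)  # symmetric idempotent test matrix

# eigh returns X = V diag(w) V^T with V orthogonal; take U = V^T so that
# D = U X U^T is diagonal, matching the convention in the post.
w, V = np.linalg.eigh(X)
U = V.T
D = U @ X @ U.T
assert np.allclose(D, np.diag(w), atol=1e-10)
assert np.allclose(U @ U.T, np.eye(5), atol=1e-10)  # U is orthogonal
```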

Since $latex \mathbf{X}$ is idempotent, substituting $latex \mathbf{X} = \mathbf{U}^\top \mathbf{D} \mathbf{U}$ into $latex \mathbf{X}^2 = \mathbf{X}$ gives

$latex \begin{aligned} \mathbf{X}^2 &= \mathbf{X}, \\ \mathbf{U}^\top \mathbf{D}^2 \mathbf{U} &= \mathbf{U}^\top \mathbf{D} \mathbf{U}, \\ \mathbf{D}^2 &= \mathbf{D}. \end{aligned}$

Since $latex \mathbf{D}$ is a diagonal matrix, this means the entries on its diagonal must all be zeros or ones. Thus, the number of ones on the diagonal (which is $latex \text{rank}(\mathbf{D})…
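Even from the truncated excerpt, the remaining step can be checked numerically: for a symmetric idempotent matrix the eigenvalues are all zeros and ones, and the number of ones equals both the rank and the trace. The projection-matrix construction below is an illustrative assumption, not from the original post.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((7, 4))
X = A @ np.linalg.solve(A.T @ A, A.T)  # symmetric idempotent, rank 4

w = np.linalg.eigvalsh(X)  # ascending eigenvalues = diagonal of D
assert np.allclose(w, [0, 0, 0, 1, 1, 1, 1], atol=1e-10)  # D^2 = D forces 0/1
# The number of ones equals both the rank and the trace.
print(int(round(w.sum())), np.linalg.matrix_rank(X), int(round(np.trace(X))))  # 4 4 4
```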
