
From Similarity to Eigenvalues


I. The Core Philosophy: Operators vs. Matrices

To understand linear algebra deeply, we must distinguish between the Linear Operator (the “Soul”) and the Matrix (the “Body”).

  • The Operator ($T$): An abstract, physical transformation of a vector space $V$, with $T \in \mathcal{L}(V)$. It exists independently of any coordinate system (e.g., rotation, stretching, shearing).
  • The Matrix ($A$): A numerical snapshot of the operator $T$ observed from a specific basis (coordinate system).

II. Similarity: The Change of Perspective

Two matrices $A$ and $B$ are called similar ($A \sim B$) if they represent the same linear operator $T$ but viewed through different bases. Mathematically, this is expressed as:

$$A = P B P^{-1}$$

The “Translation” Mechanism

The conjugation $P B P^{-1}$ can be understood as a three-step process, often visualized via a commutative diagram:

  1. $P^{-1}$ (Translate): Convert a vector from our standard basis into the “new” basis (where $B$ lives).
  2. $B$ (Process): Apply the transformation in that new coordinate system.
  3. $P$ (Translate Back): Convert the result back to our standard basis.
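The three steps above can be traced explicitly. This is a minimal NumPy sketch with a hypothetical example: $B$ is a pure stretch as seen in a rotated basis, and the columns of $P$ are that rotated basis expressed in standard coordinates.

```python
import numpy as np

# Hypothetical example: B is a stretch in a rotated basis.
theta = np.pi / 6
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # change-of-basis matrix
B = np.diag([2.0, 0.5])                           # the operator seen in the new basis
A = P @ B @ np.linalg.inv(P)                      # the same operator in the standard basis

x = np.array([1.0, 1.0])

# Three-step translation: standard basis -> new basis, act, -> back.
step1 = np.linalg.inv(P) @ x   # P^{-1}: translate into the new basis
step2 = B @ step1              # B: process in the new coordinates
step3 = P @ step2              # P: translate the result back

assert np.allclose(A @ x, step3)  # conjugated action == the three explicit steps
```

The final assertion is the commutative diagram in code: applying $A$ directly agrees with translating, processing, and translating back.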

Key Insight: Since $A$ and $B$ are just different descriptions of the same underlying operator, they share invariant properties:

$$\det(A) = \det(B), \quad \operatorname{tr}(A) = \operatorname{tr}(B), \quad \operatorname{rank}(A) = \operatorname{rank}(B)$$
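These invariants are easy to check numerically. A small sketch, assuming a randomly generated $B$ and change-of-basis matrix $P$ (hypothetical data, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))       # almost surely invertible
A = P @ B @ np.linalg.inv(P)          # A ~ B by construction

# The shared "soul" shows up as identical invariants.
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
assert np.isclose(np.trace(A), np.trace(B))
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)
```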

III. Diagonalization: The Search for Simplicity

If similarity is about changing perspectives, diagonalization is the search for the perfect perspective. We seek a basis $\mathcal{B}$ in which the operator $T$ behaves in the simplest possible way: pure scaling along the axes.

  • In this basis, the matrix representation $D$ is diagonal.
  • The action of the operator becomes decoupled: dimensions do not interfere with one another.
  • This is only possible if we can find a “Change of Basis” matrix $P$ such that: $$P^{-1}AP = D = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$$
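As a sketch, `np.linalg.eig` returns exactly the ingredients of this factorization: the scaling factors $\lambda_i$ and a matrix $P$ whose columns form the new basis (the matrix below is a made-up example with distinct real eigenvalues):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# eig returns eigenvalues and a matrix whose columns are eigenvectors.
eigvals, P = np.linalg.eig(A)      # P is the change-of-basis matrix
D = np.linalg.inv(P) @ A @ P       # conjugate into the eigenvector basis

assert np.allclose(D, np.diag(eigvals))  # D = diag(lambda_1, ..., lambda_n)
```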

IV. Eigenvectors: The Perfect Basis

To achieve a diagonal matrix, our new basis vectors $\{v_1, v_2, \dots, v_n\}$ must satisfy a strict condition: the operator must not rotate or shear them; it must only stretch them.

If $v_i$ is a basis vector and the matrix is diagonal, then the operator’s action on $v_i$ must be: $$T(v_i) = \lambda_i v_i$$

This is exactly the definition of an Eigenvector.
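The stretch-only condition can be verified directly for each basis vector. A minimal sketch, using a hypothetical symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric example: eigenvectors are orthogonal

eigvals, eigvecs = np.linalg.eig(A)
for lam, v in zip(eigvals, eigvecs.T):  # columns of eigvecs are the eigenvectors
    assert np.allclose(A @ v, lam * v)  # T(v_i) = lambda_i * v_i: pure stretch
```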

The Geometric Intuition

  • Eigenvectors ($v$) are the “preferred directions” of the operator: the axes of the universe that remain stable (invariant 1D subspaces) during the transformation.
  • Eigenvalues ($\lambda$) are simply the scaling factors along those stable axes.

V. The Null Space Connection

To find these stable vectors, we solve the characteristic equation. But geometrically, why does the eigenvector $x$ reside in the null space of $(\lambda I - A)$?

We rewrite $Ax = \lambda x$ as: $$(\lambda I - A)x = 0$$

The “Cancellation of Forces”

Imagine two competing transformations acting on a vector $x$:

  1. $Ax$: The complex action of the matrix $A$ trying to transform $x$.
  2. $\lambda I x$: A pure, uniform scaling action.

For $x$ to be an eigenvector, the action of $A$ must be identical to the action of the scalar $\lambda$. Therefore, the difference between them must be zero.

  • The matrix $(\lambda I - A)$ measures the “deviation” between the operator $A$ and pure scaling.
  • Vectors in the Null Space ($\mathcal{N}$) of this difference matrix have “zero deviation.” They are the directions where $A$ behaves exactly like scalar multiplication.
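One way to see this numerically is to build the deviation matrix $\lambda I - A$ and extract a null-space direction, e.g. from its SVD. The matrix and eigenvalue below are illustrative choices, not from the text:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
lam = 2.0                               # an eigenvalue of this triangular A

M = lam * np.eye(2) - A                 # the "deviation" matrix (lambda*I - A)
assert np.isclose(np.linalg.det(M), 0)  # singular, so a nonzero null vector exists

# The right singular vector for the zero singular value spans N(lambda*I - A).
_, s, Vt = np.linalg.svd(M)
x = Vt[-1]                              # null-space direction
assert np.allclose(M @ x, 0)            # zero deviation
assert np.allclose(A @ x, lam * x)      # A acts on x exactly like the scalar lam
```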

VI. The Condition: Algebraic vs. Geometric Multiplicity

For an operator to be diagonalizable, we need enough eigenvectors to form a complete basis (a full coordinate system).

  • Algebraic Multiplicity ($n_a$): The number of times $\lambda$ appears as a root of the characteristic equation $\det(\lambda I - A) = 0$. This is the “promised” number of dimensions.
  • Geometric Multiplicity ($n_g$): The actual dimension of the null space, $\dim \mathcal{N}(\lambda I - A)$. This is the “actual” number of independent directions found.

The Theorem

A matrix is diagonalizable if and only if Geometric Multiplicity = Algebraic Multiplicity for every eigenvalue ($n_g = n_a$). This state is often called “semi-simple”.

$$n_g < n_a \implies \text{Defective Matrix (Non-diagonalizable)}$$

If $n_g < n_a$ for some eigenvalue, the eigenspace “collapses,” and we cannot form a complete basis of eigenvectors to diagonalize the matrix.
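The classic defective example is a Jordan block. A sketch of the $n_g < n_a$ check, using the rank–nullity identity $n_g = n - \operatorname{rank}(\lambda I - A)$:

```python
import numpy as np

# A 2x2 Jordan block: eigenvalue 1 with algebraic multiplicity n_a = 2.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

lam = 1.0
M = lam * np.eye(2) - A
# Geometric multiplicity = dim N(lambda*I - A) = n - rank(lambda*I - A).
n_g = 2 - np.linalg.matrix_rank(M)
assert n_g == 1   # only one independent eigenvector: n_g < n_a = 2, so A is defective
```

With a single eigen-direction in a 2-dimensional space, no basis of eigenvectors exists, which is exactly the collapse described above.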