Explanation of MLE for Covariance
The Goal
We want to find the covariance matrix $\Sigma$ that describes the "spread" and "shape" of the data cloud and maximizes the likelihood of observing the data samples $x_1, \dots, x_N$.
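Written out (a standard form, stated here for reference), the objective is the Gaussian log-likelihood as a function of $\Sigma$, with the mean $\mu$ treated as known:

$$
\log L(\Sigma) = -\frac{Nd}{2}\log 2\pi \;-\; \frac{N}{2}\log|\Sigma| \;-\; \frac{1}{2}\sum_{n=1}^{N}(x_n-\mu)^\top \Sigma^{-1} (x_n-\mu)
$$

where $d$ is the dimension of each sample. The steps below maximize this expression over $\Sigma$.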
The Process
- Trace Trick: The quadratic form in the Gaussian PDF, $(x_n - \mu)^\top \Sigma^{-1} (x_n - \mu)$, is a scalar, and the trace of a scalar is the scalar itself. A convenient property of the trace is cyclic permutation: $\operatorname{tr}(ABC) = \operatorname{tr}(BCA) = \operatorname{tr}(CAB)$. Using this, we can move the vectors around to form the outer product $(x_n - \mu)(x_n - \mu)^\top$, which has the structure of a covariance matrix. This allows us to group the entire data summation into a single matrix $S = \sum_{n=1}^{N} (x_n - \mu)(x_n - \mu)^\top$, the scatter matrix. (The first sketch after this list verifies this identity numerically.)
- Derivative of Determinant: The log-likelihood contains the term $-\frac{N}{2} \log |\Sigma|$. The derivative of $\log |\Sigma|$ is related to the inverse of the matrix: $\frac{\partial}{\partial \Sigma} \log |\Sigma| = \Sigma^{-1}$ (using the symmetry of $\Sigma$). Intuitively, maximizing the likelihood balances this term against the exponential decay term, preventing the determinant (the volume of the probability density) from collapsing to zero or growing without bound. (This identity is spot-checked by finite differences in the second sketch after this list.)
- Derivative of Inverse Trace: The exponential term involves $\operatorname{tr}(\Sigma^{-1} S)$. Differentiating through a matrix inverse is slightly more involved, but the identity $\frac{\partial}{\partial \Sigma} \operatorname{tr}(\Sigma^{-1} S) = -\Sigma^{-1} S \Sigma^{-1}$ simplifies it. It essentially comes from the rule $\frac{\partial \Sigma^{-1}}{\partial \theta} = -\Sigma^{-1} \frac{\partial \Sigma}{\partial \theta} \Sigma^{-1}$ (also checked numerically in the second sketch below).
- Balancing Act: Setting the derivative to zero, $-\frac{N}{2} \Sigma^{-1} + \frac{1}{2} \Sigma^{-1} S \Sigma^{-1} = 0$, represents a balance. The first term comes from the normalization constant (trying to make $|\Sigma|$ small to increase density), and the second term comes from the exponential "error" (trying to make $\Sigma$ large to accommodate the data spread).
- Result: Solving gives $\Sigma_{\text{MLE}} = \frac{S}{N} = \frac{1}{N} \sum_{n=1}^{N} (x_n - \mu)(x_n - \mu)^\top$. The solution basically says the most likely covariance shape is exactly the average empirical covariance of the data points. (Note: in standard statistics we often divide by $N - 1$ for an unbiased estimator, but the pure MLE divides by $N$; the final sketch below compares both conventions with np.cov.)
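A minimal numerical check of the trace trick, assuming NumPy; the data, $\mu$, and $\Sigma$ below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 500, 3
X = rng.normal(size=(N, d))              # synthetic samples, one per row
mu = X.mean(axis=0)

A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)          # an arbitrary positive-definite covariance
Sigma_inv = np.linalg.inv(Sigma)

diffs = X - mu
# Left side: sum of the scalar quadratic forms from the Gaussian exponent.
quad_sum = np.einsum('ni,ij,nj->', diffs, Sigma_inv, diffs)

# Right side: tr(Sigma^{-1} S), with S the scatter matrix (sum of outer products).
S = diffs.T @ diffs
trace_form = np.trace(Sigma_inv @ S)

print(np.allclose(quad_sum, trace_form))  # True: the two forms agree
```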
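The two matrix-calculus identities can likewise be spot-checked by central finite differences. This sketch perturbs each entry of $\Sigma$ independently (treating it as unconstrained, which is the setting in which the quoted identities hold); `num_grad` and the tolerances are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
d, h = 3, 1e-6

A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)          # positive-definite test point
B = rng.normal(size=(d, d))
S = B @ B.T                              # a symmetric scatter-like matrix

def num_grad(f, M):
    """Central finite-difference gradient, one entry at a time."""
    G = np.zeros_like(M)
    for i in range(d):
        for j in range(d):
            E = np.zeros_like(M)
            E[i, j] = h
            G[i, j] = (f(M + E) - f(M - E)) / (2 * h)
    return G

logdet = lambda M: np.linalg.slogdet(M)[1]
tr_inv_S = lambda M: np.trace(np.linalg.inv(M) @ S)

Sigma_inv = np.linalg.inv(Sigma)
# d log|Sigma| / dSigma = Sigma^{-1}  (transposes drop out by symmetry)
print(np.allclose(num_grad(logdet, Sigma), Sigma_inv, atol=1e-4))
# d tr(Sigma^{-1} S) / dSigma = -Sigma^{-1} S Sigma^{-1}
print(np.allclose(num_grad(tr_inv_S, Sigma), -Sigma_inv @ S @ Sigma_inv, atol=1e-4))
```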
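Finally, the closed-form result and the $N$ vs. $N-1$ note can be confirmed against NumPy's np.cov, whose bias flag switches between the two conventions (the data here is again synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 1000, 3
X = rng.normal(size=(N, d)) @ rng.normal(size=(d, d))  # correlated synthetic samples

mu = X.mean(axis=0)
diffs = X - mu
S = diffs.T @ diffs                      # scatter matrix around the sample mean

Sigma_mle = S / N                        # pure MLE: divide by N
Sigma_unbiased = S / (N - 1)             # classical unbiased estimator

print(np.allclose(Sigma_mle, np.cov(X, rowvar=False, bias=True)))   # True
print(np.allclose(Sigma_unbiased, np.cov(X, rowvar=False)))         # True
```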