Answer
Prerequisites
- Calculus: Partial differentiation of logarithmic sums.
- Optimization: The Method of Lagrange Multipliers for constrained optimization problems.
- Probability Theory: The constraint that categorical probabilities must sum to 1.
Step-by-Step Derivation
Step 1: Define the objective and constraint functions
We want to maximize the objective function

$$\ell(\boldsymbol{\mu}) = \sum_{k=1}^{K} m_k \ln \mu_k,$$

where $m_k \geq 0$ is the observed count (or weight) for category $k$ and $\mu_k$ is its probability, subject to the equality constraint:

$$\sum_{k=1}^{K} \mu_k = 1.$$

(Note: While there is also an inequality constraint $\mu_k \geq 0$, we can proceed by ignoring it temporarily to find a stationary point, and then verify that our solution satisfies it since $m_k \geq 0$.)
Step 2: Form the Lagrangian
Using the formulation $\mathcal{L} = \ell(\boldsymbol{\mu}) - \lambda \, g(\boldsymbol{\mu})$ (or using addition, $\mathcal{L} = \ell + \lambda g$; the resulting scalar multiplier just changes sign), we construct the Lagrangian function. Let's use subtraction to match common conventions that yield a positive multiplier, though either is computationally identical:

$$\mathcal{L}(\boldsymbol{\mu}, \lambda) = \sum_{k=1}^{K} m_k \ln \mu_k - \lambda \left( \sum_{k=1}^{K} \mu_k - 1 \right)$$
Step 3: Find the stationary point with respect to $\mu_k$
We take the partial derivative of the Lagrangian with respect to a specific component $\mu_k$ and set it to zero:

$$\frac{\partial \mathcal{L}}{\partial \mu_k} = \frac{m_k}{\mu_k} - \lambda = 0$$
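As a quick sanity check of this derivative, a central finite difference of the Lagrangian can be compared against the analytic form $m_k/\mu_k - \lambda$. This is only an illustrative sketch: the values chosen for $m$, $\mu$, and $\lambda$ below are arbitrary.

```python
import math

def lagrangian(mu, m, lam):
    # L(mu, lambda) = sum_k m_k * ln(mu_k) - lambda * (sum_k mu_k - 1)
    return sum(mk * math.log(muk) for mk, muk in zip(m, mu)) - lam * (sum(mu) - 1)

m = [3.0, 5.0, 2.0]      # hypothetical counts m_k
lam = 4.0                # arbitrary multiplier value
mu = [0.2, 0.5, 0.3]     # arbitrary feasible point
k, eps = 1, 1e-6

mu_hi = mu[:]; mu_hi[k] += eps
mu_lo = mu[:]; mu_lo[k] -= eps
fd = (lagrangian(mu_hi, m, lam) - lagrangian(mu_lo, m, lam)) / (2 * eps)

analytic = m[k] / mu[k] - lam    # dL/dmu_k = m_k/mu_k - lambda
assert abs(fd - analytic) < 1e-4
```

The finite difference agrees with $m_k/\mu_k - \lambda$ to within numerical precision, regardless of the point chosen.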
Step 4: Express $\mu_k$ in terms of $\lambda$
Rearranging the equation to solve for $\mu_k$:

$$\mu_k = \frac{m_k}{\lambda}$$
Step 5: Enforce the equality constraint to solve for $\lambda$
We substitute our expression for $\mu_k$ back into the original constraint $\sum_{k=1}^{K} \mu_k = 1$:

$$\sum_{k=1}^{K} \frac{m_k}{\lambda} = 1$$

Since $1/\lambda$ is a constant multiplier that does not depend on $k$, we can pull it out of the summation:

$$\frac{1}{\lambda} \sum_{k=1}^{K} m_k = 1 \quad \Longrightarrow \quad \lambda = \sum_{k=1}^{K} m_k = N$$
Step 6: Substitute back to find the final solution
Plugging $\lambda = \sum_{j=1}^{K} m_j$ (using index $j$ to avoid confusion with the specific $k$) back into our expression for $\mu_k$ from Step 4:

$$\mu_k = \frac{m_k}{\sum_{j=1}^{K} m_j} = \frac{m_k}{N}$$
Verification: Given that $m_k \geq 0$ for all $k$, and that $m_k \leq \sum_j m_j = N$, it is clear that $0 \leq \mu_k \leq 1$, thus satisfying the bounds constraint. The solution is complete.
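The closed-form solution can also be checked numerically. The sketch below (with hypothetical counts) confirms that $\mu_k = m_k/N$ achieves a log-likelihood at least as large as that of random points drawn from the probability simplex:

```python
import math, random

def log_likelihood(mu, m):
    # sum_k m_k * ln(mu_k)
    return sum(mk * math.log(muk) for mk, muk in zip(m, mu))

m = [7.0, 2.0, 11.0, 5.0]        # hypothetical category counts m_k
N = sum(m)
mu_star = [mk / N for mk in m]   # closed-form solution mu_k = m_k / N
best = log_likelihood(mu_star, m)

random.seed(0)
for _ in range(10_000):
    # random point on the simplex: normalized exponential draws
    # (equivalent to a Dirichlet(1, ..., 1) sample, uniform on the simplex)
    g = [random.expovariate(1.0) for _ in m]
    s = sum(g)
    mu = [gi / s for gi in g]
    assert log_likelihood(mu, m) <= best + 1e-9
```

No sampled point beats the closed-form maximizer, consistent with the derivation above.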