Problem 2.1 (a)

Pre-required Knowledge

  • Poisson Distribution: A discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space.
    • PMF: P(X=k) = \frac{\lambda^k e^{-\lambda}}{k!}
  • Maximum Likelihood Estimation (MLE): A method of estimating the parameters of a probability distribution by maximizing a likelihood function, so that under the assumed statistical model, the observed data is most probable.
  • Log-Likelihood: The natural logarithm of the likelihood function. Maximizing the log-likelihood is equivalent to maximizing the likelihood but usually easier mathematically.
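As a concrete illustration of the PMF above, here is a minimal Python sketch (the function name `poisson_pmf` is ours, introduced only for this example):

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) = lam^k * e^(-lam) / k! for a Poisson(lam) distribution."""
    return (lam ** k) * math.exp(-lam) / math.factorial(k)

# The probabilities over k = 0, 1, 2, ... sum to 1 (truncated at k = 50 here,
# which is far enough out for lam = 3 that the remaining tail is negligible).
total = sum(poisson_pmf(k, 3.0) for k in range(50))
```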

Step-by-Step Answer

  1. Write down the Likelihood Function: Given N i.i.d. samples \{k_1, k_2, \dots, k_N\}, the likelihood function L(\lambda) is the product of the individual probability mass functions:

    L(\lambda) = \prod_{i=1}^{N} p(k_i \mid \lambda) = \prod_{i=1}^{N} \frac{1}{k_i!} e^{-\lambda} \lambda^{k_i}
  2. Write down the Log-Likelihood Function: Taking the natural logarithm of the likelihood function converts the product into a sum, which is easier to differentiate:

    \begin{aligned}
    \ell(\lambda) = \ln L(\lambda) &= \sum_{i=1}^{N} \ln \left( \frac{1}{k_i!} e^{-\lambda} \lambda^{k_i} \right) \\
    &= \sum_{i=1}^{N} \left( -\ln(k_i!) - \lambda + k_i \ln(\lambda) \right) \\
    &= -\sum_{i=1}^{N} \ln(k_i!) - \sum_{i=1}^{N} \lambda + \sum_{i=1}^{N} k_i \ln(\lambda) \\
    &= -\sum_{i=1}^{N} \ln(k_i!) - N\lambda + \ln(\lambda) \sum_{i=1}^{N} k_i
    \end{aligned}
  3. Differentiate with respect to \lambda: To find the maximum, we compute the derivative of \ell(\lambda) with respect to \lambda:

    \frac{\partial \ell(\lambda)}{\partial \lambda} = -N + \frac{1}{\lambda} \sum_{i=1}^{N} k_i
  4. Set the derivative to zero and solve for \lambda:

    -N + \frac{1}{\lambda} \sum_{i=1}^{N} k_i = 0
    \frac{1}{\lambda} \sum_{i=1}^{N} k_i = N
    \lambda = \frac{1}{N} \sum_{i=1}^{N} k_i
  5. Conclusion: Since the second derivative \frac{\partial^2 \ell}{\partial \lambda^2} = -\frac{1}{\lambda^2} \sum_{i=1}^{N} k_i is negative whenever at least one k_i > 0, this critical point is indeed a maximum. The maximum likelihood estimate \hat{\lambda} is therefore the sample mean:

    \hat{\lambda} = \frac{1}{N} \sum_{i=1}^{N} k_i
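The derivation above can be sanity-checked numerically: for a small sample, the log-likelihood evaluated at the sample mean should be at least as large as at nearby values of \lambda. A minimal Python sketch (the sample `ks` below is made up purely for illustration):

```python
import math

def log_likelihood(lam: float, ks: list[int]) -> float:
    """Poisson log-likelihood: sum_i [ -ln(k_i!) - lam + k_i * ln(lam) ].
    math.lgamma(k + 1) computes ln(k!) without overflow for large counts."""
    return sum(-math.lgamma(k + 1) - lam + k * math.log(lam) for k in ks)

ks = [2, 3, 1, 4, 0, 2, 5, 3]          # hypothetical observed counts
lam_hat = sum(ks) / len(ks)            # the MLE: the sample mean (= 2.5)

# The log-likelihood at lam_hat is no smaller than at nearby lambda values.
is_max = all(log_likelihood(lam_hat, ks) >= log_likelihood(lam_hat + d, ks)
             for d in (-0.5, -0.1, 0.1, 0.5))
```

Using `math.lgamma` instead of `math.factorial` keeps the computation stable even when individual counts are large.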