Answer

Prerequisites

  • Expectation and Variance Properties
  • Unbiased Estimator
  • Properties of independent and identically distributed (i.i.d.) random variables

Step-by-Step Derivation

  1. An estimator $\hat{\lambda}$ is unbiased if its expected value equals the true underlying parameter, i.e., $\mathbb{E}[\hat{\lambda}] = \lambda$.

  2. Recall the ML estimator from part (a): $\hat{\lambda} = \frac{1}{N}\sum_{i=1}^N k_i$

  3. Calculate the expectation of $\hat{\lambda}$ using the linearity of expectation: $\mathbb{E}[\hat{\lambda}] = \mathbb{E}\left[\frac{1}{N}\sum_{i=1}^N k_i\right] = \frac{1}{N}\sum_{i=1}^N \mathbb{E}[k_i]$

  4. Given that each $k_i$ is drawn from a Poisson distribution with parameter $\lambda$, the expected value of a single sample is $\mathbb{E}[k_i] = \lambda$, so $\mathbb{E}[\hat{\lambda}] = \frac{1}{N}\sum_{i=1}^N \lambda = \frac{1}{N}(N\lambda) = \lambda$. Since $\mathbb{E}[\hat{\lambda}] = \lambda$, the ML estimator is unbiased.

  5. Next, calculate the variance of the estimator: $\mathrm{var}(\hat{\lambda}) = \mathrm{var}\left(\frac{1}{N}\sum_{i=1}^N k_i\right)$

  6. By the properties of variance, multiplying a random variable by a constant $c$ scales its variance by $c^2$. Also, because the samples $k_i$ are independent, the variance of their sum is the sum of their individual variances: $\mathrm{var}(\hat{\lambda}) = \frac{1}{N^2}\sum_{i=1}^N \mathrm{var}(k_i)$

  7. For a Poisson distribution, $\mathrm{var}(k_i) = \lambda$. Substituting this into the equation yields: $\mathrm{var}(\hat{\lambda}) = \frac{1}{N^2}\sum_{i=1}^N \lambda = \frac{1}{N^2}(N\lambda) = \frac{\lambda}{N}$

Thus, the estimator variance is indeed $\frac{\lambda}{N}$, which shrinks as the sample size $N$ grows.
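The two results above can be checked empirically with a short Monte Carlo sketch (the parameter values and variable names here are illustrative, not from the original problem): simulate many datasets of $N$ i.i.d. Poisson samples, compute the ML estimate for each, and compare the mean and variance of those estimates against $\lambda$ and $\lambda/N$.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, N, trials = 4.0, 50, 200_000  # illustrative values

# Draw `trials` independent datasets, each with N i.i.d. Poisson(lam) samples,
# and compute the ML estimate (the sample mean) for each dataset.
samples = rng.poisson(lam, size=(trials, N))
estimates = samples.mean(axis=1)

print(estimates.mean())  # should be close to lam (unbiasedness)
print(estimates.var())   # should be close to lam / N
```

With these settings the empirical mean of the estimates lands near $4.0$ and their empirical variance near $4.0/50 = 0.08$, matching the derivation.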