The Model

Suppose \(X_1, \dots, X_n\) are a random sample from \(\mathrm{Unif}(0,\theta)\).

MLE

The MLE for \(\theta\) is \[\hat \theta_{MLE} = \max \{X_i\}\]
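A one-line justification: the likelihood is \[L(\theta) = \prod_{i=1}^n \frac{1}{\theta}\,\mathbf{1}\{X_i \le \theta\} = \theta^{-n}\,\mathbf{1}\{\max\{X_i\} \le \theta\},\] which is decreasing in \(\theta\) on \([\max\{X_i\}, \infty)\), so it is maximized at the smallest feasible value, \(\max\{X_i\}\).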

MoM

The Method of Moments estimator for \(\theta\) is \[\hat \theta_{MoM} = 2 \bar X\]
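This comes from matching the first moment: for \(X \sim \mathrm{Unif}(0,\theta)\) we have \(E[X] = \theta/2\), so setting \(\bar X = \hat\theta/2\) and solving gives \[\hat \theta_{MoM} = 2 \bar X.\]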

Claire Estimator

As suggested (by Claire) in class, another possible estimator for \(\theta\) is \[\hat \theta_{Claire} = \max \{X_i\} + \min \{X_i\}\] obtained by noting that the minimum of the sample tends to fall as far above 0 as the maximum falls below \(\theta\).
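The intuition can be made precise with the standard order-statistic expectations for a uniform sample, \(E[\min\{X_i\}] = \frac{\theta}{n+1}\) and \(E[\max\{X_i\}] = \frac{n\theta}{n+1}\): the two biases cancel exactly, \[E[\hat \theta_{Claire}] = \frac{\theta}{n+1} + \frac{n\theta}{n+1} = \theta,\] so the "Claire" estimator is unbiased.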

The Simulation

Let’s simulate 1000 samples of size 20 with \(\theta = 10\), then compute the MLE, the MoM estimator, and the “Claire” estimator for each sample.

Using a for loop.

k <- 20 # sample size
n <- 1000 # number of experiments
theta <- 10

mle <- rep(0,n) #initialize mle
mom <- rep(0,n) # initialize mom
claire_estimator <- rep(0,n)

# for loop to generate samples, calculate estimators for each sample

for (i in 1:n){
  my_sample <- runif(k, 0, theta) # generate a sample of size k from Unif(0, theta)
  mle[i] <- max(my_sample) # compute MLE
  mom[i] <- 2*mean(my_sample) # compute Method of Moments
  claire_estimator[i] <- max(my_sample) + min(my_sample) # compute "Claire" estimator
}
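The same simulation can also be written without a loop by drawing all \(n \times k\) values at once. This is one equivalent vectorized sketch (the `_v` variable names are just to distinguish it from the loop version above):

```r
k <- 20      # sample size
n <- 1000    # number of experiments
theta <- 10

# each row of the matrix is one sample of size k
samples <- matrix(runif(n * k, 0, theta), nrow = n)

mle_v    <- apply(samples, 1, max)   # row-wise maxima
mom_v    <- 2 * rowMeans(samples)    # row-wise method of moments
claire_v <- mle_v + apply(samples, 1, min)
```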

Compute Summary Statistics for Sampling Distributions

mean(mle)
## [1] 9.5383
mean(mom)
## [1] 10.04845
mean(claire_estimator)
## [1] 10.0268
sd(mle)
## [1] 0.4293941
sd(mom)
## [1] 1.309313
sd(claire_estimator)
## [1] 0.6527711
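One way to fold the bias and spread into a single comparison is mean squared error. A small sketch using the vectors computed above (the helper `mse` is not part of the original code):

```r
# MSE = bias^2 + variance; lower is better
mse <- function(est, true) mean((est - true)^2)

mse(mle, theta)
mse(mom, theta)
mse(claire_estimator, theta)
```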

Fast Histograms

hist(mle)

hist(mom)

hist(claire_estimator)

Note that the MLE tends to underestimate the true parameter (since the max of a sample will always be less than \(\theta\)), while both the MoM and the “Claire” estimator have an average value equal to \(\theta\). However, the MLE has a smaller standard deviation than the “Claire” estimator, which in turn has a smaller standard deviation than the MoM estimator.
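For reference, standard calculations (using the uniform order-statistic variances, with \(n\) the sample size) give \[\mathrm{sd}(\hat \theta_{MLE}) = \theta\sqrt{\tfrac{n}{(n+1)^2(n+2)}}, \qquad \mathrm{sd}(\hat \theta_{MoM}) = \tfrac{\theta}{\sqrt{3n}}, \qquad \mathrm{sd}(\hat \theta_{Claire}) = \theta\sqrt{\tfrac{2}{(n+1)(n+2)}},\] which for \(\theta = 10\) and \(n = 20\) are approximately 0.45, 1.29, and 0.66, consistent with the simulated values above.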