c. Consider the following model:
yi = β0 + β1xi + εi (i = 1, …, n),
where xi = i/n and {εi} are independent N(0, 1) random variables. For least squares estimation, the variance of β̂1 tends to 0 like constant/n as n → ∞.
For LMS estimation, var(β̂1) ≈ γ/n^α for some γ > 0 and α > 0. In lecture, we claimed that α = 2/3. The theoretical proof of this is very technical; however, it is possible to estimate α via simulation.
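(As an aside, not part of the original problem: the constant/n rate for least squares follows from the usual variance formula, worked out here for the stated design with σ² = 1.)

\mathrm{var}(\hat\beta_1) = \frac{\sigma^2}{\sum_{i=1}^n (x_i - \bar x)^2},
\qquad
\sum_{i=1}^n \Bigl(\frac{i}{n} - \frac{n+1}{2n}\Bigr)^2 = \frac{n^2 - 1}{12n} \approx \frac{n}{12},
\qquad
\text{so } \mathrm{var}(\hat\beta_1) \approx \frac{12}{n}.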
The idea here is very simple: we can compute β̂1 based on n observations and replicate this process M times, which results in M values of β̂1. We can then estimate var(β̂1) from these M values, in which case
v̂ar(β̂1) ≈ γ/n^α,
or ln(v̂ar(β̂1)) ≈ ln(γ) − α ln(n).
Repeating this process for a range of sample sizes allows us to estimate α: the slope of a regression of ln(v̂ar(β̂1)) on ln(n) estimates −α. Estimate α based on sample sizes n = 50, 100, 1000, 5000.
Solution:
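The assignment names no language or package, so the following is only a minimal Python sketch of the simulation; the helpers lms_slope and lms_slope_variance, the true coefficients, and the choices M = 200 and n_subsets = 500 are all assumptions, not part of the original. Since NumPy has no built-in LMS regression, the sketch approximates LMS by the standard elemental-subset idea: fit an exact line through random pairs of points and keep the fit whose squared residuals have the smallest median.

import numpy as np

rng = np.random.default_rng(0)

def lms_slope(x, y, n_subsets=500):
    # Approximate least median of squares: fit an exact line through
    # random pairs of points and keep the line whose squared residuals
    # over the whole sample have the smallest median.
    n = len(x)
    best_med, best_b1 = np.inf, 0.0
    for _ in range(n_subsets):
        i, j = rng.choice(n, size=2, replace=False)
        b1 = (y[j] - y[i]) / (x[j] - x[i])
        b0 = y[i] - b1 * x[i]
        med = np.median((y - b0 - b1 * x) ** 2)
        if med < best_med:
            best_med, best_b1 = med, b1
    return best_b1

def lms_slope_variance(n, M=200):
    # Simulate M data sets of size n from the model and return the
    # sample variance of the M LMS slope estimates.
    x = np.arange(1, n + 1) / n                    # x_i = i/n
    est = np.empty(M)
    for m in range(M):
        y = 1.0 + 2.0 * x + rng.standard_normal(n)  # arbitrary beta0, beta1
        est[m] = lms_slope(x, y)
    return est.var(ddof=1)

ns = np.array([50, 100, 1000, 5000])
variances = np.array([lms_slope_variance(n) for n in ns])
# ln(var) ~ ln(gamma) - alpha * ln(n), so -slope of the log-log fit estimates alpha
slope, intercept = np.polyfit(np.log(ns), np.log(variances), 1)
print("estimated alpha:", -slope)

With M and n_subsets this small the estimate of α will be noisy (and the n = 5000 runs slow), so larger values may be needed in practice.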
d. Repeat part (c) using Cauchy errors. (For Cauchy errors, the variance of the least squares estimator does not tend to 0 as n → ∞.) Do you get a similar value of α?
Answer:
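Under the same assumptions as the sketch in part (c), the only change needed is the error distribution: rng.standard_cauchy draws standard Cauchy variates in NumPy. For example:

def lms_slope_variance_cauchy(n, M=200):
    # Same simulation as part (c), but with heavy-tailed Cauchy errors.
    x = np.arange(1, n + 1) / n
    est = np.empty(M)
    for m in range(M):
        y = 1.0 + 2.0 * x + rng.standard_cauchy(n)
        est[m] = lms_slope(x, y)
    return est.var(ddof=1)

variances_c = np.array([lms_slope_variance_cauchy(n) for n in ns])
slope_c, _ = np.polyfit(np.log(ns), np.log(variances_c), 1)
print("estimated alpha (Cauchy):", -slope_c)

Because LMS is robust, its simulated variance should still shrink with n even though the least squares variance does not; whether the fitted α is close to 2/3 is exactly what the exercise asks you to check.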