UA MATH566 Statistical Theory QE Practice: The Location-Shifted Exponential Distribution
- January 2016, Problem 6
- May 2018, Problem 6
January 2016, Problem 6
Part a
The joint likelihood is
$$L(\theta) = \exp\left(-\sum_{i=1}^n (X_i - \theta)\right) I(X_{(1)} \ge \theta) = \exp\left(n\theta - \sum_{i=1}^n X_i\right) I(X_{(1)} \ge \theta)$$
For two samples $\mathbf{X}$ and $\mathbf{Y}$, compute the likelihood ratio
$$\frac{L(\theta\mid\mathbf{X})}{L(\theta\mid\mathbf{Y})} = \frac{\exp\left(n\theta - \sum_{i=1}^n X_i\right) I(X_{(1)} \ge \theta)}{\exp\left(n\theta - \sum_{i=1}^n Y_i\right) I(Y_{(1)} \ge \theta)} = \exp\left(\sum_{i=1}^n Y_i - \sum_{i=1}^n X_i\right) \frac{I(X_{(1)} \ge \theta)}{I(Y_{(1)} \ge \theta)}$$
To make the likelihood ratio independent of $\theta$, we need
$$X_{(1)} = Y_{(1)}$$
So $X_{(1)}$ is a minimal sufficient statistic.
Part b
$$EX = \int_{\theta}^{\infty} x e^{-(x-\theta)}\,dx = \theta + 1 = \bar{X} \;\Rightarrow\; \hat\theta_{MME} = \bar{X} - 1$$
Part c
Notice that if $\theta > X_{(1)}$, then $L(\theta) = 0$, so the maximizer must satisfy $\theta \le X_{(1)}$. On this region $L(\theta) = \exp\left(n\theta - \sum_{i=1}^n X_i\right)$ is increasing in $\theta$, so the maximum is attained at the right endpoint: $\hat\theta_{MLE} = X_{(1)}$.
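As a quick sanity check (not part of the original problem), here is a minimal Python sketch that simulates a sample from the location-shifted Exponential(1) density and computes both estimators derived above; `theta_true` and `n` are arbitrary illustrative values.

```python
# Minimal simulation sketch: f(x) = exp(-(x - theta)), x >= theta.
import numpy as np

rng = np.random.default_rng(0)
theta_true, n = 2.5, 50            # arbitrary illustrative values

x = theta_true + rng.exponential(scale=1.0, size=n)  # X_i = theta + Exp(1)

theta_mme = x.mean() - 1.0         # method-of-moments estimator: X-bar - 1
theta_mle = x.min()                # MLE: the sample minimum X_(1)

print(f"theta = {theta_true}, MME = {theta_mme:.3f}, MLE = {theta_mle:.3f}")
```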
Part d
Compute
$$E[\hat\theta_{MME}] = E[\bar{X} - 1] = \theta, \qquad Var(\hat\theta_{MME}) = Var(\bar{X}) = \frac{1}{n}$$
Since the MME is unbiased, $MSE(\hat\theta_{MME}) = Var(\hat\theta_{MME}) = \frac{1}{n}$.
By the properties of order statistics, the density of $X_{(1)}$ is
$$f_{X_{(1)}}(x) = n e^{-n(x-\theta)}, \quad x \ge \theta$$
Compute
$$EX_{(1)} = \int_{\theta}^{\infty} n x e^{-n(x-\theta)}\,dx = \frac{n\theta + 1}{n}$$
$$EX_{(1)}^2 = \int_{\theta}^{\infty} n x^2 e^{-n(x-\theta)}\,dx = \frac{n^2\theta^2 + 2n\theta + 2}{n^2}$$
$$Var(X_{(1)}) = EX_{(1)}^2 - (EX_{(1)})^2 = \frac{1}{n^2}$$
$$MSE(X_{(1)}) = bias^2 + Var(X_{(1)}) = \frac{1}{n^2} + \frac{1}{n^2} = \frac{2}{n^2}$$
where $bias = EX_{(1)} - \theta = \frac{1}{n}$.
Part e
By the Lehmann–Scheffé theorem, $E[\bar{X}-1 \mid X_{(1)}]$ is a better (UMVU) estimator. Since $X_{(1)}$ is complete sufficient and $E[X_{(1)}] = \theta + \frac{1}{n}$, this estimator is simply $X_{(1)} - \frac{1}{n}$, with $MSE = Var(X_{(1)}) = \frac{1}{n^2}$.
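The MSE comparison in Parts d and e can be checked by Monte Carlo. The sketch below (illustrative only, with arbitrary `theta_true`, `n`, and replication count) estimates the MSE of the MME, the MLE, and the bias-corrected minimum, and compares them with the formulas $\frac{1}{n}$, $\frac{2}{n^2}$, and $\frac{1}{n^2}$.

```python
# Monte Carlo check of the MSE formulas for the three estimators of theta.
import numpy as np

rng = np.random.default_rng(1)
theta_true, n, reps = 2.5, 20, 200_000   # arbitrary illustrative values

x = theta_true + rng.exponential(size=(reps, n))
x_bar = x.mean(axis=1)
x_min = x.min(axis=1)

for name, est, target in [
    ("MME   X-bar - 1  ", x_bar - 1.0,     1 / n),
    ("MLE   X_(1)      ", x_min,           2 / n**2),
    ("UMVUE X_(1) - 1/n", x_min - 1.0 / n, 1 / n**2),
]:
    mse = np.mean((est - theta_true) ** 2)
    print(f"{name}: empirical MSE = {mse:.5f}, theoretical = {target:.5f}")
```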
May 2018, Problem 6
Part a
The joint likelihood function is
$$L(\lambda,\theta) = \prod_{i=1}^n f(X_i\mid\lambda,\theta) = \lambda^n \exp\left(-\lambda \sum_{i=1}^n (X_i - \theta)\right) I(X_{(1)} > \theta)$$
Consider two samples $\{X_i\}_{i=1}^n$ and $\{Y_i\}_{i=1}^n$:
$$\frac{L(\lambda,\theta\mid\mathbf{X})}{L(\lambda,\theta\mid\mathbf{Y})} = \frac{\lambda^n \exp\left(-\lambda \sum_{i=1}^n (X_i - \theta)\right) I(X_{(1)} > \theta)}{\lambda^n \exp\left(-\lambda \sum_{i=1}^n (Y_i - \theta)\right) I(Y_{(1)} > \theta)} = \exp\left(-\lambda \sum_{i=1}^n (X_i - Y_i)\right) \frac{I(X_{(1)} > \theta)}{I(Y_{(1)} > \theta)}$$
To make this likelihood ratio independent of $(\lambda, \theta)$, we need
$$X_{(1)} = Y_{(1)}, \qquad \sum_{i=1}^n X_i = \sum_{i=1}^n Y_i$$
Let $T_1(\mathbf{X}) = X_{(1)}$ and $T_2(\mathbf{X}) = \sum_{i=1}^n X_i$; then $(T_1, T_2)$ is a minimal sufficient statistic.
Part b
If $\lambda = 1$,
$$f_X(x) = e^{-(x-\theta)},\; x > \theta, \qquad F_X(x) = \int_{\theta}^{x} e^{-(s-\theta)}\,ds = 1 - e^{-(x-\theta)},\; x > \theta$$
Compute
$$P(X_{(1)} \le x) = P(\min(X_1,\cdots,X_n) \le x) = 1 - P(\min(X_1,\cdots,X_n) > x) = 1 - [1-F(x)]^n$$
$$f_{X_{(1)}}(x) = n[1-F(x)]^{n-1} f(x) = n e^{-(n-1)(x-\theta)} e^{-(x-\theta)} = n e^{-n(x-\theta)}, \quad x > \theta$$
This means $X_{(1)} \sim \theta + Gamma(1, n)$, i.e., $X_{(1)} - \theta$ is exponential with rate $n$.
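This distributional claim can also be checked numerically. The following sketch (assuming scipy is available; `theta_true`, `n`, and the replication count are arbitrary) compares the simulated minima with the Exponential(rate $n$) distribution via a Kolmogorov–Smirnov test.

```python
# Check that X_(1) - theta behaves like Gamma(1, n), i.e. Exponential with rate n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
theta_true, n, reps = 1.0, 10, 100_000   # arbitrary illustrative values

mins = (theta_true + rng.exponential(size=(reps, n))).min(axis=1)

# KS test of X_(1) - theta against Exponential with scale 1/n (i.e. rate n)
ks = stats.kstest(mins - theta_true, "expon", args=(0, 1.0 / n))
print(f"mean of X_(1) = {mins.mean():.4f}  (theory: theta + 1/n = {theta_true + 1 / n:.4f})")
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.3f}")
```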
Part c
Define $Q = 2n(X_{(1)} - \theta)$. By this location–scale transformation, $Q \sim \chi^2_2$, which is the same distribution as $\Gamma(1, \frac{1}{2})$. Let $\chi^2_{y,2}$ and $\chi^2_{1-\alpha+y,2}$ denote the $y$ and $1-\alpha+y$ quantiles of $\chi^2_2$, where $0 \le y \le \alpha$.
$$P(\chi^2_{y,2} \le Q \le \chi^2_{1-\alpha+y,2}) = 1 - \alpha \;\Rightarrow\; P\left(X_{(1)} - \frac{\chi^2_{1-\alpha+y,2}}{2n} \le \theta \le X_{(1)} - \frac{\chi^2_{y,2}}{2n}\right) = 1 - \alpha$$
Notice the length of the confidence interval is
$$L = \frac{\chi^2_{1-\alpha+y,2} - \chi^2_{y,2}}{2n}$$
Let $Z_p$ denote the $p$-quantile of the standard normal distribution. By the normal approximation of the chi-square distribution, $\chi^2_{p,2} \approx 2 + 2Z_p$ (see UA MATH564 概率論VI 數理統計基礎3 卡方分布的正態近似), so
$$L \approx \frac{2 + 2Z_{1-\alpha+y} - (2 + 2Z_{y})}{2n} = \frac{Z_{1-\alpha+y} - Z_{y}}{n}$$
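As a small numerical illustration of this approximate-length formula (values of `alpha` and `n` below are arbitrary), one can evaluate $(Z_{1-\alpha+y} - Z_y)/n$ on a grid of $y \in (0, \alpha)$ and locate its minimizer, which sits at $y = \alpha/2$ as argued next.

```python
# Grid search for the y that minimizes the normal-approximation interval length.
import numpy as np
from scipy import stats

alpha, n = 0.05, 15                      # arbitrary illustrative values
y = np.linspace(1e-4, alpha - 1e-4, 2001)
approx_len = (stats.norm.ppf(1 - alpha + y) - stats.norm.ppf(y)) / n

print(f"y minimizing approximate length: {y[np.argmin(approx_len)]:.4f} (alpha/2 = {alpha / 2})")
```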
Since the standard normal density is symmetric about zero, $Z_{1-\alpha+y} - Z_y$ is minimized when $Z_{1-\alpha+y} = -Z_y$, i.e., at $y = \frac{\alpha}{2}$. Hence the (approximately) shortest confidence interval is
$$P\left(X_{(1)} - \frac{\chi^2_{1-\frac{\alpha}{2},2}}{2n} \le \theta \le X_{(1)} - \frac{\chi^2_{\frac{\alpha}{2},2}}{2n}\right) = 1 - \alpha$$
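A coverage check of this interval (illustrative only; `theta_true`, `n`, `alpha`, and the replication count are arbitrary choices) can be done by simulating the pivot and computing the empirical coverage probability:

```python
# Monte Carlo coverage of the equal-tail interval based on Q = 2n(X_(1) - theta) ~ chi^2_2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
theta_true, n, alpha, reps = 0.7, 15, 0.05, 100_000   # arbitrary illustrative values

x_min = (theta_true + rng.exponential(size=(reps, n))).min(axis=1)

lo_q = stats.chi2.ppf(alpha / 2, df=2)       # chi^2_{alpha/2, 2}
hi_q = stats.chi2.ppf(1 - alpha / 2, df=2)   # chi^2_{1-alpha/2, 2}

lower = x_min - hi_q / (2 * n)
upper = x_min - lo_q / (2 * n)
coverage = np.mean((lower <= theta_true) & (theta_true <= upper))
print(f"empirical coverage = {coverage:.4f} (nominal {1 - alpha})")
```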
Part d
The posterior kernel of $\theta$ is
$$\pi(\theta\mid\mathbf{X}) \propto \exp\left(-\sum_{i=1}^n (X_i - \theta)\right), \quad 0 < \theta < 1$$
Compute
$$\begin{aligned} \int_{0}^{1} \theta \exp\left(-\sum_{i=1}^n (X_i - \theta)\right) d\theta &= \frac{1}{n}\int_{0}^{1} \theta\, d\exp\left(-\sum_{i=1}^n (X_i - \theta)\right) \\ &= \frac{1}{n}\theta \exp\left(-\sum_{i=1}^n (X_i - \theta)\right)\Big|_0^1 - \frac{1}{n}\int_{0}^{1} \exp\left(-\sum_{i=1}^n (X_i - \theta)\right) d\theta \\ &= \frac{n-1}{n^2}\exp\left(n - \sum_{i=1}^n X_i\right) + \frac{1}{n^2}\exp\left(-\sum_{i=1}^n X_i\right) \end{aligned}$$
The marginal density of $\mathbf{X}$ is
$$m(\mathbf{x}) = \int_{0}^{1} \exp\left(-\sum_{i=1}^n (X_i - \theta)\right) d\theta = \frac{1}{n}\exp\left(n - \sum_{i=1}^n X_i\right) - \frac{1}{n}\exp\left(-\sum_{i=1}^n X_i\right)$$
So the posterior mean (the Bayes estimator under squared-error loss) is
$$\hat\theta = \frac{\frac{n-1}{n^2}\exp\left(n - \sum_{i=1}^n X_i\right) + \frac{1}{n^2}\exp\left(-\sum_{i=1}^n X_i\right)}{\frac{1}{n}\exp\left(n - \sum_{i=1}^n X_i\right) - \frac{1}{n}\exp\left(-\sum_{i=1}^n X_i\right)} = \frac{(n-1)e^n + 1}{n e^n - n}$$
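Since the data cancel, the posterior is proportional to $e^{n\theta}$ on $(0,1)$ and the estimator is a constant. The sketch below (assuming, as the integration limits above suggest, a Uniform(0,1) prior with the likelihood's indicator not binding; `n` is an arbitrary illustrative value) checks the closed form against numerical integration.

```python
# Compare the closed-form posterior mean with direct numerical integration.
import numpy as np
from scipy.integrate import quad

n = 8  # arbitrary illustrative sample size

num, _ = quad(lambda t: t * np.exp(n * t), 0.0, 1.0)   # integral of theta * exp(n*theta)
den, _ = quad(lambda t: np.exp(n * t), 0.0, 1.0)       # integral of exp(n*theta)
closed_form = ((n - 1) * np.exp(n) + 1) / (n * np.exp(n) - n)

print(f"numerical posterior mean = {num / den:.6f}")
print(f"closed-form expression   = {closed_form:.6f}")
```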