Chapter 4
Week 6 lecture 2
Challenges I
- Approximate the derivative of \(f(x) = \exp(-x^2)\) by finite-differencing, and compare your approximation to the true derivative over \(x \in [-1, 1]\).
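A minimal sketch, assuming a central-difference approximation with step \(h = 10^{-6}\) (the helper name fd and the grid of 101 points are my choices):
f <- function(x) exp(-x^2)
fd <- function(f, x, h = 1e-6) (f(x + h) - f(x - h)) / (2 * h) # central difference
x <- seq(-1, 1, length.out = 101) # grid over [-1, 1]
exact <- -2 * x * exp(-x^2) # true derivative, by the chain rule
max(abs(fd(f, x) - exact)) # largest absolute discrepancy over the grid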
Challenges II
- Consider the exponential distribution with pdf \[ f(y \mid \lambda) = \lambda \exp(-\lambda y) \] for \(y, \lambda > 0\). Its log-likelihood, for an independent sample \(\mathbf{y} = (y_1, \ldots, y_n)\), is given by \[ \log f(\mathbf{y} \mid \lambda) = n \log \lambda -\lambda \sum_{i = 1}^n y_i. \]
- Write a function, loglik(lambda, y), to evaluate the log-likelihood above, where \(\lambda =\) lambda and \(\mathbf{y} =\) y.
Solution
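A sketch consistent with the formula above:
loglik <- function(lambda, y) {
  # function to evaluate the exponential log-likelihood
  # lambda is a scalar
  # y is a vector
  # returns a scalar
  n <- length(y)
  n * log(lambda) - lambda * sum(y)
}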
- Use rexp(10) to generate a sample of ten values from the Exp(1) distribution, and then use loglik(lambda, y) to evaluate \(\log f(\mathbf{y} \mid 2)\) for the sample, as in the sketch below.
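For example (the seed is fixed here only for reproducibility; it isn't part of the challenge):
set.seed(1) # my choice, for reproducibility
y <- rexp(10) # ten draws from the Exp(1) distribution
loglik(2, y) # log-likelihood at lambda = 2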
- Then write functions to evaluate the gradient and Hessian matrix of the log-likelihood.
Solution
# gradient, i.e. first derivative
loglik_d1 <- function(lambda, y) {
  # function to evaluate gradient of exponential log-likelihood w.r.t. lambda
  # lambda is a scalar
  # y is a vector
  # returns a scalar
  n <- length(y)
  n / lambda - sum(y)
}
# Hessian, i.e. second derivative
loglik_d2 <- function(lambda, y) {
  # function to evaluate Hessian of exponential log-likelihood w.r.t. lambda
  # lambda is a scalar
  # y is a vector
  # returns a 1 x 1 matrix
  n <- length(y)
  matrix(-n / lambda^2, 1, 1)
}
- Evaluate the gradient and Hessian for the sample generated above with \(\lambda = 2\); see the sketch after this list.
- Check your gradient calculation against an approximation based on finite-differencing.
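A sketch for the last two challenges, reusing the sample y from above; the forward-difference step h = 1e-6 is my choice:
loglik_d1(2, y) # gradient at lambda = 2
loglik_d2(2, y) # Hessian at lambda = 2
# finite-difference check of the gradient
h <- 1e-6
(loglik(2 + h, y) - loglik(2, y)) / h # should closely match loglik_d1(2, y)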
Week 6 lecture 3
Week 8 lecture 1
Challenges I
- Repeat Challenge I using Simpson’s rule; i.e. approximate \[ I = \int_0^2 \text{exp}\left(-\frac{x^2}{2}\right) \text{d}x \] with \(N = 10\) and find the relative absolute error of your estimate.
Solution
f <- function(x) exp(-x^2 / 2) # integrand
N <- 10 # number of subintervals
a <- 0 # lower limit
b <- 2 # upper limit
h <- (b - a) / N # subinterval width
simpson <- f(a) + f(b) # endpoint contributions
x1i <- a + h * (2 * (1:N) - 1) / 2 # midpoints of the N subintervals
simpson <- simpson + 4 * sum(f(x1i)) # midpoint contributions
x2i <- a + h * (1:(N - 1)) # interior subinterval endpoints
simpson <- simpson + 2 * sum(f(x2i)) # interior endpoint contributions
simpson <- h * simpson / 6 # composite Simpson's rule estimate
true <- sqrt(2 * pi) * (pnorm(b) - 0.5) # exact value via the standard normal cdf
rel_err <- abs((true - simpson) / true) # relative absolute error
rel_err
Week 8 lecture 2
Challenges I
- Consider independent Exponential random variables \(X_1\) and \(X_2\), each with pdf \[ f_{X_i}(x \mid \lambda) = \lambda \text{exp}(-\lambda x)~~~\text{for}~x > 0, \] and where \(\lambda > 0\). Estimate \(\text{Pr}(X_1 \leq 1, X_2 \leq 1)\) for \(\lambda = 2\) by numerical integration using the midpoint rule and \(N = 10\) for \(X_1\) and \(X_2\); i.e. approximate \[ \text{Pr}(X_1 \leq 1, X_2 \leq 1) = \int_0^1 \int_0^1 \lambda^2 \text{exp}\left\{-\lambda(x_1 + x_2)\right\} \text{d}x_1\text{d}x_2. \]
Solution
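A minimal sketch of the tensor-product midpoint rule; the relative absolute error asked for in the next item is included, using the fact that, by independence, the probability is \((1 - \text{e}^{-\lambda})^2\):
lambda <- 2
N <- 10
h <- 1 / N # subinterval width for each variable
mid <- (2 * (1:N) - 1) * h / 2 # midpoints in one dimension
grid <- expand.grid(x1 = mid, x2 = mid) # all N^2 pairs of midpoints
f2 <- function(x1, x2) lambda^2 * exp(-lambda * (x1 + x2))
approx <- h^2 * sum(f2(grid$x1, grid$x2))
true <- (1 - exp(-lambda))^2 # Pr(X1 <= 1) * Pr(X2 <= 1), by independence
abs((true - approx) / true) # relative absolute error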
- Find the relative absolute error of your approximation.
Challenges II
- Approximate \(\text{Pr}(X_1 \leq 1, X_2 \leq 2)\), where \(X_1\) and \(X_2\) are independent Exp(\(\lambda\)) random variables with \(\lambda = 2\). This time use Gaussian quadrature and \(N = 7\) for \(X_1\) and \(X_2\).
Solution
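A sketch assuming the statmod package for Gauss-Legendre nodes and weights; the helper rescale is mine, and the relative absolute error for the next item is included:
gq <- statmod::gauss.quad(7, kind = "legendre") # nodes and weights on [-1, 1]
rescale <- function(gq, a, b) { # helper (mine): map a rule on [-1, 1] to [a, b]
  list(nodes = (b - a) / 2 * gq$nodes + (a + b) / 2,
       weights = (b - a) / 2 * gq$weights)
}
q1 <- rescale(gq, 0, 1) # rule for x1 on [0, 1]
q2 <- rescale(gq, 0, 2) # rule for x2 on [0, 2]
lambda <- 2
f2 <- function(x1, x2) lambda^2 * exp(-lambda * (x1 + x2))
approx <- sum(outer(q1$weights, q2$weights) * outer(q1$nodes, q2$nodes, f2))
true <- (1 - exp(-lambda)) * (1 - exp(-2 * lambda)) # Pr(X1 <= 1) * Pr(X2 <= 2)
abs((true - approx) / true) # relative absolute error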
- Find the relative absolute error of your approximation.
Week 8 lecture 3
Challenge I
- Using the midpoint rule with \(N = 5\) nodes per variable, approximate \[ I = \int_1^3 \ldots \int_1^3 \prod_{i = 1}^4 \left\{1 + (x_i - 2)^2\right\} \text{d}x_1 \ldots \text{d}x_4 \] and estimate its relative absolute error.
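A sketch of the four-dimensional midpoint rule; the exact value follows because the integrand factorises, with each one-dimensional factor integrating to \(8/3\):
N <- 5
a <- 1
b <- 3
h <- (b - a) / N # subinterval width
mid <- a + (2 * (1:N) - 1) * h / 2 # midpoints in one dimension
grid <- expand.grid(mid, mid, mid, mid) # all N^4 combinations of midpoints
g <- function(x) 1 + (x - 2)^2
approx <- h^4 * sum(apply(grid, 1, function(x) prod(g(x))))
true <- (8 / 3)^4 # each factor integrates to 2 + 2/3 = 8/3
abs((true - approx) / true) # relative absolute error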
Challenge II
- Consider \(Y \sim \text{Exp}(\lambda)\), so that its pdf is \[ f(y \mid \lambda) = \lambda \text{exp}(-\lambda y) \quad \text{for}~y > 0, \] where \(\lambda\) is a random variable with pdf \[ f(\lambda) = 4 \lambda \text{exp}(-2 \lambda) \quad \text{for}~\lambda > 0. \] Use Laplace’s method to find \(f(y)\), where \[ f(y) = \int_0^\infty f(y \mid \lambda) f(\lambda) \text{d} \lambda. \]
Solution
We first notice that, to apply Equation (4.4), we take the function to be minimised to be \(-\log f(y, \lambda)\), and \(x\) to be \(\lambda\), as \(\lambda\) is what we're integrating out.
Then \(f(y, \lambda) = f(y \mid \lambda) f(\lambda) = \lambda \text{e}^{-\lambda y} \times 4 \lambda \text{e}^{-2 \lambda} = 4 \lambda^2 \text{e}^{-\lambda(2 + y)}\), so \(-\log f(y, \lambda) = -2 \log \lambda + \lambda(2 + y) - \log 4\).
Differentiating, \(-\dfrac{\partial \log f(y, \lambda)}{\partial \lambda} = -\dfrac{2}{\lambda} + (2 + y)\), which equals zero at \(\tilde \lambda = 2 / (2 + y)\), the minimiser of \(-\log f(y, \lambda)\).
Also, \(-\dfrac{\partial^2 \log f(y, \lambda)}{\partial \lambda^2} = 2 / \lambda^2\), which is positive, and \(2 / \tilde \lambda^2 = 2 / \{4 / (2 + y)^2\} = (2 + y)^2 / 2\).
So, if \(g''(\lambda)\) denotes \(-\partial^2 \log f(y, \lambda) / \partial \lambda^2\), then \[\begin{align*} I_n &= f(y, \tilde \lambda) \sqrt{\dfrac{2 \pi}{g''(\tilde \lambda)}}\\ &= 4 \tilde \lambda^2 \text{e}^{-\tilde \lambda(2 + y)} \sqrt{2\pi \times 2(2 + y)^{-2}} = \dfrac{32 \sqrt{\pi}}{\text{e}^2(2 + y)^3}. \end{align*}\]
Week 9 lecture 1
Challenges I
- Use Monte Carlo integration with \(N = 10^3\) Monte Carlo samples to approximate \[ \int_{-1}^1 x^2 \text{d} x \] and then calculate the relative absolute error of your approximation.
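A minimal sketch (the answer will vary with the random draws; the exact value is \(2/3\)):
N <- 1e3
x <- runif(N, -1, 1) # uniform draws on [-1, 1]
approx <- 2 * mean(x^2) # (b - a) times the average of the integrand
true <- 2 / 3 # exact value of the integral
abs((true - approx) / true) # relative absolute error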