Chapter 4
Week 6 lecture 2
Challenges I
- Approximate the derivative of \(f(x) = \exp(-x^2)\) by finite-differencing, and compare your approximation to the true derivative over \(x \in [-1, 1]\).
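One possible sketch, using a central difference with a hand-picked step size \(h = 10^{-5}\) (the step size is an assumption, not prescribed by the challenge); the true derivative is \(f'(x) = -2x\exp(-x^2)\).

```r
# Central-difference approximation to f'(x) for f(x) = exp(-x^2),
# compared with the true derivative f'(x) = -2 x exp(-x^2)
f <- function(x) exp(-x^2)
f_d1 <- function(x) -2 * x * exp(-x^2)      # true derivative

h <- 1e-5                                   # step size (an assumption)
x <- seq(-1, 1, length.out = 101)
approx <- (f(x + h) - f(x - h)) / (2 * h)   # central difference
error <- abs(approx - f_d1(x))
max(error)                                  # worst-case error over [-1, 1]
```

The central difference has error of order \(h^2\), so the maximum error here should be very small.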
Challenges II
- Consider the exponential distribution with pdf \[ f(y \mid \lambda) = \lambda \exp(-\lambda y) \] for \(y, \lambda > 0\). Its log-likelihood, for an independent sample \(\mathbf{y} = (y_1, \ldots, y_n)\), is given by \[ \log f(\mathbf{y} \mid \lambda) = n \log \lambda -\lambda \sum_{i = 1}^n y_i. \]
Write a function, loglik(lambda, y), to evaluate the log-likelihood above, where \(\lambda =\) lambda and \(\mathbf{y} =\) y.
Solution
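One possible implementation, translating the log-likelihood formula above directly:

```r
loglik <- function(lambda, y) {
  # function to evaluate the exponential log-likelihood
  # lambda is a scalar
  # y is a vector
  # returns a scalar
  n <- length(y)
  n * log(lambda) - lambda * sum(y)
}
```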
- Use rexp(10) to generate a sample of ten values from the Exp(1) distribution, and then use loglik(lambda, y) to evaluate \(\log f(\mathbf{y} \mid 2)\) for the sample.
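A sketch of this step; loglik() is reproduced so the snippet is self-contained, and set.seed() is included only to make the draw reproducible (an assumption, not part of the challenge).

```r
loglik <- function(lambda, y) length(y) * log(lambda) - lambda * sum(y)

set.seed(1)     # for reproducibility (an assumption)
y <- rexp(10)   # ten draws from Exp(1)
loglik(2, y)    # evaluates log f(y | 2)
```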
- Then write functions to evaluate the gradient and Hessian matrix of the log-likelihood for derivatives w.r.t. \(\lambda\).
Solution
# gradient, i.e. first derivative
loglik_d1 <- function(lambda, y) {
  # function to evaluate gradient of exponential log-likelihood w.r.t. lambda
  # lambda is a scalar
  # y is a vector
  # returns a scalar
  n <- length(y)
  n / lambda - sum(y)
}
# Hessian, i.e. second derivative
loglik_d2 <- function(lambda, y) {
  # function to evaluate Hessian of exponential log-likelihood w.r.t. lambda
  # lambda is a scalar
  # y is a vector
  # returns a 1 x 1 matrix
  n <- length(y)
  matrix(-n / lambda^2, 1, 1)
}
- Evaluate the gradient and Hessian for the sample generated above with \(\lambda = 2\).
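A possible sketch of this evaluation; the derivative functions are redefined here so the snippet runs on its own, and set.seed() is an assumption for reproducibility. Note that the Hessian \(-n/\lambda^2\) does not depend on the data, so with \(n = 10\) and \(\lambda = 2\) it equals \(-10/4 = -2.5\).

```r
loglik_d1 <- function(lambda, y) length(y) / lambda - sum(y)
loglik_d2 <- function(lambda, y) matrix(-length(y) / lambda^2, 1, 1)

set.seed(1)       # for reproducibility (an assumption)
y <- rexp(10)
loglik_d1(2, y)   # gradient at lambda = 2
loglik_d2(2, y)   # 1 x 1 Hessian: -10 / 2^2 = -2.5
```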
- Check your gradient calculation against an approximation based on finite-differencing.
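One way to run this check, using a central difference on loglik(); both functions are reproduced so the snippet is self-contained, and the step size \(h = 10^{-5}\) is an assumption.

```r
loglik <- function(lambda, y) length(y) * log(lambda) - lambda * sum(y)
loglik_d1 <- function(lambda, y) length(y) / lambda - sum(y)

set.seed(1)   # for reproducibility (an assumption)
y <- rexp(10)
h <- 1e-5     # step size (an assumption)
# central-difference approximation to the gradient at lambda = 2
fd <- (loglik(2 + h, y) - loglik(2 - h, y)) / (2 * h)
abs(fd - loglik_d1(2, y))   # should be close to zero
```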
Week 6 lecture 3
Challenges I
- Use the midpoint rule to approximate \[ I = \int_0^2 \exp\left(-\frac{x^2}{2}\right) \, \text{d}x \] with \(N = 10\) integration nodes.
Solution
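One possible implementation: with \(h = (b - a)/N\), the midpoint rule places nodes at \(x_j = a + (j - 1/2)h\) and approximates \(I \approx h \sum_{j=1}^N f(x_j)\).

```r
f <- function(x) exp(-x^2 / 2)   # integrand
a <- 0; b <- 2; N <- 10
h <- (b - a) / N                 # width of each subinterval
midpoints <- a + (seq_len(N) - 0.5) * h
I_hat <- h * sum(f(midpoints))   # midpoint-rule estimate of I
I_hat
```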
- Find the relative absolute error of your estimate. [Hint: notice the resemblance of the integrand to the Normal(0, 1) distribution’s pdf.]
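Following the hint, the integrand is \(\sqrt{2\pi}\) times the Normal(0, 1) pdf, so \(I = \sqrt{2\pi}\,(\Phi(2) - \Phi(0))\), which pnorm() evaluates; a sketch of the error calculation (the midpoint estimate is recomputed here so the snippet stands alone):

```r
# "exact" value via the N(0, 1) cdf: I = sqrt(2*pi) * (pnorm(2) - pnorm(0))
I_true <- sqrt(2 * pi) * (pnorm(2) - pnorm(0))

# midpoint-rule estimate with N = 10 nodes, as in the previous part
h <- 2 / 10
I_hat <- h * sum(exp(-(((1:10) - 0.5) * h)^2 / 2))

abs(I_hat - I_true) / I_true   # relative absolute error
```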