
Compensators

Categories: notes, llm, probability, mathematics
Author: Stephen J. Mildenhall

Published: 2024-04-12

Definition

Given a standard exponential random variable \(T\), with constant hazard rate 1 and cumulative hazard \(\Lambda(t) = t\), define \(N_t = 1\) if \(t \geq T(\omega)\) and \(N_t = 0\) otherwise, so \(N_t\) is either 0 or 1. This setup defines a one-point process \(N_t\). Its compensator is \(A_t = \min(t, T)\).
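More generally (a standard fact from point-process theory), the compensator of a one-point process with hazard rate \(\lambda(t)\) is the cumulative hazard stopped at \(T\):

\[
A_t = \int_0^{\min(t,T)} \lambda(s)\,ds = \Lambda(\min(t, T)),
\]

which reduces to \(\min(t, T)\) here because the unit-rate exponential has \(\lambda(s) \equiv 1\).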

The compensator \(A_t\) is designed to make \(N_t - A_t\) a martingale. To understand why \(N_t - A_t\) is a martingale, recall the definition: a process \(M_t\) is a martingale with respect to a filtration \(\mathcal{F}_t\) if for all \(s \leq t\), \(\mathsf{E}[M_t | \mathcal{F}_s] = M_s\).

Here’s the intuition behind \(N_t - A_t\) being a martingale:

  • When \(t < T\), \(N_t = 0\) and \(A_t = t\), so \(N_t - A_t = -t\). As time progresses until \(T\), \(A_t\) increases linearly, countering the jump that will happen in \(N_t\) at \(T\).
  • When \(t = T\), \(N_t\) jumps from 0 to 1, but at the same time, \(A_t\) stops increasing at \(T\), so \(N_t - A_t = 1 - T\).
  • For \(t > T\), \(N_t = 1\) and \(A_t = T\), so \(N_t - A_t\) stays at \(1 - T\).

In essence, the compensator \(A_t\) “anticipates” the jump of \(N_t\) such that the expected increment in \(N_t - A_t\) given the past (up to \(s < t\)) is zero, which is the martingale property. This balancing act makes \(N_t - A_t\) a martingale.
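A quick unconditional check confirms the balance: both terms have the same mean, so \(\mathsf{E}[N_t - A_t] = 0\) for every \(t\):

\[
\mathsf{E}[N_t] = \Pr(T \le t) = 1 - e^{-t}, \qquad
\mathsf{E}[A_t] = \mathsf{E}[\min(t, T)] = \int_0^t \Pr(T > s)\,ds = \int_0^t e^{-s}\,ds = 1 - e^{-t}.
\]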

Instantaneous interpretation

Understanding \(N_t - A_t\) using an infinitesimal time period \(dt\) is quite insightful and aligns with the intuition behind stochastic calculus, particularly in the context of martingales. For \(t<T\):

  1. Instantaneous Change in \(N_t\): Over a very short time interval \(dt\), the process \(N_t\) has a small probability of jumping from 0 to 1, approximately \(dt\) (since the hazard rate is constant at 1, the jump probability is proportional to the length of the interval). If it jumps, the change is 1; otherwise, the change is 0. So, the expected change in \(N_t\) over this interval is roughly \(dt\).
  2. Instantaneous Change in \(A_t\): The compensator \(A_t\) is designed to match this expected change in \(N_t\). Over the interval \(dt\), \(A_t\) increases by \(dt\) when \(t < T\). This increment is the “compensation” for the expected jump in \(N_t\).
  3. Balance Between \(N_t\) and \(A_t\): Because \(A_t\) increases by \(dt\) while the expected increase in \(N_t\) is also \(dt\), the expected change in \(N_t - A_t\) over the interval \(dt\) is 0. This is exactly what you need for a martingale: the expected change, given all the past information, is zero.
  4. Integral Over Time: When you integrate these infinitesimal changes over time, you sum up all the little expected changes (which are zero) to confirm that \(N_t - A_t\) is a martingale.

When \(t\ge T\) there are no further changes in \(N\) or \(A\).
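The balance in step 3 is easy to check by simulation. The snippet below is a sketch (the time point \(t_0 = 0.5\), step \(dt = 0.01\), and seed are arbitrary choices): it conditions on no jump having occurred by \(t_0\) and estimates the expected increment of \(N - A\) over \([t_0, t_0 + dt)\).

```python
import numpy as np

rng = np.random.default_rng(42)
T = rng.exponential(size=1_000_000)        # unit-rate exponential failure times

t0, dt = 0.5, 0.01
alive = T > t0                             # paths with no jump by t0
dN = (T[alive] <= t0 + dt).astype(float)   # jump indicator over [t0, t0 + dt)
dA = np.minimum(t0 + dt, T[alive]) - t0    # compensator increment on the same paths
print(np.mean(dN - dA))                    # close to 0
```

In fact \(\mathsf{E}[dN] = \mathsf{E}[dA] = 1 - e^{-dt}\) exactly on the surviving paths, so the estimate is centered on zero and only Monte Carlo noise remains.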

This reasoning aligns with the continuous-time martingale theory and provides a solid foundation for understanding why \(N_t - A_t\) behaves as a martingale, despite the non-intuitive appearance of its paths.

Does it “look wrong”?

The key to understanding why \(N_t - A_t\) is a martingale despite its appearance (paths of slope \(-1\) drifting more and more negative before the jump) lies in the definition of a martingale and the conditional expectation.

A martingale doesn’t necessarily have to “look” like it has no drift along a single path. Instead, the requirement is that the expected future value across all paths, given all past information, is equal to the current value. This is a probabilistic statement about expectations, not a deterministic one about the paths themselves.

For \(N_t - A_t\), here’s what’s happening:

  1. Before \(T\): The process \(N_t - A_t = -t\) has a downward drift. However, the expectation at any future time \(t\) given the past is still centered around the current value because the compensator \(A_t\) is increasing exactly in a way that offsets the expected increase in \(N_t\).
  2. At \(T\): There’s a jump in \(N_t\), but this is precisely balanced by the compensator \(A_t\), which stops increasing after \(T\).
  3. After \(T\): Both \(N_t\) and \(A_t\) are constant, so \(N_t - A_t\) has zero drift.

The essence of the martingale property here is that the compensator \(A_t\) is designed to account for the “average” behavior of \(N_t\). While individual paths of \(N_t - A_t\) may appear to have a drift, when you condition on the past information, the expected change in \(N_t - A_t\) averaged over all paths is zero, which is the core of the martingale definition.
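This can be made precise with a short computation. Fix \(s < t\) and write \(M_t = N_t - A_t\). On \(\{T \le s\}\) both processes are frozen, so \(M_t = M_s\) trivially. On \(\{T > s\}\), memorylessness of the exponential gives

\[
\mathsf{E}[N_t - N_s \mid T > s] = 1 - e^{-(t-s)}, \qquad
\mathsf{E}[A_t - A_s \mid T > s] = \int_s^t \Pr(T > u \mid T > s)\,du = \int_s^t e^{-(u-s)}\,du = 1 - e^{-(t-s)}.
\]

The two increments have equal conditional means, so \(\mathsf{E}[M_t \mid \mathcal{F}_s] = M_s\) in both cases.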

Why it “looks right”

If you simulate and plot multiple realization paths of \(N_t - A_t\), you see a variety of behaviors. Some paths have the jump in \(N_t\) occurring early (before \(t = 1\)), so \(N_t - A_t\) is positive for a significant portion of the time afterward. Other paths have the jump occurring much later, so \(N_t - A_t\) is negative right up until the jump happens.

However, the key point is that when you take the average of many such paths at any fixed future time \(t\), the positive and negative deviations cancel out, leading to an average value of zero—the starting value. A similar argument applies to changes from a future time \(s\). This aligns with the definition of a martingale: the expected value of \(N_t - A_t\) at any future time, given the past, is equal to its current value, which is zero at the start.

While individual paths might show significant up or down deviations, the “average path” when considering a large number of simulations would hover around zero, reflecting the martingale property that the expected change in \(N_t - A_t\), conditioned on the past, is zero at any point in time.

Examples

Code
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as ss

fz = ss.expon()

# failure times
T = fz.rvs((10000, 1))

# sampling times
ts = np.linspace(0, np.max(T), 101)

# individual paths
paths = np.where(T > ts, 0, 1)

# compensated process
M = paths - np.minimum(ts, T)

fig0, ax0 = plt.subplots(1, 1, figsize=(5, 3.5))
fig1, ax1 = plt.subplots(1, 1, figsize=(5, 3.5))

for v in M[:2500]:
    ax0.plot(ts, v, lw=.5, c='C0', alpha=.05)

ax1.hist(M[:, -1], bins=25, lw=.5, ec='w');
(a) Plot of sample paths \(N-A\), the longitudinal view.
(b) Histogram of the last observed value, the cross-sectional view, distributed around 0 as expected.
Figure 1: Compensated one-point process showing sample paths (longitudinal) and cross-sectional outcomes.

Note:

Code
print(f'Sample mean at t={ts[-1]:.3f} equals {np.mean(M[:,-1]):.4f} vs. expected 0.0.')
Sample mean at t=9.111 equals 0.0009 vs. expected 0.0.
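The same check can be run at every sampling time, not just the last. A self-contained variant (fresh seed, path count, and time grid chosen for illustration) confirms that the cross-sectional mean of \(N_t - A_t\) stays near zero across the whole grid:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.exponential(size=(100_000, 1))            # one failure time per path
ts = np.linspace(0, 5, 51)                        # common sampling grid
M = np.where(T <= ts, 1, 0) - np.minimum(ts, T)   # N_t - A_t, one row per path
print(np.max(np.abs(M.mean(axis=0))))             # worst deviation from 0 over all t
```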

Stephen J. Mildenhall. License: CC BY-SA 2.0.
