Doob’s Optional Stopping Theorem
Doob’s optional stopping theorem is a fundamental result in probability theory, particularly in the theory of martingales. The theorem addresses the expected value of a martingale at a stopping time. Here are the three key cases under which the theorem holds:
Bounded Stopping Time: If the stopping time \(\tau\) is almost surely bounded, that is, there exists a constant \(T\) such that \(\tau \leq T\) almost surely.
Bounded Martingale: If the martingale \(X_t\) is bounded, that is, there exists a constant \(M\) such that \(|X_t| \leq M\) for all \(t\).
Integrable Stopping Time with Bounded Increments: If \(E[\tau] < \infty\) and the increments are uniformly bounded, i.e. there is a constant \(K\) with \(|X_n - X_{n-1}| \le K\) for all \(n\) (this is Williams’s phrasing). More generally, it suffices that the stopped family \(\{X_{t \wedge \tau}\}\) is uniformly integrable.
Under any of these conditions, the theorem states that the expected value of the martingale at the stopping time \(\tau\) is equal to its expected value at time \(0\), i.e., \(E[X_\tau] = E[X_0]\).
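The bounded-stopping-time case can be verified exactly for a concrete example. The sketch below (a hypothetical setup: a fair \(\pm 1\) walk stopped at the first hit of \(\pm 3\) or at time \(20\), whichever comes first, so \(\tau \le 20\)) propagates the exact distribution of the stopped walk with rational arithmetic and confirms \(E[X_\tau] = E[X_0] = 0\):

```python
from fractions import Fraction

# Exact distribution of the stopped walk X_{n ∧ τ}, where
# τ = min(first hit of ±3, 20).  Since τ ≤ 20, the bounded-stopping-time
# case of optional stopping applies, so E[X_τ] should equal E[X_0] = 0.
dist = {0: Fraction(1)}               # distribution of X_{n ∧ τ}, starting at 0
for _ in range(20):
    new = {}
    for x, p in dist.items():
        if abs(x) == 3:               # already stopped: mass is absorbed at ±3
            new[x] = new.get(x, Fraction(0)) + p
        else:                         # fair ±1 step, each with probability 1/2
            for step in (+1, -1):
                y = x + step
                new[y] = new.get(y, Fraction(0)) + p / 2
    dist = new

expectation = sum(x * p for x, p in dist.items())
print(expectation)                    # 0, matching E[X_0]
```

Using `Fraction` keeps the computation exact, so the agreement with \(E[X_0]\) is not an artifact of floating-point rounding.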
Counter-Examples
Here are three examples where Doob’s optional stopping theorem fails when its conditions are not met:
1. Bounded Stopping Time Not Satisfied
Consider a simple random walk \(X_n\) starting at \(0\), where each step is either \(+1\) or \(-1\) with equal probability \(p = 0.5\). Define the stopping time \(\tau\) as the first time \(n\) such that \(X_n = 1\). This stopping time is not bounded since there is a non-zero probability that the random walk takes an arbitrarily large number of steps to first reach \(1\).
Failure: \(\tau\) is almost surely finite but not bounded, and in fact \(E[\tau] = \infty\). Since the walk stops precisely when it reaches \(1\), \(E[X_\tau] = 1\), while \(E[X_0] = 0\), so the conclusion \(E[X_\tau] = E[X_0]\) fails.
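This failure mode can be seen exactly: for every finite \(N\) the truncated time \(\tau \wedge N\) is bounded, so \(E[X_{\tau \wedge N}] = 0\), yet the probability mass absorbed at \(1\) creeps towards \(1\) as \(N\) grows. A hypothetical exact computation with \(N = 100\):

```python
from fractions import Fraction

# Evolve the exact distribution of X_{τ ∧ N} for the fair walk stopped when it
# first hits 1.  For every finite N, optional stopping at the bounded time
# τ ∧ N gives E[X_{τ ∧ N}] = 0 exactly -- yet the mass absorbed at 1 grows
# towards 1, consistent with E[X_τ] = 1 ≠ 0 = E[X_0] in the limit.
N = 100
dist = {0: Fraction(1)}
for _ in range(N):
    new = {}
    for x, p in dist.items():
        if x == 1:                    # stopped: absorbed at 1
            new[x] = new.get(x, Fraction(0)) + p
        else:
            for step in (+1, -1):
                y = x + step
                new[y] = new.get(y, Fraction(0)) + p / 2
    dist = new

expectation = sum(x * p for x, p in dist.items())   # exactly 0
mass_at_one = dist[1]                               # P(τ ≤ N), already > 0.9
print(expectation, float(mass_at_one))
```

The expectation stays pinned at \(0\) for every truncation level, while essentially all the probability sits at \(1\): the missing expectation is carried by the increasingly rare, increasingly negative unstopped paths.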
2. Bounded Martingale Not Satisfied
Consider \(M_t\), a standard Brownian motion. Here, \(M_t\) is a martingale but is not bounded as \(t \to \infty\). Define the stopping time \(\tau = \inf\{t \geq 1 : M_t = t \}\).
Failure: The martingale \(M_t\) is unbounded, and \(\tau = \infty\) with positive probability, since \(M_t - t\) is Brownian motion with drift \(-1\) and need never return to \(0\) after time \(1\). On \(\{\tau < \infty\}\) we have \(M_\tau = \tau \ge 1\), so \(E[M_\tau \mid \tau < \infty] \ge 1\) while \(E[M_0] = 0\); the conclusion of the theorem fails because none of its hypotheses hold.
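A crude Monte Carlo check supports the claim that \(P(\tau = \infty) > 0\). The discretisation below is a hypothetical choice (Euler steps of size \(0.02\) up to horizon \(50\), checking whether \(M_t \ge t\) at any grid point \(t \ge 1\)); the estimated hitting probability sits well below \(1\):

```python
import numpy as np

# Discretised Brownian motion on [0, 50] with step dt = 0.02 (a hypothetical
# discretisation).  We estimate P(τ < ∞) by the fraction of paths with
# M_t >= t at some grid time t >= 1.  Since M_t - t has drift -1, crossings
# after the horizon are very rare, so the estimate is close to P(τ < ∞).
rng = np.random.default_rng(0)
n_paths, dt, T = 2000, 0.02, 50.0
n_steps = int(T / dt)
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
t = dt * np.arange(1, n_steps + 1)            # grid times, broadcast per column
hit = (paths >= t) & (t >= 1.0)               # M_t >= t at some grid time t >= 1
p_hit = hit.any(axis=1).mean()
print(p_hit)                                  # an estimate clearly below 1
```

The grid can only under-count crossings between time steps, but even so the estimate is far from \(1\), consistent with \(\tau\) being infinite on a set of positive probability.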
3. Integrable Stopping Time and Uniform Integrability Not Satisfied
Consider the martingale \(X_n = 2^n \cdot I(A_n)\), where \(I\) is the indicator function and \(A_n\) is the event that the first \(n\) tosses of a fair coin are all tails, so \(P(A_n) = 2^{-n}\). Since \(A_{n+1} \subset A_n\) and \(P(A_{n+1} \mid A_n) = 2^{-1}\), this is a martingale with \(X_0 = 1\). Define the stopping time \(\tau\) as the time of the first head; \(\tau\) is geometrically distributed with \(p = 0.5\), so \(E[\tau] = 2\), which is finite.
Failure: Although \(E[\tau] < \infty\), the stopped sequence \(X_{n \wedge \tau} = 2^n I(A_n)\) is not uniformly integrable: it equals \(2^n\) with probability \(2^{-n}\), and the increments \(|X_n - X_{n-1}|\) grow exponentially, so Williams’s bounded-increments condition also fails. At the first head the all-tails event fails, so \(X_\tau = 0\) almost surely, and hence \(E[X_\tau] = 0 \neq 1 = E[X_0]\).
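Taking \(A_n\) to be the all-tails event (so \(P(A_n) = 2^{-n}\) and the martingale has \(X_0 = 1\)), the failure can be checked exactly by enumerating coin sequences:

```python
from fractions import Fraction
from itertools import product

# Exact enumeration of all fair-coin sequences of length n (H = head, T = tail),
# assuming A_n = {first n tosses are all tails}.  X_{n ∧ τ} equals 2^n on A_n
# and 0 as soon as a head appears (since X_τ = 0 at the first head).
n = 10
e_stopped = Fraction(0)                    # E[X_{n ∧ τ}]
for seq in product("HT", repeat=n):
    p = Fraction(1, 2**n)                  # each sequence has probability 2^{-n}
    value = 2**n if "H" not in seq else 0  # only the all-tails atom contributes
    e_stopped += p * value
print(e_stopped)                           # 1 = E[X_0], yet X_τ = 0 a.s.
```

Every prefix expectation equals \(E[X_0] = 1\), carried entirely by the single all-tails atom of probability \(2^{-n}\); in the limit that atom vanishes and \(E[X_\tau] = 0\), which is exactly the failure of uniform integrability.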
These examples illustrate the importance of the conditions under which Doob’s optional stopping theorem holds, and what can happen when they are not met.
Doob’s Upcrossing Lemma
Doob’s upcrossing lemma is a significant result in the theory of martingales, particularly useful in proving the convergence of certain stochastic processes. Here is what the lemma states:
Doob’s Upcrossing Lemma: Consider a supermartingale \(X_n\) and two real numbers \(a < b\). An “upcrossing” of the interval \([a, b]\) by \(X_n\) is an occurrence where the process, having been below \(a\), rises to or above \(b\) before next falling below \(a\). Let \(U_n(a, b)\) be the number of upcrossings of the interval \([a, b]\) completed by the sequence \(X_1, X_2, \ldots, X_n\).
The lemma states that the expected number of upcrossings of the interval \([a, b]\) by the process up to time \(n\) satisfies the bound \[E[U_n(a, b)] \leq \frac{E[(X_n - a)^-]}{b - a}\] where \((x)^- = \max(-x, 0)\) denotes the negative part of \(x\).
To see this, consider the gambling strategy that starts betting (stake \(1\)) when \(X\) drops below \(a\) and stops betting once \(X\) rises above \(b\). If \(Y_N\) denotes the total winnings up to time \(N\), then \(Y_N\) is at least \((b-a)U_N\) (from the \(U_N\) completed upcrossings) less \((X_N-a)^-\) (the maximum possible loss on the last, still “open”, bet): \[ Y_N(\omega) \ge (b-a)U_N(a,b)(\omega) - (X_N(\omega)-a)^-. \] The strategy is previsible (each stake is decided before the next increment is revealed), bounded (the stake is \(0\) or \(1\)), and non-negative. Therefore if \(X\) is a supermartingale (downward drift), the winnings process \(Y\) is also a supermartingale, so \(E[Y_N] \le Y_0 = 0\) and \[ (b-a)E[U_N] \le E[(X_N-a)^-]. \] If \(X\) is bounded in \(L^1\), i.e. \(\sup_n E[|X_n|] < \infty\), then since \((X_N-a)^- \le |X_N| + |a|\), setting \(U_\infty = \lim_N U_N\) and applying monotone convergence gives \[ (b-a)E[U_\infty] \le |a| + \sup_n E[|X_n|] < \infty. \] In particular there are almost surely only finitely many upcrossings: a supermartingale bounded in \(L^1\) cannot oscillate across \([a, b]\) indefinitely.
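The inequality can be checked empirically. The sketch below uses a hypothetical supermartingale (a random walk stepping \(+1\) with probability \(0.45\) and \(-1\) with probability \(0.55\)) and compares the Monte Carlo estimates of both sides of the bound:

```python
import numpy as np

def count_upcrossings(path, a, b):
    """Count completed passages from a value <= a up to a value >= b."""
    count, below = 0, False
    for x in path:
        if x <= a:
            below = True                  # an upcrossing attempt is now "open"
        elif x >= b and below:
            count += 1                    # the open attempt completed at >= b
            below = False
    return count

# Hypothetical supermartingale: a random walk with downward drift.
rng = np.random.default_rng(1)
a, b, N, n_paths = -2.0, 2.0, 200, 5000
steps = rng.choice([1.0, -1.0], p=[0.45, 0.55], size=(n_paths, N))
paths = np.cumsum(steps, axis=1)
u = np.array([count_upcrossings(p, a, b) for p in paths])
neg_part = np.maximum(a - paths[:, -1], 0.0)   # (X_N - a)^- = max(a - X_N, 0)
lhs = (b - a) * u.mean()                       # estimate of (b-a) E[U_N]
rhs = neg_part.mean()                          # estimate of E[(X_N - a)^-]
print(lhs, rhs)                                # lhs should not exceed rhs
```

For this drifting walk the final-value term dominates by a wide margin, so the inequality holds comfortably even with Monte Carlo noise.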
Doob’s Forward Convergence Theorem
Doob’s forward convergence theorem for uniformly integrable (UI) supermartingales is a cornerstone in the theory of stochastic processes, particularly in the study of martingales and their convergence properties. The theorem states the following:
Doob’s Forward Convergence Theorem: Let \((X_n)_{n \geq 1}\) be a supermartingale that is uniformly integrable. Then, there exists a random variable \(X_\infty\) such that \(X_n\) converges to \(X_\infty\) almost surely and in \(L^1\).
Explanation:
Uniform Integrability: This condition is what upgrades almost sure convergence to \(L^1\) convergence. A family \((X_n)\) is uniformly integrable if \(\sup_n E[|X_n| \, I(|X_n| > K)] \to 0\) as \(K \to \infty\); in particular a uniformly integrable family is bounded in \(L^1\).
Supermartingale: A supermartingale is a sequence of random variables where the expected value of the next variable in the sequence, given all prior ones, is at most equal to the current variable. Mathematically, \(E[X_{n+1} \mid \mathcal{F}_n] \leq X_n\) for all \(n\), where \(\mathcal{F}_n\) is the filtration up to time \(n\).
Almost Sure Convergence: The supermartingale \((X_n)\) converges almost surely to \(X_\infty\), meaning that for almost every outcome in the probability space, the sequence \(X_n(\omega)\) converges to \(X_\infty(\omega)\) as \(n \to \infty\).
\(L^1\) Convergence: Besides almost sure convergence, the theorem also guarantees convergence in the mean. This implies \(\lim_{n \to \infty} E[|X_n - X_\infty|] = 0\).
If \(X\) is only bounded in \(L^1\), \(\sup_n E[|X_n|] < \infty\), then the convergence is only almost sure, not necessarily in \(L^1\). For example, the exponential martingale of Brownian motion, \(X_t = \exp(B_t - t/2)\), satisfies \(E[X_t] = 1\) for all \(t\) yet converges almost surely to \(0\).
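The exponential-of-Brownian-motion example can be illustrated numerically. The sketch samples \(X_t = \exp(B_t - t/2)\) at integer times (the path count and horizon are arbitrary choices):

```python
import numpy as np

# X_t = exp(B_t - t/2) has E[X_t] = 1 for every t (lognormal mean), yet
# B_t - t/2 → -∞ a.s., so X_t → 0 almost surely.  The a.s. limit 0 therefore
# cannot be an L^1 limit: E[X_t] stays at 1 while X_t dies pathwise.
rng = np.random.default_rng(2)
n_paths, n_steps = 2000, 400
B = np.cumsum(rng.normal(0.0, 1.0, size=(n_paths, n_steps)), axis=1)
t = np.arange(1, n_steps + 1)
X = np.exp(B - t / 2.0)

print(X[:, 0].mean())   # near 1: E[X_1] = E[exp(B_1 - 1/2)] = 1 exactly
print(X[:, -1].max())   # tiny: every sampled path has collapsed by t = 400
```

Note that the Monte Carlo mean at \(t = 400\) is far below \(1\), even though the true expectation is exactly \(1\): the expectation is carried by astronomically rare paths no finite sample will ever see, which is precisely the failure of uniform integrability.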
This theorem is particularly useful because it assures convergence under the broad scenario of uniform integrability, which is often applicable in practical situations involving supermartingales, such as certain types of financial models and gambling strategies.
The proof uses the fact that there are only finitely many upcrossings of any interval. The set of \(\omega\) where \(X_n(\omega)\) does not converge can be written \[ \{\omega \mid \liminf X_n < \limsup X_n \} = \bigcup_{a,b \in \mathbb{Q},\, a<b} \{\omega \mid \liminf X_n < a < b < \limsup X_n \} = \bigcup \Lambda_{a,b}. \] On each \(\Lambda_{a,b}\) the process makes infinitely many upcrossings of \([a, b]\); since \(E[U_\infty(a,b)] < \infty\) by the upcrossing lemma, each \(\Lambda_{a,b}\) is null, and a countable union of null sets is null, giving a.s. convergence. The limit is integrable by Fatou’s lemma: \[ E[|X_\infty|] = E[\liminf |X_n|] \le \liminf E[|X_n|] \le \sup_n E[|X_n|] < \infty. \]
See Williams, *Probability with Martingales*.