Geometric Brownian Motion, Yet Again!

A heuristic explanation of geometric Brownian motion

Author: Stephen J. Mildenhall
Published: 2024-05-02
Categories: notes, probability

Here is a heuristic explanation of geometric Brownian motion.

Let \(X_t\) be a stochastic process whose infinitesimal return over \(dt\) has mean \(\mu\,dt\) and standard deviation \(\sigma\sqrt{dt}\). Here, \(\mu\) is the annual force of interest. \(X\) evolves according to \[ \frac{dX_t}{X_t} = \mu\, dt + \sigma\, dB_t \] where \(B_t\) is a standard Brownian motion, so that over a short step \[ X_{t+dt} = X_t(1 + \mu\,dt + \sigma\, dB_t). \] Taking \(X_0=1\) and compounding over the successive steps, we can write \[ \begin{align} X_t &= \prod_i (1 + \mu\,dt + \sigma\, dB_{t_i}) \\ &= \exp\left( \log \prod_i (1 + \mu\,dt + \sigma\, dB_{t_i}) \right) \\ &= \exp\left( \sum_i\log(1 + \mu\,dt + \sigma\, dB_{t_i}) \right) \\ &= \exp\left( \sum_i\log(1 + \mu\,dt + \sigma \sqrt{dt}\,Z_i) \right) \\ \end{align} \] where the \(Z_i\) are iid standard normals and \(dB_{t_i}=B_{t_{i+1}}-B_{t_i}=\sqrt{dt}\,Z_i\) are the Brownian increments.
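As a quick illustration, here is a minimal simulation sketch (not part of the original note): it builds one path by compounding the discrete increments \(1+\mu\,dt+\sigma\sqrt{dt}\,Z_i\) and compares it with the exact solution \(\exp((\mu-\sigma^2/2)t+\sigma B_t)\) driven by the same normals. The parameter values, seed, and variable names are illustrative choices, not prescribed by the note.

```python
import numpy as np

# Illustrative parameters (not prescribed by the note).
mu, sigma, t = 0.05, 0.20, 1.0
n = 100_000                                   # steps per year, dt = 1/n
dt = 1.0 / n
rng = np.random.default_rng(1)

Z = rng.standard_normal(int(n * t))           # iid standard normals Z_i
B_t = np.sqrt(dt) * Z.sum()                   # B_t built from the same increments

# Compounded product  prod_i (1 + mu dt + sigma sqrt(dt) Z_i)  with X_0 = 1.
X_discrete = np.prod(1.0 + mu * dt + sigma * np.sqrt(dt) * Z)

# Exact GBM solution driven by the same Brownian path.
X_exact = np.exp((mu - 0.5 * sigma**2) * t + sigma * B_t)

print(X_discrete, X_exact)                    # the two values agree closely
```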

The standard approximation for \(\log\) for small \(|x|\) is \[ \log(1+x) =\int \frac{dx}{1+x} = \int (1 - x + x^2 - \cdots)\, dx = x - \frac{x^2}{2} + \frac{x^3}{3} -\cdots. \] Applying it with \(x=\mu\,dt+\sigma\sqrt{dt}\,Z_i\), keeping terms up to order \(dt\), taking \(dt=1/n\), and summing the \(nt\) steps up to time \(t\), the approximation reads \[ \begin{align} \log(X_t) &= \sum_{i=1}^{nt}\left( \mu\, dt+ \sigma \sqrt{dt}\, Z_i - \frac{\sigma^2}{2}Z_i^2\, dt \right) \\ &= \mu t + \sigma\frac{1}{\sqrt{n}} \sum_{i=1}^{nt} Z_i - \frac{\sigma^2}{2}\frac{1}{n}\sum_{i=1}^{nt} Z_i^2. \end{align} \] As \(n\to\infty\), the middle sum is normal with mean \(0\) and variance \(t\) (exactly so here; more generally the central limit theorem gives the same limit), and the final sum converges almost surely to \(t\) by the law of large numbers, since \(\mathsf E[Z_i^2]=1\). Hence the approximation converges in distribution to \[ \log(X_t) = \left(\mu -\frac{\sigma^2}{2}\right)t + \sigma \sqrt t\,Z \] showing that \(X_t\) is lognormal, but not the lognormal you naïvely expect.
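A Monte Carlo sketch of the limiting claim (again illustrative parameters, not from the note): across many simulated paths, the sample mean and variance of \(\log X_t\) should be close to \((\mu-\sigma^2/2)t\) and \(\sigma^2 t\).

```python
import numpy as np

# Illustrative parameters; none of these values are prescribed by the note.
mu, sigma, t = 0.05, 0.20, 2.0
n, paths = 1_000, 10_000                        # steps per path and number of paths
dt = t / n
rng = np.random.default_rng(2)

Z = rng.standard_normal((paths, n))
# log X_t = sum_i log(1 + mu dt + sigma sqrt(dt) Z_i), computed path by path.
logX = np.log1p(mu * dt + sigma * np.sqrt(dt) * Z).sum(axis=1)

print(logX.mean(), (mu - 0.5 * sigma**2) * t)   # sample mean vs (mu - sigma^2/2) t
print(logX.var(), sigma**2 * t)                 # sample variance vs sigma^2 t
```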

The expectation \[ \mathsf E[X_t]= \mathsf E[\exp((\mu-\sigma^2/2)t + \sigma \sqrt t\,Z)] =\exp(\mu t) \] since \(\mathsf E[\exp(\sigma \sqrt t\,Z)]=\exp(\sigma^2t/2)\).
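For completeness, the last expectation is the standard normal moment generating function evaluated at \(\sigma\sqrt t\), obtained by completing the square: \[ \mathsf E[\exp(\sigma\sqrt t\,Z)] = \int_{-\infty}^{\infty} e^{\sigma\sqrt t\,z}\,\frac{e^{-z^2/2}}{\sqrt{2\pi}}\,dz = e^{\sigma^2 t/2}\int_{-\infty}^{\infty} \frac{e^{-(z-\sigma\sqrt t)^2/2}}{\sqrt{2\pi}}\,dz = e^{\sigma^2 t/2}. \]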

It follows that \(Y_t=\exp(B_t - t/2)\) has expectation \(1\) for all \(t\) and is a martingale. We can see this directly using properties of Brownian motion. For \(s<t\), \[ \begin{align} \mathsf E[Y_t\mid \mathcal F_s] &= \mathsf E[\exp(B_t - t/2) \mid \mathcal F_s] \\ &= \mathsf E[\exp(B_s - s/2 + B'_{t-s} - (t-s)/2) \mid \mathcal F_s] \\ &= Y_s\mathsf E[\exp( B'_{t-s} - (t-s)/2) ] \\ &= Y_s \end{align} \] where \(B'_{t-s}=B_t-B_s\) is a Brownian increment independent of \(\mathcal F_s\), so the final expectation equals \(1\) by the moment generating function computation above.
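A quick Monte Carlo sanity check (an illustrative sketch, not part of the note): the sample mean of \(Y_t=\exp(B_t-t/2)\) should be close to \(1\) for each \(t\), though the estimate gets noisier as \(t\) grows because the distribution becomes increasingly skewed.

```python
import numpy as np

rng = np.random.default_rng(3)
paths = 1_000_000                                    # illustrative sample size

for t in (0.5, 1.0, 2.0, 5.0):
    B_t = np.sqrt(t) * rng.standard_normal(paths)    # B_t ~ N(0, t)
    Y_t = np.exp(B_t - t / 2)
    print(t, Y_t.mean())                             # each sample mean close to 1
```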

Notice that \(X_t\to 0\) almost surely if \(\mu-\sigma^2/2<0\): for \(x>0\), \[ \begin{align} \mathsf P(X_t\le x) &= \mathsf P\left(Z \le \frac{\log(x)-(\mu-\sigma^2/2)t}{\sigma\sqrt t} \right) \\ &= \Phi\left(\frac{\log(x)-(\mu-\sigma^2/2)t}{\sigma\sqrt t} \right) \\ &\to 1 \end{align} \] as \(t\to\infty\), because the argument of \(\Phi\) tends to \(\infty\). (Strictly, this shows convergence in probability; almost sure convergence follows from \(B_t/t\to 0\) almost surely, so \(\log X_t=(\mu-\sigma^2/2)t+\sigma B_t\to-\infty\).) When in addition \(\mu\ge 0\), \(X_t\) is an example of a process which converges almost surely but not in \(L^1\), since its mean \(\exp(\mu t)\) does not tend to \(0\), behavior that occurs because the family \((X_t)\) is not uniformly integrable. The martingale \(Y_t=\exp(B_t-t/2)\) (the case \(\mu=0\), \(\sigma=1\)) is exactly such an example: it converges to \(0\) almost surely yet has mean \(1\) for every \(t\).
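The tension is easy to see numerically (an illustrative sketch): for \(Y_t=\exp(B_t-t/2)\), a typical path is already tiny at moderate \(t\), while the sample mean stays near \(1\) only because of a few very large outcomes; for large \(t\) even a big Monte Carlo sample tends to miss the mean, which is the lack of uniform integrability in action.

```python
import numpy as np

rng = np.random.default_rng(4)
paths = 1_000_000                                # illustrative sample size

for t in (1.0, 5.0, 10.0, 20.0):
    Y_t = np.exp(np.sqrt(t) * rng.standard_normal(paths) - t / 2)
    # The median shrinks like exp(-t/2) while E[Y_t] = 1; the sample mean is
    # carried by rare huge paths and becomes unreliable as t grows.
    print(t, np.median(Y_t), Y_t.mean())
```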