C. General Examples
\[ \def\E{\mathsf E} \def\F{\mathcal F} \def\P{\mathsf P} \def\R{\mathbb R} \]
Introduction | Probability background | Stochastic processes | • | Incurred loss processes
1 New from Old
New stochastic processes can be created from old ones in several ways:
- Linear combinations
- Running maximum or minimum
- Reflected \(\max_{s\le t} X_s - X_t\)
- Add or subtract a function, e.g., a compensator
- Apply a function
- Subordination and time-change
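A few of these constructions can be sketched numerically on a single simulated Brownian path. This is a minimal illustration (step size, horizon, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 1000, 0.001
t = np.arange(n + 1) * dt

# Simulated Brownian path: B_0 = 0, independent N(0, dt) increments.
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

running_max = np.maximum.accumulate(B)  # running maximum max_{s<=t} B_s
reflected = running_max - B             # reflected process max_{s<=t} B_s - B_t
squared = B**2                          # apply a function
compensated = B**2 - t                  # subtract the compensator t: a martingale
```

Note that the reflected process is nonnegative by construction, and subtracting the compensator \(t\) from \(B_t^2\) removes its predictable drift.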
2 Convergence Behaviors
- Non-uniformly integrable martingale without convergence:
- Example: Standard Brownian motion \(B_t\).
- Here, \(B_t\) is a martingale that is not uniformly integrable and does not converge almost surely as \(t \to \infty\): by the law of the iterated logarithm, \(\limsup_{t\to\infty} B_t = +\infty\) and \(\liminf_{t\to\infty} B_t = -\infty\) almost surely, so the paths oscillate forever. The variance \(t\) grows without bound, letting the process spread out indefinitely rather than settle toward any limit.
- Non-uniformly integrable martingale that converges almost surely:
- Example: Stochastic exponential of Brownian motion, \(X_t = e^{B_t - t/2}\).
- \(X_t\) converges almost surely to zero as \(t \to \infty\). Since \(B_t/t \to 0\) almost surely, the exponent \(B_t - t/2 \to -\infty\): the deterministic drift \(-t/2\) dominates the sublinear growth of the Brownian motion.
- Despite this almost sure convergence, \(X_t\) is not uniformly integrable: \(\E[X_t] = 1\) for every \(t\), while the almost sure limit is \(0\), so \(X_t \neq \E[0 \mid \F_t]\) and the martingale cannot be closed.
- Generally, geometric Brownian motion, modeled as \(X_t = X_0 e^{(\mu - \frac{1}{2} \sigma^2)t + \sigma W_t}\), tends toward zero as \(t \to \infty\) if \(\mu < \frac{1}{2} \sigma^2\).
- Uniformly integrable martingale:
- Characteristics: A martingale \(X_t\) is uniformly integrable if there exists an integrable random variable \(X\) such that \(X_t = \mathbb{E}[X \mid \mathcal{F}_t]\) almost surely, and the martingale converges both in \(L^1\) and almost surely to \(X\).
- Example conditions: If \(|X_t|\) is bounded by an integrable random variable, or more generally, if \(\sup_t \mathbb{E}[|X_t|^p] < \infty\) for some \(p > 1\), then \(X_t\) is UI. This ensures that the martingale not only converges almost surely but also in expectation, to a well-defined limit \(X\).
- Breakdown of \(L^2\) or higher \(L^p\) convergence:
- \(L^2\) Convergence: The martingale \(X_t = e^{B_t - t/2}\) does not converge in \(L^2\): \(\E[X_t^2] = \E[e^{2B_t - t}] = e^{t}\), which explodes as \(t \to \infty\). The process dies out almost surely, yet its second moment blows up, a clean example of almost sure convergence without convergence in \(L^1\) or \(L^2\).
- Stable Lévy process (LP) with \(\alpha < 1\): here the jumps are heavy-tailed and the mean does not exist:
- No Scale: The sample paths lack a characteristic scale due to the absence of a mean. This leads to self-similar properties where, regardless of how you stretch or compress the time axis, the process’s qualitative behavior looks similar. This scale invariance means that it is challenging to determine from a snapshot of the sample path whether it represents a short interval or a long interval.
- Visual Characterization: The paths would be marked by extreme jumps with no apparent “central line” or trend to provide a reference scale. The jumps can vary so widely that any section of the path might look like any other when rescaled, making it difficult to gauge the time period from the path alone.
- In short, it is impossible to tell how long a sample you are looking at: the paths have no scale and no central tendency.
- Stable LP with \(1 < \alpha < 2\): Here, the process has a finite mean but infinite variance:
- Presence of a Mean Trend: Since the jumps have a mean, the cumulative effect of these jumps over time introduces a discernible trend, roughly proportional to \(tE[\text{jumps}]\). This mean trend provides a scaling factor against which the time scale can be inferred.
- Visual Characterization: The sample paths will show periods of relative calm interspersed with significant jumps. However, unlike the case for \(\alpha < 1\), these jumps accumulate in a way that tends to drift around a line determined by \(tE[\text{jumps}]\), making it possible to infer something about the length of time represented in a sample of the path.
- In short, the existence of a mean but not a variance creates a trend along \(t\,\E[\text{jump}]\), giving some indication of the temporal scale of an observed sample path.
- Non-trivial Lévy Processes:
- Non-trivial Lévy processes, particularly those with a compound Poisson component, have jumps that contribute to the overall variance and heavy-tailed behavior. Because the increments are independent and stationary, the distribution of \(X_t\) does not "reset" at each \(t\) but builds on the previous state: in the distribution view, the law of \(X_t\) spreads steadily outward over time in a funnel shape whose width is governed by the jump sizes and arrival rate.
- Markov Behavior:
- Markov processes can be visualized effectively in both pathwise and distribution views:
- Pathwise: A Markov process only depends on its current state, not on how it arrived there, which can be visualized as paths that have the “memoryless” property, not influenced by past values beyond the current state.
- Distribution View: For the distribution view, the transition probabilities or densities define the evolution of the distribution over time. The state at each \(t\) only depends on the state at \(t-\delta t\), and not on any earlier times, which can be visualized as a flow or shift of the distribution based on the transition rules.
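The contrast between almost-sure convergence and failure of uniform integrability can be checked by simulation. Below is a sketch for the stochastic exponential \(M_t = e^{B_t - t/2}\) (path count, horizon, and seed are arbitrary choices): most terminal values are near zero, yet the mean stays pinned at 1.

```python
import numpy as np

rng = np.random.default_rng(1)
paths, T = 50_000, 5.0

# Terminal value of BM: B_T ~ N(0, T), so sample M_T = exp(B_T - T/2) directly.
B_T = rng.normal(0.0, np.sqrt(T), size=paths)
M_T = np.exp(B_T - T / 2)

# Typical paths die out: the median of M_T is exp(-T/2), about 0.082 for T = 5 ...
print(np.median(M_T))
# ... but E[M_T] = 1 for every T (martingale property), so M is not UI.
print(M_T.mean())
# The second moment E[M_T^2] = exp(T) explodes; the sample estimate below is
# itself extremely noisy, a symptom of the heavy tail.
print((M_T**2).mean())
```

The mass of the distribution collapses toward zero while a vanishing fraction of huge paths keeps the expectation at 1: exactly the picture of a non-uniformly-integrable martingale.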
3 Properties of the Difference Sequence Representation
Let \(X\) be an integrable random variable on a standard probability space and let \((\F_t)\) be a filtration on its sigma algebra. Set \(X_t = \E[X \mid \F_t]\), the conditional expectation of \(X\) given the information \(\F_t\) available at time \(t\). Set \(D_0 = \E[X]\) and \(D_t = X_t - X_{t-1}\), the increment of information "revealed" at time \(t\), so that \(X = \sum_t D_t\) (provided \(X\) is measurable with respect to \(\F_\infty\), so that \(X_t \to X\)). What are the multivariate properties of the \(D_t\)?
The increments \(D_t = X_t - X_{t-1}\) are uncorrelated for different \(t\) (when \(X \in L^2\)) but not, in general, independent. Each \(D_t\) is \(\F_t\)-measurable and captures exactly the new information not present at time \(t-1\); since \(\E[D_t \mid \F_{t-1}] = 0\), conditioning gives \(\E[D_s D_t] = 0\) for \(s < t\). Uncorrelatedness, however, does not imply independence: the magnitude of \(D_t\) can still depend on the history recorded in the filtration.
3.1 Examples of Behavior:
Martingales: If \(X_t\) is a martingale with respect to \(\F_t\), then \(\E[D_t \mid \F_{t-1}] = \E[X_t - X_{t-1} \mid \F_{t-1}] = 0\). Martingales exhibit a “fair game” property where future increments are zero on average, given the past.
Submartingales and Supermartingales: For submartingales, \(\E[D_t \mid \F_{t-1}] \geq 0\), and for supermartingales, \(\E[D_t \mid \F_{t-1}] \leq 0\). These processes model scenarios where there’s a tendency for the variable to increase or decrease on average over time.
Markov Processes: If \(X_t\) is a Markov process, then \(X_t\) is dependent only on its immediate past \(X_{t-1}\) (given the Markov property). However, the increments \(D_t\) can still be dependent on the past through the state of \(X_{t-1}\).
Random Walks: In the case of a simple symmetric random walk, the increments \(D_t\) are independent and identically distributed. However, this is a specific case and not generally true for all processes adapted to a filtration.
In essence, while the uncorrelated nature of \(D_t\) for different \(t\) is common in the context of filtrations and conditional expectations, the dependence structure can vary widely depending on the specifics of the process \(X_t\) and the filtration \(\F_t\).
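These behaviors can be verified exactly on a toy space. In the sketch below, the helper `X_t` (a name introduced here for illustration) computes the Doob martingale \(X_t = \E[X \mid \F_t]\) for \(X\) = number of heads in three fair coin flips, with \(\F_t\) revealing the first \(t\) flips; the martingale-difference property \(\E[D_t \mid \F_{t-1}] = 0\) then holds for every possible history:

```python
import itertools
from fractions import Fraction

# Sample space: three fair coin flips (1 = heads), 8 equally likely outcomes.
flips = list(itertools.product([0, 1], repeat=3))

def X_t(outcome, t):
    """Doob martingale X_t = E[X | F_t] where X = total number of heads:
    heads seen in the first t flips, plus 1/2 for each unrevealed flip."""
    return Fraction(sum(outcome[:t])) + Fraction(3 - t, 2)

# D_t = X_t - X_{t-1} is the information revealed at step t.
# Check E[D_t | F_{t-1}] = 0 exactly, for every time t and every history.
for t in (1, 2, 3):
    for history in itertools.product([0, 1], repeat=t - 1):
        consistent = [w for w in flips if w[:t - 1] == history]
        cond_mean = sum(X_t(w, t) - X_t(w, t - 1) for w in consistent) / len(consistent)
        assert cond_mean == 0
```

Exact rational arithmetic makes the check a proof by enumeration on this small space, rather than a Monte Carlo approximation.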
3.2 Correlation
For the conditional-expectation construction above, the increments are in fact always uncorrelated when \(X \in L^2\): the process \(X_t = \E[X \mid \F_t]\) is a (Doob) martingale, and martingale differences are orthogonal in \(L^2\). The caveat concerns general adapted processes. If \(X_t\) is not of this form, for instance if it carries a predictable trend or long-range memory that conditioning does not remove, then \(D_t\) and \(D_s\) for \(t \neq s\) can be correlated; the correlation structure then depends on how the information in \(\F_t\) relates to the process \(X\).
3.3 Examples
Consider a simple symmetric random walk \(X_t\) which is a martingale. Here, \(X_t = X_{t-1} + \epsilon_t\), where \(\epsilon_t\) are i.i.d. random variables taking values \(-1\) and \(1\) with equal probability \(0.5\). The filtration \(\F_t\) is generated by \(X_0, X_1, ..., X_t\).
Let’s define the increments as \(D_t = X_t - X_{t-1} = \epsilon_t\). While these increments are uncorrelated, they are also independent by the construction of the random walk. To show where independence fails, we need a slightly more complex example.
Consider a martingale where the increments depend on the past in a non-linear way but still maintain the martingale property:
Let’s define a process \(Y_t\) such that \(Y_t = X_t^2 - t\), where \(X_t\) is our simple symmetric random walk. \(Y_t\) is a martingale with respect to the same filtration \(\F_t\).
Now, consider the increments \(D_t = Y_t - Y_{t-1}\). These increments are:
\(D_t = (X_t^2 - t) - (X_{t-1}^2 - (t-1)) = X_t^2 - X_{t-1}^2 - 1\)
Given that \(X_t = X_{t-1} + \epsilon_t\), we can express \(D_t\) in terms of \(\epsilon_t\):
\(D_t = (X_{t-1} + \epsilon_t)^2 - X_{t-1}^2 - 1 = 2X_{t-1}\epsilon_t + \epsilon_t^2 - 1 = 2X_{t-1}\epsilon_t\), since \(\epsilon_t^2 = 1\).
The factor \(X_{t-1}\) ties each increment to the history of the walk: \(|D_t| = 2|X_{t-1}|\) is completely determined by the past. The increments are still uncorrelated, because \(\E[D_t \mid \F_{t-1}] = 2X_{t-1}\,\E[\epsilon_t] = 0\) and hence \(\E[D_s D_t] = 0\) for \(s < t\), but they are clearly not independent: knowing the past fixes the size of the next increment exactly. This illustrates how increments can be uncorrelated (by the martingale property) without being independent.
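A quick Monte Carlo check of this example (sample size and seed are arbitrary choices): since \(\epsilon_t^2 = 1\), the increments reduce to \(D_t = 2X_{t-1}\epsilon_t\), so the sample correlation of, say, \(D_3\) and \(D_5\) is near zero, while their absolute values \(2|X_2|\) and \(2|X_4|\) are strongly correlated, confirming dependence.

```python
import numpy as np

rng = np.random.default_rng(7)
M, n = 200_000, 6
eps = rng.choice([-1, 1], size=(M, n))
X = np.cumsum(eps, axis=1)            # simple symmetric random walk X_1..X_n
Y = X**2 - np.arange(1, n + 1)        # martingale Y_t = X_t^2 - t
D = np.diff(Y, axis=1, prepend=0)     # increments D_t = Y_t - Y_{t-1} (Y_0 = 0)

# Uncorrelated: martingale differences are orthogonal, so corr(D_3, D_5) ~ 0.
print(np.corrcoef(D[:, 2], D[:, 4])[0, 1])
# Not independent: |D_t| = 2|X_{t-1}| is a function of the past, so the
# absolute values are positively correlated.
print(np.corrcoef(np.abs(D[:, 2]), np.abs(D[:, 4]))[0, 1])
```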
4 The Variance Gamma Process
The variance gamma (VG) stochastic process, widely used in financial mathematics, was developed to capture the leptokurtosis and skewness observed in asset returns, in contrast with the normality assumption of simpler models such as Black-Scholes.
The VG process can be understood as a Brownian motion with drift where the time parameter is itself a random variable, following a gamma distribution. This stochastic time change introduces jumps and higher moments (skew and kurtosis) into the model, which are valuable in describing real-world asset return dynamics more accurately.
The mathematical representation of the VG process involves three parameters:
- θ (Theta): drift of the Brownian motion.
- σ (Sigma): volatility of the Brownian motion.
- ν (Nu): the variance rate of the gamma process, which controls the rate of arrival of the jumps.
Formally, the VG process \(X(t)\) can be defined as: \[X(t) = \theta G(t, 1/\nu) + \sigma W(G(t, 1/\nu))\] where \(G(t, 1/\nu)\) is a gamma process with unit mean rate and variance rate \(\nu\) (so \(\E[G(t)] = t\) and \(\operatorname{Var}[G(t)] = \nu t\)) and \(W(\cdot)\) is a standard Brownian motion independent of \(G\).
This setup allows the VG process to model returns with asymmetric behavior and heavier tails, making it suitable for option pricing and risk management applications where normality does not hold.
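The time-change definition translates directly into simulation. A sketch (parameter values and path count are illustrative): at a fixed horizon \(T\), the gamma clock satisfies \(G(T) \sim \mathrm{Gamma}(\text{shape}=T/\nu,\ \text{scale}=\nu)\), and conditionally on \(G\) the VG value is Gaussian.

```python
import numpy as np

rng = np.random.default_rng(42)
theta, sigma, nu, T = -0.1, 0.2, 0.5, 1.0   # illustrative parameters
paths = 100_000

# Gamma time change at horizon T: mean T, variance nu*T.
G = rng.gamma(shape=T / nu, scale=nu, size=paths)

# Conditional on G, X(T) = theta*G + sigma*W(G) with W(G) ~ N(0, G).
X_T = theta * G + sigma * np.sqrt(G) * rng.standard_normal(paths)

print(X_T.mean())  # theory: E[X(T)] = theta*T = -0.1
print(X_T.var())   # theory: Var[X(T)] = (sigma**2 + theta**2*nu)*T = 0.045
```

The variance decomposition \(\operatorname{Var}[X(T)] = \sigma^2 \E[G] + \theta^2 \operatorname{Var}[G] = (\sigma^2 + \theta^2\nu)T\) follows from conditioning on the gamma clock, and the sample moments match it.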
The variance gamma (VG) process exhibits several key properties:
Martingale Property: The VG process is not necessarily a martingale by default. Whether it acts as a martingale depends on the choice of parameters. Typically, a VG process \(X(t)\) defined as \[ X(t) = \theta G(t, 1/\nu) + \sigma W(G(t, 1/\nu)) \] can be a martingale if the drift component \(\theta\) of the Brownian motion component is zero. If \(\theta \neq 0\), the process will have a drift and hence will not be a martingale.
Markov Property: The VG process is a Markov process. This is due to its construction through a time-changed Brownian motion, where the time change \(G(t, 1/\nu)\) and the Brownian motion \(W(t)\) are independent processes. The future values of \(X(t)\) depend only on the present value and not on the path taken to arrive at that value.
Lévy Process: The VG process is a particular case of a Lévy process, which are processes with stationary and independent increments. This property is significant for financial modeling, as it helps in representing the jumps and stochastically varying volatility in asset prices.
Infinitely Divisible: As a Lévy process, the VG process is infinitely divisible: for any positive integer \(n\) and any \(t\), \(X(t)\) has the same distribution as the sum of \(n\) independent and identically distributed random variables (each distributed as \(X(t/n)\)).
These properties make the VG process versatile for financial modeling, particularly in capturing the leptokurtosis and skewness in asset returns, and for extending beyond the log-normal assumption prevalent in models like Black-Scholes.
5 Table of Examples
Recall:
- A Lévy process (LP) is a Markov process (MP)
- A LP with no drift is a martingale (MG) provided it is integrable and the jump component is compensated
- A martingale is a local martingale (LM)
- Every LM and every LP is a semimartingale (SM)
Process Name | Martingale | Local Martingale | Markov Process | Lévy Process |
---|---|---|---|---|
Brownian Motion (BM) | Yes | Yes | Yes | Yes |
BM with Drift | No (drift) | Yes | Yes | Yes |
Geometric BM \(e^{\mu t + \sigma B_t}\), \(\mu \neq -\sigma^2/2\) | No (drift) | No (drift) | Yes | No |
As above, \(\mu = -\sigma^2/2\) (stochastic exponential) | Yes | Yes | Yes | No |
Reflected BM \(|B_t|\) | No (local-time drift) | No | Yes | No |
Running Max BM | No (increasing) | No (increasing) | No (needs current BM value too) | No |
\(BM^2\) (Squared BM) | No (drift) | No (drift \(t\)) | Yes | No |
\(BM^2-t\) | Yes | Yes | Yes | No |
Random Walk (+/-1 increments equally likely) | Yes | Yes | Yes | Yes |
Random Walk (\(p\not=1/2\)) | No (biased) | No (drift) | Yes | Yes (i.i.d. increments) |
Poisson Process | No (drift) | No | Yes | Yes |
Compensated Poisson Process | Yes | Yes | Yes | Yes |
Variance Gamma Process | Depends on drift | Same | Yes | Yes, subordinated |
Bessel Process (2-dimensions) | No (drift) | No | Yes | No (not independent increments) |
Bessel Process (3-dimensions) | No (drift) | No | Yes | No (not independent increments) |
Stopped inverse Bessel process | No (strict local martingale) | Yes | Yes | No |
Inverse Gaussian Process | No (increasing) | No (increasing) | Yes | Yes |