B. Stochastic Processes
\[ \def\E{\mathsf E} \def\F{\mathcal F} \def\P{\mathsf P} \def\R{\mathbb R} \]
1 Definitions
1.1 Markov Process
A stochastic process \(\{X_t\}_{t \geq 0}\) indexed by time \(t\) on a probability space \((\Omega, \F, \P)\) is a Markov process with respect to a filtration \(\{\F_t\}_{t \geq 0}\) (where \(\F_t\) typically represents the history of the process up to time \(t\)) if for every \(s\) and \(t\) with \(0 \leq s < t\) and for every measurable set \(A\) in the Borel σ-algebra of the state space, the following condition holds: \[ \P(X_t \in A \mid \F_s) = \P(X_t \in A \mid X_s) \]
This definition asserts that the conditional probability of the process being in a certain state at a future time \(t\), given all the available information up to time \(s\) (represented by \(\F_s\)), depends only on the state of the process at time \(s\) (\(X_s\)). This captures the Markov property or memoryless property, meaning the future is independent of the past given the present.
The key point here is that the Markov property simplifies the dependency structure of the process significantly: the future state only depends on the current state and not on how that state was reached.
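As a minimal sketch of this idea (my own illustration, not from the text), we can simulate a simple random walk, which is Markov, and check empirically that conditioning on extra history beyond the present state does not change the transition law:

```python
import numpy as np

# Simple random walk: steps are i.i.d. ±1, X_t = sum of first t steps.
# We estimate P(step up | X_2 = 0, X_1 = +1) and P(step up | X_2 = 0, X_1 = -1).
# The Markov property says both should equal P(step up | X_2 = 0) = 1/2.
rng = np.random.default_rng(0)
n_paths = 200_000
steps = rng.choice([-1, 1], size=(n_paths, 3))
X = np.cumsum(steps, axis=1)  # columns are X_1, X_2, X_3 (with X_0 = 0)

at_zero = X[:, 1] == 0        # condition on the present: X_2 = 0
up = steps[:, 2] == 1         # did the next step go up?
p_given_past_up = up[at_zero & (X[:, 0] == 1)].mean()
p_given_past_down = up[at_zero & (X[:, 0] == -1)].mean()
# Both estimates are close to 1/2: the extra past adds no information.
assert abs(p_given_past_up - 0.5) < 0.02
assert abs(p_given_past_down - 0.5) < 0.02
```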
1.2 Martingale
A martingale is a stochastic process that models a fair game. For a stochastic process \(\{X_t\}_{t \geq 0}\) to be a martingale, the expected value of a future state, given all information to date, must equal the present state; equivalently, every increment has conditional expectation zero. Formally, a process \(X_t\) is a martingale with respect to a filtration \(\F_t\) (an increasing sequence of σ-algebras) if for every \(s < t\), it satisfies: \[ \E[X_t \mid \F_s] = X_s. \] Each \(X_t\) must also be integrable: the expected value of the absolute value of \(X_t\) is finite, \[ \E[|X_t|] < \infty. \]
This condition ensures that the expected values considered in the martingale property are well-defined. It’s crucial for preventing scenarios where the expected values could become infinite, which would make the mathematical treatment and interpretation of the process problematic. The integrability condition is a fundamental requirement for the rigorous mathematical definition of martingales.
A submartingale tends to increase: \(X_s \le \E[X_t \mid \F_s]\).
A supermartingale tends to decrease: \(X_s \ge \E[X_t \mid \F_s]\).
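These definitions can be checked empirically in a sketch of my own devising (the processes below are illustrative choices, not examples from the text): a symmetric random walk has zero conditional drift and is a martingale, while adding an upward-biased step makes it a submartingale.

```python
import numpy as np

# E[X_{t+1} | F_t] = X_t + E[step], so the sign of E[step] decides
# martingale (= 0), submartingale (> 0), or supermartingale (< 0).
rng = np.random.default_rng(1)
n = 500_000
fair = rng.choice([-1.0, 1.0], size=n)                  # E[step] = 0
biased = rng.choice([-1.0, 1.0], p=[0.4, 0.6], size=n)  # E[step] = 0.2

assert abs(fair.mean()) < 0.01   # martingale: conditional drift ~ 0
assert biased.mean() > 0.15      # submartingale: X_s <= E[X_t | F_s]
```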
1.3 Local Martingale
A local martingale is a type of stochastic process that relaxes certain strict requirements of a martingale, specifically allowing more general behavior over short intervals while retaining some key martingale-like properties under certain conditions.
A stochastic process \(X = \{X_t, t \geq 0\}\) defined on a filtered probability space \((\Omega, \F, \P, \{\F_t\}_{t \geq 0})\) is called a local martingale if there exists a sequence of stopping times \(\{\tau_n\}\) increasing to infinity almost surely, such that each stopped process \(X^{\tau_n} = \{X_{t \wedge \tau_n}, t \geq 0\}\) is a martingale. Here, \(t \wedge \tau_n\) denotes the minimum of \(t\) and \(\tau_n\). Note that some authors, for example Protter and Jacod–Shiryaev, require that the stopped processes are uniformly integrable martingales.
1.3.1 Key Aspects of the Definition:
Stopping Times \(\tau_n\): These are non-decreasing times chosen such that they tend to infinity as \(n\) increases. The choice of these stopping times depends on the behavior of the process \(X\), typically chosen to ensure that the integrability and martingale properties hold up to these times.
Stopped Process \(X^{\tau_n}\): By stopping the process at \(\tau_n\), you essentially freeze \(X\) at time \(\tau_n\) for all \(t \geq \tau_n\). This truncation is crucial to ensuring that each segment \(X_{t \wedge \tau_n}\) satisfies the martingale properties (i.e., the conditional expectation property and integrability).
Martingale Property of Stopped Processes: For each \(n\), the process \(X^{\tau_n}\) must be a martingale. This means for all \(s \leq t\) and for all \(n\), \[\E[X_{t \wedge \tau_n} \mid \F_s] = X_{s \wedge \tau_n}\] and also that \(\E[|X_{t \wedge \tau_n}|] < \infty\).
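The localization mechanism can be sketched in simulation (my own illustration, under the assumption of a symmetric random walk): stop the process at \(\tau_n\), the first exit time of \([-n, n]\), and check that the stopped process retains the martingale property \(\E[X_{t \wedge \tau_n}] = X_0 = 0\).

```python
import numpy as np

# Symmetric random walk stopped at tau = first time |X| >= level.
# The stopped value X_{t ∧ tau} keeps the martingale mean E[...] = 0.
rng = np.random.default_rng(2)
n_paths, horizon, level = 200_000, 50, 5
steps = rng.choice([-1.0, 1.0], size=(n_paths, horizon))
X = np.cumsum(steps, axis=1)

hit = np.abs(X) >= level
# First index where the level is hit, or the horizon if it never is.
tau = np.where(hit.any(axis=1), hit.argmax(axis=1), horizon - 1)
X_stopped = X[np.arange(n_paths), tau]   # value frozen at t ∧ tau
assert abs(X_stopped.mean()) < 0.05      # E[X_{t ∧ tau}] ≈ 0
```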
1.3.2 Importance and Usage:
Local martingales are a significant generalization of martingales. They are crucial in stochastic calculus, particularly in the formulation and solution of stochastic differential equations where classical martingales may not suffice due to issues like non-integrability or more complex dynamic behaviors. Local martingales retain the essential “fair game” property of martingales over small intervals or under certain conditions, making them suitable for a broader range of applications, including financial modeling and risk assessment.
1.4 Semimartingale
A semimartingale is a class of stochastic processes that broadens the scope of martingales to include a wider variety of processes. This class is crucial for the development of stochastic calculus, particularly for defining stochastic integrals and solving stochastic differential equations.
1.4.1 Formal Definition:
A stochastic process \(X = \{X_t, t \geq 0\}\) defined on a filtered probability space \((\Omega, \mathcal{F}, \mathbb{P}, \{\mathcal{F}_t\}_{t \geq 0})\) is called a semimartingale if it can be decomposed into the sum of a local martingale and a finite variation process. Formally, it can be expressed as: \[ X_t = M_t + A_t \]
where:
- \(M_t\) is a local martingale.
- \(A_t\) is an adapted process of finite variation on compact intervals.
Semimartingales form a vector space. Brownian motion and Poisson processes are semimartingales. All Lévy processes are semimartingales (by the Lévy–Itô decomposition), and all local martingales are semimartingales (Protter). A deterministic process is a semimartingale iff it is of finite variation (Jacod–Shiryaev); hence deterministic processes of infinite variation are not semimartingales.
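A concrete sketch of the decomposition (my own illustrative example, with parameter names of my choosing): Brownian motion with drift, \(X_t = \sigma W_t + \mu t\), is a semimartingale with martingale part \(M_t = \sigma W_t\) and finite-variation part \(A_t = \mu t\).

```python
import numpy as np

# Discretize X_t = sigma * W_t + mu * t on [0, T] and exhibit X = M + A,
# where A is deterministic with total variation mu * T on [0, T].
rng = np.random.default_rng(3)
mu, sigma, T, n = 0.5, 1.0, 1.0, 1000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
M = sigma * np.cumsum(dW)             # (local) martingale part
A = mu * dt * np.arange(1, n + 1)     # finite variation part
X = M + A                             # the semimartingale

# Total variation of A over [0, T] is mu * T, confirming finite variation.
total_variation = np.abs(np.diff(np.concatenate([[0.0], A]))).sum()
assert abs(total_variation - mu * T) < 1e-9
```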
1.4.2 Components of the Definition:
Local Martingale \(M_t\): This component of a semimartingale captures the ‘martingale-like’ properties but does not necessarily have to satisfy the integrability condition that a true martingale would require. It allows the inclusion of processes that behave like martingales over short intervals or under localized conditions.
Adapted Process of Finite Variation \(A_t\): This is a process where the paths are right-continuous with left limits and have finite variation over every finite interval. The finite variation part means that the total variation of the process over any finite interval is finite. It includes deterministic functions like linear functions or more general predictable processes with bounded variation.
1.4.3 Key Aspects and Properties:
Adaptedness: Both components, \(M_t\) and \(A_t\), must be adapted to the filtration \(\{\mathcal{F}_t\}\), meaning that for each \(t\), the values of \(M_t\) and \(A_t\) are based on information up to time \(t\).
Decomposition: The decomposition into a local martingale and a finite variation process is not necessarily unique.
Generalization of Martingales and Local Martingales: Every martingale is a local martingale, and every local martingale is a semimartingale. The addition of the finite variation process allows for capturing non-martingale-like behaviors such as drifts or trends, which are important in applications like financial modeling.
1.4.4 Importance and Applications:
Semimartingales are the most general class of processes for which the tools of stochastic calculus, particularly stochastic integration, are available. They encompass a wide range of processes used in financial mathematics for modeling asset prices and in engineering for signal processing among other fields. This generality makes them fundamental in modern probability theory and its applications.
1.5 Lévy Process
A Lévy process is a stochastic process with stationary, independent increments that starts at zero (\(X_0 = 0\) almost surely). These processes may have jumps, yet they are continuous in probability: the probability that an increment exceeds any fixed threshold tends to zero as the length of the increment interval shrinks to zero. The process \(\{X_t\}_{t \geq 0}\) is a Lévy process if it satisfies:
Independence: Increments over disjoint intervals are independent; for \(0 \le t_0 < t_1 < \cdots < t_n\), the increments \(X_{t_1} - X_{t_0}, \dots, X_{t_n} - X_{t_{n-1}}\) are independent.
Stationarity: The distribution of \(X_t - X_s\) depends only on \(t - s\), not on the values of \(s\) or \(t\).
Stochastic Continuity: For every \(\epsilon > 0\) and for every \(s\),
\[ \lim_{t \to s} \P(|X_t - X_s| > \epsilon) = 0 \]
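The canonical jump example is a Poisson process. The sketch below (my own illustration; the rate `lam` and window `h` are arbitrary choices) samples increments over two disjoint windows of equal length and checks stationarity and independence empirically.

```python
import numpy as np

# For a Poisson process with rate lam, the increment over any interval of
# length h is Poisson(lam * h), independent across disjoint intervals.
rng = np.random.default_rng(4)
lam, h, n_paths = 3.0, 0.5, 200_000
inc1 = rng.poisson(lam * h, size=n_paths)  # N_{0.5} - N_0
inc2 = rng.poisson(lam * h, size=n_paths)  # N_{1.0} - N_{0.5}

assert abs(inc1.mean() - lam * h) < 0.02   # stationarity: law depends only on h
assert abs(inc2.mean() - lam * h) < 0.02
assert abs(np.corrcoef(inc1, inc2)[0, 1]) < 0.01  # independent increments
```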
Each of these processes provides a distinct approach to modeling different types of random behavior observed in various scientific and engineering applications.
2 Results (JS)
Theorem 1 Let \(X\) be a supermartingale such that there exists an integrable random variable \(Y\) with \(X_t\ge \E[Y\mid \F_t]\) for all \(t\). Then
- (Doob’s Limit Theorem) \(X_t\) converges almost surely to a finite limit \(X_\infty\).
- (Doob’s Stopping Theorem) If \(S\) and \(T\) are two stopping times, the random variables \(X_S\) and \(X_T\) are integrable, and \(X_S\ge \E[X_T\mid \F_S]\) on \(\{S\le T\}\). In particular, \(X^T\) is again a supermartingale.
For example, this applies if \(X\) is a non-negative (super)martingale.
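Doob's stopping theorem can be illustrated with a simulation of my own construction (the factors and the stopping rule below are illustrative choices): a product of i.i.d. positive mean-one factors is a non-negative martingale, so Theorem 1 applies with \(Y = 0\), and stopping at a bounded stopping time preserves the mean.

```python
import numpy as np

# M_t = product of i.i.d. factors in {0.5, 1.5} (mean 1): a non-negative
# martingale. Stop at T = first time M >= 2, capped at the horizon, so T is
# a bounded stopping time and E[M_T] = E[M_0] = 1.
rng = np.random.default_rng(5)
n_paths, horizon = 400_000, 20
factors = rng.choice([0.5, 1.5], size=(n_paths, horizon))
M = np.cumprod(factors, axis=1)

hit = M >= 2.0
T = np.where(hit.any(axis=1), hit.argmax(axis=1), horizon - 1)
M_T = M[np.arange(n_paths), T]
assert abs(M_T.mean() - 1.0) < 0.05   # optional stopping: E[M_T] = 1
```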
Theorem 2 (Uniform and square integrable martingales converge)
- A uniformly integrable martingale converges almost surely and in \(L^1\) to a terminal value \(X_\infty\), and \(X_T = \E[X_\infty\mid\F_T]\) for all stopping times \(T\). Moreover, \(X\) is square-integrable iff \(X_\infty\) is square-integrable, in which case convergence also occurs in \(L^2\).
- If \(Y\) is integrable, there exists a UI martingale \(X\), unique up to an evanescent set, such that \(X_t=\E[Y\mid\F_t]\) for all real \(t\); moreover \(X_\infty=\E[Y\mid\F_{\infty^-}]\).
Theorem 3 (Doob’s inequality) If \(X\) is a square-integrable martingale then \[ \E\left( \sup_{t\in\R_+}X_t^2 \right) \le 4 \sup_{t\in\R_+} \E[X_t^2] = 4\E[X_\infty^2]. \]
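Doob's inequality is easy to see numerically. The following sketch (my own check on a finite-horizon random walk, which stands in for the square-integrable martingale) compares a Monte Carlo estimate of the running-maximum side with four times the terminal second moment.

```python
import numpy as np

# Symmetric random walk over n_steps: a square-integrable martingale.
# Compare E[ sup_t X_t^2 ] against 4 * E[ X_n^2 ].
rng = np.random.default_rng(6)
n_paths, n_steps = 100_000, 100
X = np.cumsum(rng.choice([-1.0, 1.0], size=(n_paths, n_steps)), axis=1)

sup_sq = (np.abs(X).max(axis=1) ** 2).mean()   # E[ sup X_t^2 ]
terminal_sq = (X[:, -1] ** 2).mean()           # E[ X_n^2 ]  (≈ n_steps)
assert sup_sq <= 4 * terminal_sq               # Doob's L^2 inequality
```

The bound is far from tight here: for the symmetric walk the left side is roughly twice, not four times, the terminal second moment.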
Theorem 4 Let \(X\) be a càdlàg adapted process with terminal value \(X_\infty\). Then \(X\) is a UI martingale iff, for each stopping time \(T\), \(X_T\) is integrable and satisfies \(\E[X_T] = \E[X_0]\).
2.1 Martingales as a Difference Sequence
It is helpful to think of \(X_t\) in a uniformly integrable (UI) martingale that converges almost surely and in \(L^1\) to a limit \(X\) as a partial sum of the differences \(E[X \mid \mathcal{F}_t] - E[X \mid \mathcal{F}_{t-1}]\). This perspective is rooted in the martingale difference sequence and provides a valuable intuition about the structure and behavior of such martingales. Here’s how this works and why it makes sense.
Martingale Property: By definition, a martingale \((X_t)\) satisfies \(E[X_t \mid \mathcal{F}_{t-1}] = X_{t-1}\), where \(\mathcal{F}_t\) is the filtration up to time \(t\).
Decomposition: Given a martingale that converges to some limit \(X\), we can express \(X\) as: \[ X = \lim_{t \to \infty} X_t = X_0 + \sum_{k=1}^\infty (X_k - X_{k-1}) \] where each term \(X_k - X_{k-1}\) is \(\mathcal{F}_k\)-measurable and has zero expectation given \(\mathcal{F}_{k-1}\) (martingale difference).
Expectation as Conditional Expectation: Since \(X_t\) converges almost surely and in \(L^1\) to \(X\), we can write: \[ X_t = E[X \mid \mathcal{F}_t] \] Then, the increments of the martingale \(X_t\) can be viewed as: \[ X_t - X_{t-1} = E[X \mid \mathcal{F}_t] - E[X \mid \mathcal{F}_{t-1}] \] These increments are the martingale differences, representing the “new information” about the terminal variable \(X\) that becomes available at time \(t\).
Interpretation:
- This view aligns with the intuitive understanding of \(X_t\) as a kind of “partial sum” or accumulation of updates or corrections toward the final value \(X\), as revealed progressively through the filtration \(\mathcal{F}_t\).
- The sequence \((X_t)\) is effectively “learning” more about the outcome \(X\) as time progresses, with each step \(X_t - X_{t-1}\) adjusting the previous estimate based on the new information contained in \(\mathcal{F}_t\) relative to \(\mathcal{F}_{t-1}\).
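The difference-sequence view can be made concrete in a small sketch (my own discrete example, not from the text): take \(X\) to be the terminal sum of \(n\) fair \(\pm 1\) flips, so that \(E[X \mid \mathcal{F}_t]\) is the running sum and the martingale differences are exactly the individual flips revealed at each step.

```python
import numpy as np

# X = sum of n fair ±1 flips; F_t reveals the first t flips.
# Then E[X | F_t] = S_t (running sum), since the unrevealed flips have mean 0,
# and the differences E[X | F_t] - E[X | F_{t-1}] are the flips themselves.
rng = np.random.default_rng(7)
n_paths, n = 100_000, 10
flips = rng.choice([-1.0, 1.0], size=(n_paths, n))
X = flips.sum(axis=1)                  # terminal value
X_t = np.cumsum(flips, axis=1)         # E[X | F_t] for this filtration

diffs = np.diff(np.concatenate([np.zeros((n_paths, 1)), X_t], axis=1), axis=1)
assert np.allclose(diffs, flips)       # differences = newly revealed flips
assert np.allclose(X_t[:, -1], X)      # partial sums of differences recover X
assert abs(diffs[:, 5].mean()) < 0.02  # each difference has mean zero
```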
In conclusion, viewing \(X_t\) as the partial sum of the differences \(E[X \mid \mathcal{F}_t] - E[X \mid \mathcal{F}_{t-1}]\) in a UI martingale that converges to \(X\) both almost surely and in \(L^1\) clarifies the structure of martingales and the role of the filtration in revealing information about the limit variable: the martingale integrates incremental updates into a coherent estimate of the underlying random variable \(X\). This perspective is a powerful conceptual tool in stochastic processes, especially in financial mathematics, signal processing, and other fields dealing with sequential information updating.