Brownian and Poisson Filtrations

notes
llm
mathematics
probability
Predictable vs surprise stopping times and concepts of Brownian and Poisson filtrations
Author

Stephen J. Mildenhall

Published

2026-02-24

Modified

2026-02-24

This note is a summary of some important distinctions in stochastic calculus between predictable vs surprise stopping times and the related concepts of Brownian and Poisson filtrations.

The key lesson is that path shape alone does not determine surprise. The filtration does.

The basic setup

We work on a filtered probability space \[ (\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge 0},\mathsf{P}). \] A filtration is an increasing family of sigma algebras: \[ \mathcal{F}_s\subseteq \mathcal{F}_t \quad\text{for } s\le t. \] Interpretation: \(\mathcal{F}_t\) is the information available by time \(t\).

Stopping times, predictable times, and surprise times

Stopping time

A random time \(\tau:\Omega\to[0,\infty]\) is a stopping time if \[ \{\tau\le t\}\in\mathcal{F}_t \quad\text{for all } t\ge 0. \] Interpretation: by time \(t\), we can tell whether the event has happened. “You know it when it happens.”

Predictable stopping time

A stopping time \(\tau\) is predictable if there exists an increasing sequence of stopping times \((\tau_n)\) such that \[ \tau_n<\tau \text{ on }\{\tau>0\},\qquad \tau_n\uparrow \tau. \] Interpretation: \(\tau\) can be announced from below. This is “not a surprise.”

Totally inaccessible stopping time

A stopping time \(\tau\) is totally inaccessible if it cannot be announced in that way; more precisely, \(\mathsf{P}(\tau=\sigma<\infty)=0\) for every predictable stopping time \(\sigma\). Interpretation: it is a genuine surprise time.

Brownian vs Poisson: the central contrast

Brownian filtration (usual augmented filtration)

If \(B\) is a Brownian motion and \((\mathcal{F}_t)\) is its usual augmented filtration, then every stopping time is predictable. So Brownian stopping times are never surprises. This is a theorem about the filtration, not just about path continuity.

Poisson filtration

If \(N\) is a Poisson process, its jump times are stopping times but are totally inaccessible. So Poisson jump times are genuine surprises.

The standard examples

Brownian hitting time

Let \[ \tau := \inf\{t\ge 0:B_t=1\}. \] This is a stopping time because \[ \{\tau\le t\}=\left\{\sup_{0\le s\le t}B_s\ge 1\right\}\in\mathcal{F}_t. \] It is predictable because we can announce it by hitting lower levels: \[ \tau_n := \inf\{t\ge 0:B_t=1-1/n\}. \] Then \[ \tau_n<\tau,\qquad \tau_n\uparrow\tau. \] The reason is path continuity: to hit 1, Brownian motion must pass through \(1-1/n\) first.
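The announcing sequence can be seen numerically. The sketch below (not part of the argument above; grid size, horizon, and levels are illustrative choices) discretizes one Brownian path and records the first times it reaches \(1-1/n\) and \(1\). In discrete time the strict inequality \(\tau_n<\tau\) can degenerate to \(\le\) when a single step crosses several levels, so only weak inequalities are checked.

```python
import numpy as np

# A minimal simulation sketch: discretize one Brownian path and record the
# first grid times it reaches the levels 1 - 1/n and the level 1.
# The grid size dt and horizon T are arbitrary illustrative choices.
rng = np.random.default_rng(42)
dt, T = 1e-4, 50.0
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), int(T / dt)))])

def first_hit(path, level):
    """First grid time at which path >= level; np.inf if never reached."""
    idx = int(np.argmax(path >= level))
    return idx * dt if path[idx] >= level else np.inf

tau = first_hit(B, 1.0)
taus = [first_hit(B, 1.0 - 1.0 / n) for n in (2, 4, 8, 16, 32)]
# The announcing sequence is nondecreasing and never exceeds tau.
print(taus, tau)
```

By path continuity, hitting a lower level can only happen earlier, which is exactly why the printed sequence is nondecreasing and bounded by \(\tau\).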

Poisson first jump time

Let \[ T_1 := \inf\{t\ge 0:N_t=1\}. \] This is a stopping time because \[ \{T_1\le t\}=\{N_t\ge 1\}\in\mathcal{F}_t. \] But it is totally inaccessible: before \(T_1\), the process is flat at 0, and there is no gradual approach to the jump. The jump happens “all at once.”
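Since \(T_1\) is exponentially distributed with rate \(\lambda\), the identity \(\{T_1\le t\}=\{N_t\ge 1\}\) can be checked by simulation. A small Monte Carlo sketch (rate, horizon, and sample size are illustrative choices):

```python
import numpy as np

# The first jump time of a rate-lam Poisson process is Exp(lam), so
# P(T1 <= t) = P(N_t >= 1) = 1 - exp(-lam * t).
rng = np.random.default_rng(0)
lam, t, n = 2.0, 0.5, 200_000
T1 = rng.exponential(1.0 / lam, n)   # simulated first jump times
p_hat = float(np.mean(T1 <= t))      # Monte Carlo estimate of P(T1 <= t)
p_exact = 1.0 - np.exp(-lam * t)
print(p_hat, p_exact)                # the two numbers agree closely
```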

Raw filtration, completion, and right-continuity

For a process \(X\), the raw natural filtration is \[ \mathcal{F}_t^0:=\sigma(X_s:0\le s\le t). \] In practice, we replace this by the usual augmentation.

Completion (adding null sets)

A sigma algebra may fail to contain subsets of null sets. Completion adds all subsets of null sets. This matters because probability theory constantly works up to almost sure equality, and completion removes null-set measurability pathologies.

Right-continuity

A filtration is right-continuous if \[ \mathcal{F}_t=\mathcal{F}_{t+}:=\bigcap_{u>t}\mathcal{F}_u. \] This says no new information appears by waiting an infinitesimal deterministic amount of time.

Usual augmentation

Starting from the raw natural filtration, the usual augmentation means:

  1. complete it,
  2. take the right-continuous hull.

This gives the “usual conditions” (complete + right-continuous), which are the standard assumptions for many theorems (Doob-Meyer, optional stopping machinery, compensators, etc.).

Subtlety: deterministic times vs random times

Even in a Poisson filtration, one can have nice deterministic-time behavior after augmentation (right-continuity, no fixed-time jumps a.s.). The real distinction appears at random times. That is why the important comparison is not just \(\mathcal{F}_{t-}\) vs \(\mathcal{F}_t\) for fixed \(t\), but \(\mathcal{F}_{\tau-}\) vs \(\mathcal{F}_\tau\) for stopping times \(\tau\).

Doob-Meyer decomposition and compensators

For an adapted increasing process \(X\), Doob-Meyer says (under standard conditions) we can write \[ X_t=A_t+M_t, \] where:

  • \(A\) is predictable and increasing (the compensator),
  • \(M\) is a martingale.

Important point: this is not a split into continuous part and jump part.

It is a split into:

  • predictable accumulation,
  • unpredictable innovation.

A jump can live entirely in the compensator if the jump time is predictable.

The one-jump process

Given a stopping time \(\tau\), define \[ H_t:=1_{\{\tau\le t\}}. \] Pathwise, this always looks the same:

  • 0 before \(\tau\),
  • one jump to 1 at \(\tau\),
  • then constant.

But the Doob-Meyer decomposition depends on the filtration.

Brownian hitting time case

For \[ \tau=\inf\{t\ge 0:B_t=1\}, \] the time \(\tau\) is predictable. In this case, the compensator is the whole process: \[ A_t=H_t,\qquad M_t\equiv 0. \] This means the jump is completely predictable (no martingale surprise part). This feels strange at first, but it is the key idea: a jump is not the same thing as a surprise.

Poisson first jump case

For \[ T_1=\inf\{t\ge 0:N_t=1\}, \qquad H_t=1_{\{T_1\le t\}}, \] the compensator is \[ A_t=\lambda(t\wedge T_1) =\lambda\int_0^t 1_{\{s<T_1\}}\,ds. \] So \[ M_t=H_t-A_t \] is a martingale.

Here:

  • \(A\) is continuous,
  • the jump of size 1 stays in \(M\),
  • so the jump is pure surprise (totally inaccessible).

At the jump time, \[ \Delta A_{T_1}=0,\qquad \Delta H_{T_1}=1,\qquad \Delta M_{T_1}=1. \] This is the signature of a surprise time.
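The martingale property of \(M=H-A\) implies \(\mathsf{E}[M_t]=0\) for every fixed \(t\), which is easy to verify by simulation. A Monte Carlo sketch (rate and sample size are illustrative choices):

```python
import numpy as np

# Check that M_t = H_t - lam * min(t, T1) has mean zero at fixed times t,
# a consequence of the martingale property of the compensated process.
rng = np.random.default_rng(1)
lam, n = 1.5, 500_000
T1 = rng.exponential(1.0 / lam, n)    # first jump times, Exp(lam)
means = []
for t in (0.3, 1.0, 3.0):
    H = (T1 <= t).astype(float)       # one-jump process at time t
    A = lam * np.minimum(t, T1)       # compensator lam * min(t, T1)
    means.append(float((H - A).mean()))
print(means)  # each entry should be close to 0
```

Note that \(A\) is continuous in \(t\) path by path, so the whole jump of \(H\) sits in \(M\), exactly as described above.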

Why the Poisson process compensator is unbounded

For a Poisson process \(N_t\) of rate \(\lambda\), \[ N_t-\lambda t \] is a martingale.

So the compensator of \(N\) is \[ A_t=\lambda t. \] This is unbounded, which can feel odd at first, but it is exactly right: it is cumulative predictable expected arrivals.

  • \(N_t\) counts total jumps up to time \(t\),
  • \(\lambda t\) is the predictable expected count by time \(t\),
  • \(N_t-\lambda t\) is the fluctuation around that trend.

One-jump compensators sum across jumps. If \(T_n\) is the \(n\)th jump time of a Poisson process, then \[ N_t=\sum_{n\ge 1}1_{\{T_n\le t\}}. \]

Each one-jump piece has its own compensator (active only between successive jumps), and these add up. This is why the full compensator is the straight line \(\lambda t\). Intuitively: the Poisson process runs a constant-intensity clock at rate \(\lambda\), and the clocks for successive jumps concatenate.
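The concatenated-clocks picture can be checked directly: build the jump times as cumulative sums of \(\mathrm{Exp}(\lambda)\) interarrival gaps, count jumps by time \(t\), and confirm that \(N_t\) fluctuates around the line \(\lambda t\). A sketch (all parameters are illustrative choices; the number of simulated gaps must comfortably exceed \(\lambda t\)):

```python
import numpy as np

# Jump times as cumulative sums of Exp(lam) interarrival gaps
# (the "clocks concatenate" picture), counted by time t.
rng = np.random.default_rng(2)
lam, n_jumps, n_paths = 3.0, 40, 100_000
gaps = rng.exponential(1.0 / lam, (n_paths, n_jumps))
jump_times = np.cumsum(gaps, axis=1)     # T_1 < T_2 < ... on each path
drifts = []
for t in (1.0, 3.0):                     # keep lam * t well below n_jumps
    N_t = (jump_times <= t).sum(axis=1)  # N_t = sum_n 1_{T_n <= t}
    drifts.append(float(N_t.mean() - lam * t))
print(drifts)  # each entry should be close to 0
```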

The proof that Brownian stopping times are all predictable

A very useful proof idea runs through martingales.

Step 1: Brownian filtration has continuous martingales

In the usual Brownian filtration, every martingale has a continuous version (via martingale representation; heuristically, because such martingales are Ito integrals against the continuous Brownian motion). This is the process-level extension of the factorization lemma discussed below.

Step 2: A surprise stopping time would force a jump martingale

For a stopping time \(\tau\), the process \[ H_t=1_{\{\tau\le t\}} \] has a Doob-Meyer decomposition \[ H=A+M. \] If \(\tau\) were totally inaccessible, its compensator would be continuous, so the jump of \(H\) would appear in the martingale part: \[ \Delta M_\tau=1. \]

Step 3: Contradiction

But Brownian filtrations do not support jump martingales (by the representation theorem), so totally inaccessible stopping times cannot exist. A short additional step (in the Brownian filtration, accessible times are predictable) then upgrades this to: all stopping times are predictable.

Factorization lemma and martingale representation: the connection

Factorization lemma (static)

If \(Y\) is measurable with respect to \(\sigma(X)\), then \[ Y=f(X) \] for some measurable \(f\) (the Doob-Dynkin lemma; this holds when \(X\) takes values in a standard Borel space).

So if a random variable is measurable with respect to Brownian information up to time \(t\), it is a measurable functional of the Brownian path up to time \(t\).

Martingale representation (dynamic)

If \(M\) is a Brownian-filtration martingale, then \[ M_t=M_0+\int_0^t H_s\,dB_s \] for some predictable \(H\).

This is stronger than factorization. It says not only that the martingale depends on Brownian information, but exactly how its dynamics are built from Brownian noise.

This is what drives the “all Brownian martingales are continuous” fact.
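A standard worked example (a textbook computation, not from the note above) makes the representation concrete. Take \[ M_t:=\mathsf{E}[B_1^2\mid\mathcal{F}_t],\qquad t\le 1. \] Independent increments give \[ M_t=B_t^2+(1-t), \] and Ito's formula gives \(B_t^2=t+2\int_0^t B_s\,dB_s\), so \[ M_t=1+\int_0^t 2B_s\,dB_s. \] The representation holds with \(M_0=\mathsf{E}[B_1^2]=1\) and the predictable integrand \(H_s=2B_s\), and \(M\) is visibly continuous.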

Cardinality of Borel sigma algebra and completion

A nice set-theoretic point:

  • the Borel sigma algebra on \(\mathbb{R}\) has cardinality \(\mathfrak{c}\) (continuum),
  • but completing it (e.g. to Lebesgue measurable sets) can blow it up to cardinality \(2^{\mathfrak{c}}\).

Reason: a null Borel set like the Cantor set has cardinality \(\mathfrak{c}\), and completion adds all its subsets, of which there are \(2^{\mathfrak{c}}\). So completion is “small” probabilistically (it only adds null-set subsets) but huge set-theoretically.

Quotienting by null sets

We can “mod out by null sets” just like quotienting in algebra.

Random variables modulo a.s. equality

Define \[ X\sim Y \quad\text{iff}\quad \mathsf{P}(X=Y)=1. \] Then \(L^0\) and \(L^p\) spaces are quotient spaces of measurable functions modulo a.s. equality. This is why the \(L^p\) norm becomes a true norm (instead of a seminorm): we quotient out the kernel.

Events modulo null symmetric difference

Define \[ A\sim B \quad\text{iff}\quad \mathsf{P}(A\triangle B)=0. \] The quotient \(\mathcal{F}/\mathcal{N}\) is the measure algebra (a Boolean algebra modulo the sigma-ideal of null sets). This is exactly the algebraic quotient picture.

Completion vs quotient

These are related but opposite in direction:

  • completion enlarges the sigma algebra (adds all subsets of null sets),
  • quotient collapses distinctions (identifies objects differing only on null sets).

Both formalize the idea that null sets do not matter.

How to distinguish filtration types

From the sets alone, one generally cannot tell “this is Brownian” or “this is Poisson.” You need the probability measure too, and usually martingale behavior. The right intrinsic tests are:

  1. martingale test
    • if every local martingale is continuous, the filtration is Brownian-type (continuous filtration),
    • if there exists a jump martingale, the filtration supports jump information.
  2. stopping-time test
    • if all stopping times are predictable, the filtration is continuous-type,
    • if totally inaccessible stopping times exist, the filtration supports surprise jumps.
  3. random-time information test
    • compare \(\mathcal{F}_{\tau-}\) and \(\mathcal{F}_\tau\),
    • genuine new information at random times indicates jump-type behavior.

Brownian filtration is the continuous prototype. Poisson filtration is the totally-inaccessible-jump prototype.

Conclusion

Two one-jump processes can have exactly the same path shape, but completely different compensators. Why? Because compensators measure predictable information flow, not path shape. This is the big idea:

a jump is not the same thing as a surprise.

Brownian hitting-time jumps can be predictable. Poisson jump-time jumps are surprises. The filtration tells you which is which.