Conditional Probability: Background Material
\[ \def\E{\mathsf E} \def\F{\mathscr F} \def\G{\mathscr G} \def\A{\mathscr A} \def\EE{\mathscr E} \def\S{\mathscr S} \def\B{\mathcal B} \def\P{\mathsf P} \def\Q{\mathsf Q} \def\R{\mathbb R} \def\px{\phantom{X}} \]
Kolmogorov Axiomatic Treatment vs. Intuitive Conditional Probability
Conditional Probability as Decomposition
In David Pollard’s work, particularly in the context of probability and measure theory, conditional probability is often framed as a decomposition of measures, offering a sophisticated mathematical perspective that extends beyond the basic formula \(P(A|B) = \frac{P(A \cap B)}{P(B)}\) for events with positive probability.
Decomposition of Measures:
Pollard’s approach to conditional probability involves the concept of a regular conditional probability, which is a function that provides probabilities conditional on an event or a sigma-algebra. This can be seen as decomposing the original probability measure into a family of probability measures, each corresponding to the conditional probability given a specific event or outcome.
Regular Conditional Probability: Given a probability space \((\Omega, \mathcal{F}, P)\) and a sub-sigma-algebra \(\mathcal{G} \subseteq \mathcal{F}\), a regular conditional probability is a function \(P(A \mid \mathcal{G})(\omega)\), defined for \(A \in \mathcal{F}\) and \(\omega \in \Omega\), satisfying three main properties:
- For each fixed \(A \in \mathcal{F}\), the function \(\omega \mapsto P(A \mid \mathcal{G})(\omega)\) is \(\mathcal{G}\)-measurable.
- For each fixed \(\omega\), the function \(A \mapsto P(A \mid \mathcal{G})(\omega)\) is a probability measure on \((\Omega, \mathcal{F})\).
- For every \(A \in \mathcal{F}\) and \(G \in \mathcal{G}\), \(\int_G P(A \mid \mathcal{G})(\omega)\, P(d\omega) = P(A \cap G)\), so the conditional probabilities average back to the original measure \(P\).
Disintegration: This approach can be thought of as a disintegration of the measure \(P\) into a collection of conditional measures given the information in \(\mathcal{G}\). It’s like breaking down the overall probability measure into component parts based on the information encoded in \(\mathcal{G}\).
Application in Analysis: This decomposition is fundamental in advanced probability and analysis, particularly in stochastic processes, where understanding the evolution of a process requires conditioning on past behavior or information.
Generalization: This framework generalizes the simple case of conditional probability for events. Instead of conditioning on a single event, you’re conditioning on the information available in a sigma-algebra, which can represent a much richer set of information, such as the history of a stochastic process up to a certain time.
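On a finite sample space the decomposition becomes completely concrete. The sketch below (an illustrative Python example, not taken from Pollard) conditions a fair die roll on the sigma-algebra generated by the partition into even and odd outcomes: each partition block gets its own renormalized measure, and those conditional measures average back to the original one.

```python
from fractions import Fraction

# Finite sample space: outcomes of one fair die roll.
omega = [1, 2, 3, 4, 5, 6]
P = {w: Fraction(1, 6) for w in omega}

# Sub-sigma-algebra generated by the partition {even, odd}:
# conditioning on it means conditioning on the parity of the outcome.
partition = {"even": {2, 4, 6}, "odd": {1, 3, 5}}

def cond_prob(A, w):
    """P(A | G)(w): find the partition block containing w, then
    restrict P to that block and renormalize."""
    block = next(B for B in partition.values() if w in B)
    return sum(P[x] for x in A & block) / sum(P[x] for x in block)

A = {1, 2, 3}  # the event "roll at most 3"
print(cond_prob(A, 2))  # 1/3: given "even", only {2} lies in A
print(cond_prob(A, 1))  # 2/3: given "odd", {1, 3} lie in A

# Averaging property: conditional probabilities integrate back to P(A).
total = sum(cond_prob(A, w) * P[w] for w in omega)
assert total == sum(P[w] for w in A)  # both equal 1/2
```

Conditioning on a richer sigma-algebra simply means using a finer partition, with one renormalized measure per block.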
Pollard’s measure-theoretic approach to conditional probability offers a robust and comprehensive framework that is essential for dealing with complex probabilistic models, especially where the information evolves or accumulates over time. This perspective is crucial in fields like statistical theory, stochastic processes, and machine learning, where one often needs to condition on an evolving body of information.
Monty Hall Problem
The Monty Hall problem is a well-known probability puzzle that demonstrates counterintuitive results in conditional probability. In this problem, a contestant is presented with three doors: behind one door is a car, and behind the other two are goats. After the contestant chooses a door, the host, who knows what’s behind each door, opens one of the remaining two doors to reveal a goat. The contestant is then given the option to stick with their original choice or switch to the other unopened door.
The counterintuitive solution is that the contestant should always switch doors. The probability of winning by switching is 2/3, while the probability of winning by staying with the original choice is 1/3. This problem highlights the importance of updating probabilities based on new information and challenges our intuitions about probability and decision-making.
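The 1/3 versus 2/3 split is easy to check empirically. The following simulation (an illustrative sketch, not part of the original text) plays the game many times under both strategies; the host's door choice is where the conditioning enters, since he always reveals a goat.

```python
import random

def monty_hall_trial(switch, rng):
    """Play one game; return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    choice = rng.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = rng.choice([d for d in doors if d != choice and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == car

rng = random.Random(0)
n = 100_000
stay = sum(monty_hall_trial(False, rng) for _ in range(n)) / n
swap = sum(monty_hall_trial(True, rng) for _ in range(n)) / n
print(f"stay={stay:.3f}, switch={swap:.3f}")  # near 1/3 and 2/3
```

Staying wins exactly when the original guess was right (probability 1/3); switching wins in the complementary 2/3 of games.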
The Two Envelopes Problem
The Two Envelopes problem is another thought-provoking scenario in the context of conditional probability. In this problem, two envelopes contain money, one with twice as much as the other. A person chooses one envelope at random and then, after seeing the amount inside but not knowing if it’s the larger or smaller amount, must decide whether to keep it or switch to the other envelope.
Intuitively, one might think there is no advantage to switching: the other envelope could contain either half or double the amount, seemingly a 50-50 chance either way. A naive expected-value calculation, however, suggests that if the observed amount is \(X\), the other envelope is worth \(\frac{1}{2}\cdot 2X + \frac{1}{2}\cdot\frac{X}{2} = \frac{5}{4}X\), so switching appears to always increase the expected value and a player should always switch. The flaw is that this calculation treats the other envelope as equally likely to hold \(X/2\) or \(2X\) for every observed value \(X\), a condition that no proper prior distribution on the amounts can satisfy. The paradox highlights the care required in applying conditional probability and conditional expectation, since the framing of a problem can distort our perception of the optimal strategy.
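A quick simulation (an illustrative sketch, with an arbitrarily chosen distribution for the smaller amount) confirms that once the pair of amounts is fixed, always switching yields the same unconditional expectation as always keeping:

```python
import random

def two_envelopes(n, rng):
    """Average winnings of 'keep' vs 'switch' over n rounds, with a
    fresh pair of amounts (x, 2x) drawn each round."""
    keep_total = switch_total = 0.0
    for _ in range(n):
        x = rng.uniform(1, 100)   # smaller amount; the distribution is arbitrary
        envelopes = (x, 2 * x)
        pick = rng.randrange(2)   # envelope chosen uniformly at random
        keep_total += envelopes[pick]
        switch_total += envelopes[1 - pick]
    return keep_total / n, switch_total / n

rng = random.Random(42)
keep, switch = two_envelopes(200_000, rng)
print(f"keep={keep:.2f}, switch={switch:.2f}")  # both near 1.5 * E[x] = 75.75
```

Both strategies average \(\frac{3}{2}\,\E[x]\), since the two envelopes in each round always total \(3x\); the apparent \(\frac{5}{4}X\) gain never materializes.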