> [!quote]
>
> Gain in information about an event leads to a change in its probability.
> [!definition]
>
> Let $(\Omega, \cf, \bp)$ be a [[Probability|probability]] space, $\mathcal{G} \subset \cf$ be a [$\sigma$-field](Sigma%20Algebra), and $E \in \cf$ be an event. The **conditional probability** of the event $E$ with respect to $\mathcal{G}$ is the **conditional expectation**
> $
> \ev(\one_E|\mathcal{G}) = \ev(E|\mathcal{G})
> $
> which is $\mathcal{G}$-measurable and satisfies
> $
> \bp(F \cap E) = \int_F \ev(E|\mathcal{G}) \, d\bp \quad \forall F \in \mathcal{G}
> $
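> As a sanity check (a sketch, assuming $0 < \bp(B) < 1$): when $\mathcal{G} = \sigma(B) = \{\emptyset, B, B^c, \Omega\}$ is generated by a single event $B$, this conditional expectation reduces to the elementary notion defined next,
> $
> \ev(E|\mathcal{G}) = \frac{\bp(E \cap B)}{\bp(B)}\one_B + \frac{\bp(E \cap B^c)}{\bp(B^c)}\one_{B^c}
> $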
> [!definition]
>
> Suppose that an [[Event|event]] $A$ has been assigned a [[Probability|probability]], and that **[[Information|information]]** is obtained that another event $B$ has occurred. In light of the knowledge of $B$, the probability of $A$ changes to $P_{B}(A)$, known as the **conditional probability of $A$ given $B$**.
>
> Let $(\Omega, \cm, P)$ be a probability space and let $A, B \in \cm$ be events where $P(B) \ne 0$. Then the **conditional probability** of $A$ given $B$ is defined by
> $
> P_B(A) = P(A|B) = \frac{P(A \cap B)}{P(B)}
> $
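> For a concrete (hypothetical) illustration: roll a fair six-sided die, let $A$ be the event that the roll is even, and let $B$ be the event that the roll is at least $4$. Then
> $
> P_B(A) = \frac{P(A \cap B)}{P(B)} = \frac{P(\{4, 6\})}{P(\{4, 5, 6\})} = \frac{2/6}{3/6} = \frac{2}{3}
> $
> whereas $P(A) = \frac{1}{2}$: the information that $B$ occurred raises the probability of $A$, as the opening quote suggests.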
> [!theorem]
>
> Let $B \in \Sigma(S)$ be an event with $P(B) \ne 0$.
>
> - If $A_1$ and $A_2$ are disjoint events in $\Sigma(S)$, then $P_B(A_1 \cup A_2) = P_B(A_1) + P_B(A_2)$.
> - If $A \in \Sigma(S)$, then $P_B(A^\prime) = 1 - P_B(A)$.
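> Continuing the die illustration above, the complement rule checks out directly: $P_B(A^\prime) = \frac{P(\{5\})}{P(\{4, 5, 6\})} = \frac{1}{3} = 1 - P_B(A)$.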
> [!theorem]
>
> Let $\mathcal{P} = \{E_1, E_2, \cdots, E_n\}$ be a [[Partition|partition]] of the [[Sample Space|sample space]] $S$ with each $P(E_i) \ne 0$, and let $A \in \Sigma(S)$ be an event, then
> $
> P(A) = \sum_{i = 1}^{n}P_{E_i}(A)P(E_i)
> $
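> As a worked instance (with hypothetical numbers): a box is chosen uniformly at random from two boxes, so $P(E_1) = P(E_2) = \frac{1}{2}$; box $E_1$ contains $2$ red and $1$ blue ball, while box $E_2$ contains $1$ red and $3$ blue. For $A$ the event that a ball drawn from the chosen box is red,
> $
> P(A) = P_{E_1}(A)P(E_1) + P_{E_2}(A)P(E_2) = \frac{2}{3} \cdot \frac{1}{2} + \frac{1}{4} \cdot \frac{1}{2} = \frac{11}{24}
> $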
> [!theorem]
>
> Let $\mathcal{P} = \{E_1, E_2, \cdots, E_n\}$ be a [[Partition|partition]] of the [[Sample Space|sample space]] $S$ with each $P(H \cap E_i) \ne 0$, let $A \in \Sigma(S)$ be an event, and let $H \in \Sigma(S)$ be a [[Hypothesis|hypothesis]] with $P(H) \ne 0$, then
> $
> P_H(A) = \sum_{i = 1}^{n}P_{H \cap E_i}(A)P_H(E_i)
> $
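> This identity follows by direct computation (a sketch): since the $E_i$ partition $S$, the event $A \cap H$ decomposes disjointly, so
> $
> \begin{align*}
> P_H(A) = \frac{P(A \cap H)}{P(H)} &= \sum_{i = 1}^{n}\frac{P(A \cap H \cap E_i)}{P(H)} \\
> &= \sum_{i = 1}^{n}\frac{P(A \cap H \cap E_i)}{P(H \cap E_i)} \cdot \frac{P(H \cap E_i)}{P(H)} = \sum_{i = 1}^{n}P_{H \cap E_i}(A)P_H(E_i)
> \end{align*}
> $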
> [!theorem]
>
> Let $X$ and $Y$ be discrete random variables taking values in the sequences $\seq{x_i}$ and $\seq{y_i}$, so that $\sum_{i \in \nat}P(X = x_i) = \sum_{i \in \nat}P(Y = y_i) = 1$. Then for any $g: \real^2 \to \real$ and any $z \in \real$,
> $
> P(g(X, Y) = z) = \sum_{i \in \nat}P(g(x_i, Y) = z|X = x_i)P(X = x_i)
> $
> that is, the event in the [[Conditional Probability|conditional probability]] may be refined by the condition.
>
> *Proof*. Since
> $
> \bracs{g(X, Y) = z} \cap \bracs{X = x_i} = \bracs{g(x_i, Y) = z} \cap \bracs{X = x_i}
> $
> we have
> $
> \begin{align*}
> P(g(X, Y) = z|X = x_i) &= \frac{P(\bracs{g(X, Y) = z} \cap \bracs{X = x_i})}{P(X = x_i)} \\
> &= \frac{P(\bracs{g(x_i, Y) = z} \cap \bracs{X = x_i})}{P(X = x_i)} \\
> &= P(g(x_i, Y) = z|X = x_i)
> \end{align*}
> $
> and the result follows from the law of total probability applied to the partition $\bracs{X = x_i}_{i \in \nat}$.
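> For example, taking $g(x, y) = x + y$ gives a convolution-type formula for the sum of two (not necessarily independent) discrete random variables:
> $
> P(X + Y = z) = \sum_{i \in \nat}P(Y = z - x_i|X = x_i)P(X = x_i)
> $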