> [!definition]
> Let $X = \seq{X_n}$ be a [[Sequence|sequence]] of [[Random Variable|random variables]] over a [[Probability|probability space]] $(\Omega, \cf, \bp)$. For any $M \subset \nat$, define
> $
> \mathcal{T}_M = \mathcal{T}_M(X) = \sigma(X_m, m \in \nat \setminus M)
> $
> to be the [[Sigma Algebra|sigma algebra]] generated by *ignoring* the random variables indexed by $M$. Then the **tail $\sigma$-field**
> $
> \mathcal T(X) = \bigcap_{M \subset \nat: \abs{M} < \infty}\mathcal T_M
> $
> is the $\sigma$-field obtained by ignoring *any* finite set of random variables.

> [!theoremb] Theorem
> Let $X = \seq{X_n}$ be a sequence of [[Probabilistic Independence|mutually independent]] random variables on a common probability space $(\Omega, \cf, \bp)$. Then $\bp\bracs{E} \in \bracs{0, 1}$ for all $E \in \mathcal T(X)$.
>
> *Proof*. Let $E \in \mathcal T(X)$ and $n \in \nat$. Since $\mathcal T(X) \subset \mathcal T_{\bracs{1, \dots, n}} = \sigma(X_m, m > n)$ and the $X_m$ are mutually independent, $E$ is independent of every event in $\sigma(X_1, \dots, X_n)$. As $n$ was arbitrary, $E$ is independent of all events in $\mathcal A = \bigcup_{n \in \nat}\sigma(X_1, \dots, X_n)$, which is a $\pi$-system: any two of its members lie in a common $\sigma(X_1, \dots, X_n)$, so $\mathcal A$ is closed under intersection. By Dynkin's $\pi$-$\lambda$ theorem, $E$ is then independent of every event in
> $
> \mathcal G = \sigma(X_n, n \in \nat) = \sigma\paren{\mathcal A}
> $
> In particular, as $\mathcal T(X) \subset \mathcal G$, $E$ is independent of itself, so
> $
> \bp\bracs{E} = \bp\bracs{E \cap E} = \bp\bracs{E}^2
> $
> which forces $\bp\bracs{E} \in \bracs{0, 1}$.
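
As an illustrative sketch (not part of the note above), consider i.i.d. Bernoulli($p$) flips with partial sums $S_n$. The event $\bracs{\limsup_n S_n/n \geq 1/2}$ is unchanged by altering finitely many flips, so it is a tail event, and the theorem says its probability is $0$ or $1$; by the strong law of large numbers $S_n/n \to p$ almost surely, so that probability is $1$ for $p > 1/2$ and $0$ for $p < 1/2$. The Monte Carlo approximation below (function names are hypothetical, and the limsup is approximated by the sample mean at one large $n$) makes the dichotomy visible:

```python
import random

def estimate_tail_prob(p, n_flips=20_000, n_trials=100, seed=0):
    """Approximate P(limsup S_n/n >= 1/2) for i.i.d. Bernoulli(p) flips
    by checking whether the sample mean at a single large n exceeds 1/2.
    The limsup event is a tail event, so the zero-one law forces its
    probability to be 0 or 1; the estimate should therefore sit at an
    extreme rather than at an intermediate value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        heads = sum(rng.random() < p for _ in range(n_flips))
        hits += heads / n_flips >= 0.5
    return hits / n_trials

print(estimate_tail_prob(0.6))  # 1.0: probability-one tail event
print(estimate_tail_prob(0.4))  # 0.0: probability-zero tail event
```

With $n$ this large, a sample mean deviating from $p$ by $0.1$ is vanishingly unlikely (Hoeffding's inequality bounds it by $e^{-400}$), so every trial lands on the same side and the estimate degenerates to $0$ or $1$, as the theorem predicts.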