> [!definition]
>
> Let $X$ be a random variable and let $A = \bracs{ \cdot }$ be the event described by the condition $\cdot$. Define
> $
> \ev(X, A) = \ev(X, \cdot) = \int_{A}X \, d\bp = \ev\braks{X \one_A}
> $

> [!theorem]
>
> Let $(\Omega, \cf, \bp)$ be a [[Probability|probability]] space, and $\seq{X_n} \subset L^2$ be a family of [[Probabilistic Independence|independent]] [[Random Variable|random variables]] with $\ev(X_n) = 0$ for all $n \in \nat$. For each $n \in \nat$, let
> - $S_n = \sum_{k \le n}X_k$ be the $n$-th partial sum.
> - $\sigma_n = \sqrt{\ev(X_n^2)}$ be the [[Standard Deviation|standard deviation]].
> - $\Sigma_n = \sqrt{\ev(S_n^2)} = \sqrt{\sum_{k \le n}\ev(X_k^2)} = \sqrt{\sum_{k \le n}\sigma_k^2}$ be the [[Standard Deviation|standard deviation]] of the $n$-th partial sum.
> - $\Sigma_n^2 = \ev(S_n^2) = \sum_{k \le n}\sigma_k^2$ be the variance of the $n$-th partial sum.
> - $\check S_n = S_n/\Sigma_n$ be the normalised $n$-th partial sum, such that $\text{Var}({\check S_n}) = 1$.
>
> and for any $\varepsilon > 0$, define
> $
> \begin{align*}
> g_n(\varepsilon) &= \frac{1}{\Sigma_n^2}\sum_{k \le n}\ev\braks{X_k^2, \abs{X_k} > \varepsilon \Sigma_n} \\
> &= \sum_{k \le n}\ev\braks{\paren{\frac{X_k}{\Sigma_n}}^2, \frac{\abs{X_k}}{\Sigma_n} > \varepsilon}
> \end{align*}
> $
> which collects the contribution of each random variable's large deviations (relative to $\Sigma_n$) to the total variance.
>
> Let $\varphi \in C^3(\real)$[^1] be such that $\norm{D^2\varphi}_u < \infty$ and $\norm{D^3\varphi}_u < \infty$. Then, for any $\varepsilon > 0$,
> $
> \abs{\ev\braks{\varphi(\check S_n)} - \gamma_{0, 1}(\varphi)} \le \frac{1}{2}\paren{\varepsilon + \sqrt{g_n(\varepsilon)}} \cdot \norm{D^3\varphi}_u + g_n(\varepsilon)\norm{D^2\varphi}_u
> $
> In particular, if $g_n(\varepsilon) \to 0$ as $n \to \infty$ for every fixed $\varepsilon > 0$ (**Lindeberg's condition**), then
> $
> \limv{n}\ev\braks{\varphi(\check{S}_n)} = \gamma_{0, 1}(\varphi) = \frac{1}{\sqrt{2\pi}}\int \varphi(x)e^{-x^2/2}dx
> $
>
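> For example, if the $X_k$ are additionally identically distributed with common variance $\sigma^2 > 0$, then $\Sigma_n^2 = n\sigma^2$ and
> $
> g_n(\varepsilon) = \frac{1}{\sigma^2}\ev\braks{X_1^2, \abs{X_1} > \varepsilon \sigma \sqrt{n}} \to 0
> $
> by dominated convergence, so Lindeberg's condition holds automatically in the i.i.d. $L^2$ case and the classical central limit theorem follows.
>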
> ### Insight
>
> As $n \to \infty$, the contribution of each individual $X_j$ to $\check S_n$ becomes small, so $X_j$ can be swapped for a Gaussian random variable with the same variance $\sigma_j^2$ at a cost controlled by $g_n(\varepsilon)$.
>
> ### Mimicking Sequence
>
> Let $\seq{Z_n}$ be a family of independent, identically distributed standard Gaussian random variables, independent of $\seq{X_n}$. Let $Y_n = \sigma_n Z_n$; then $Y_n$ is a centred Gaussian random variable with the same second moment as $X_n$. Let
> $
> T_n = \sum_{k \le n}Y_k \quad \check T_n = T_n/\Sigma_n \sim N(0, 1)
> $
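> Indeed, as a sum of independent Gaussian random variables, $T_n$ is Gaussian with mean $0$ and variance
> $
> \text{Var}(T_n) = \sum_{k \le n}\sigma_k^2 = \Sigma_n^2
> $
> so $\check T_n \sim N(0, 1)$ exactly, and $\ev\braks{\varphi(\check T_n)} = \gamma_{0, 1}(\varphi)$.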
>
> ### Telescope and Taylor's Formula
>
> Swap the original sequence with the mimicking sequence one by one. Let
> $
> U_j = \frac{1}{\Sigma_n}\braks{X_1 + \cdots + X_{j - 1} + Y_{j + 1} + \cdots + Y_n}
> $
> then we can write
> $
> \begin{align*}
> \varphi(\check S_n) - \varphi(\check T_n) &= \varphi\paren{U_n + \frac{X_n}{\Sigma_n}} - \varphi\paren{U_1 + \frac{Y_1}{\Sigma_n}} \\
> &= \sum_{j \in [n]}\braks{\varphi\paren{U_j + \frac{X_j}{\Sigma_n}} - \varphi\paren{U_j + \frac{Y_j}{\Sigma_n}}}
> \end{align*}
> $
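> The sum telescopes because consecutive terms share the intermediate hybrid sums: for $2 \le j \le n$,
> $
> U_j + \frac{Y_j}{\Sigma_n} = \frac{1}{\Sigma_n}\braks{X_1 + \cdots + X_{j - 1} + Y_j + \cdots + Y_n} = U_{j - 1} + \frac{X_{j - 1}}{\Sigma_n}
> $
> so all intermediate terms cancel, leaving only $\varphi(\check S_n) - \varphi(\check T_n)$.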
> Since $\varphi \in C^3(\real)$, for any $h \in \real$, by [[Taylor's Formula]],
> $
> \varphi(U_j + h) = \varphi(U_j) + h \cdot \varphi'(U_j) + \frac{1}{2}h^2 \cdot \varphi''(U_j) + R_j(h)
> $
> where
> $
> \abs{R_j(h)} \le \min\bracs{\frac{1}{6}\abs{h}^3 \cdot \norm{\varphi'''}_u, \abs{h}^2 \cdot \norm{\varphi''}_u}
> $
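> Both bounds follow from the Lagrange form of the remainder: expanding to third order gives $R_j(h) = \frac{h^3}{6}\varphi'''(\xi)$ for some $\xi$ between $U_j$ and $U_j + h$, while expanding only to second order gives
> $
> R_j(h) = \frac{h^2}{2}\braks{\varphi''(\eta) - \varphi''(U_j)}
> $
> for some $\eta$ between $U_j$ and $U_j + h$, which is at most $\abs{h}^2\norm{\varphi''}_u$ in absolute value.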
>
> ### Bounds
>
> Let $W$ be a random variable independent of $U_j$, that is, independent of $\bracs{X_i}_1^{j - 1}$ and $\bracs{Y_i}_{j+1}^n$, with $\ev(W) = 0$ and $\ev(W^2) = \sigma^2$. Then by independence and the Taylor expansion above,
> $
> \begin{align*}
> \ev\braks{\varphi(U_j + W)} &= \ev[\varphi(U_j)] + \ev(W \cdot \varphi'(U_j)) \\
> &+ \frac{1}{2}\ev(W^2 \cdot \varphi''(U_j)) + \ev(R_j(W)) \\
> &= \ev[\varphi(U_j)] + \ev(W) \cdot \ev[\varphi'(U_j)] \\
> &+ \frac{1}{2}\ev(W^2)\ev[\varphi''(U_j)] + \ev(R_j(W)) \\
> &= \ev[\varphi(U_j)] + \frac{\sigma^2}{2}\ev[\varphi''(U_j)] + \ev(R_j(W))
> \end{align*}
> $
> Since $\ev[(Y_j/\Sigma_n)^2] = \ev[(X_j/\Sigma_n)^2] = \sigma_j^2/\Sigma_n^2$, applying this expansion with $W = X_j/\Sigma_n$ and with $W = Y_j/\Sigma_n$ and subtracting cancels everything except the remainders. Therefore
> $
> \ev\braks{\varphi\paren{U_j + \frac{X_j}{\Sigma_n}}} - \ev\braks{\varphi\paren{U_j + \frac{Y_j}{\Sigma_n}}}
> $
> is bounded in absolute value by
> $
> \ev\braks{\abs{R_j\paren{\frac{X_j}{\Sigma_n}}}} + \ev\braks{\abs{R_j\paren{\frac{Y_j}{\Sigma_n}}}}
> $
>
> ### Bounding the First Remainder
>
> Let $\varepsilon > 0$, then the remainder may be split as
> $
> \begin{align*}
> \ev\braks{\abs{R_j\paren{\frac{X_j}{\Sigma_n}}}} &= \ev\braks{\abs{R_j\paren{\frac{X_j}{\Sigma_n}}}, \frac{\abs{X_j}}{\Sigma_n} \le \varepsilon} \\
> &+ \ev\braks{\abs{R_j\paren{\frac{X_j}{\Sigma_n}}}, \frac{\abs{X_j}}{\Sigma_n} > \varepsilon}
> \end{align*}
> $
> Applying the cubic remainder bound and factoring an $\varepsilon$ out on the first term yields
> $
> \begin{align*}
> \ev\braks{\abs{R_j\paren{\frac{X_j}{\Sigma_n}}}, \frac{\abs{X_j}}{\Sigma_n} \le \varepsilon} &\le \frac{1}{6}\norm{\varphi'''}_u \cdot \ev\braks{\abs{\frac{X_j}{\Sigma_n}}^3, \frac{\abs{X_j}}{\Sigma_n} \le \varepsilon} \\
> &\le \frac{\varepsilon}{6}\norm{\varphi'''}_u \cdot \ev\braks{\abs{\frac{X_j}{\Sigma_n}}^2, \frac{\abs{X_j}}{\Sigma_n} \le \varepsilon} \\
> &\le \frac{\varepsilon}{6}\norm{\varphi'''}_u \cdot \frac{\sigma_j^2}{\Sigma_n^2}
> \end{align*}
> $
> Applying the square remainder bound on the second term yields
> $
> \ev\braks{\abs{R_j\paren{\frac{X_j}{\Sigma_n}}}, \frac{\abs{X_j}}{\Sigma_n} > \varepsilon} \le \norm{\varphi''}_u \cdot \ev\braks{\abs{\frac{X_j}{\Sigma_n}}^2, \frac{\abs{X_j}}{\Sigma_n} > \varepsilon}
> $
> Summing over all $j \le n$ and using $\sum_{j \le n}\sigma_j^2 = \Sigma_n^2$,
> $
> \begin{align*}
> \sum_{j \le n}\ev\braks{\abs{R_j\paren{\frac{X_j}{\Sigma_n}}}} &\le \frac{\varepsilon}{6}\norm{\varphi'''}_u \cdot \sum_{j \le n}\frac{\sigma_j^2}{\Sigma_n^2} + \norm{\varphi''}_u \cdot g_n(\varepsilon) \\
> &= \frac{\varepsilon}{6}\norm{\varphi'''}_u + \norm{\varphi''}_u \cdot g_n(\varepsilon)
> \end{align*}
> $
>
> ### Bounding the Second Remainder
>
> Since $Y_j = \sigma_j Z_j$ with $Z_j$ standard Gaussian and $\ev(\abs{Z_j}^3) = 2\sqrt{2/\pi} \le 2$,
> $
> \begin{align*}
> \sum_{j \le n}\ev\braks{\abs{R_j\paren{\frac{Y_j}{\Sigma_n}}}} &\le \frac{1}{6}\norm{\varphi'''}_u \sum_{j \le n}\ev\braks{\abs{\frac{Y_j}{\Sigma_n}}^3} \\
> &= \frac{1}{6}\norm{\varphi'''}_u\sum_{j \le n}\frac{\sigma_j^3}{\Sigma_n^3}\ev(\abs{Z_j}^3) \\
> &\le \frac{1}{3}\norm{\varphi'''}_u \cdot \max_{j \le n}\paren{\frac{\sigma_j}{\Sigma_n}}\sum_{j \le n}\frac{\sigma_j^2}{\Sigma_n^2} = \frac{1}{3}\norm{\varphi'''}_u \cdot \max_{j \le n}\paren{\frac{\sigma_j}{\Sigma_n}}
> \end{align*}
> $
> To control the maximum, split each $\sigma_j^2$ according to the truncation threshold:
> $
> \begin{align*}
> \sigma_j^2 = \ev(X_j^2) &= \ev\braks{X_j^2, \frac{\abs{X_j}}{\Sigma_n} \le \varepsilon} + \ev\braks{X_j^2, \frac{\abs{X_j}}{\Sigma_n} > \varepsilon} \\
> &\le \varepsilon^2 \Sigma_n^2 + \Sigma_n^2\frac{1}{\Sigma_n^2}\ev\braks{X_j^2, \frac{\abs{X_j}}{\Sigma_n} > \varepsilon} \\
> &= \varepsilon^2 \Sigma_n^2 + \Sigma_n^2\ev\braks{\abs{\frac{X_j}{\Sigma_n}}^2, \frac{\abs{X_j}}{\Sigma_n} > \varepsilon} \\
> \frac{\sigma_j^2}{\Sigma_n^2} &\le \varepsilon^2 + \ev\braks{\abs{\frac{X_j}{\Sigma_n}}^2, \frac{\abs{X_j}}{\Sigma_n} > \varepsilon}
> \end{align*}
> $
> If $k$ is the index that maximises the value of $\sigma_j/\Sigma_n$, then
> $
> \begin{align*}
> \max_{j \le n}\paren{\frac{\sigma_j^2}{\Sigma_n^2}} &\le \varepsilon^2 + \ev\braks{\abs{\frac{X_k}{\Sigma_n}}^2, \frac{\abs{X_k}}{\Sigma_n} > \varepsilon} \\
> &\le \varepsilon^2 + \sum_{j \le n}\ev\braks{\abs{\frac{X_j}{\Sigma_n}}^2, \frac{\abs{X_j}}{\Sigma_n} > \varepsilon} \\
> \max_{j \le n}\paren{\frac{\sigma_j^2}{\Sigma_n^2}}&\le \varepsilon^2 + g_n(\varepsilon) \\
> \max_{j \le n}\paren{\frac{\sigma_j}{\Sigma_n}} &\le \sqrt{\varepsilon^2 + g_n(\varepsilon)} \le \varepsilon + \sqrt{g_n(\varepsilon)}
> \end{align*}
> $
> Hence
> $
> \sum_{j \le n}\ev\braks{\abs{R_j\paren{\frac{Y_j}{\Sigma_n}}}} \le \frac{1}{3}\norm{\varphi'''}_u \cdot\paren{\varepsilon + \sqrt{g_n(\varepsilon)}}
> $
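> The third absolute moment of the standard Gaussian used above can be computed directly: substituting $u = x^2/2$,
> $
> \ev\paren{\abs{Z_1}^3} = \frac{2}{\sqrt{2\pi}}\int_0^\infty x^3 e^{-x^2/2}dx = \frac{2}{\sqrt{2\pi}}\int_0^\infty 2u e^{-u}du = \frac{4}{\sqrt{2\pi}} = 2\sqrt{\frac{2}{\pi}} \approx 1.6
> $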
>
> ### Composing the Difference
>
> Combining the two bounds, we get that
> $
> \begin{align*}
> \abs{\ev\braks{\varphi(\check S_n)} - \ev\braks{\varphi(\check T_n)}} &\le \sum_{j \le n}\ev\braks{\abs{R_j\paren{\frac{X_j}{\Sigma_n}}}} + \sum_{j \le n}\ev\braks{\abs{R_j\paren{\frac{Y_j}{\Sigma_n}}}} \\
> &\le \frac{\varepsilon}{6}\norm{\varphi'''}_u
> + \norm{\varphi''}_u \cdot g_n(\varepsilon)
> + \frac{1}{3}\norm{\varphi'''}_u \cdot \paren{\varepsilon + \sqrt{g_n(\varepsilon)}} \\
> &\le \frac{1}{2}\norm{\varphi'''}_u \cdot \paren{\varepsilon + \sqrt{g_n(\varepsilon)}} + \norm{\varphi''}_u \cdot g_n(\varepsilon)
> \end{align*}
> $
> Since $\check T_n \sim N(0, 1)$, $\ev\braks{\varphi(\check T_n)} = \gamma_{0, 1}(\varphi)$, which yields the stated bound.
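>
> As a numerical illustration, here is a minimal Monte-Carlo sketch; the distributions, test function, and sample sizes below are arbitrary illustrative choices (assuming NumPy is available).
>
> ```python
> import numpy as np
>
> rng = np.random.default_rng(0)
> n, trials = 500, 20_000
>
> # Illustrative (arbitrary) choice: X_k uniform on [-a_k, a_k], so E[X_k] = 0 and
> # sigma_k^2 = a_k^2 / 3; the a_k grow slowly, so no single summand dominates.
> a = np.arange(1, n + 1) ** 0.1
> sigma2 = a ** 2 / 3
> Sigma_n = np.sqrt(sigma2.sum())
>
> X = rng.uniform(-a, a, size=(trials, n))   # each row is one sample of (X_1, ..., X_n)
> S_check = X.sum(axis=1) / Sigma_n          # normalised partial sums
>
> phi = np.cos                               # a smooth test function with bounded derivatives
>
> # Monte-Carlo estimates of E[phi(check S_n)] and gamma_{0,1}(phi) (= exp(-1/2) for cos).
> z = rng.standard_normal(200_000)
> print("E[phi(check S_n)] ~", phi(S_check).mean())
> print("gamma_{0,1}(phi)  ~", phi(z).mean())
>
> # Crude upper bound on the Lindeberg sum g_n(eps): the truncated second moment of the
> # bounded X_k vanishes once a_k <= eps * Sigma_n, and is at most sigma_k^2 otherwise.
> eps = 0.1
> g_n_upper = np.where(a > eps * Sigma_n, sigma2, 0.0).sum() / Sigma_n ** 2
> print("g_n(eps) <=", g_n_upper)
> ```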

> [!theorem]
>
> Under the same assumptions as the above theorem, together with Lindeberg's condition: if $\varphi \in C_c^\infty(\real)$, then
> $
> \limv{n}\ev\braks{\varphi(\check S_n)} = \gamma_{0, 1}(\varphi)
> $
> In particular, this implies that for every interval $[a, b] \subset \real$,
> $
> \limv{n}\bp(a \le \check S_n \le b) = \gamma_{0, 1}([a, b])
> $
> *Proof*. Let $\seq{\varphi_m: \real \to [0, 1]}$ be a sequence of smooth, compactly supported bump functions with $\varphi_m|_{[a, b]} = 1$ and $\varphi_m \to \one_{[a, b]}$ pointwise. For each fixed $m$,
> $
> \limsup_n \bp\paren{\check S_n \in [a, b]} \le \limv{n}\ev\braks{\varphi_m(\check S_n)} = \gamma_{0, 1}(\varphi_m)
> $
> and letting $m \to \infty$ gives $\gamma_{0, 1}(\varphi_m) \to \gamma_{0, 1}(\one_{[a, b]})$ by the dominated convergence theorem. Similarly, choosing another sequence of smooth bump functions with $\supp{\varphi_m} \subset [a, b]$ and $\varphi_m \to \one_{(a, b)}$ pointwise yields
> $
> \liminf_n \bp\paren{\check S_n \in [a, b]} \ge \limv{m}\gamma_{0, 1}(\varphi_m) = \gamma_{0, 1}(\one_{(a, b)}) = \gamma_{0, 1}(\one_{[a, b]})
> $
> since $\gamma_{0, 1}$ assigns no mass to the endpoints.
*Stein's method:* under additional assumptions (such as $\sup_{n \in \nat}\norm{X_n}_3 < \infty$), the rate at which $\bp(\check S_n \le x)$ converges to $\gamma_{0, 1}((-\infty, x])$ for $x \in \real$ can be quantified explicitly.
[^1]: This condition can be relaxed to $\varphi$ being continuous with at most quadratic growth.