> [!theorem]
>
> Let $L$ be a second-order [[Ordinary Differential Equation|ODE]]
> $
> y^{\prime\prime} + p_1y^\prime + p_2 y = 0
> $
> and let $y_1, y_2$ be solutions of $L$. Then the Wronskian is
> $
> W = \det \begin{bmatrix} y_1 & y_2 \\ y_1^\prime & y_2^\prime \end{bmatrix}
> = y_1y_2^\prime - y_1^\prime y_2
> $
> Since $y_1$ and $y_2$ both solve $L$, we find that
> $
> y_2(y_1^{\prime\prime} + p_1y_1^\prime + p_2y_1) - y_1(y_2^{\prime\prime} + p_1 y_2^\prime + p_2y_2) = 0
> $
> which, after expanding and cancelling the $p_2$ terms, implies
> $
> y_2y_1^{\prime\prime} - y_1y_2^{\prime\prime} + p_1(y_1^\prime y_2 - y_1y_2^\prime) = 0
> $
> But
> $
> W^\prime = y_1y_2^{\prime\prime} + \cancel{y_1^\prime y_2^\prime} - \cancel{y_1^\prime y_2^\prime} - y_1^{\prime\prime}y_2
> $
> so the equation above reads $-W^\prime - p_1W = 0$; that is, $W$ is a solution to the ODE $w^\prime + p_1w = 0$.

> [!theorem] Abel's Identity Case 1
>
> Now we consider the general Wronskian of the $n$th-order equation $y^{(n)} + p_1y^{(n-1)} + \cdots + p_ny = 0$, and restrict the $y_i$ to be elements of the solution space (so each $y_i$ may be assumed $n$ times continuously differentiable).
>
> The determinant of a matrix can be differentiated using the product rule, row by row or column by column (stated without proof):
> $
> W^\prime = \sum_{i = 1}^{n}\det \paren{W \text{ with } i\text{-th row differentiated}}
> $
> Conveniently, differentiating the $i$-th row gives back the $(i+1)$-th row, which creates duplicate rows whenever $i < n$. This means that all but the last determinant are $0$:
> $
> W^\prime = \det (W \text{ with last row differentiated})
> $
> However, $\frac{d^ny_j}{dx^n} = -\sum_{k = 1}^{n}p_k\frac{d^{n-k}y_j}{dx^{n - k}}$ (from the differential equation). Substituting this into the last row and using linearity of the determinant in that row,
> $
> W^\prime = \sum_{k = 1}^{n}-p_k\det \begin{bmatrix} y_1 & \cdots &y_n \\ \vdots & \ddots &\vdots \\
> \frac{d^{n - k}y_1}{dx^{n - k}} &\cdots & \frac{d^{n - k}y_n}{dx^{n - k}}\end{bmatrix}
> $
> We have created repeated rows, again, for every $k > 1$ (the last row duplicates row $n - k + 1$). So
> $
> W^\prime = -p_1W
> $
> and solving this first-order equation gives Abel's identity, $W = c\,e^{-\int p_1\,dx}$ for some constant of integration $c$.

> [!theorem] Abel's Identity Case 2
>
> Since the exponential in Abel's identity never vanishes, if $c \ne 0$ then $W(x) \ne 0$ for all $x \in I$, which is case 1. Otherwise $c = 0$ and $W \equiv 0$ on $I$, and we move on to case 2: it remains to show that the $y_j$ are then linearly dependent. For any function $\phi = \sum_{j = 1}^{n}c_j y_j$ in the solution space,
> $
> \begin{bmatrix} \phi \\ \vdots \\ \frac{d^{n-1}\phi}{dx^{n-1}} \end{bmatrix} =
> \begin{bmatrix} y_1 & \cdots &y_n \\ \vdots & \ddots &\vdots \\
> \frac{d^{n - 1}y_1}{dx^{n - 1}} &\cdots & \frac{d^{n - 1}y_n}{dx^{n - 1}}\end{bmatrix}
> \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix}
> $
> but the determinant of the matrix is $0$ (since we are in case 2). So the matrix is not invertible and its kernel is nontrivial: there are $c_1, \ldots, c_n$, not all zero, for which $\phi$ and its first $n - 1$ derivatives all vanish at a point $x_0 \in I$. By uniqueness of solutions to the initial value problem, $\phi \equiv 0$ on $I$, so the $y_j$ must be linearly dependent, and we have dealt with the other case.

> [!theorem]
>
> If $L[y_j](x) = 0$ for all $x \in I$ and $j \in \{1, \ldots, n\}$, then the following are equivalent:
> - $\list{y}{n}$ form a fundamental set of solutions on $I$.
> - $\list{y}{n}$ is a linearly independent set on $I$.
> - $W(y_1, \cdots, y_n)(x_0) \ne 0$ for some $x_0 \in I$.
> - $W(y_1, \cdots, y_n)(x) \ne 0$ for all $x \in I$.
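
As a quick sanity check of Abel's identity (the equation below is just one convenient choice, with constant coefficients):

> [!example]
>
> Take $y^{\prime\prime} - 3y^\prime + 2y = 0$, so $p_1 = -3$, with solutions $y_1 = e^x$ and $y_2 = e^{2x}$. Then
> $
> W = y_1y_2^\prime - y_1^\prime y_2 = 2e^{3x} - e^{3x} = e^{3x}
> $
> and indeed $W^\prime = 3e^{3x} = -p_1W$, matching $W = c\,e^{-\int p_1\,dx} = c\,e^{3x}$ with $c = 1$. Since $c \ne 0$, the Wronskian never vanishes, so $\{e^x, e^{2x}\}$ is a fundamental set of solutions.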
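
And one instance of case 2 (again, just an illustrative choice):

> [!example]
>
> Take $y_1 = e^x$ and $y_2 = 2e^x$, both solutions of $y^{\prime\prime} - y = 0$ on any interval $I$. Here
> $
> W = y_1y_2^\prime - y_1^\prime y_2 = 2e^{2x} - 2e^{2x} = 0
> $
> everywhere (consistent with $p_1 = 0$, so $W = c$ with $c = 0$), and $2y_1 - y_2 = 0$ is a nontrivial linear dependence, as the last theorem predicts.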