Consider the following system of second-order equations
where $x$, $y$, $z$ are functions of the variable $t$, and the matrix $A$ is constant. Despite the linear-ish appearance, the system is nonlinear due to the presence of sines. We are unlikely to find the solution in analytic form; even this simplified problem leaves WolframAlpha stumped. So if you want to find a solution, you'll have to do it numerically. But the theory of ODEs may still give us a qualitative description of the solutions.
The first thing to recall is that any system of ODEs can be converted into a first-order system. We simply introduce the new unknown functions $u = x'$, $v = y'$, $w = z'$ and rewrite the system as 6 equations with 6 unknown functions:
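The reduction to first order is easy to carry out in code. Here is a minimal sketch using a toy scalar equation, $x'' = -x$ (not the system above), reduced to the pair $x' = u$, $u' = -x$ and integrated with a small hand-rolled RK4 stepper; the exact solution $x(t) = \cos t$ provides a check.

```python
# Reduce the toy equation x'' = -x to the first-order system
# x' = u, u' = -x, and integrate it with a classical RK4 step.
import math

def F(state):
    x, u = state
    return (u, -x)            # (x', u') = (u, x'')

def rk4_step(f, state, h):
    k1 = f(state)
    k2 = f(tuple(s + h/2*k for s, k in zip(state, k1)))
    k3 = f(tuple(s + h/2*k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h*k for s, k in zip(state, k3)))
    return tuple(s + h/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Initial condition x(0) = 1, x'(0) = 0; the exact solution is x(t) = cos t.
state, h = (1.0, 0.0), 0.01
for _ in range(100):          # integrate to t = 1
    state = rk4_step(F, state, h)
print(state[0], math.cos(1.0))   # the two values agree to high accuracy
```

The same pattern works for the 6-dimensional system: the state tuple just gets six components instead of two.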
Think of these equations as describing the motion of a “particle” in the 6-dimensional space $\mathbb{R}^6$. The system prescribes the velocity of the particle at every point: we visualize this by imagining a tiny arrow placed at every point of the space. (We are lucky that the system is autonomous: the time $t$ does not appear explicitly. Otherwise we’d have to imagine that the arrows change their length and direction as time goes on. As it is, they are fixed in place.)
Here is a two-dimensional illustration: it corresponds to the system $x' = y$, $y' = -\sin x$, which comes from the second-order ODE $x'' = -\sin x$.
We can notice three different zones in this graph:
- If $y$ is large and positive, the point moves to the right with only a bit of wobbling. In terms of the original equation, this means that if $x'$ is large to begin with, the solution $x$ will keep on increasing indefinitely.
- If $y$ is large and negative, the point moves to the left: so $x$ will keep on decreasing indefinitely.
- If $y$ is small, an oscillatory pattern emerges. It’s only in this zone that the sine term has a substantial effect on the solutions.
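These three behaviors can be checked numerically. The sketch below assumes the pendulum equation $x'' = -\sin x$ for the phase plane (an assumption consistent with the saddles-and-centers picture, though the exact right-hand side is not reproduced here) and integrates it from one initial condition per zone with a hand-rolled RK4 stepper.

```python
# Integrate x' = y, y' = -sin(x) (assumed pendulum form) from three
# initial conditions, one for each zone of the phase portrait.
import math

def f(state):
    x, y = state
    return (y, -math.sin(x))

def rk4_step(state, h):
    def add(s, k, c):        # s + c*k, componentwise
        return tuple(si + c*ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(add(state, k1, h/2))
    k3 = f(add(state, k2, h/2))
    k4 = f(add(state, k3, h))
    return tuple(s + h/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def trajectory(y0, t_end=20.0, h=0.01):
    state, xs = (0.0, y0), [0.0]
    for _ in range(int(t_end / h)):
        state = rk4_step(state, h)
        xs.append(state[0])
    return xs

big_pos = trajectory(3.0)    # y large and positive: x grows without bound
big_neg = trajectory(-3.0)   # y large and negative: x keeps decreasing
small   = trajectory(0.5)    # y small: x oscillates, staying bounded
print(big_pos[-1], big_neg[-1], max(abs(x) for x in small))
```

The first trajectory ends far to the right, the second far to the left, and the third never strays past the nearby turning point, matching the three zones above.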
Similar conclusions can be drawn about the original 6-dimensional system. One can further identify the equilibrium points, the type of each equilibrium (from the plot they look like saddles and centers), and the local stability of the equilibria. But let’s first settle a more basic question: do we have uniqueness of solutions?
The precise form of this question is: given an initial value, is there a unique function that satisfies both the equation (*) and the initial condition? Since it’s tiresome to keep typing these six letters xyzuvw, let’s switch to vector notation: writing
we bring the initial-value problem into a concise form
Here $F$ is a six-dimensional vector field that captures the right-hand side of the system:
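Since (**) has to be solved numerically anyway, here is what that looks like in vector form. For concreteness the sketch assumes a coupled-pendulum shape $\mathbf{x}'' = A\sin\mathbf{x}$ (componentwise sine) with a stand-in $3\times 3$ matrix $A$; the actual matrix from the original system is not reproduced here.

```python
# Solve X' = F(X) for the 6-vector X = (x, y, z, u, v, w).
# The matrix A below is an arbitrary stand-in, and the right-hand
# side x'' = A sin(x) (componentwise sine) is an assumed form.
import numpy as np

A = np.array([[-2.0,  1.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  1.0, -2.0]])

def F(X):
    pos, vel = X[:3], X[3:]
    return np.concatenate([vel, A @ np.sin(pos)])

def rk4_step(X, h):
    k1 = F(X)
    k2 = F(X + h/2*k1)
    k3 = F(X + h/2*k2)
    k4 = F(X + h*k3)
    return X + h/6*(k1 + 2*k2 + 2*k3 + k4)

X = np.array([0.5, -0.3, 0.2, 0.0, 0.0, 0.0])  # initial value
h = 0.01
for _ in range(1000):          # integrate to t = 10
    X = rk4_step(X, h)
print(X)                       # the state at t = 10
```

Nothing in the code cares that the space is 6-dimensional: the vector notation of (**) translates directly into array operations.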
The basic uniqueness theorem says that the system (**) has a unique solution provided that the field $F$ satisfies a Lipschitz condition: namely, there exists a constant $L$ such that $|F(\mathbf{a}) - F(\mathbf{b})| \le L\,|\mathbf{a} - \mathbf{b}|$ for all vectors $\mathbf{a}$ and $\mathbf{b}$ in $\mathbb{R}^6$. The following fact helps:
If a function has bounded partial derivatives of first order, then it is Lipschitz.
There are six variables, so we must look at the six partial derivatives of each of the six components of $F$. One can organize them into a 6-by-6 matrix of partial derivatives, the Jacobian matrix. Here it is:
Everything here is bounded, thanks to the cosine being bounded. Thus, the solution of (**) exists and is unique.
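The Lipschitz bound can even be sanity-checked numerically. Assuming again the stand-in form $\mathbf{x}'' = A\sin\mathbf{x}$ with a hypothetical matrix $A$, the block structure of the Jacobian gives the crude constant $L = \sqrt{1 + \|A\|^2}$, and random sampling should never produce a larger difference quotient.

```python
# Empirical check of the Lipschitz condition |F(a) - F(b)| <= L |a - b|
# for the assumed right-hand side x'' = A sin(x); A is a stand-in matrix.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[-2.0,  1.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  1.0, -2.0]])

def F(X):
    pos, vel = X[:3], X[3:]
    return np.concatenate([vel, A @ np.sin(pos)])

# From the Jacobian blocks [[0, I], [A diag(cos), 0]]: the top rows are
# 1-Lipschitz and the bottom rows are ||A||-Lipschitz (sin is 1-Lipschitz
# and cosine is bounded), so this L works.
L = np.sqrt(1.0 + np.linalg.norm(A, 2)**2)

worst = 0.0
for _ in range(10_000):
    a = rng.uniform(-10, 10, 6)
    b = rng.uniform(-10, 10, 6)
    ratio = np.linalg.norm(F(a) - F(b)) / np.linalg.norm(a - b)
    worst = max(worst, ratio)
print(worst, "<=", L)   # the sampled ratio never exceeds L
```

Of course sampling proves nothing, but it is a quick way to catch an algebra mistake in a hand-computed Lipschitz constant.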
Oddly, I had a bit of difficulty locating a reference for the uniqueness theorem cited above. Basic ODE books like Boyce & DiPrima do not have it: they discuss uniqueness only for the scalar equation. The ODE book by Hartman obscures this topic with generalizations. Back in the day I learned this material from the two-volume Analyse mathématique by Laurent Schwartz (in Russian translation), but that was a peculiarity of my education.
I hear good things about Theory of Ordinary Differential Equations by Coddington and Levinson, but I don’t have it on my shelf. So here is the best reference I can give now:
Basic Theory of Ordinary Differential Equations by Po-Fang Hsieh and Yasutaka Sibuya (at Google books), Theorem I-1-4 on page 3.
There are more general uniqueness theorems, but in our example $F$ is so nice that they are not needed.