and nonlinear equations is clear. Any two solutions of a linear equation can be added together to form a new solution; this is the superposition principle. In fact, a moment of serious thought allows one to recognize that superposition is responsible for the systematic methods used to solve, independent of other complexities, essentially any linear problem. Fourier and Laplace transform methods, for example, depend on being able to superpose solutions. Putting it naively, one breaks the problem into many small pieces, then adds the separate solutions to get the solution to the whole problem.
In contrast, two solutions of a nonlinear equation cannot be added together to form another solution. Superposition fails. Thus, one must consider a nonlinear problem in toto; one cannot, at least not obviously, break the problem into small subproblems and add their solutions. It is therefore perhaps not surprising that no general analytic approach exists for solving typical nonlinear equations. In fact, as we shall discuss, certain nonlinear equations describing chaotic physical motions have no useful analytic solutions.
Physically, the distinction between linear and nonlinear behavior is best abstracted from examples. For instance, when water flows through a pipe at low velocity, its motion is laminar and is characteristic of linear behavior: regular, predictable, and describable in simple analytic mathematical terms. However, when the velocity exceeds a critical value, the motion becomes turbulent, with localized eddies moving in a complicated, irregular, and erratic way that typifies nonlinear behavior. By reflecting on this and other examples, we can isolate at least three characteristics that distinguish linear and nonlinear physical phenomena.
First, the motion itself is qualitatively different. Linear systems typically show smooth, regular motion in space and time that can be described in terms of well-behaved functions. Nonlinear systems, however, often show transitions from smooth motion to chaotic, erratic, or, as we will see later, even apparently random behavior. The quantitative description of chaos is one of the triumphs of nonlinear science.
Second, the response of a linear system to small changes in its parameters or to external stimulation is usually smooth and in direct proportion to the stimulation. But for nonlinear systems, a small change in the parameters can produce an enormous qualitative difference in the motion. Further, the response to an external stimulation can be different from the stimulation itself: for example, a periodically driven nonlinear system may exhibit oscillations at, say, one-half, one-quarter, or twice the period of the stimulation.
Third, a localized "lump," or pulse, in a linear system will normally decay by spreading out as time progresses. This phenomenon, known as dispersion, causes waves in linear systems to lose their identity and die out, such as when low-amplitude water waves disappear as they move away from the original disturbance. In contrast, nonlinear systems can have highly coherent, stable localized structures, such as the eddies in turbulent flow, that persist either for long times or, in some idealized mathematical models, for all time. The remarkable order reflected by these persistent coherent structures stands in sharp contrast to the irregular, erratic motion that they themselves can undergo.
To go beyond these qualitative distinctions, let me start with a very simple physical system, the plane pendulum, that is a classic example in at least two senses. First, it is a problem that all beginning students solve; second, it is a classic illustration of how we mislead our students about the prevalence and importance of nonlinearity.
Applying Newton's law of motion to the plane pendulum shown in Fig. 1 yields an ordinary second-order differential equation describing the time evolution:
$$\frac{d^2\theta}{dt^2} + \frac{g}{l}\sin\theta = 0, \tag{1}$$

where $\theta$ is the angular displacement from the vertical, $l$ is the length of the pendulum, and $g$ is the acceleration due to gravity. This equation is nonlinear because $\sin(\theta_1 + \theta_2) \neq \sin\theta_1 + \sin\theta_2$.

What happens, however, if we go to the regime of small displacements? The Taylor expansion of $\sin\theta$ ($\approx \theta$ for small $\theta$) tells us that for small $\theta$ the equation is approximately linear:

$$\frac{d^2\theta}{dt^2} + \frac{g}{l}\,\theta \approx 0. \tag{2}$$

The general solution to the linear equation is the superposition of two terms,

$$\theta(t) \approx \theta_0\cos\omega t + \frac{1}{\omega}\left(\frac{d\theta}{dt}\right)_{\!0}\sin\omega t, \tag{3}$$

where $\theta_0$ and $(d\theta/dt)_0$ are the angle and angular velocity at the initial time and the frequency $\omega$ is a constant given by $\omega = \sqrt{g/l}$.
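Superposition in the linear equation, and its failure in the nonlinear one, can be checked by a simple numerical experiment. The sketch below (a minimal pure-Python illustration, assuming g/l = 1 and initial conditions of my own choosing) integrates both the small-angle equation and the full pendulum equation for two initial conditions and for their sum; for the linear equation the summed trajectories reproduce the trajectory of the summed initial condition to rounding error, while for the nonlinear equation they do not.

```python
import math

def simulate(theta0, v0, accel, dt=0.001, steps=4000):
    """Velocity-Verlet integration of theta'' = accel(theta); returns theta(t)."""
    theta, v = theta0, v0
    out = [theta]
    for _ in range(steps):
        a = accel(theta)
        theta = theta + v * dt + 0.5 * a * dt * dt
        v = v + 0.5 * (a + accel(theta)) * dt
        out.append(theta)
    return out

def linear(theta):
    return -theta            # small-angle equation, g/l = 1

def nonlinear(theta):
    return -math.sin(theta)  # full pendulum equation, g/l = 1

# Superposition holds for the linear equation ...
a = simulate(0.8, 0.0, linear)
b = simulate(0.0, 0.8, linear)
ab = simulate(0.8, 0.8, linear)
err_lin = max(abs(x + y - z) for x, y, z in zip(a, b, ab))

# ... but fails for the nonlinear one.
a = simulate(0.8, 0.0, nonlinear)
b = simulate(0.0, 0.8, nonlinear)
ab = simulate(0.8, 0.8, nonlinear)
err_nonlin = max(abs(x + y - z) for x, y, z in zip(a, b, ab))
```

Because the integrator is a linear map in (theta, v) when the right-hand side is linear, superposition survives discretization essentially exactly; for sin(theta) the summed trajectories diverge by a large margin within a few oscillations.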
Equation 3 is the mathematical embodiment of Galileo's famous observation that the frequency of a pendulum is independent of its amplitude. But in fact the result is a consequence of the linear approximation, valid only for small oscillations. If the pendulum undergoes very large displacements from the vertical, its motion enters the nonlinear regime, and one finds that the frequency depends on amplitude, larger excursions having longer periods (see "The Simple But Nonlinear Pendulum"). Of course, grandfather clocks would keep terrible time if the linear equation were not a good approximation; nonetheless, it remains an approximation, valid only for small-amplitude motion.
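The amplitude dependence of the period can be made quantitative. For the undamped pendulum (Eq. 1) the exact period is T = 4 sqrt(l/g) K(sin(theta0/2)), where K is the complete elliptic integral of the first kind, which the sketch below evaluates by the arithmetic-geometric mean. The function name and the default parameters (g = 9.81, l = 1) are illustrative assumptions, not from the article.

```python
import math

def pendulum_period(theta0, g=9.81, l=1.0):
    """Exact period of the undamped plane pendulum for amplitude theta0,
    T = 4*sqrt(l/g)*K(k) with k = sin(theta0/2), where the complete
    elliptic integral K is computed via the arithmetic-geometric mean:
    K(k) = pi / (2 * agm(1, sqrt(1 - k**2)))."""
    k = math.sin(theta0 / 2)
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, math.sqrt(a * b)
    K = math.pi / (2 * a)
    return 4 * math.sqrt(l / g) * K
```

As the amplitude approaches pi (the inverted position), the period diverges; for small amplitudes the familiar 2*pi*sqrt(l/g) is recovered.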
The distinction between the full nonlinear model of the pendulum and its linear approximation becomes substantially more striking when one studies the pendulum's response to an external stimulus. With the effects of both friction and a periodic driving force added, the pendulum equation (Eq. 1) becomes

$$\frac{d^2\theta}{dt^2} + \alpha\frac{d\theta}{dt} + \frac{g}{l}\sin\theta = \Gamma\cos\Omega t, \tag{4}$$

where $\alpha$ is a measure of the frictional force and $\Gamma$ and $\Omega$ are the amplitude and frequency, respectively, of the driving force. In the regime of small displacements, this reduces to the linear equation

$$\frac{d^2\theta}{dt^2} + \alpha\frac{d\theta}{dt} + \frac{g}{l}\,\theta \approx \Gamma\cos\Omega t. \tag{5}$$
A closed-form solution to the linear equation can still be obtained, and the motion can be described analytically for all time. For certain values of a, г, and N, the solution to even the nonlinear equation is periodic and quite similar to that of the linear model. For other values, however, the solution behaves in a complex, seemingly random, unpredictable manner. In this chaotic regime, as we shall later see, the motion of this very simple nonlinear system defies analytic description and can indeed be as random as a coin toss.
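These regimes are easy to explore numerically. The sketch below integrates the damped, driven pendulum (Eq. 4) with a classical fourth-order Runge-Kutta scheme; it assumes g/l = 1, and the function names and parameter values are illustrative choices of mine rather than anything specified in the article.

```python
import math

def pendulum_rhs(t, theta, v, alpha, gamma, omega_d):
    """Damped, driven pendulum with g/l = 1:
    theta'' + alpha*theta' + sin(theta) = gamma*cos(omega_d*t)."""
    return v, -alpha * v - math.sin(theta) + gamma * math.cos(omega_d * t)

def integrate(theta0, v0, alpha, gamma, omega_d, dt=0.01, steps=5000):
    """Fourth-order Runge-Kutta integration; returns the trajectory of theta."""
    t, theta, v = 0.0, theta0, v0
    traj = [theta]
    for _ in range(steps):
        k1t, k1v = pendulum_rhs(t, theta, v, alpha, gamma, omega_d)
        k2t, k2v = pendulum_rhs(t + dt/2, theta + dt/2*k1t, v + dt/2*k1v,
                                alpha, gamma, omega_d)
        k3t, k3v = pendulum_rhs(t + dt/2, theta + dt/2*k2t, v + dt/2*k2v,
                                alpha, gamma, omega_d)
        k4t, k4v = pendulum_rhs(t + dt, theta + dt*k3t, v + dt*k3v,
                                alpha, gamma, omega_d)
        theta += dt/6 * (k1t + 2*k2t + 2*k3t + k4t)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += dt
        traj.append(theta)
    return traj

# With no drive (gamma = 0) the damped pendulum simply relaxes to rest;
# parameter sets in the vicinity of alpha = 0.5, gamma = 1.5, omega_d = 2/3
# are commonly cited as producing chaotic motion.
```

Varying alpha, gamma, and omega_d moves the system between periodic responses that resemble the linear solution and the complex, seemingly random motion described in the text.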
Dynamical Systems: From Simple to Complex. Both the free pendulum and its damped, driven counterpart are particular examples of dynamical systems. The free pendulum is a conservative dynamical system (energy is constant in time), whereas the damped, driven pendulum is a dissipative system (energy is not conserved). Loosely speaking, a dynamical system can be thought of as anything that evolves in time according to a well-defined rule. More specifically, the variables in a dynamical system, such as q and p, the canonical position and momentum, respectively, have a rate of change at a given time that is a function of the values of the variables themselves at that time: $\dot{q}(t) = f(q(t), p(t))$ and $\dot{p}(t) = g(q(t), p(t))$, where a dot signifies differentiation with respect to time. The abstract "space" defined by these variables is called the phase
Fig. 2. The behavior of the simple pendulum is here represented by constant-energy contours in θ-θ̇ (roughly, position-momentum) phase space. The closed curves around the origin (E < 2mgl) represent librations, or periodic oscillations, whereas the open, "wavy" lines for large magnitudes of θ̇ (E > 2mgl) correspond to motions in which the pendulum moves completely around its pivot in either a clockwise (θ̇ < 0) or counterclockwise (θ̇ > 0) sense, causing θ to increase in magnitude beyond 2π. (Figure courtesy of Roger Eckhardt, Los Alamos National Laboratory.)
space, and its dimension is clearly related to the number of variables in the dynamical system.
In the case of the free pendulum, the angular position and velocity at any instant determine the subsequent motion. Hence, as discussed in "The Simple But Nonlinear Pendulum," the pendulum's behavior can be described by the motion of a point in the two-dimensional phase space with coordinates θ and θ̇ (Fig. 2). In the traditional parlance of mechanics, the free pendulum is a Hamiltonian system having "one degree of freedom," since it has only one spatial variable (θ) and one generalized momentum (roughly, θ̇). Further, as discussed in the sidebar, this system is completely integrable, which in effect means that its motion for all time can be solved for analytically in terms of the initial values of the variables.
More typically, dynamical systems involve many degrees of freedom and thus have high-dimensional phase spaces. Further, they are in general not completely integrable. An example of a many-degree-of-freedom system particularly pertinent to our current discussion is the one first studied by Enrico Fermi, John Pasta, and Stan Ulam in the mid-fifties: a group of particles coupled together by nonlinear springs and constrained to move only in one dimension. Now celebrated as the "FPU problem," the model for the system consists of a large set of coupled, ordinary differential equations for the time evolution of the particles (see "The Fermi, Pasta, and Ulam Problem: Excerpts from 'Studies of Nonlinear Problems' "). Specifically, one particular version of the FPU problem has 64 particles obeying the equations
$$\ddot{x}_i = (x_{i+1} + x_{i-1} - 2x_i) + \alpha\left[(x_{i+1} - x_i)^2 - (x_i - x_{i-1})^2\right], \qquad i = 1, 2, \ldots, 64, \tag{6}$$
where α is a measure of the strength of the nonlinear interaction between neighboring particles. Thus there are 64 degrees of freedom and, consequently, a 128-dimensional phase space.
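In code, the right-hand side of Eq. 6 is only a few lines. The sketch below assumes fixed boundary conditions (walls at both ends, x_0 = x_{N+1} = 0, as in the original FPU study); the function name is a hypothetical choice of mine, and the routine works for a chain of any length, not just 64.

```python
def fpu_accelerations(x, alpha):
    """Accelerations of the FPU-alpha chain (Eq. 6), assuming fixed ends:
    the fictitious neighbors x_0 and x_{N+1} are held at zero."""
    n = len(x)
    a = []
    for i in range(n):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < n - 1 else 0.0
        a.append((right + left - 2 * x[i])
                 + alpha * ((right - x[i]) ** 2 - (x[i] - left) ** 2))
    return a
```

With alpha = 0 this is a purely linear (harmonic) chain, whose normal modes never exchange energy; the quadratic alpha term is what couples the modes and produces the famous FPU recurrences.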
Still more complicated, at least a priori, are continuous nonlinear dynamical systems, such as fluids. Here one must define dynamical variables, such as the density ρ(x, t), at every point in space. Hence the number of degrees of freedom, and accordingly the phase-space dimension, becomes infinite; further, the resulting equations of motion become nonlinear partial differential equations. Note that one can view these continuous dynamical systems as the limits of large discrete systems and understand their partial differential equations as the limits of many coupled ordinary differential equations.
We can illustrate this approach using a continuous nonlinear dynamical system that will be important in our later discussion. Hopefully, this example will pique the reader's interest, for it also indicates how elegantly perverse nonlinearity can be. The system is represented by the so-called sine-Gordon equation

$$\frac{\partial^2\theta}{\partial t^2} - \frac{\partial^2\theta}{\partial x^2} + \sin\theta = 0, \tag{7}$$

where the dependent variable θ = θ(x, t) is a measure of the response of the system at position x and time t.
Computationally, one natural way to deal with this system is to introduce a discrete spatial grid with spacing Δx, such that the position of the nth point in the grid is given by $x_n = n\,\Delta x$, and to define $\theta_n(t) = \theta(x_n, t)$ for $n = 1, 2, \ldots, N$. Using a finite-difference approximation for the second spatial derivative,

$$\frac{\partial^2\theta}{\partial x^2}\bigg|_{x = x_n} \approx \frac{\theta_{n+1}(t) - 2\theta_n(t) + \theta_{n-1}(t)}{(\Delta x)^2},$$

leads to a set of N coupled ordinary differential equations,

$$\ddot{\theta}_n(t) - \frac{1}{(\Delta x)^2}\Big(\theta_{n+1}(t) - 2\theta_n(t) + \theta_{n-1}(t)\Big) + \sin\theta_n(t) = 0, \qquad n = 1, 2, \ldots, N.$$
This is a finite degree-of-freedom dynamical system, like the FPU problem. In particular, it is just a set of simple plane pendula, coupled together by the discretized spatial derivative. Of course, the continuous sine-Gordon equation is recovered in the limit that N → ∞ (and thus Δx → 0). The perverseness of nonlinearity is that whereas the Hamiltonian dynamical system described by a finite number N of coupled ordinary differential equations is not completely integrable, the infinite-dimensional Hamiltonian system described by the continuum sine-Gordon equation is! Further, as we shall later demonstrate, the latter system possesses localized "lump" solutions, the famed solitons, that persist for all time.
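The discretization can be checked against a known solution of the continuum equation: the static kink θ(x) = 4 arctan(eˣ) satisfies θ'' = sin θ, so it should make the time-independent part of the discretized sine-Gordon equation balance to within the finite-difference error. The grid size, spacing, and function name below are illustrative choices of mine.

```python
import math

def sine_gordon_residual(theta, dx):
    """Residual of the static discretized sine-Gordon equation,
    -(theta_{n+1} - 2*theta_n + theta_{n-1})/dx**2 + sin(theta_n),
    at the interior grid points (zero for an exact static solution)."""
    return [
        -(theta[n + 1] - 2 * theta[n] + theta[n - 1]) / dx**2
        + math.sin(theta[n])
        for n in range(1, len(theta) - 1)
    ]

# The static kink theta(x) = 4*arctan(exp(x)) satisfies theta'' = sin(theta).
dx = 0.05
xs = [dx * (i - 200) for i in range(401)]          # grid from x = -10 to 10
kink = [4 * math.atan(math.exp(x)) for x in xs]
res = sine_gordon_residual(kink, dx)
print(max(abs(r) for r in res))                    # only discretization error
```

The residual is of order (Δx)², as expected for a centered second difference; the kink interpolates between θ = 0 and θ = 2π, i.e., between two adjacent minima of the pendulum potential.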
Hopefully, this digression on dynamical systems has made the subtlety of nonlinear phenomena quite apparent: very simple nonlinear systems, such as the damped, driven pendulum, can exhibit chaos involving extremely complex, apparently random motions, while very complicated systems, such as the one described by the sine-Gordon equation, can exhibit remarkable manifestations of order. The challenge to researchers in this field is to determine which to expect and when.
Paradigms of Nonlinearity. Before examining in some detail how this challenge is being confronted, we need to respond to some obvious but important questions. First, why study nonlinear science, rather than nonlinear chemistry, or nonlinear physics, or nonlinear biology? Nonlinear science sounds impossibly broad, too interdisciplinary, or "the study of everything." However, the absence of a systematic mathematical framework and the complexity of natural nonlinear phenomena suggest that nonlinear behavior is best comprehended by classifying its various manifestations in many different systems and by identifying and studying their common features. Indeed, both the interest and the power of nonlinear science arise precisely because common concepts are being discovered about systems in very different areas of mathematics and natural sciences. These common concepts, or paradigms, give insight into nonlinear problems in a large number of disciplines at once. By understanding these paradigms, one can hope to understand the essence of nonlinearity as well as its consequences in many fields.
Second, since it has long been known that most systems are inherently nonlinear, why has there been a sudden blossoming of interest in this field in the past twenty years or so? Why weren't many of these fundamental problems solved a century ago? On reflection, one can identify three recent developments whose synergistic blending has made possible revolutionary progress.
The first, and perhaps most crucial, development has been that of high-speed electronic computers, which permit quantitative numerical simulations of nonlinear systems. Indeed, the term experimental mathematics has been coined to describe computer-based investigations into problems inaccessible to analytic methods. Rather than simply confirming quantitatively results already anticipated by qualitative analysis,