
where the m's are the particle masses. Because of the symmetry of the measure defined by Eqs. 11 and 12 under the interchange of the pᵢ's, one can easily show that the average kinetic energy ⟨pᵢ²/2mᵢ⟩ is independent of i. Usually one uses that fact to define a temperature T via ⟨pᵢ²/2mᵢ⟩ = kT/2 (where k is the Boltzmann constant). Such considerations can be extended to the normal modes of a lattice, which will be discussed later, and are generically referred to as the equipartition of energy.
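
A quick numerical check of this statement (my own sketch, not from the article): in the canonical ensemble each momentum pᵢ is Gaussian with variance mᵢkT, so drawing momenta for particles of unequal masses should give the same average kinetic energy, kT/2, for each.

```python
import math, random

# Sketch: sample momenta p_i ~ Gaussian(0, sqrt(m_i * kT)) and verify that
# the average kinetic energy <p_i^2 / 2 m_i> is independent of the mass.
kT = 1.0                        # work in units where kT = 1
masses = [1.0, 4.0, 16.0]       # three illustrative, unequal masses
random.seed(0)

results = {}
for m in masses:
    sigma = math.sqrt(m * kT)   # standard deviation of p_i in equilibrium
    n = 200_000
    avg_ke = sum(random.gauss(0.0, sigma) ** 2 for _ in range(n)) / (2 * m * n)
    results[m] = avg_ke
    print(f"m = {m:5.1f}   <p^2/2m> = {avg_ke:.3f}   (kT/2 = {kT / 2})")
```

All three averages come out close to kT/2 = 0.5, illustrating equipartition.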

Mathematical Results. Having formulated the mathematical problem, it may be of interest to state briefly what rigorous results have been obtained so far about the circumstances under which a flow is fully ergodic.

i) Oxtoby and Ulam proved in 1941 that in a bounded phase space the continuous ergodic transformations are everywhere dense in the space of all continuous measure-preserving transformations. In other words, a topology can be chosen such that ergodic transformations form the "bulk" of the whole space of continuous measure-preserving maps. This theorem says nothing about the measure of the ergodic transformations, which may even be vanishing. (See page 110 in "Learning from Ulam.") A corresponding theorem stating an analogous property of a real dynamical system with a finite number of degrees of freedom does not exist, and in fact the KAM theorem proves the contrary (see below). It is also known that Hamiltonian flows are quite rare among measure-preserving maps, and therefore the Oxtoby and Ulam result guarantees nothing about the density of ergodic Hamiltonian flows in the space of all Hamiltonian flows.

ii) For finite N the Kolmogorov-Arnold-Moser (KAM) theorem (see Arnold and Avez 1968) guarantees that the ergodic hypothesis is violated for a certain class of systems. The theorem considers a completely integrable system (M = N in Eq. 7) and its response to an arbitrary, weak nonlinear perturbation. By a canonical transformation one can show that a completely integrable system with N degrees of freedom is equivalent to N decoupled harmonic oscillators; hence it is a linear system, and its motion in phase space occurs on hypertori rather than on the whole phase space. The KAM theorem states that in the phase space of a weakly nonintegrable (weakly nonlinear) Hamiltonian, some motions still are restricted to tori, and these tori occupy a nonzero measure of the phase space. (Figure 1 shows a typical structure of the phase space of such a system.)
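
The integrable picture behind the KAM theorem can be illustrated numerically (a toy example of my own choosing, not from the text): two decoupled oscillators with incommensurate frequencies confine each orbit to a 2-torus in the angle variables, and a single orbit covers that torus densely rather than wandering over the whole phase space.

```python
import math

# Two decoupled oscillators: the two angle variables advance at fixed
# frequencies omega1, omega2. With an irrational frequency ratio a single
# orbit is quasiperiodic and fills the 2-torus densely; we count how many
# cells of a coarse grid on the torus one orbit eventually visits.
omega1, omega2 = 1.0, math.sqrt(2)   # incommensurate frequency pair
dt, steps = 0.05, 200_000
grid = 20                            # coarse 20 x 20 grid on the torus
visited = set()
for n in range(steps):
    theta1 = (omega1 * n * dt) % (2 * math.pi)
    theta2 = (omega2 * n * dt) % (2 * math.pi)
    visited.add((int(grid * theta1 / (2 * math.pi)),
                 int(grid * theta2 / (2 * math.pi))))
coverage = len(visited) / grid ** 2
print(f"fraction of torus cells visited by one orbit: {coverage:.3f}")
```

The orbit visits essentially every cell of the grid on its torus, yet it never leaves that torus, so it explores only a measure-zero slice of the full phase space.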

PHASE SPACE OF A WEAKLY NONINTEGRABLE HAMILTONIAN SYSTEM

Fig. 1. The system has four degrees of freedom, but conservation of energy allows us to display the phase space in three dimensions, which represent the variables x, y, and ẏ. The phase space contains nested invariant tori on which motion is quasiperiodic, so that a single orbit covers a torus densely. The gaps between the tori are chaotic regions in which the orbits appear as random as the toss of a coin. Since the nested tori have a finite measure in the phase space, this Hamiltonian system violates the ergodic hypothesis.


vi) There exists no satisfactory formulation of the ergodic hypothesis for continuous media (field theory), since it is not known how to generalize the microcanonical measure to systems with an infinite number of degrees of freedom, especially when the total energy of the system is finite. It is interesting that while appropriate ensemble averages have not been defined, the existence of global solutions (in time), and therefore the existence of time averages, for several interesting field theories (such as classical electrodynamics and Yang-Mills theories) has been established (Eardley and Moncrief 1982).

In conclusion, from a mathematical point of view, the ergodic hypothesis has proved to be one of the most difficult problems in the last hundred years or so. Only two flows, both billiards, have been proven to be ergodic. Perhaps today's computers will speed up the rate of analytical progress by helping our intuition about the nature of the flow.

The Physics of the Ergodic Hypothesis

Next I wish to analyze the ergodic hypothesis from a physical point of view. Undoubtedly, a dynamical approach to a physical system with many degrees of freedom, such as a gas, is impossible, and a statistical one must be developed. In doing so one must endeavor to capture the right physics. If the attempt has been really successful, the theory will withstand experimental scrutiny. But what should be done if the predictions go astray, as did the predictions of classical statistical mechanics for blackbody radiation? A sensible approach is to go back and examine what fundamental assumptions were made, which is what I shall do now.

The first question that must be settled is what should be considered "the system." Indeed the instruction in statistical mechanics is to integrate over all canonical positions and momenta with a certain measure. However, one must decide which degrees of freedom to include. For instance, take the case of the diatomic gas. Each molecule has two atoms, each atom has its own electrons and nucleus, and the latter in turn is made of quarks and gluons, say. Moreover, since the constituents are charged, they are

coupled to the electromagnetic field inside the container (and also to the gravitational field). Probably most readers will think that this is not a serious question: at a certain temperature only certain degrees of freedom are excited, and these are the only ones to be integrated over. Hidden within this superficially sensible-sounding answer is one of two extremely important assumptions:

i) The ergodic hypothesis is strictly false, so that certain degrees of freedom, although dynamically coupled, never get excited and act as spectators to the thermal equilibrium that sets in for the remaining degrees of freedom.

ii) Or, the system dynamically develops largely different time scales, and the number of degrees of freedom that are more or less in equilibrium keeps increasing with time.

In either case the use of statistical mechanics becomes more subtle, since only by gaining a good grasp of the underlying dynamics can one decide what degrees of freedom are relevant in certain circumstances. In particular, there is no a priori reason to believe that the contributions to the specific heat of the vibrations and the rotations of a diatomic gas ought to be equal at all temperatures and during a typical time of observation, as was assumed in the classical predictions of statistical mechanics. Neither is there any reason to predict the Rayleigh-Jeans distribution (Fig. 2) for blackbody radiation (which assumes the equipartition of energy between all modes of the electromagnetic field), since some modes of the cavity may be effectively decoupled (case i above) or so weakly coupled that they haven't had time to thermalize (case ii). Thus the standard examples for the breakdown of classical statistical mechanics may reflect an inappropriate application of the ergodic hypothesis rather than a need for quantization, as is usually argued in physics textbooks.
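
The Rayleigh-Jeans failure mentioned above is easy to exhibit numerically (a hedged comparison of my own, not from the article): the Rayleigh-Jeans density u_RJ = (8πν²/c³)kT and the Planck density u_P = (8πν²/c³)·hν/(e^{hν/kT} − 1) agree at low frequency but diverge in the ultraviolet, where equipartition grossly over-counts the energy in the high-frequency modes.

```python
import math

# Compare the classical (equipartition) and quantum spectral energy
# densities of blackbody radiation at T = 1600 K.
h = 6.626e-34    # Planck constant, J s
k = 1.381e-23    # Boltzmann constant, J/K
c = 2.998e8      # speed of light, m/s
T = 1600.0

def u_rj(nu):
    # Rayleigh-Jeans: kT per mode times the mode density 8*pi*nu^2/c^3
    return (8 * math.pi * nu ** 2 / c ** 3) * k * T

def u_planck(nu):
    # Planck: average energy h*nu / (exp(h*nu/kT) - 1) per mode
    return (8 * math.pi * nu ** 2 / c ** 3) * h * nu / math.expm1(h * nu / (k * T))

ratios = []
for nu in (1e13, 1e14, 1e15):   # low, mid, high frequency (Hz)
    ratios.append(u_rj(nu) / u_planck(nu))
    print(f"nu = {nu:.0e} Hz   Rayleigh-Jeans / Planck = {ratios[-1]:.3g}")
```

At 10¹³ Hz the two laws nearly coincide; by 10¹⁵ Hz the classical law overestimates the density by many orders of magnitude, which is the ultraviolet catastrophe.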

The second important question that must be addressed in deciding the relevance of the ergodic hypothesis for physics is why we are using a statistical description in a given physical situation. Consider, for instance, the measurement of the specific heat of a diatomic gas. Typically one lets the gas "reach equilibrium" with a reservoir at a given temperature and then makes a certain macroscopic measurement during a certain time interval. To obtain reasonable statistics, the measurement is repeated several times. Clearly the process just described involves three types of averaging at the molecular dynamics level:

i) over initial conditions (each repetition of the measurement involves a different set of initial conditions);

ii) over time (each measurement extends over a certain time, during which the gas evolves as a dynamical system); and

iii) over microscopic degrees of freedom (this type of averaging is inherent in the measurement of macroscopic variables).

Before analyzing in detail the likely statistical relevance of each of these averaging operations, let me hasten to say that clearly only the averaging over time has anything to do with the ergodic hypothesis. Those physicists who believe that the ergodic hypothesis is not important for the foundations of statistical mechanics dismiss the

BLACKBODY RADIATION AT 1600 K

Fig. 2. Theoretical predictions and experimental data for the power radiated by a blackbody at 1600 K. The classical Rayleigh-Jeans law, u(ν, T) = (8πν²/c³)kT, is based on equipartition of energy among all the modes of an electromagnetic field. The total (kinetic plus potential) energy in each mode is kT, and the number of modes in the frequency interval (ν, ν + dν) is (8πν²/c³) dν, which is proportional to ν². The quantum Planck law, in agreement with experiment, yields a peaked distribution that decreases rapidly with wavelength. The Planck law is based on the assumption that the energy in each mode is quantized; that is, E = nhν, where n is an integer and h is Planck's constant.

The FPU
Problem

Excerpts from “Studies of
Nonlinear Problems" by
Fermi, Pasta, and Ulam

This report is intended to be the first one of a series dealing with the behavior of certain nonlinear physical systems where the nonlinearity is introduced as a perturbation to a primarily linear problem. The behavior of the systems is to be studied for times which are long compared to the characteristic periods of the corresponding linear problems.

The problems in question do not seem to admit of analytic solutions in closed form, and heuristic work was performed numerically on a fast electronic computing machine (MANIAC I at Los Alamos).* The ergodic behavior of such systems was studied with the primary aim of establishing, experimentally, the rate of approach to the equipartition of energy among the various degrees of freedom of the system. Several problems will be considered in order of increasing complexity. This paper is devoted to the first one only.

We imagine a one-dimensional continuum with the ends kept fixed and with forces acting on the elements of this string. In addition to the usual linear term expressing the dependence of the force on the displacement of the element, this force contains higher order terms. For the purposes of numerical work this continuum is replaced by a finite number of points (at most 64 in our actual computation) so that the partial differential equation defining the motion of this string is replaced by a finite number of total differential equations. ...

The solution to the corresponding linear problem is a periodic vibration of the

*We thank Miss Mary Tsingou for efficient coding of the problems and for running the computations on the Los Alamos MANIAC machine.

string. If the initial position of the string is, say, a single sine wave, the string will oscillate in this mode indefinitely. Starting with the string in a simple configuration, for example in the first mode (or in other problems, starting with a combination of a few low modes), the purpose of our computations was to see how, due to nonlinear forces perturbing the periodic linear solution, the string would assume more and more complicated shapes, and, for t tending to infinity, would get into states where all the Fourier modes acquire increasing importance. In order to see this, the shape of the string, that is to say... [its displacement,] and the kinetic energy were analyzed periodically in Fourier series. ...

Let us say here that the results of our computations show features which were, from the beginning, surprising to us. Instead of a gradual, continuous flow of energy from the first mode to the higher modes, all of the problems show an entirely different behavior. Starting in one problem with a quadratic force and a pure sine wave as the initial position of the string, we indeed observe initially [see figures on next page] a gradual increase of energy in the higher modes as predicted (e.g., by Rayleigh in an infinitesimal analysis). Mode 2 starts increasing first, followed by mode 3, and so on. Later on, however, this gradual sharing of energy among successive modes ceases. Instead, it is one or the other mode that predominates. For example, mode 2 decides, as it were, to increase rather rapidly at the cost of all other modes and becomes predominant. At one time, it has more energy than all the others put together! Then mode 3 undertakes this role. It is only the first few modes which exchange energy among themselves and they do this in a rather regular fashion. Finally, at a later time mode 1 comes back to within one per cent of its initial value so that the system seems to be almost periodic. All our problems have at least this one feature in common. Instead of gradual increase of all the higher modes, the energy is exchanged,

essentially, among only a certain few. It is, therefore, very hard to observe the rate of "thermalization" or mixing in our problem, and this was the initial purpose of the calculation.

If one should look at the problem from the point of view of statistical mechanics, the situation could be described as follows: the phase space of a point representing our entire system has a great number of dimensions. Only a very small part of its volume is represented by the regions where only one or a few out of all possible Fourier modes have divided among themselves almost all the available energy. If our system with nonlinear forces acting between the neighboring points should serve as a good example of a transformation of the phase space which is ergodic or metrically transitive, then the trajectory of almost every point should be everywhere dense in the whole phase space. With overwhelming probability this should also be true of the point which at time t = 0 represents our initial configuration, and this point should spend most of its time in regions corresponding to the equipartition of energy among various degrees of freedom. As will be seen from the results this seems hardly the case.

In a linear problem the tendency of the system to approach a fixed "state" amounts, mathematically, to convergence of iterates of a transformation in accordance with an algebraic theorem due to Frobenius and Perron. ... Such behavior is in a sense diametrically opposite to an ergodic motion and is due to a very special character, linearity of the transformations of the phase space. The results of our calculation on the nonlinear vibrating string suggest that in the case of transformations which are approximately linear, differing from linear ones by terms which are very simple in the algebraic sense (quadratic or cubic in our case), something analogous to the convergence to eigenstates may obtain. ...

Editor's note: The interpretation of the unexpected recurrences is now different. See David Campbell's discussion on page 244.
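
The experiment described in the excerpt can be sketched in a few lines of modern code. This is a minimal reconstruction: the chain size, the nonlinearity α = 0.25, the time step, and the run length are common textbook choices for the FPU-α problem, not necessarily those of the original MANIAC runs.

```python
import math

# FPU-alpha chain: N mass points with fixed ends, linear coupling plus a
# quadratic nonlinearity, started in the lowest sine mode and integrated
# with velocity-Verlet; the final state is projected onto normal modes.
N, alpha, dt, steps = 32, 0.25, 0.05, 4000

x = [math.sin(math.pi * i / (N + 1)) for i in range(1, N + 1)]  # mode 1
v = [0.0] * N

def accel(x):
    # force on point i: neighbors' linear pull plus the FPU quadratic term;
    # fixed ends are represented by x_0 = x_{N+1} = 0
    a = []
    for i in range(N):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < N - 1 else 0.0
        a.append((right + left - 2 * x[i])
                 + alpha * ((right - x[i]) ** 2 - (x[i] - left) ** 2))
    return a

a = accel(x)
for _ in range(steps):                      # velocity-Verlet time stepping
    x = [x[i] + v[i] * dt + 0.5 * a[i] * dt * dt for i in range(N)]
    a_new = accel(x)
    v = [v[i] + 0.5 * (a[i] + a_new[i]) * dt for i in range(N)]
    a = a_new

def mode_energies(x, v):
    # project displacements and velocities onto the sine modes of the
    # linear chain and form the harmonic energy E_k of each mode
    s = math.sqrt(2.0 / (N + 1))
    energies = []
    for k in range(1, N + 1):
        ak = s * sum(x[i] * math.sin(math.pi * (i + 1) * k / (N + 1)) for i in range(N))
        adk = s * sum(v[i] * math.sin(math.pi * (i + 1) * k / (N + 1)) for i in range(N))
        wk = 2 * math.sin(math.pi * k / (2 * (N + 1)))
        energies.append(0.5 * adk ** 2 + 0.5 * (wk * ak) ** 2)
    return energies

E = mode_energies(x, v)
print("fraction of energy in modes 1-3:", sum(E[:3]) / sum(E))
```

Even after a few periods of the lowest mode, almost all the energy remains in the first few modes instead of spreading toward equipartition, which is the surprise the authors report.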


statistical relevance of time averaging for macroscopic observables.

The averaging over initial conditions should not be of much consequence statistically. Indeed, even if one assumes that the gas is simply a collection of hard spheres (with no internal structure), the gas still constitutes a dynamical system with somewhere on the order of 10²³ degrees of freedom. Unless the initial state is very special or the time of observation very short, repeating an experiment ten or a hundred times should not have important consequences. In fact, a typical measurement lasts at least a few minutes; during such a time interval each molecule undergoes, at room temperature and normal pressure, about 10⁷ collisions. Hence the number of states through which the gas passes dynamically (in time) is much larger than that due to the repetition of the experiment. Of course, as one lowers the temperature or the pressure, the collisions become more rare, so the time of observation must be increased to avoid large fluctuations in individual measurements.

Perhaps the most important averaging is the "coarse graining" involved in obtaining macroscopic variables. Two large numbers are involved in a typical measurement: the total number of degrees of freedom of the system and the number of degrees of freedom that are averaged together to obtain a macroscopic variable. The second number appears naturally in a system containing a large number of indistinguishable constituents. For instance, in determining the local density in a gas, one does not care about the trajectory of any single particle but rather about the average number of trajectories crossing a macroscopic volume at any time. Use of the laws of large numbers (see "A Tutorial on Probability, Measure, and the Laws of Large Numbers") in this context guarantees that, in spite of the fact that the underlying dynamics may be time-reversal invariant, macroscopic variables (almost) always tend to relax to their equilibrium values. In other words, because of the large numbers involved in specifying macroscopic variables, the macroscopically specified state of the system is overwhelmingly likely to evolve toward the equilibrium state, even if the microscopic dynamics is time-reversal invariant. Hence, an arrow of time exists at the macroscopic level even if it does not at the microscopic level. This frequently stated paradox of statistical mechanics is a straightforward consequence of the laws of large numbers.
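
This macroscopic arrow of time can be seen in a deliberately simple model (my own construction, not the article's): free particles on a ring evolve under perfectly time-reversible dynamics, yet the coarse-grained variable "fraction of particles in the left half" relaxes to 1/2, with fluctuations of order 1/√N as the law of large numbers suggests.

```python
import random

# N non-interacting particles on a unit ring, all starting in the left half
# with random velocities. The microscopic dynamics (free streaming) is
# time-reversal invariant, but the macroscopic left-half fraction relaxes.
random.seed(1)
N = 100_000
pos = [random.uniform(0.0, 0.5) for _ in range(N)]   # all start in [0, 1/2)
vel = [random.gauss(0.0, 1.0) for _ in range(N)]     # random velocities

def left_fraction(t):
    # coarse-grained observable: fraction of particles in the left half at time t
    return sum(1 for x0, v in zip(pos, vel) if (x0 + v * t) % 1.0 < 0.5) / N

print("t = 0:", left_fraction(0.0))    # 1.0, far from equilibrium
print("t = 5:", left_fraction(5.0))    # close to the equilibrium value 1/2
```

Reversing every velocity at t = 5 would reassemble the initial state, but for a macroscopically specified state such a reversal is overwhelmingly improbable, which is the content of the "paradox."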

Confronting the Ergodic Hypothesis with Experiment

Having discussed the types of averaging involved in a real experiment, let us reconsider the experimental circumstances under which classical statistical mechanics could be expected to work. Historically, statistical mechanics appeared in connection with the endeavors to study, for example, very nearly ideal gases. (In an ideal gas the molecules are free except for occasional elastic collisions with each other or with the walls of the container.) Its foundations were statistical (predictions were based on considering an ensemble of systems, primarily the microcanonical or the canonical ensemble), in spite of the efforts of Boltzmann and Maxwell to give it a dynamical basis by invoking the ergodic hypothesis.

The fundamental assumption of statistical mechanics for an isolated system is the equal a priori probability on the hypersurface (in phase space) determined by all the conservation laws (Eq. 7). This probability measure defines the microcanonical ensemble. If the underlying dynamics is derivable from a Hamiltonian, by Liouville's
