
classic example of an unexpected prediction of Einstein's relativity. So there must be something about quantum mechanics that "rubs us" the wrong way. The question is what?

Perhaps the best way in which the strange predictions of quantum mechanics can be quantified is a certain inequality first formulated by Bell (Bell 1965). For illustration, consider a positronium atom, with total angular momentum zero, that decays into an electron and a positron. Suppose we let the electron and the positron drift apart and then measure their spin components along two axes by passing them through two magnetic fields. Now in quantum mechanics the state of the positronium atom is a linear superposition of spin-up and spin-down states: $(|\uparrow\rangle_+|\downarrow\rangle_- - |\downarrow\rangle_+|\uparrow\rangle_-)/\sqrt{2}$ (the subscripts label the positron and the electron). We could therefore ask ourselves whether in each passage through the apparatus the electron and the positron have a well-defined spin (up or down), albeit unknown to us. Some elementary probabilistic reasoning shows immediately that if that were the case, the probabilities for observing up or down spins along given axes would have to obey Bell's inequality. The experimentally measured probabilities violate this inequality, in agreement with the predictions of quantum mechanics. So the uncertainties in quantum mechanics are not due to incomplete knowledge of some local hidden variables. What is even stranger is that in a refinement of the experiment in which the axes of the magnetic fields are changed in an apparently random fashion (Aspect, Grangier, and Roger 1982), the violation of Bell's inequality persists, indicating correlations between space-like separated events (that is, events that could be causally connected only by signals traveling faster than the speed of light). While in this experiment no information is being transmitted by such superluminal signals, and hence no conflict with special relativity exists, the implication of space-like correlations hardly alleviates the physicist's uneasiness about the correct interpretation of quantum mechanics.

Of course this uneasiness is not felt by all physicists. Particle physicists, for instance, take the validity of quantum mechanics for granted. To wit, anybody who reads Time knows that they, having "successfully" unified the weak, electromagnetic, and strong interactions within the framework of quantum field theory, are presently subduing the last obstacle, quantizing gravity, by unifying all interactions into a quantum field theory of strings. And they are doing so in spite of the fact that the existence of classical gravitational radiation, let alone that of its quantized version (gravitons), has not been established experimentally.
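To make the bound quantitative, here is a minimal numerical sketch. It uses the CHSH form of the inequality (a later refinement of Bell's original version, chosen here for concreteness) together with the standard quantum-mechanical singlet correlation E(a, b) = -cos(a - b); the analyzer angles are the usual optimal choice, not values taken from this article.

```python
import numpy as np

# Quantum-mechanical correlation of the two spin measurements on the singlet
# state when the analyzers are set at angles a and b: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# CHSH combination: any local-hidden-variable model obeys |S| <= 2.
a1, a2 = 0.0, np.pi / 2            # settings for one particle
b1, b2 = np.pi / 4, 3 * np.pi / 4  # settings for the other

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) = 2.828..., violating the bound of 2
```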

An even older controversy, which in the opinion of some physicists has long ceased to be an interesting problem, concerns the ergodic hypothesis, the subject of this discussion. I will try to elaborate on this topic as fully as my knowledge will allow, but, by way of introduction, let me just say that the ergodic hypothesis is an attempt to provide a dynamical basis for statistical mechanics. It states that the time-average value of an observable (which of course is determined by the dynamics) is equivalent to an ensemble average, that is, an average at one time over a large number of systems, all of which have identical thermodynamic properties but are not identical on the molecular level. This hypothesis was advanced over one hundred years ago by Boltzmann and Maxwell while they laid the foundations of statistical mechanics (Boltzmann 1868, 1872 and Maxwell 1860, 1867). The general consensus is that the hypothesis, still mathematically unproven, is probably true yet irrelevant for physics. The purpose of this article is to review briefly the status of the ergodic hypothesis from mathematical and physical points of view and to argue that the hypothesis is of interest not only for statistical mechanics but for physics as a whole. Indeed the mystery of quantum mechanics itself may possibly be unraveled by a deeper understanding of the ergodic hypothesis. This last remark should come as no surprise. After all, the birth of quantum mechanics was brought about by the well-known difficulties of classical statistical mechanics in explaining the specific heats of diatomic gases and the blackbody radiation law. I shall elaborate on the possible connection between the ergodic hypothesis and the resolution of these major puzzles in the last part of this article.

The Mathematics of the Ergodic Hypothesis

I shall begin my presentation with the easier part of the problem, the mathematical formulation of the ergodic hypothesis. Consider some physical system with N degrees of freedom and let $q_1, \ldots, q_N$ be its positions and $p_1, \ldots, p_N$ its momenta. We shall assume that the specification of the set of initial positions $\{q_0\}$ and momenta $\{p_0\}$ at time $t = 0$ uniquely specifies the state of the system at any other time t via the equations of motion:

$$\frac{dq_i(t)}{dt} = \dot q_i(\{q(t)\}, \{p(t)\})$$

and

$$\frac{dp_i(t)}{dt} = \dot p_i(\{q(t)\}, \{p(t)\}), \qquad i = 1, \ldots, N. \tag{1}$$

The time evolution of the system can be represented as a path, or trajectory, through phase space, the region of allowed states in the space defined by the 2N independent coordinates {q} and {p}. An observable O of this system is an arbitrary function of {q} and {p}, O({q}, {p}). The time-average value of some observable O({q}, {p}) along the phase-space trajectory starting at t = 0 at $\{q_0\}, \{p_0\}$ is defined as

$$\bar O_T(\{q_0\},\{p_0\}) \equiv \frac{1}{T} \int_0^T O(\{q(t)\},\{p(t)\})\, dt. \tag{2}$$


Obviously the integral in Eq. 2 makes sense only for suitable functions of {q} and {p}, which are the only ones we shall consider. In fact we shall further restrict the class of observables to those for which $\lim_{T\to\infty} \bar O_T$ exists. (This is not a severe restriction; for instance, if O({q(t)}, {p(t)}) is bounded along the trajectory, the limit clearly exists.) The notation in Eq. 2 makes clear that, a priori, time-average values depend upon the initial conditions $\{q_0\}$ and $\{p_0\}$.
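As a minimal numerical sketch of Eq. 2 (the system, the observable, and all function names here are my own illustrative choices), consider a single unit-frequency harmonic oscillator and the observable O = q²; the time average $\bar O_T$ settles down as T grows:

```python
import numpy as np

# Time average (Eq. 2) of O = q**2 along a trajectory of the unit harmonic
# oscillator (q_dot = p, p_dot = -q), whose exact flow is a rotation.
def trajectory(q0, p0, T, dt=1e-3):
    t = np.arange(0.0, T, dt)
    q = q0 * np.cos(t) + p0 * np.sin(t)
    p = p0 * np.cos(t) - q0 * np.sin(t)
    return q, p, dt

def time_average(O, q0, p0, T):
    q, p, dt = trajectory(q0, p0, T)
    return np.sum(O(q, p)) * dt / T   # (1/T) * integral of O dt (Riemann sum)

O = lambda q, p: q**2
for T in (10.0, 100.0, 1000.0):
    print(T, time_average(O, 1.0, 0.0, T))   # approaches 0.5
```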

As time passes, the trajectory of the system winds through the phase space. If the motion takes place in a bounded domain, one might expect that as T → ∞ the average values of most observables settle down to some sort of equilibrium values (time-independent behavior). What would the phase-space trajectory look like if the system approached dynamical equilibrium? One could characterize it by saying that the frequency with which different neighborhoods of the phase space are visited converges to some limiting value μ({q}, {p}) at each point in phase space. That such limiting frequencies exist under quite general circumstances was shown in 1927 by Birkhoff (see Birkhoff 1966) and constitutes the first step towards bridging the gap between dynamics and statistics. Indeed, Birkhoff's theorem allows one to replace time averages by ensemble averages, defined as follows. Let the state of the system be specified by the sets {q} and {p}, and postulate that the probability for the system to be in the neighborhood of the state ({q}, {p}) is

$$dP(\{q\},\{p\}) = \mu(\{q\},\{p\}) \prod_{i=1}^{N} dq_i\, dp_i. \tag{3}$$

That is, the general form of the probability measure is the time-independent frequency μ times the volume element of the phase space. A particular probability measure specifies completely a particular ensemble of representative systems; that is, it gives the fraction of systems in the ensemble that are in the state ({q}, {p}). In keeping with usual probabilistic notions, I shall assume that the probability measure has been normalized so that the integral of the probability measure for all possible states ({q}, {p}) is unity,

$$\int_{\Omega} \mu(\{q\},\{p\}) \prod_{i=1}^{N} dq_i\, dp_i = 1, \tag{4}$$

where Ω denotes the allowed region of phase space.

Birkhoff's theorem states that, if the motion is restricted to a bounded domain, then for many initial conditions there exists an ensemble (probability measure) such that the time-average value of the observable equals an ensemble average:

$$\langle O \rangle_\mu \equiv \int_\Omega O(\{q\},\{p\})\, \mu(\{q\},\{p\}) \prod_{i=1}^{N} dq_i\, dp_i \tag{5}$$

and

$$\lim_{T\to\infty} \bar O_T(\{q_0\},\{p_0\}) = \langle O \rangle_\mu. \tag{6}$$

Please note that Eq. 6 indicates that the time-average value of O({q}, {p}) becomes independent of the initial conditions $\{q_0\}$ and $\{p_0\}$ as T → ∞. As already mentioned above, this is true for many, but generally not all, initial conditions. If Eq. 6 is true for almost all initial conditions (for all points in the allowed phase space except for a set of measure zero), the flow through phase space described by Eqs. 1 must be fully ergodic; that is, for almost all initial conditions $\{q_0\}, \{p_0\}$ and with probability 1, the flow passes arbitrarily close to any point {q}, {p} in phase space at some later time. The assumption in statistical mechanics that time averages of macroscopic variables can be replaced by ensemble averages (that is, that Eq. 6 holds) is therefore called the ergodic hypothesis.

In general, however, the flow through the phase space defined by the equations of motion may not cover the whole of the allowed phase space for almost all initial conditions. Instead the allowed phase space is divided into several "ergodic" components, that is, subregions $\Omega_i$ of the phase space such that if the flow starts in subregion $\Omega_i$, then there exists a time t at which the flow will touch any given neighborhood in the set of neighborhoods covering $\Omega_i$. Moreover the flow remains in $\Omega_i$ for all time. Consequently, time-average values do depend on knowing in which "ergodic component" the system was started.
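A toy numerical illustration of such components (my example; the article gives none): a particle in the double well V(q) = (q² − 1)²/4 started with too little energy to cross the barrier at q = 0 stays forever in one well, and the time average of q remembers the starting component:

```python
import numpy as np

# Double-well potential V(q) = (q**2 - 1)**2 / 4; below the barrier energy
# V(0) = 1/4 the two wells are disjoint "ergodic components".
def force(q):
    return -q * (q**2 - 1.0)          # F = -dV/dq

def mean_q(q0, p0=0.0, T=2000.0, dt=1e-2):
    q, p, total, n = q0, p0, 0.0, int(T / dt)
    for _ in range(n):                # velocity-Verlet (leapfrog) integration
        p += 0.5 * dt * force(q)
        q += dt * p
        p += 0.5 * dt * force(q)
        total += q
    return total / n

print(mean_q(+1.2))   # ~ +1: trajectory confined to the right well
print(mean_q(-1.2))   # ~ -1: trajectory confined to the left well
```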

The Ergodic Hypothesis and the Equipartition of Energy. In statistical mechanics the ergodic hypothesis, which proposes a connection between dynamics and statistics, is sometimes regarded as unnecessary, and attention is placed instead on the assumption that all allowed states are equally probable. In this paper I emphasize that when time averaging is relevant to a problem, the assumption of equal a priori probabilities is essentially equivalent to the ergodic hypothesis (Eq. 6). To see this I will restate the general problem and gradually narrow it down to the context of classical statistical mechanics.

In general, given a phase space Ω and a probability density μ({q}, {p}), one has defined an ensemble. Furthermore one can consider a map of the phase space onto itself. (An example is provided by Eqs. 1, which are really a set of maps indexed by the continuous parameter t.) A natural question to ask is whether the probability

$$dP(\{q\},\{p\}) = \mu(\{q\},\{p\}) \prod_{i=1}^{N} dq_i\, dp_i$$

is invariant under this map. As we have said, Birkhoff's theorem states that under many circumstances such invariant measures exist and allow the replacement of time averages by ensemble averages. Thus the existence and construction of all the invariant measures for a certain flow is the first of two mathematical problems related to the ergodic hypothesis.

As stated so far this problem is much more general than the one of interest to Boltzmann and Maxwell in connection with the foundations of statistical mechanics. Indeed, the existence of a probability measure left invariant by a given set of maps can be investigated whether or not the sets {q} and {p} defining the maps are canonically conjugate variables derivable from a Hamiltonian, whether the set of maps is discrete or continuous, etc. At present the construction of such invariant measures is being actively pursued by researchers studying dynamical systems, especially dissipative ones such as those relevant to the investigation of turbulence (for example, systems described by the Navier-Stokes equations). (See the section Geometry, Invariant Measures, and Dynamical Systems in "Probability and Nonlinear Systems.")
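As a sketch of what "constructing an invariant measure" means in practice (a standard textbook example, not one taken from this article), one can approximate the invariant density of the logistic map x → 4x(1 − x) by histogramming a long orbit; for this particular map the answer happens to be known in closed form:

```python
import numpy as np

# Estimate the invariant density of the logistic map x -> 4x(1 - x) by
# following one long orbit; the exact density is 1 / (pi * sqrt(x(1 - x))).
x, orbit = 0.1234, []
for _ in range(200_000):
    x = 4.0 * x * (1.0 - x)
    orbit.append(x)

hist, edges = np.histogram(orbit, bins=50, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
exact = 1.0 / (np.pi * np.sqrt(centers * (1.0 - centers)))
print(np.c_[centers, hist, exact][:5])   # orbit statistics track the density
```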

Of particular interest in statistical mechanics, especially in connection with the ergodic hypothesis, is the invariant measure appropriate for describing physically isolated systems. The ensemble specified by this measure is traditionally called the microcanonical ensemble. The systems of interest are characterized by nonlinear interactions among the constituents and by a very large number of degrees of freedom. Generically, certain observables of a physically isolated system, such as the total energy and electric charge, are conserved; that is, they remain constant at their initial values. So let $\{I_i(\{q\},\{p\})\}$, $i = 1, \ldots, M$ be the complete set of independent, conserved observables of a system with N degrees of freedom. Obviously $M \le 2N$. Since the flow in Eqs. 1 obeys all these conservation laws, it is clear that any invariant measure of the flow must be compatible with all the conservation laws. Consequently the probability measure must contain a delta function for each conserved quantity so that the probability is nonzero only when the conservation law is satisfied. (A delta function $\delta(x - x_0)$ can be thought of as having the value $1/2\epsilon$ for x between $x_0 - \epsilon$ and $x_0 + \epsilon$, for any $\epsilon$, no matter how small, and the value 0 everywhere else. The integral of a delta function is thus equal to unity.)
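Written out, that box-function picture of the delta function (one standard regularization among several) reads

$$\delta_\epsilon(x - x_0) = \begin{cases} 1/2\epsilon, & |x - x_0| \le \epsilon, \\ 0, & \text{otherwise}, \end{cases} \qquad \int_{-\infty}^{\infty} \delta_\epsilon(x - x_0)\, dx = 2\epsilon \cdot \frac{1}{2\epsilon} = 1,$$

with $\delta(x - x_0)$ understood as the limit $\epsilon \to 0$.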

The fundamental hypothesis in statistical mechanics is that for isolated systems of physical interest (complicated nonlinear systems with many degrees of freedom), the measure

$$dP(\{q\},\{p\}) = \frac{1}{Z} \prod_{i=1}^{M} \delta\!\left(I_i(\{q\},\{p\}) - \bar I_i\right) \prod_{j=1}^{N} dq_j\, dp_j \tag{7}$$

is left invariant by the equations of motion and is the only such measure. (Here $\bar I_i \equiv I_i(\{q_0\},\{p_0\})$ are the values of the conserved observables fixed by the initial conditions, and the constant Z enforces the normalization condition, Eq. 4.) In other words, the hypothesis states that the microcanonical ensemble is defined by the measure in Eq. 7. Note that the probability density in Eq. 7 is flat; that is, all regions of phase space consistent with the conservation laws are equally probable.
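For the simplest possible case, one degree of freedom with H = (p² + q²)/2 and the energy E the only conserved quantity (M = 1), the measure of Eq. 7 can be sampled directly; the sketch below (the names and the system are my illustrative choices) shows the resulting ensemble average of q² agreeing with the time average computed earlier:

```python
import numpy as np

# Microcanonical measure (Eq. 7) for H = (p**2 + q**2)/2 with fixed energy E.
# In polar coordinates dq dp = R dR dtheta, so delta(H - E) dq dp reduces to
# a uniform distribution in the angle on the circle R = sqrt(2E).
rng = np.random.default_rng(0)
E = 0.5
theta = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)
q = np.sqrt(2.0 * E) * np.cos(theta)

print(np.mean(q**2))   # ~0.5 = E, matching the long-time average of q**2
```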

To understand why this assumption of equal a priori probabilities is, in effect, a restatement of the ergodic hypothesis, one must realize that the only systems under consideration in classical statistical mechanics are Hamiltonian systems (systems for which the equations of motion can be derived from a Hamiltonian). The existence of a Hamiltonian function H({q}, {p}) means that the equations describing the flow through phase space, Eqs. 1, can be written in the form

$$\frac{dq_i}{dt} = \frac{\partial H}{\partial p_i} \quad\text{and}\quad \frac{dp_i}{dt} = -\frac{\partial H}{\partial q_i}, \qquad i = 1, \ldots, N. \tag{8}$$
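As a concrete instance (my illustration, not the article's), for a single harmonic oscillator with $H = p^2/2m + m\omega^2 q^2/2$, Eqs. 8 give

$$\frac{dq}{dt} = \frac{\partial H}{\partial p} = \frac{p}{m} \qquad\text{and}\qquad \frac{dp}{dt} = -\frac{\partial H}{\partial q} = -m\omega^2 q,$$

which combine into the familiar equation of motion $\ddot q = -\omega^2 q$.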

The existence of a symplectic structure (the Poisson bracket) is a very restrictive condition on the flow, much more so than the mere conservation of the energy. Indeed, through Liouville's theorem, it guarantees the conservation of the phase-space volume element

$$d\Gamma = \prod_{i=1}^{N} dq_i\, dp_i, \tag{9}$$

and thus it proves that the measure in Eq. 7 is invariant under Hamiltonian flows. Thus the first mathematical problem of constructing an invariant measure is solved for Hamiltonian systems. Consequently the ergodic hypothesis (Eq. 6) is automatically satisfied provided that the flow is fully ergodic. Proving that the flow is fully ergodic is the second mathematical problem related to the ergodic hypothesis and is the one that remains to be solved for Hamiltonian systems. If in fact the flow is not ergodic, then the assumption of equal a priori probabilities would not describe the time-average behavior of the system, at least not for all possible observables.
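Liouville's theorem can also be checked numerically. The sketch below (an illustration of mine, not the article's) evolves the boundary of a small square of initial conditions under the exact flow of the unit harmonic oscillator, which preserves its area, and under the flow of a weakly damped variant (q̇ = p − γq, ṗ = −q − γp), which contracts it:

```python
import numpy as np

def hamiltonian_flow(q, p, t):
    # exact flow of H = (q**2 + p**2)/2: a rigid rotation of phase space
    return q * np.cos(t) + p * np.sin(t), p * np.cos(t) - q * np.sin(t)

def damped_flow(q, p, t, gamma=0.1):
    # exact flow of the dissipative system q' = p - gamma*q, p' = -q - gamma*p
    qt, pt = hamiltonian_flow(q, p, t)
    return np.exp(-gamma * t) * qt, np.exp(-gamma * t) * pt

def area(q, p):
    # shoelace formula for the polygon traced out by the evolved boundary
    return 0.5 * abs(np.dot(q, np.roll(p, -1)) - np.dot(p, np.roll(q, -1)))

# boundary of a square of side 0.1 (area 0.01) near (q, p) = (1, 0)
s = np.linspace(0.0, 1.0, 400, endpoint=False)
q0 = 1.0 + 0.1 * np.concatenate([s, np.ones_like(s), 1.0 - s, np.zeros_like(s)])
p0 = 0.1 * np.concatenate([np.zeros_like(s), s, np.ones_like(s), 1.0 - s])

print(area(*hamiltonian_flow(q0, p0, 3.0)))  # ~0.01: area preserved
print(area(*damped_flow(q0, p0, 3.0)))       # ~0.0055: area contracts
```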

Note that if the flow is fully ergodic and all allowed states are equally probable,
