continued from page 212

automaton models in general, there always seems to be a deep relation between the abstract computer embodying the gas algorithm for a physical problem and the mathematical physics of the system itself. This duality property is an important one, and it is not well understood. One of the main aims of lattice gas theory is to make the underlying mathematics of dynamical evolution clearer by providing a new perspective on it. One would, for example, like to know the class of all lattice gas systems that evolve to a dynamics that is, in an appropriate sense, close to the dynamics actually evolved by nature. Doing this will allow us to isolate what is common to such systems and identify universal mathematical mechanisms.

Engineering Design Applications. The second direction of study is highly applied. In most engineering-design situations with complicated systems, one would like to know first the general qualitative dynamical behavior taking place in some rather involved geometry and then some rough numerics. Given both, one can plot out the zoo of dynamical development within a design problem. Usually, one does not know what kinds of phenomena can occur as a parameter in the system varies. Analytic methods are either unavailable, hard to compute by traditional methods, or simply break down. Estimates based on scaling arguments or order-of-magnitude dimensional analysis are often inaccurate or yield insufficient information. As a result, a large amount of expensive and scarce supercomputer time is used just to scan the parameter space of a system.

Lattice gas models can perform such tasks efficiently, since they simulate at the same speed whether the geometry and system are simple or complex. Complicated geometries and boundary conditions for massively parallel lattice gas simulators involve only altering collision rules in a region. This is easily coded and can be done interactively with a little investment in expert systems. There is no question that for complex design problems, lattice gas methods can be made into powerful design tools.

Beyond Two Dimensions

In two dimensions there exists a single-speed skeletal model for fluid dynamics with a regular lattice geometry. It relies on the existence of a complete tiling of the plane by a domain of sufficiently high symmetry to guarantee the isotropy of macroscopic modes in the model. In three dimensions this is not the case, for the minimum appropriate domain symmetry is icosahedral and such polyhedra do not tile three-space. If we are willing to introduce multiple-speed models, there may exist a model with high enough rotational symmetry, as in the square model with nearest and next-to-nearest neighbor interaction in two dimensions, but it is not easy to find and may not be efficient for simulations.
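The symmetry requirement can be made concrete: the fourth-rank tensor obtained by summing products of four velocity components over the discrete velocity set must be proportional to the isotropic combination of Kronecker deltas, and the hexagonal (triangular-lattice) velocity set passes this test while the square-lattice set fails it. The short numerical check below is our own illustration, not part of the original analysis.

```python
# A minimal numerical check (an illustration, not from the article) of the
# isotropy argument: the fourth-rank velocity-moment tensor
# sum_i c_i^a c_i^b c_i^c c_i^d must be proportional to the isotropic
# combination of Kronecker deltas for the macroscopic equations to be isotropic.
import numpy as np

def fourth_moment(velocities):
    """Sum of outer products c^a c^b c^c c^d over the discrete velocity set."""
    T = np.zeros((2, 2, 2, 2))
    for c in velocities:
        T += np.einsum('a,b,c,d->abcd', c, c, c, c)
    return T

def isotropic_part(T):
    """Best isotropic fit lambda*(d_ab d_cd + d_ac d_bd + d_ad d_bc)."""
    d = np.eye(2)
    iso = (np.einsum('ab,cd->abcd', d, d) + np.einsum('ac,bd->abcd', d, d)
           + np.einsum('ad,bc->abcd', d, d))
    lam = np.tensordot(T, iso, axes=4) / np.tensordot(iso, iso, axes=4)
    return lam * iso

# Hexagonal (FHP) velocities: six unit vectors at 60-degree intervals.
hexagonal = np.array([[np.cos(k*np.pi/3), np.sin(k*np.pi/3)] for k in range(6)])
# Square (HPP) velocities: four unit vectors along the axes.
square = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)

for name, v in [("hexagonal", hexagonal), ("square", square)]:
    T = fourth_moment(v)
    err = np.max(np.abs(T - isotropic_part(T)))
    print(f"{name}: max deviation from isotropy = {err:.3e}")
# hexagonal -> ~1e-16 (isotropic); square -> 0.5 (anisotropic)
```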

A tactic for developing an enlarged-neighborhood, three-dimensional model, which still admits a regular lattice, is to notice that the number of regular polyhedra as a function of dimension has a maximum in four dimensions. Examination of the face-centered four-dimensional hypercube shows that a single-speed model connected to each of twenty-four nearest neighbors has exactly the right invariance group to guarantee isotropy in four dimensions. So four-dimensional single-speed models exist on a regular tiling. Three-dimensional, or regular, hydrodynamics can be recovered by taking a thin one-site slice of the four-dimensional case, where the edges of the slice are identified. Projecting such a scheme into three-dimensional space generates a two-speed model with nearest and next-to-nearest neighbor interactions of the sort guaranteed to produce three-dimensional Navier-Stokes dynamics.
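The counting behind the four-dimensional construction is easy to verify: the nearest neighbors of a site of the face-centered hypercubic (FCHC) lattice are the twenty-four vectors having two components equal to ±1 and two equal to 0, all with the same speed. The enumeration below is offered only as an illustration.

```python
# A quick enumeration (an illustration, not taken from the article) of the
# 24 nearest-neighbor velocities of the face-centered hypercubic lattice:
# all 4-vectors with two components equal to +/-1 and two equal to 0.
import itertools

fchc = set()
for positions in itertools.combinations(range(4), 2):    # which two axes are nonzero
    for signs in itertools.product((1, -1), repeat=2):
        v = [0, 0, 0, 0]
        v[positions[0]], v[positions[1]] = signs
        fchc.add(tuple(v))

assert len(fchc) == 24                                    # 6 axis pairs x 4 sign choices
assert all(sum(x*x for x in v) == 2 for v in fchc)        # single speed: |c|^2 = 2
print(sorted(fchc))
```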

Such models are straightforward extensions of all the ideas present in the two-dimensional case and are being simulated presently on large Cray-class machines and the Connection Machine 2. Preliminary results show good agreement with standard computations, at least for low Reynolds numbers. In particular, simulation of Taylor-Green vortices at a Reynolds number of about 100 on a (128)³ universe (a three-dimensional cube with 128 cells in each direction) agrees with spectral methods to within 1 percent, the error being limited by Monte Carlo noise. The ultimate comparison is against laboratory fluid-flow experiments. As displayed at the end of Part II, three-dimensional flows around flat plates have also been done.
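For reference, the Taylor-Green benchmark starts from a smooth, single-mode velocity field. One standard choice is shown below; the article does not spell out the exact amplitude or normalization used in the runs quoted above, so the parameters here are generic.

```latex
% Standard single-mode Taylor-Green initial condition (illustrative values)
\begin{aligned}
u(x,y,z,0) &=  U_0 \sin x \,\cos y \,\cos z,\\
v(x,y,z,0) &= -U_0 \cos x \,\sin y \,\cos z,\\
w(x,y,z,0) &= 0,
\end{aligned}
\qquad \mathrm{Re} = \frac{U_0 L}{\nu}.
```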

A more intriguing strategy is to give up the idea of a regular lattice. Physical systems are much more like a lattice with nodes laid down at random. At present, we don't know how to analyze such lattices, but an approximation can be given that is intermediate between regular and random grids. Quasi-tilings are sets of objects that completely tile space, but the grids they generate are not periodic. Locally, various types of rotation symmetry can be designed into such lattices, and in three dimensions there exists such a quasi-tiling that has icosahedral symmetry everywhere. The beauty of quasi-tilings is that they can all be obtained by simple slices through hypercubes in the appropriate dimension. For three dimensions the parent hypercube is six-dimensional.

The idea is to run an automaton model containing the conservation laws with as simple a rule set as possible on the six-dimensional cube and then take an appropriately oriented three-dimensional slice out of the cube, so arranged as to generate the icosahedral quasi-tiling. Since we only examine averaged quantities, it is enough to do all the averaging in six dimensions along the quasi-slice and image the results. By such a method we guarantee exact isotropy everywhere in three dimensions and avoid computing directly on the extremely complex lattices that the quasi-tiling generates. Ultimately, one would like to compute on truly random lattices, but for now there is no simple way of doing that efficiently.
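The "slice through a hypercube" construction is the cut-and-project method familiar from quasicrystal theory. A one-dimensional analogue, not the six-dimensional icosahedral case itself, shows the idea in a few lines: lattice points of the plane that fall inside a strip of irrational slope, projected onto the strip's axis, form the quasiperiodic Fibonacci chain, whose two spacings never arrange themselves periodically. The sketch below is our illustration only.

```python
# Cut-and-project in its simplest setting (a low-dimensional analogue of the
# 6D -> 3D icosahedral slice described in the text): points of the square
# lattice Z^2 that fall inside a strip of irrational slope are projected onto
# the strip's axis, producing a quasiperiodic sequence with two interval
# lengths that never repeats periodically.
import math

phi = (1 + math.sqrt(5)) / 2      # golden ratio; the cut has slope 1/phi
width = 1 + 1/phi                 # acceptance-window width for the strip

points = []
for m in range(40):
    for n in range(40):
        perp = -m/phi + n         # perpendicular coordinate selects the strip
        if 0 <= perp < width:
            points.append(m + n/phi)   # parallel coordinate: projected position

points.sort()
gaps = [round(b - a, 6) for a, b in zip(points, points[1:])]
print(sorted(set(gaps)))          # exactly two spacings, non-periodically arranged
```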

The simple four-dimensional model is a good example of the limits of present supercomputer power. It is just barely tolerable to run a (1000)³ universe at a Reynolds number of order a few thousand on the largest existing Crays. It is far more efficient to compute in large parallel arrays with rather inexpensive custom machines, either embedded in an existing parallel architecture or on one designed especially for this class of problems.
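A rough storage estimate shows why. The per-site figure used below is our own assumption (one bit for each of the twenty-four FCHC directions), not a number taken from the text.

```python
# Back-of-the-envelope storage estimate (our own illustration, with assumed
# numbers): a single-speed FCHC-style model needs roughly 24 bits of state per
# site, so a (1000)^3 universe already occupies a few gigabytes before any
# workspace, averaging buffers, or display data are counted.
sites = 1000**3                 # (1000)^3 lattice sites
bits_per_site = 24              # assumed: one bit per FCHC lattice direction
bytes_total = sites * bits_per_site / 8
print(f"{bytes_total / 1e9:.1f} GB of raw state")   # -> 3.0 GB of raw state
```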

Lattice Gases as Parallel Computers

Let us review the essential features of a lattice gas. The first property is the totally discrete nature of the description: the gas is updated in discrete time steps; lattice gas elements can only lie on discrete space points arranged in a space-filling network or grid; velocities can also have only discrete values, usually aligned with the grid directions; and the state of each lattice gas site is described by a small number of discrete bits instead of continuous values.

The second crucial property is the local deterministic rules for updating the array in space and time. The value of a site depends only on the values of a few local neighbors, so there is no need for information to propagate across the lattice universe in a single step. Therefore, there is no requirement for a hardwired interconnection of distant sites on the grid.

The third element is the Boolean nature of the updating rules. The evolution of a lattice gas can be done by a series of purely Boolean operations, without ever computing with radix arithmetic.

To a computer architect, we have just described the properties of an ideal concurrent, or parallel, computer. The identical nature of the particles and the locality of the updating rules make it natural to update all sites in one operation; this is what one means by concurrent or parallel computation. Digital circuitry can perform Boolean operations naturally and quickly. Advancing the array in time is thus a sequence of purely local Boolean operations carried out according to a deterministic algorithm.
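A minimal sketch makes these three properties, and the parallelism they invite, concrete. It uses the simple HPP square-lattice rule rather than the hexagonal rule needed for true hydrodynamics, and all names and parameters are ours, chosen for illustration: one Boolean bit plane per direction, collisions as local Boolean expressions, propagation as bit-plane shifts, and every site updated in the same operation.

```python
# A minimal sketch (illustrative, not the article's production code) of a
# lattice gas update as pure Boolean operations: the HPP square-lattice gas,
# a simpler relative of the hexagonal model.  Each site carries one bit per
# direction; every site is updated by the same local Boolean expressions, so
# the whole array can be advanced in one parallel step.
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)
# one Boolean bit plane per direction: East, West, North, South
E, W, N, S = (rng.random(shape) < 0.2 for _ in range(4))

def step(E, W, N, S):
    # Collision: a head-on E-W pair at an otherwise empty site becomes an N-S
    # pair, and vice versa; this conserves particle number and momentum.
    ew = E & W & ~N & ~S
    ns = N & S & ~E & ~W
    E, W = (E & ~ew) | ns, (W & ~ew) | ns
    N, S = (N & ~ns) | ew, (S & ~ns) | ew
    # Propagation: shift each bit plane one lattice site along its direction
    # (periodic boundaries here; walls would simply alter the rule locally).
    E = np.roll(E, 1, axis=1)
    W = np.roll(W, -1, axis=1)
    N = np.roll(N, -1, axis=0)
    S = np.roll(S, 1, axis=0)
    return E, W, N, S

for _ in range(100):
    E, W, N, S = step(E, W, N, S)
print("particles conserved:", int(E.sum() + W.sum() + N.sum() + S.sum()))
```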

Most current parallel computer designs were built with non-local operations in mind. For this reason the basic architecture of present parallel machines is overlaid with a switching network that enables all sites to communicate in various degrees with all other sites. (The usual model of a switching network is a telephone exchange.) The complexity of machine architecture grows rapidly with the number of sites, usually as n log n at best, with some time tradeoff, and as O(n²) at worst. In a large machine, the complexity of the switching network quickly becomes greater than the complexity of the computing configuration.

In a purely local architecture switching networks are unnecessary, so two-dimensional systems can be built in a two-dimensional, or planar, configuration, which is the configuration of existing large-scale integrated circuits. Such an architecture can be made physically compact by arranging the circuit boards in an accordion configuration, similar to a piece of folded paper. Since the type of geometry chosen is vital to the collective behavior of the lattice gas model and no unique geometry fits all of parameter space, it would be a design mistake to hardwire a particular model into a machine architecture. Machines with programmable geometries could be designed in which the switching overhead to change geometries and rules would be minimal and the gain in flexibility large (Fig. 9).

In more than two dimensions a purely two-dimensional geometry is still efficient, using a combination of massive parallel updating in a two-dimensional plane and pipelining for the extra dimensions. As technology improves, it is easy to imagine fully three-dimensional machines, perhaps with optical pathways between planes, that have a long mean time to failure.

The basic hardware unit in conventional computers is the memory chip, since it has a large storage capacity (256 Kbytes or 1 Mbyte at present) and is inexpensive, reliable, and available commercially in large quantities. In fact, most modern computers have a memory-bound architecture, with a small number of controlling processors either doing local arithmetic and logical operations or using fast hashing algorithms on large look-up tables. An alternative is the local architecture described above for lattice gas simulators. In computer-architecture terms it becomes very attractive to build compact, cheap, very fast simulators that are general over a large class of problems, such as fluids. Such machines have a potential processing capacity much larger than the general-purpose architectures of present or foreseen vectorial and pipelined supercomputers. A number of such machines are in the process of being designed and built, and it will be quite interesting to see how these experiments in non-von Neumann architectures (more appropriately called super-von Neumann) turn out.

At present, the most interesting machine existing for lattice gas work is the Connection Machine, with around 65,000 elementary processors and several gigabytes of main memory. This machine has a far more complex architecture than needed for pure lattice-gas work, but it was designed for quite a different purpose. Despite this, some excellent simulations have been done on it. The simulations at Los Alamos were done mainly on Crays with SUN workstations serving as code generators, controllers, and graphical units. The next generation of machines will see specialized lattice gas machines, whether parallel, pipelined, or some combination, running either against Connection Machine-style architectures or using them as analyzing engines for processing data generated in lattice gas "black boxes."

ARCHITECTURE OF THE LATTICE GAS SIMULATOR

Fig. 9. The lattice gas code is a virtual machine in the sense that the way the code works is exactly the way one would build a machine.

(a) The basic processor unit in a lattice gas simulator has five units: (1) a memory unit that stores the state at each node of the lattice grid; (2) a propagation unit that advances particles from one node to the next; (3) a scattering unit that checks the state at each node and implements the scattering rules where appropriate; (4) an averaging unit that averages velocities over a preassigned region of the lattice universe; and (5) an output and display unit.

(b) Processors are arranged in a parallel array. Each processor operates independently except at nodes on shared boundaries of the lattice gas universe.

(c) Processor units are overlaid by units that can alter the geometry of the lattice, the collision rules and boundary conditions, and the type of averaging.
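As an illustration of unit (4), the sketch below shows one way the averaging step might be organized: Boolean occupation planes are coarse-grained into a smooth velocity field by summing particle number and momentum over preassigned blocks. The function and its arguments are hypothetical, not part of any actual machine design.

```python
# A small sketch of the averaging unit: raw Boolean occupation planes are
# coarse-grained into a velocity field by summing momentum over preassigned
# blocks of the lattice.  Names and block size are illustrative only.
import numpy as np

def block_average_velocity(planes, velocities, block=8):
    """planes: list of Boolean arrays, one per lattice direction;
    velocities: matching list of 2-vectors; returns per-block mean velocity."""
    ny, nx = planes[0].shape
    density = np.zeros((ny // block, nx // block))
    momentum = np.zeros((ny // block, nx // block, 2))
    for plane, c in zip(planes, velocities):
        counts = plane.reshape(ny // block, block, nx // block, block).sum(axis=(1, 3))
        density += counts
        momentum += counts[..., None] * np.asarray(c, dtype=float)
    return momentum / np.maximum(density, 1)[..., None]   # avoid division by zero
```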

This will be a learning experience for everyone involved in massive simulation and will provide hardware engines that will have many interesting physics and engineering applications.

Unfortunately, fast hardware alone is not enough to provide a truly useful exploration and design tool. A large amount of data is produced in a typical simulation of a system with many degrees of freedom. In three dimensions the problems of accessing, processing, storing, and visualizing such quantities of data are unsolved; they are really universal problems, even for standard supercomputer technology. As the systems we study become more complex, so will all of these problems. It will take innovative engineering and physics approaches to overcome them.

Conclusion

To any system naturally thought of as classes of simple elements interacting through local rules, there should correspond a lattice-gas universe that can simulate it. From such skeletal gas models, one can gain a new perspective on the underlying mathematical physics of phenomena. So far we have used only the example of fluids and related systems that naturally support flows. The analysis of these systems used the principle of maximum ignorance: even though we know the system is deterministic, we disregard that information and introduce artificial probabilistic methods. The reason is that the analytic tools for treating problems in this way are well developed, and although tedious to apply, they require no new mathematical or physical insight.

A deep problem in mathematical physics now comes up. The traditional methods of analyzing large probabilistic systems are asymptotic perturbation expansions in various disguises. These contain no information on how fast large-scale collective behavior should occur. We know from computer simulations that local equilibrium in lattice gases takes only a few time steps, that global equilibrium occurs as fast as sound propagation will allow, and that fully developed hydrodynamic phenomena, including complex instabilities, again happen as fast as a sound wave can traverse the geometry. One might say that the gas is precociously asymptotic, basically because the deterministic property conveys information at the greatest possible speed. Methods of analyzing the transient and invariant states of such complex multidimensional cellular spaces, using determinism as a central ingredient, are just beginning to be explored. They are non-perturbative. It seems as though some of the methods of dynamical systems theory should apply to the problem, and there is always the tempting shadow of renormalization-group ideas waiting to be applied with the right formalism. So far we have been just nibbling around the edges of the problem. It is an extraordinarily difficult one, but breaking it would provide new insight into the origin of irreversible processes in nature.

The second feature of lattice gas models, for phenomena reducible to natural skeletal worlds, is their efficiency compared to standard computational methods. Both styles of computing reduce to inventing effective microworlds, but the conventional one is dictated and constrained by a limited vocabulary of difference techniques, whereas the lattice gas method designs a virtual machine inside a real one, whose architectural structure is directly related to the physics. It is not a priori clear that elegance equals efficiency. Lattice gas methods will be better at some kinds of problems, especially those involving highly complex systems, and not at others. Their usefulness will depend on cleverness and on the problem at hand. At worst, the two ways of looking at the microphysics are complementary and can be used in various mixtures to create a beautiful and powerful computational tool.

We close this article with a series of conjectures. The image of the physical world as a skeletal lattice gas is essentially an abstract mathematical framework for creating algorithms whose dynamics spans the same solution spaces as many physically important nonlinear partial differential equations that have a microdynamical underpinning. There is no intrinsic reason why this point of view should not extend to those rich nonlinear systems which have no natural many-body picture. The classical Einstein-Hilbert action, phrased in the appropriate space, is no more complex than the Navier-Stokes equations. It should be possible to invent appropriate skeletal virtual computers for various gauge field theories, beginning with the Maxwell equations and proceeding to non-Abelian gauge models.

Quantum mechanics can perhaps be implemented by using a variation on the stochastic quantization formulation of Nelson in an appropriate gas. When such models are invented, the physical meaning of the skeletal worlds is open to interpretation. It may be that they are only a powerful mathematical device, a kind of virtual Turing machine for solving such problems. But it may also be that they will provide a new point of view on the physical origin and behavior of quantum mechanics and fundamental field-theoretic descriptions of the world.■



Brosl Hasslacher was born and grew up in Manhattan. He earned his Ph.D. in theoretical physics (quantum field theory) under Daniel Freedman, the inventor of supergravity, at the State University of New York, Stony Brook. Most of his physics career has been spent at the Institute for Advanced Study in Princeton and at Caltech, with stops at the University of Illinois, École Normale Supérieure in Paris, and CERN. He is currently a staff physicist at the Laboratory. Says Brosl, "I spend essentially all my time thinking about physics. Presently I'm working on nonlinear field theories, chaotic systems, lattice gases, the architectures of very fast parallel computers, and the theory of computation and complexity."


by David K. Campbell

No tribute to the legacy of Stan Ulam would be complete without a discussion of "nonlinear science," a growing collection of interdisciplinary studies that in the past two decades has excited and challenged researchers from nearly every discipline of the natural sciences, engineering, and mathematics. Through his own research Stan played a major role in founding what we now call nonlinear science, and through his encouragement of the work of others, he guided its development. In this survey article I will try to weave the thread of Stan's contributions into the pattern of recent successes and current challenges of nonlinear science. At the same time I hope to capture some of the excitement of research in this area.

Introduction

Let me start from a very simple, albeit circular, definition: nonlinear science is the study of those mathematical systems and natural phenomena that are not linear. Ever attuned to the possibility of bons mots, Stan once remarked that this was "like defining the bulk of zoology by calling it the study of 'non-elephant animals'." His point, clearly, was that the vast majority of mathematical equations and natural phenomena are nonlinear, with linearity being the exceptional, but important, case.

Linear versus Nonlinear. Mathematically, the essential difference between linear
