Didier Besnard is a frequent visitor to Los Alamos from the Commissariat à l'Energie Atomique in France. He is the recipient of several advanced degrees in the fields of applied mathematics and engineering and has been employed since 1979 at the Centre d'Etudes de Limeil-Valenton, near Paris, where he is head of the Groupe Interaction et Turbulence. He is especially active in the analytical and numerical solution of problems in plasma physics and the high-speed dynamics of compressible materials. At Los Alamos he has worked in the Center for Nonlinear Studies and been a Visiting Scientist in the Theoretical Fluid Dynamics Group, collaborating in the development of turbulence transport theories, especially for multiphase flows and fluids with large density variations. An avid rock climber, hiker, and skier, he enjoys frequent excursions to the backwoods areas of both New Mexico and the French Alps. With his wife, Anne, and daughter, Gaelle, he is part of a truly international family that spends at least a month in New Mexico each summer.

Francis H. Harlow came to Los Alamos in September 1953 after receiving his Ph.D. from the University of Washington and has been a physicist in the Theoretical Division during his entire employment at the Laboratory. Special interests include fluid dynamics, heat transfer, and the numerical solution of continuum dynamics problems. He was Leader of the Fluid Dynamics Group for fourteen years and became a Laboratory Fellow in 1981. His extensive publications describe a variety of new techniques for solving fluid flow problems and discuss the basic physics and the application to practical problems. Northern New Mexico has served as a strong stimulus to his collateral activities in paleontology, archeology, and painting. Writings include one book on fossil brachiopods and four on the Pueblo Indian pottery of the early historic period. His paintings have been the subject of several one-man shows and are included in hundreds of collections throughout the United States.

Left to right: Norman L. Johnson, Rick Rauenzahn, Francis H. Harlow, Jonathan Wolfe, and Didier Besnard.

Norman L. Johnson came to Los Alamos in the summer of 1981 as a graduate student working on cavity radiation and the numerical solution of highly nonlinear equations within a thermal-fluid finite element code. In the spring of 1983, he completed his Ph.D. in free Lagrangian methods and kinetic theory for viscoelastic flows at the University of Wisconsin in Madison. He returned to Los Alamos as a postdoctoral fellow and developed an interest in modeling the complex physics of hypervelocity impact phenomena. Concurrently with his stay at Los Alamos, he held a position at the National Bureau of Standards in Boulder, Colorado, working on facilitated transport across membranes. In 1985 he became a permanent staff member in the Theoretical Fluid Dynamics Group and is currently working on the development of a local intense energy source and on numerical methods for flows with high distortions. Special interests include impact phenomena and the rheology of viscoelastic fluids and suspensions. In off hours he is currently learning how to juggle, as well as continuing his enjoyment of modern dance and the Japanese language and culture.

Rick Rauenzahn came to Los Alamos in 1982 as a graduate student in the Earth and Space Sciences Division to research the fundamentals of thermal spallation drilling. Prior to this, Rick had received his B.S. from Lehigh University in 1979 and his S.M. from MIT in 1980, both in chemical engineering. Two one-year stints in the chemical industry prepared him to enter MIT once again, and, in 1986, after doing thesis work at the Laboratory, he received his Ph.D. in Chemical Engineering. Since then, he has been in the Theoretical Fluid Dynamics Group, working on the fundamentals of turbulence modeling, mixing at interfaces, and two-dimensional code development.

Jonathan Wolfe is an undergraduate student in the Mechanical Engineering Department at the University of New Mexico, participating in the Cooperative Education Program with Los Alamos. In a cross-country pursuit of his education, Jon came to New Mexico in 1983 and to Los Alamos soon afterwards. Working for the Earth and Space Sciences Division, he contributed to the development of innovative computational techniques for the solution of large, sparse matrix systems, and worked as a field engineer ("grunt") at the Fenton Hill geothermal test site. More recently, he has been a member of the Theoretical Fluid Dynamics Group, collaborating in the numerical investigation of problems in turbulence and multiphase flow. Jon has fallen in love with the New Mexico outdoors, and spends lots of time windsurfing, motorcycling, and playing ultimate frisbee, despite the threat of impending graduation and responsibility.



The lattice gas automaton is an approach to computing fluid dynamics that is still in its infancy. In this three-part article one of the inventors of the model presents its theoretical foundations and its promise as a general approach to solving partial differential equations and to parallel computing. Readers less theoretically inclined might begin by reading "Calculations Using Lattice Gas Techniques" at the end of Part II. This sidebar offers a summary of the model's advantages and limitations and a graphic display of two- and three-dimensional lattice gas simulations.

The invention of a totally discrete model for natural phenomena was made by Ulam and von Neumann in the early fifties and was developed to the extent possible at the time. A few years earlier von Neumann had designed the architecture for the first serial digital computers containing stored programs and capable of making internal decisions. These machines are built of electronic logic devices that understand only packets of binary bits. Hierarchies of stored translators arrange them into virtual devices that can do ordinary or radix arithmetic at high speed. By transcribing continuum equations into discrete form, using finite difference techniques and their variants, serial digital computers can solve complex mathematical systems such as partial differential equations. Since most physical systems with large numbers of degrees of freedom can be described by such equations, serial digital machines equipped with large memories have become the standard way to simulate such phenomena.

As the architecture of serial machines developed, it became clear to both Ulam and von Neumann that such machines were not the most natural or powerful way to solve many problems. They were especially influenced by biological examples. Biological systems appear to perform computational tasks using methods that avoid both arithmetical operations and discrete approximations to continuous systems.

Though motivated by the complex information processing of biological systems, Ulam and von Neumann did not study how such systems actually solve tasks. Biological processes have been operating in hostile environments for a long time, finding the most efficient and often devious way to do something, a way that is also resistant to disturbance by noise. The crucial principles of their operation are hidden by the evolutionary process. Instead, von Neumann chose the task of simulating on a computer the least complex discrete system capable of self-reproduction. It was Ulam who suggested an abstract setting for this problem and many other totally discrete models, namely, the idea of cellular spaces. The reasoning went roughly like this.

The question is simple: Find a minimal logic structure and devise a dynamics for it that is powerful enough to simulate complex systems. Break this up into a series of sharper and more elementary pictures. We begin by setting up a collection of very simple finite-state machines with, for simplicity, binary values. Connect them so that given a state for each of them, the next state of each machine depends only on its immediate environment. In other words, the state of any machine will depend only on the states of machines in some small neighborhood around it. This builds in the constraint that we only want to consider local dynamics.

We will need rules to define how states combine in a neighborhood to uniquely fix the state of every machine, but these can be quite simple. The natural space on which to put all this is a lattice, with elementary, few-bit, finite-state machines placed at the vertices. The updating of this array of small machines can be done concurrently in one clock step, that is, in parallel.
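The update cycle just described can be sketched in a few lines of code. The parity (exclusive-or) rule below is an arbitrary illustrative choice, not one discussed in the text, and the names are mine.

```python
# A minimal sketch of a nearest-neighbor cellular space on a square net.
# Each cell is a one-bit finite-state machine; the next state of a cell
# is the XOR of its four nearest neighbors (an arbitrary local rule).
def step(grid):
    """Advance every cell one clock tick, all cells simultaneously."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Periodic boundaries: the net wraps around like a torus.
            up, down = grid[(i - 1) % n][j], grid[(i + 1) % n][j]
            left, right = grid[i][(j - 1) % n], grid[i][(j + 1) % n]
            new[i][j] = up ^ down ^ left ^ right
    return new

grid = [[0] * 8 for _ in range(8)]
grid[4][4] = 1          # a single light switched on
grid = step(grid)       # after one tick, only the four neighbors are on
```

Because every entry of `new` is computed from the old `grid`, all the machines switch in the same clock tick, which is exactly the parallel update described above.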

One can imagine such an abstract machine in operation by thinking of a fishnet made of wires. The fishnet has some regular connection geometry, and there are lights at the nodes of the net. Each light can be on or off. Draw a disk around each node of the fishnet, and let it have a 1-node radius. On a square net there are four lights on the edge of each disk, on a triangular net six lights (Fig. 1). The next state of the light at the center of the disk depends on the state of the lights on the edge of the disk and on nothing else. Imagine all the disks in the fishnet asking their neighbors for their state at the same time and switching states according to a definite rule. At the next tick of an abstract clock, the pattern of lights on the fishnet would in general look different. This is what Ulam and von Neumann called a nearest-neighbor-connected cellular space. It is the simplest case of a parallel computing space. You can also see that it can be imaged directly in hardware, so it is also the architecture for a physical parallel computing machine.

Fig. 1. Two examples of fishnets made of wires with lights at the nodes. The lights are either on or off. In each example a disk with a radius of 1 node is drawn around one of the lights. The next state of the light at the center depends on the states of the lights on the edge of the disk and on nothing else. Thus these are examples of nearest-neighbor-connected cellular spaces.

We have not shown that such a device can compute. At worst, it is an elaborate light display. Whether or not such a cellular space can compute depends on the definition of computation. The short answer is that special cases of fishnets are provably universal computers in the standard Turing machine sense; that is, they can simulate the architecture of any other sequential machine.

But there are other interpretations of computation that lie closer to the idea of simulation. For any given mathematical situation, we want to find the minimum cellular space that can do a simulation of it: At what degree of complexity does repeated iteration of the space, on which are coded both data and a solution algorithm, possess the power to come close to the solution of a complex problem? This depends on the complexity or degrees of freedom present in the problem.

An extreme case of complexity is physical systems with many degrees of freedom. These systems are ordinarily described by field theories in a continuum for which the equations of motion are highly nonlinear partial differential equations. Fluid dynamics is an example, and we will use it as a theoretical paradigm for many "large" physical systems. Because of the high degree of nonlinearity, analytic solutions to the field equations for such systems are known only in special cases. The standard way to study such models is either to perform experiments or simulate them on computers of the usual digital type.

Suppose a cellular space existed that evolved to a solution of a fluid system with given boundary conditions. Suppose also that we ask for the simplest possible such space that captures at least the qualitative and topological aspects of a solution. Later, one can worry about spaces that agree quantitatively with ordinary simulations. The problem is threefold: find the least complex set of rules for updating the space, the simplest geometry for a neighborhood, and a method of analysis for the collective modes and time evolution of such a system.

At first sight, modeling the dynamics of large systems by cellular spaces seems far too difficult to attempt. The general problem of a so-called "inverse compiler" (given a partial differential system, find the rules and interconnection geometry that give a solution) would probably use up a non-polynomial function of computing resources and so be impractical if not impossible. Nevertheless cellular spaces have been actively studied in recent years. Their modern name is cellular automata, and specific instances of them have simulated interesting nonlinear systems. But until recently there was no example of a cellular automaton that simulated a large physical system, even in a rough, qualitative way.

Knowing that special cases of cellular automata are capable of arbitrarily complex behavior is encouraging, but not very useful to a physicist. The important phenomenon in large physical systems is not arbitrarily complex behavior, but the collective motion that develops as the system evolves, typically with a characteristic size of many elementary length scales. The problem is to simulate such phenomena and, by using simulations, to try to understand the origins of collective behavior from as many points of view as possible. Fluid dynamics is filled with examples of collective behavior: shocks, instabilities, vortices, vortex streets, vortex sheets, turbulence, to list a few. Any deterministic cellular-automaton model that attempts to describe non-equilibrium fluid dynamics must contain in it an iterative mechanism for developing collective motion. Knowing this and using some very basic physics, we will construct a cellular automaton with the appropriate geometry and updating rules for fluid behavior. It will also be the simplest such model. The methods we use to do this are very conservative from the viewpoint of recent work on cellular automata, but rather drastic compared to the approaches of standard mathematical physics. Presently there is a large gap between these two viewpoints. The simulation of fluid dynamics by cellular automata shows that there are other complementary and powerful ways to model phenomena that would normally be the exclusive domain of partial differential equations.
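As a concrete miniature of such a model, here is a sketch of the historical HPP lattice gas: a square lattice with four unit velocities and at most one particle per direction per site. The square geometry is too symmetric to yield true Navier-Stokes behavior (hence the hexagonal alternative developed in this article), but it shows the essential two-phase update of collision and streaming. All names and values are mine.

```python
# HPP lattice gas sketch: each site holds up to four particles, one bit
# per velocity direction (East, West, North, South).
E, W, N, S = 1, 2, 4, 8

def collide(cell):
    """Head-on pairs scatter through 90 degrees, conserving particle
    number (two) and momentum (zero); every other state passes through."""
    if cell == E | W:
        return N | S
    if cell == N | S:
        return E | W
    return cell

def stream(lattice):
    """Move each particle one node along its velocity (periodic box)."""
    n = len(lattice)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            c = lattice[i][j]
            if c & E: new[i][(j + 1) % n] |= E
            if c & W: new[i][(j - 1) % n] |= W
            if c & N: new[(i - 1) % n][j] |= N
            if c & S: new[(i + 1) % n][j] |= S
    return new

def step(lattice):
    """One clock tick: collide at every site, then stream."""
    return stream([[collide(c) for c in row] for row in lattice])

lat = [[0] * 4 for _ in range(4)]
lat[2][2] = E | W       # two particles meeting head-on
lat = step(lat)         # they leave the collision moving north and south
```

Collective quantities such as density and momentum are obtained by averaging these bits over many sites, which is how such a microkinetics connects to fluid behavior.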

The Example of Fluid Dynamics

Fluid dynamics is an especially good large system for a cellular automaton formulation because there are two rich and complementary ways to picture fluid motion. The kinetic picture (many simple atomic elements colliding rapidly with simple interactions) coincides with our intuitive picture of dynamics on a cellular space. Later we will exploit this analogy to construct a discrete model.

The other and older way of approaching flow phenomena is through the partial differential equations that describe collective motions in dissipative fluids: the Navier-Stokes equations. These can be derived without any reference to an underlying atomic picture. The derivation relies on the idea of the continuum; it is simpler to grasp than the kinetic picture and mathematically cleaner. Because the continuum argument leads to the correct functional form of the Navier-Stokes equations, we spend some time describing why it works. The continuum view of fluids will be called "coming down from above," and the microphysical view "coming up from below" (Fig. 2). In the intersection of these two very different descriptions, we can trap the essential elements of a cellular-automaton model that leads to the Navier-Stokes equations. Through this review we wish to show that cellular automaton models are a natural and evolutionary idea and not an invention come upon by accident.

Coming down from Above: The Continuum Description

The notion of a smooth flow of some quantity arises naturally from a continuum description. A flow has physical conservation laws built in, at least conservation of mass and momentum. With a few additional remarks one can include conservation of energy. The basic strategy for deriving the Euler and Navier-Stokes equations of fluid dynamics is to embed these conservation laws into statements about special cases of the generalized Stokes theorem. We use the usual Gauss and Stokes theorems, depending on dimension, and apply them to small surfaces and volumes that are still large enough to ignore an underlying microworld. The equations of fluid dynamics are derived with no reference to a ball-bearing picture of an underlying atomic world, but only with a serene reliance on the idea of a smooth flow in a continuum, with some of Newton's laws added to connect to the observed world. As a model (for it is not a theory), the Navier-Stokes equations are a good example of how concepts derived from the intuition of daily experience can be remarkably successful in building effective phenomenological models of very complex phenomena. It is useful to go through the continuum derivation of the Euler and Navier-Stokes equations presented in "The Continuum Argument" for several reasons: First, the reasoning is short and clear; second, the concepts introduced, such as the momentum flux tensor, will appear pervasively when we pass to discrete theories of fluids; third, we learn how few ingredients are really necessary to build a fluid model and so mark out that which is essential: the role of conservation laws.
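That strategy can be sketched for mass alone, in standard notation (density ρ, velocity u; the boxed derivation in "The Continuum Argument" is not reproduced here). For any fixed volume V, conservation of mass says the mass inside changes only by the flux through the boundary, and the Gauss theorem converts the surface integral to a volume integral:

```latex
\frac{d}{dt}\int_V \rho \, dV
  = -\oint_{\partial V} \rho\,\mathbf{u}\cdot d\mathbf{A}
  = -\int_V \nabla\cdot(\rho\,\mathbf{u})\, dV .
```

Since V is arbitrary, the integrands must agree pointwise, giving the continuity equation ∂ρ/∂t + ∇·(ρu) = 0. The momentum law follows the same pattern, with the momentum flux tensor playing the role of ρu.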

It is clear from its derivation that the Euler equation describing inviscid flows is essentially a geometrical equation. The extension to the full Navier-Stokes equations, for flows with dissipation, contains only a minimal reference to an underlying fluid microphysics, through the stress-rate-of-strain relation in the momentum stress tensor. So we see that continuum reasoning alone leads to nonlinear partial differential equations for large-scale physical observables that are a phenomenological description of fluid flow. This description is experimentally quite accurate but theoretically incomplete. The coupling constants that determine the strength of the nonlinear terms, that is, the transport coefficients such as viscosity, have a direct physical interpretation in a microworld picture. In the continuum approach, however, these must be measured and put in as data from the outside world. If we do not use some microscopic model for the fluid, the transport coefficients cannot be derived from first principles.
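For incompressible flow, the momentum balance that results can be written in the standard form (again standard notation, not copied from the article's boxed derivation):

```latex
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
  + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u},
```

where every term except the last is fixed by the geometrical reasoning above. The shear viscosity μ is exactly the kind of coupling constant the continuum picture cannot supply: it must be measured, or derived from a microworld model.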

Solution Techniques: The Creation of a Microworld. The Navier-Stokes equations are highly nonlinear; this is prototypical of field-theoretical descriptions of large physical systems. The nonlinearity allows analytic solutions only for special cases and, in general, forces one to solve the system by approximation techniques. Invariably these are some form of perturbation method in whatever small parameters can be devised. Since there is no systematic way of applying perturbation theory to highly nonlinear partial differential systems, the analysis of the Navier-Stokes equations has been, and still remains, a patchwork of ingenious techniques that are designed to cover special parameter regimes and limited geometries.

After an approximation method is chosen, the next step toward a solution is to discretize the approximate equations in a form suitable for use on a digital computer. This discretization is equivalent to introducing an artificial microworld. Its particular form is fixed by mathematical considerations of elegance and efficiency applied to simple arithmetic operations and the particular architecture of available machines. So, even if we adopt the view that the molecular kinetics of a fluid is unimportant for describing the general features of many fluid phenomena, we are nevertheless forced to describe the system by a microworld with a particular microkinetics. The idea of a partial differential equation as a physical model is tied directly to finding an analytic solution and is not particularly suited to machine computation. In a sense, the geometrically motivated continuum picture is only a clever and convenient way of encoding conservation laws into spaces with which we are comfortable.

Fig. 2. Both the continuum view of fluids and the atomic picture lead to the Navier-Stokes equations but not without approximations.
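The artificial microworld is easy to exhibit. Below is a sketch, with illustrative values, of an explicit finite-difference update for the one-dimensional diffusion equation u_t = ν u_xx. The grid spacing dx and time step dt belong to the method, not to the physics; all names are mine.

```python
# Explicit finite-difference step for 1-D diffusion, u_t = nu * u_xx.
# The lattice of values u[i] at spacing dx, updated every dt, is the
# "artificial microworld" the discretization introduces.
nu, dx, dt = 0.1, 1.0, 1.0   # stability requires dt <= dx**2 / (2 * nu)

def diffuse(u):
    """One explicit Euler step; interior points only, fixed endpoints."""
    r = nu * dt / dx**2
    return [u[0]] + [
        u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

u = [0.0] * 5 + [1.0] + [0.0] * 5   # a spike of heat in the middle
u = diffuse(u)                      # the spike spreads to its neighbors
```

Note that, like the lattice gas, this microworld conserves the total quantity at interior points by construction, but its form was chosen for arithmetic convenience rather than physical fidelity.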
