This type of generator has the interesting (or useful, or disastrous) property, illustrated by Seq. 1, that the least significant bit always has the alternating pattern 101010.... Further, the next bit has a pattern with period 4 (0110 above), the third bit has period 8, and so forth. Ultimately, the most significant bit has period M, which becomes the period of the sequence itself. Our example uses a short 5-bit word, which generates a sequence with a period of only 32. It is not unusual in many computer applications, however, to use many more bits (for example, to use a 32-bit word to generate a sequence with period M = 2^32). One must be careful not to use such sequences in a problem with structures having powers of 2 in their dimensions. For example, a sequence with period 2^32 would be a poor choice if the problem involved, say, a 3-dimensional lattice with sides of 128 (= 2^7) because the structure of the sequence can then interact unfavorably with the structure of the problem. Furthermore, there would be only 2^32/(2^7)^3 = 2^11 = 2048 possible states. The usual assumption in Monte Carlo computations is that one has used a "representative" sample of the total number of possible computations, a condition that is certainly not true for this example.

One method of improving a pseudo-random-number generator is to combine two or more unrelated generators. The length of the hybrid will be the least common multiple of the lengths of the constituent sequences. For example, we can use the theory of normal numbers to construct a sequence that has all the statistical features of a "truly random" sequence and then combine it with a linear-congruential sequence. This technique yields a hybrid possessing the strengths of both sequences, for example, one that retains the statistical features of the normal-number sequence. We first construct a normal number, that is, a number in base b for which each block of K digits has limiting frequency (1/b)^K.
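The bit-pattern behavior described above is easy to reproduce. The sketch below uses a 5-bit linear-congruential generator with illustrative constants (the multiplier and increment are assumptions chosen to give full period, not necessarily those of Seq. 1):

```python
# Linear-congruential generator with a power-of-2 modulus.
# Constants are illustrative: C odd and A = 1 (mod 4) give full period M.
M = 2**5          # 5-bit word, as in the text's example
A, C = 5, 1       # hypothetical multiplier and increment

def lcg(seed, n):
    """Yield n successive states x -> (A*x + C) mod M."""
    x = seed
    for _ in range(n):
        x = (A * x + C) % M
        yield x

states = list(lcg(0, 8))
low_bits = [x & 1 for x in states]   # least significant bit of each state
print(low_bits)                      # alternates: [1, 0, 1, 0, 1, 0, 1, 0]
```

Extracting bit 1 or bit 2 instead of bit 0 shows the period-4 and period-8 patterns the text describes; only the most significant bit attains the full period M.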
A simple example in base 2 can be constructed by concatenating the binary representations of the successive integers (1, 10, 11, 100, 101, ...). Treated as a string of binary digits, the result satisfies all linear statistical conditions for randomness. For example, the frequency of a specific 5-bit number is (1/2)^5. Sequences of this type do not "appear" random when examined; it is easy for a person to guess the rule of formation. However, we can further disguise the sequence by combining it with the linear-congruence sequence generated earlier (Seq. 1). We do this by performing an exclusive-or (XOR) operation on the two sequences, producing a third (Seq. 3). Of course, if Seq. 3 is carried out to many places, a pattern in it will also become apparent. To eliminate the new pattern, the sequence can be XOR'ed with a third pseudo-random sequence of another type, and so on. This type of hybrid sequence is easy to generate on a binary computer. Although for most computations one does not have to go to such pains, the technique is especially attractive for constructing "canonical" generators of apparently random numbers. A key idea here is to take the notion of randomness to mean simply that the sequence can pass a given set of statistical tests. In a sequence based on normal numbers, each term will depend nonlinearly on the previous terms. As a result, there are nonlinear statistical tests that can show the sequence not to be random. In particular, the sequence will fail a test based on the very transformations used to construct it. But the sequence will pass all linear statistical tests, and, on that level, it can be considered to be random.

What types of linear statistical tests are applied to pseudo-random numbers? Traditionally, sequences are tested for uniformity of distribution of single elements, pairs, triples, and so forth. Other tests may be performed depending on the type of problem for which the sequence will be used.
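The hybrid construction can be sketched in a few lines. Here the normal-number bit stream (binary digits of the concatenated integers) is XOR'ed with the top bit of a small LCG; the LCG constants are illustrative assumptions, not the article's exact Seq. 1:

```python
# Hybrid generator sketch: XOR a normal-number bit stream with LCG bits.
from itertools import islice

def champernowne_bits():
    """Digits of the base-2 normal number built by concatenating 1, 10, 11, 100, ..."""
    n = 1
    while True:
        for b in bin(n)[2:]:
            yield int(b)
        n += 1

def lcg_bits(seed=1, a=5, c=1, m=2**5):
    """Most significant bit of each 5-bit LCG state (the longest-period bit)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x >> 4

hybrid = [u ^ v for u, v in islice(zip(champernowne_bits(), lcg_bits()), 16)]
print(hybrid)   # the XOR'ed stream; neither parent pattern is obvious
```

XOR'ing this hybrid with yet another generator of a different type, as the text suggests, is a one-line change.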
For example, just as the correlation between two sequences can be tested, the auto-correlation of a single sequence can be tested after displacing the original sequence by various amounts. Or the number of different types of "runs" can be checked against the known statistics for runs. An increasing run, for example, consists of a sequential string of increasing numbers from the generator (such as 0.08, 0.21, 0.55, 0.58, 0.73, ...). The waiting times for various events (such as the generation of a number in each of the five intervals (0, 0.2), (0.2, 0.4), ..., (0.8, 1)) may be tallied and, again, checked against the known statistics for random-number sequences. If a generator of pseudo-random numbers passes these tests, it is deemed to be a "good" generator; otherwise it is "bad." Calling these criteria "tests of randomness" is misleading because one is testing a hypothesis known to be false. The usefulness of the tests lies in their similarity to the problems that need to be solved using the stream of pseudo-random numbers. If the generator fails one of the simple tests, it will surely not perform reliably for the real problem. (Passing all such tests may not, however, be enough to make a generator work for a given problem, but it makes the programmers setting up the generator feel better.)

Quasi-Random Numbers

For some applications, such as evaluating integrals numerically, the use of quasi-random sequences is much more efficient than the use of either random or pseudo-random sequences. Although quasi-random sequences do not necessarily mimic a random sequence, they can be tailored to satisfy the equi-distribution criteria needed for the integration. By this I mean, roughly speaking, that the numbers are spread throughout the region of interest in a much more uniform manner than a random or pseudo-random sequence.
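Two of the linear tests just mentioned, displaced auto-correlation and counts of increasing runs, can be sketched as follows. Python's random.random() stands in for the generator under test (an assumption; any generator could be substituted):

```python
# Two simple linear tests: lag-k autocorrelation and increasing-run counts.
import random
import statistics

def autocorrelation(xs, lag):
    """Sample autocorrelation of xs displaced by the given lag."""
    mean = statistics.fmean(xs)
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean)
              for i in range(len(xs) - lag))
    return cov / var

def increasing_runs(xs):
    """Count maximal strictly increasing runs in the sequence."""
    runs, prev = 1, xs[0]
    for x in xs[1:]:
        if x <= prev:
            runs += 1          # current run ends, a new one begins
        prev = x
    return runs

random.seed(0)
sample = [random.random() for _ in range(10_000)]
print(autocorrelation(sample, 1))  # should be near 0 for a good generator
print(increasing_runs(sample))     # roughly N/2 runs are expected
```

For a truly random sequence the expected number of increasing runs is about (N + 1)/2, since each successive pair is equally likely to rise or fall; large deviations from that count, or from zero autocorrelation, would flag a "bad" generator.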
For example, say one needs to find the average of the quantity f(x) over the set of coordinates x, knowing the distribution of coordinate values p(x) for the system being considered. Ordinarily, the average is given by the expression

⟨f⟩ = ∫ f(x) p(x) dx.

Rather than evaluating this integral, however, one can evaluate f(x) at a series of random points. If the probability of picking a particular point x is proportional to the statistical weight p(x), then ⟨f⟩ is given by the expression

⟨f⟩ = (1/N) Σ(i=1 to N) f(x_i),

where N is the total number of points chosen. This idea is the basis of the Metropolis technique of evaluating integrals by the Monte Carlo method. Now if the points are taken from a random or a pseudo-random sequence, the statistical uncertainty will be proportional to 1/√N. However, if a quasi-random sequence is used, the points will occupy the coordinate space with the correct distribution but in a more uniform manner, and the statistical uncertainty will be proportional to 1/N. In other words, the uncertainty will decrease much faster with a quasi-random sequence than with a random or pseudo-random sequence.

How are quasi-random sequences generated? One type of sequence with a very uniform distribution is based on the radical-inverse function. The radical-inverse function φ(N, b) of a number N with base b is constructed by

1. writing the number in base b (for example, 14 in base 3 is 112);
2. reversing the digits (112 becomes 211); and
3. writing the result as a fraction less than 1 in base b (211 becomes 211/1000 in base 3 and, thus, φ(14, 3) = .211).

A sequence based on the radical-inverse function is generated by choosing a prime number as the base b and finding φ(1, b), φ(2, b), φ(3, b), φ(4, b), .... For a problem with k dimensions, the first k primes are used, and (φ(N, b1), φ(N, b2), ..., φ(N, bk)) becomes the Nth point of the k-dimensional sequence. This sequence has a very uniform distribution and is useful in multiple integration or multi-dimensional sampling.
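The three steps above (write in base b, reverse the digits, read as a fraction) translate directly into code. The sketch below implements the radical-inverse function and builds two-dimensional points from the first two primes:

```python
# Radical-inverse function and a k-dimensional quasi-random sequence,
# following the digit-reversal construction described in the text.

def radical_inverse(n, b):
    """phi(n, b): write n in base b, reverse the digits, read as a fraction < 1."""
    result, scale = 0.0, 1.0 / b
    while n > 0:
        n, digit = divmod(n, b)      # peel off digits from the low end
        result += digit * scale      # ...and deposit them after the point
        scale /= b
    return result

# The text's example: 14 in base 3 is 112, reversed 211, i.e. 2/3 + 1/9 + 1/27.
assert abs(radical_inverse(14, 3) - (2/3 + 1/9 + 1/27)) < 1e-12

# Two-dimensional points (phi(N, 2), phi(N, 3)) for N = 1, 2, ...
points = [(radical_inverse(n, 2), radical_inverse(n, 3)) for n in range(1, 9)]
print(points[0])   # (0.5, 0.3333...)
```

Peeling digits with divmod while shrinking the scale factor reverses the digit order automatically, so no explicit string reversal is needed.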
There are many other types of random, pseudo-random, or quasi-random sequences than the ones I have discussed here, and there is much research aimed at generating sequences with the properties appropriate to the desired application. However, the examples I have discussed should illustrate both the approaches being taken and the obstacles that must be overcome in the quest for suitable "random" numbers. ■

Monte Carlo at Work

by Gary D. Doolen and John Hendricks

Every second nearly 10,000,000,000 "random" numbers are being generated on computers around the world for Monte Carlo solutions to problems that Stan Ulam first dreamed of solving forty years ago. A major industry now exists that has spawned hundreds of full-time careers invested in the fine art of generating Monte Carlo solutions, a livelihood that often consists of extracting an answer out of a noisy background. Here we focus on two of the extensively used Monte Carlo solvers: MCNP, an internationally used neutron and photon transport code maintained at Los Alamos; and the "Metropolis" method, a popular and efficient procedure for computing equilibrium properties of solids, liquids, gases, and plasmas.

MCNP

In the fifties, shortly after the work on the Monte Carlo method by Ulam, von Neumann, Fermi, Metropolis, Richtmyer, and others, a series of Monte Carlo transport codes began emerging from Los Alamos. The concepts on which these codes were based were those outlined by von Neumann (see "Stan Ulam, John von Neumann, and the Monte Carlo Method"), but a great deal of detailed work was needed to incorporate the appropriate physics and to develop shorter routes to statistically valid solutions.
From the beginning the neutron transport codes used a general treatment of the geometry, but successive versions added such features as cross-section libraries, variance-reduction techniques (essentially clever ways to bias the random numbers so that the guesses will cluster around the correct solution), and a free-gas model treating thermalization of the energetic fission neutrons. Also, various photon transport codes were developed that dealt with photon energies from as low as 1 kilo-electron-volt to the high energies of gamma rays. Then, in 1973, the neutron transport and the photon transport codes were merged into one. In 1977 the first version of MCNP appeared, in which photon cross sections were added to account for production of gamma rays by neutron interactions. Since then the code has been distributed to over two hundred institutions worldwide.* The Monte Carlo techniques and data now in the MCNP code represent over three hundred person-years of effort and have been used to calculate many tens of thousands of practical problems by scientists throughout the world. The types of problems include the design of nuclear reactors and nuclear safeguard systems, criticality analyses, oil well logging, health-physics problems, determinations of radiological doses, spacecraft radiation modeling, and radiation damage studies. Research on magnetic fusion has used MCNP heavily. The MCNP code features a general three-dimensional geometry, continuous-energy or multigroup physics packages, and sophisticated variance-reduction techniques. Even very complex geometry and particle transport can be modeled almost exactly. In fact, the complexity of the geometry that can be represented is limited only by the dedication of the user.
*The MCNP code and manual can be obtained from the Radiation Shielding Information Center (RSIC).

The Metropolis Method

The problem of finding the energy and configuration of the lowest energy state of a system of many particles is conceptually simple. One calculates the energy of the system, randomly moves each particle a small distance, and recalculates the energy. If the energy has decreased, the new position is accepted, and the procedure continues until the energy no longer changes. The question of how to calculate equilibrium properties of a finite system at a given temperature is more difficult, but it was answered in a 1953 Journal of Chemical Physics article by Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller, who decided that the calculation should follow the same steps for finding the minimum energy but with one important change. When a move results in an increased energy, one accepts the new position with probability e^(−ΔE/T), where ΔE is the change in energy and T is the temperature. This procedure gives the equilibrium solution for any physical system. In fact, a system with many particles can be solved with only a few lines of code and a fast computer. Although calculations for short-range forces are much easier than for long-range forces (such as the Coulomb force), the Metropolis technique has been used for most physical systems in which the forces between particles are known. Wayne Slattery, Hugh DeWitt, and one of the authors (GD) applied the technique to a neutral Coulomb plasma consisting of thousands of particles in a periodic box. The purpose was to calculate such physical properties as the temperature at which this type of plasma freezes and the pair distribution function, which is the probability of finding one particle at a given distance from another (see accompanying figure).
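The "few lines of code" claim is easy to check. The sketch below applies the acceptance rule just described to a single particle in a one-dimensional harmonic well; the potential is an illustrative assumption (the article's application was a Coulomb plasma), with temperature in units where Boltzmann's constant is 1:

```python
# Minimal Metropolis sketch: accept downhill moves always, uphill moves
# with probability e^(-dE/T), for one particle in a harmonic well.
import math
import random

def energy(x):
    return 0.5 * x * x               # stand-in potential, not the plasma

def metropolis_step(x, T, step=0.5):
    """Propose a random displacement and apply the Metropolis rule."""
    x_new = x + random.uniform(-step, step)
    dE = energy(x_new) - energy(x)
    if dE < 0 or random.random() < math.exp(-dE / T):
        return x_new                 # move accepted
    return x                         # move rejected; keep old position

random.seed(42)
T, x = 1.0, 0.0
samples = []
for i in range(50_000):
    x = metropolis_step(x, T)
    if i > 5_000:                    # discard equilibration steps
        samples.append(x)

# For this potential at temperature T, the mean of x^2 should approach T.
mean_sq = sum(s * s for s in samples) / len(samples)
print(mean_sq)
```

Rejected moves must be counted as samples of the old position, as done here; dropping them silently would bias the averages. For the plasma calculation the same loop runs over thousands of particles with Coulomb interactions in a periodic box.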
Because the uncertainty in a Monte Carlo result is proportional to 1/√N, where N is the number of moves of a single particle, several million moves requiring several hundred Cray hours were needed to obtain accurate results for the plasma at many temperatures. As computers become faster and their memories increase, larger and more complicated systems are being calculated far more accurately than even Stan Ulam probably expected. ■

PAIR-DISTRIBUTION FUNCTION

This plot gives the probability of pairs of charged particles in a plasma being separated by a certain distance. The probabilities are plotted as a function of the distance between the pair of particles (increasing from left to right) and temperature (decreasing from front to back). At the left edge, both the distance and the probability are zero; at the right edge, the probability has become constant in value. Red indicates probabilities below this constant value, yellow and green above. As the temperature of the plasma decreases, lattice-like peaks begin to form in the pair-distribution function. The probabilities, generated with the Metropolis method described in the text, have been used for precise tests of many theoretical approximations for plasma models.