
and is based on the idea of self-similarity or scaling.

Let's take the simplest example, the unit square. We could say that the dimension of the unit square is 2 for the following reason. Consider any scaling transformation f(x) = λx, where x is a point in the plane. The transformation f is called a similarity map of the plane, and the image of the unit square under f will be a square whose area is λ². The power to which we raise the scaling factor λ to obtain the measure of the image set is the dimension of the original set. Exactly the same reasoning shows that the unit cube in Euclidean n-space has dimension n.
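This scaling argument can be spelled out numerically; the following sketch (our own illustration, with an arbitrary scale factor) recovers the exponent 2 for the square and 3 for the cube:

```python
import math

lam = 3.0  # an arbitrary scaling factor

# Image of the unit square under f(x) = lam*x has area lam**2;
# the exponent recovered from the scaling is the dimension.
area_scaled = lam**2 * 1.0
d_square = math.log(area_scaled) / math.log(lam)

# Same computation for the unit cube in 3-space.
vol_scaled = lam**3 * 1.0
d_cube = math.log(vol_scaled) / math.log(lam)

print(d_square, d_cube)  # approximately 2.0 and 3.0
```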

The generalization to more complicated metric spaces is straightforward. Consider a general metric space X. A map f is a similarity map of a subset E of X if the distances between points in E scale by a factor r under the action of the map. In other words, there is a number r such that for all x and y in E, dist(f(x), f(y)) = r dist(x, y). Hausdorff defined for each number β > 0 a measure H^β on X that obeys the scaling law of Hausdorff measures.

Scaling law of Hausdorff measures: If E ⊂ X and f is a similarity map of E onto f(E) with similarity ratio r, then H^β(f(E)) = r^β H^β(E).

While the measures H^β are defined on the metric space for all values of β > 0, Hausdorff showed that there is one and only one value α at which a "jump" occurs. He called α the dimension of the metric space.

Hausdorff dimension theorem: For each metric space X, there is a number α such that if β < α, then H^β(X) = ∞, and if α < β, then H^β(X) = 0. The number α is called the Hausdorff dimension of X.

How do Hausdorff's definitions of measure and dimension compare with our ordinary notions in Euclidean space? It turns out that the Hausdorff dimension of n-dimensional Euclidean space is n (which it should be, of course) and the associated Hausdorff measure H^n is the same as our usual definition of volume element. Thus, H^α is a natural generalization to a space of dimension α of our ordinary notions of measure, or volume element, in Euclidean space. Once the Hausdorff dimension α of a space is known, we have a natural measure on the space, namely H^α. So the first problem is to determine the dimension of the space under consideration.

Hausdorff Dimension of Cantor's Middle-Third Set. As an example, we will show that the self-similarity properties of the middle-third Cantor set C determine its Hausdorff dimension to be log 2/log 3. (In fact, Hausdorff proved this in his original paper.)

Consider the two similarity maps f_1(x) = x/3 and f_2(x) = x/3 + 2/3. Then f_1(C) = C ∩ [0, 1/3] and f_2(C) = C ∩ [2/3, 1]. So C = f_1(C) ∪ f_2(C). Since f_1(C) and f_2(C) are disjoint and H^α is a measure,

H^α(C) = H^α(f_1(C)) + H^α(f_2(C)).

By the scaling law, H^α(f_1(C)) = (1/3)^α H^α(C) and H^α(f_2(C)) = (1/3)^α H^α(C).


Thus H^α(C) = (1/3)^α H^α(C) + (1/3)^α H^α(C) = (2/3^α) H^α(C).

Cancelling H^α(C), we have

Los Alamos Science Special Issue 1987


1 = 2/3^α, or α = log 2/log 3.

We conclude that the Hausdorff dimension of C is log 2/log 3. Of course, this is only a heuristic argument (because we cannot cancel H^α(C) unless H^α(C) is positive and finite), but it can be justified.
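As a quick check (our own, not part of the original argument), one can verify numerically that α = log 2/log 3 satisfies the equation obtained by the cancellation:

```python
import math

alpha = math.log(2) / math.log(3)  # claimed dimension, approximately 0.6309

# The cancellation above left the equation 1 = 2/3**alpha;
# verify that alpha satisfies it.
residual = 2 / 3**alpha - 1
print(alpha, residual)  # residual is 0 up to floating-point rounding
```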

Returning to our example T(x) = (3/2)(1 − |2x − 1|), we have shown that the invariant set M is Cantor's middle-third set and that the Hausdorff dimension of M is α = log 2/log 3. In fact, μ = H^α, Hausdorff's volume element in dimension α, is an invariant measure on M.

Our analysis of this example is typical of the analyses of many discrete dynamical systems. We found an invariant set M that is constructed by an algorithm that analyzes the behavior of points near M. The first application of the algorithm yields nonoverlapping closed regions J_1, ..., J_n; the second yields nonoverlapping subregions J_{i1}, ..., J_{in} in each J_i; and so forth. Finally, the invariant set M is realized as

M = ∩_{k=1}^∞ ∪_{i_1,...,i_k} J_{i_1...i_k}.

In this example the construction is self-similar; that is, there are scaling ratios t_1, ..., t_n such that a region at iteration k, J_{i_1...i_k}, and a subregion at level k + 1, J_{i_1...i_k i}, are geometrically similar with ratio t_i. (In our example t_1 = t_2 = 1/3.) When such similarity ratios exist, one can use a fundamental formula due to P. A. P. Moran for calculating the Hausdorff dimension of the invariant set.

Theorem: If M = ∩_{k=1}^∞ ∪_{i_1,...,i_k} J_{i_1...i_k}, then dim(M) = α, where α is the solution of

t_1^α + t_2^α + ... + t_n^α = 1.

Moreover, 0 < H^α(M) < +∞. That is, α is the Hausdorff dimension of M, and H^α is a well-defined finite measure on M.
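Moran's equation can be solved numerically for any given set of ratios; here is a minimal sketch (the function name and bisection tolerances are our choices, not part of Moran's result):

```python
def moran_dimension(ratios, tol=1e-12):
    """Solve t_1**a + ... + t_n**a = 1 for a by bisection.

    The left-hand side is strictly decreasing in a (each 0 < t_i < 1),
    so the root is unique.
    """
    def f(a):
        return sum(t**a for t in ratios) - 1.0

    lo, hi = 0.0, 1.0
    while f(hi) > 0:          # widen until the sum drops below 1
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Middle-third Cantor set: two pieces, each scaled by 1/3.
print(moran_dimension([1/3, 1/3]))   # approximately 0.6309 = log 2 / log 3
```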


Random Cantor Sets. One of my current interests centers on analyzing the invariant sets obtained when the dynamical system experiences some sort of random perturbation. The perturbation carries over into the algorithm used to construct the invariant set. Thus we randomize the algorithm, and the scaling ratios t_1, t_2, ..., t_n, instead of having fixed or deterministic values as before, are now random variables with a certain probability distribution. One theorem of Williams and mine (Mauldin and Williams 1986) is that the Hausdorff dimension of the final "perturbed" set M is, with probability 1, the solution of

E(t_1^α + t_2^α + ... + t_n^α) = 1,

where E(t_1^α + ... + t_n^α) is the expected value of the sum of the αth powers of the scaling ratios. Note that this formula reduces to Moran's formula in the deterministic case.
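Under this theorem, the dimension can also be estimated numerically: replace the expected value by a Monte Carlo average and solve for α by bisection. The sketch below (our own; sample size and seed are arbitrary) uses the two-interval random construction discussed next in the text, where the surviving intervals [0, x] and [y, 1] have lengths x and 1 − y:

```python
import random

random.seed(1)

# Random construction: x uniform on [0,1], then y uniform on [x,1];
# the two surviving intervals have lengths x and 1 - y.
samples = []
for _ in range(100_000):
    x = random.random()
    y = x + (1 - x) * random.random()
    samples.append((x, 1 - y))

def expected_sum(alpha):
    """Monte Carlo estimate of E(t1**alpha + t2**alpha)."""
    return sum(t1**alpha + t2**alpha for t1, t2 in samples) / len(samples)

# E(...) is decreasing in alpha, so solve E(...) = 1 by bisection.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    if expected_sum(mid) > 1.0:
        lo = mid
    else:
        hi = mid

alpha_hat = (lo + hi) / 2
print(alpha_hat)  # close to the golden mean (sqrt(5) - 1)/2, about 0.618
```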

As an example, suppose our randomly perturbed system produces Cantor subsets of [0,1] as follows. First, choose x at random according to the uniform distribution on [0,1]. Then choose y at random according to the uniform distribution on [x, 1]. We obtain two intervals J_1 = [0, x] and J_2 = [y, 1]. Now in each of these intervals repeat the same procedure (independently in each interval). We obtain two





Fig. 9. (a) Consider a rectangle with sides of length A and B, A < B. Let r denote the ratio of A to B. Divide this rectangle into a square of side A and a new rectangle. If the ratio of the lengths of the sides of the new rectangle, (B − A)/A, also equals r, then both the original rectangle and the new rectangle are golden rectangles and r is equal to the golden mean m. (The numerical value of m, (√5 − 1)/2, is obtained by solving the two simultaneous equations r = A/B and r = (B − A)/A.) (b) The process of dividing a golden rectangle into a square and a new golden rectangle can, of course, be continued indefinitely. It can be shown that the logarithmic spiral given in polar coordinates by log ρ = mθ passes through two opposite vertices of each successively smaller square. This fact may help explain why the Hausdorff dimension of the random Cantor sets described in the text is equal to the golden mean.


Problem 3. Computer Experiments and Random Homeomorphisms

One topic Stan and I discussed several times was whether one could “randomize" dynamical systems in some way. Is it possible to define a probability measure on a wide class of dynamical systems such that meaningful statements could be made, for instance, about the probability that a system would become turbulent or about the expected time to the "onset of chaos"? To get started on this very ambitious problem, we discussed how we would go about generating homeomorphisms at random. For simplicity, let us generate homeomorphisms of the unit interval [0,1] onto itself. Thus, we wish to build continuous, strictly increasing maps h with h(0) = 0 and h(1) = 1. One algorithm for doing this randomly follows.

Set h(0) = 0 and h(1) = 1. Choose h(1/2) according to the uniform distribution on [0,1]. Continue by choosing h(1/4) and h(3/4) according to the uniform distributions on [0, h(1/2)] and [h(1/2), 1], respectively. In general, once the values of h(i/2^n) have been determined for i = 0, 1, ..., 2^n, choose h((2i + 1)/2^(n+1)) according to the uniform distribution on [h(i/2^n), h((i + 1)/2^n)]. This simple algorithm is easily implemented on a computer. (It needs no more than fifty lines of FORTRAN.) If the computer's random-number generator is fairly good, general properties of these functions can be guessed. However, to show that this algorithm defines an associated probability measure P on N, the set of all homeomorphisms of [0,1] onto [0,1], is no small task.

First we need to define a class of elementary events and the probabilities associated with them. An elementary event in the sample space N comes naturally from the random algorithm. For a positive integer n, consider the dyadic grid on [0,1] given by the points 1/2^n, 2/2^n, ..., (2^n − 1)/2^n. Over each grid point i/2^n construct a "gate", an interval (a_i, b_i) such that a_i < b_i ≤ a_(i+1). An elementary event consists of all elements h of N that pass through all the gates: a_i < h(i/2^n) < b_i, for i = 1, 2, ..., 2^n − 1 (Fig. 10).
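The algorithm above is easy to sketch in Python (a hedged translation of the FORTRAN remark in the text; the function name and return format are our choices):

```python
import random

def random_homeomorphism(n, rng=random.random):
    """Sample a random homeomorphism at the dyadic points i/2**n.

    Follows the midpoint-refinement algorithm in the text: each new value
    is chosen uniformly between the two already-chosen neighboring values.
    Returns the list [h(0), h(1/2**n), h(2/2**n), ..., h(1)].
    """
    h = [0.0, 1.0]
    for _ in range(n):
        refined = []
        for a, b in zip(h, h[1:]):
            refined.append(a)
            refined.append(a + (b - a) * rng())  # value at the new midpoint
        refined.append(h[-1])
        h = refined
    return h

random.seed(0)
values = random_homeomorphism(10)  # h at i/1024, as in the Fig. 12 samples
print(len(values), values[0], values[-1])  # 1025 0.0 1.0
```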


The probability assigned to an elementary event is defined by induction on n. For example, if n = 1, an elementary event consists of all h that pass through a single gate: a < h(1/2) < b. Since the random algorithm chooses h(1/2) uniformly, the probability assigned to this event is the length of the interval, b − a. If n > 1, the probability of an elementary event is determined from the conditional probabilities given by the algorithm. For example, the distribution function of the random variable h(3/4) is P(h(3/4) ≤ t). To calculate this distribution function, we first find the conditional distribution of h(3/4) given h(1/2) = s (h(3/4) is chosen uniformly from [s, 1]):

P(h(3/4) ≤ t | h(1/2) = s) = (t − s)/(1 − s) if s ≤ t < 1, and = 1 if 1 ≤ t.

Second, set

P(h(3/4) ≤ t) = ∫_0^1 P(h(3/4) ≤ t | h(1/2) = s) ds = ∫_0^t ((t − s)/(1 − s)) ds = t + (1 − t) ln(1 − t),

the integrand being 0 for s > t.

The distribution of h(3/4) is shown in Fig. 11.
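This distribution function is easy to check by simulation; the following sketch (our own, with arbitrary sample size and seed) estimates P(h(3/4) ≤ 1/2) and compares it with the formula:

```python
import math
import random

random.seed(2)

t = 0.5
trials = 200_000
hits = 0
for _ in range(trials):
    s = random.random()                  # h(1/2), uniform on [0, 1]
    h34 = s + (1 - s) * random.random()  # h(3/4), uniform on [s, 1] given h(1/2) = s
    if h34 <= t:
        hits += 1

empirical = hits / trials
exact = t + (1 - t) * math.log(1 - t)    # the distribution derived above
print(empirical, exact)                  # both near 0.153
```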

The exact formulas for the probabilities assigned to various elementary events are quite complicated. What is required is to show that probabilities of the form

P(h(1/2^n) ≤ t_1, h(2/2^n) ≤ t_2, ..., h((2^n − 1)/2^n) ≤ t_(2^n − 1))

satisfy the hypotheses of Kolmogorov's consistency theorem. We have shown that these conditions are indeed satisfied, and therefore a probability measure P is defined on the homeomorphisms of [0,1]. To see what these homeomorphisms look like, we used the computer. Figure 12 shows a few samples from our computer studies in which the values of h(i/2^n) are computed for n = 10.

S. Graf, S. C. Williams, and I studied this method in detail (Graf, Mauldin, and Williams 1986). For example, we examined a large number of the computer studies and guessed that with probability 1 the derivative of a random homeomorphism at the origin is 0. This conjecture turned out to be correct. The argument is essentially the following. First, since h is increasing and h(0) = 0, it is enough to show that


lim_{n→∞} 2^n h(1/2^n) = 0.

Set

V_n(h) = h(1/2^n)/h(1/2^(n−1)),


where n = 1, 2, 3, .... It is intuitively clear and can be proved that V_1, V_2, V_3, ... are independent random variables, all uniformly distributed on [0,1]. Set X_n = ln V_n. The X_n's are independent and identically distributed, and E(X_n) = ∫_0^1 ln t dt = −1. Therefore, by the strong law of large numbers,




lim_{n→∞} (1/n)(X_1 + X_2 + ... + X_n) = −1 with probability 1. Since h(1/2^n) = V_1 V_2 ... V_n, this gives (1/n) ln(2^n h(1/2^n)) → ln 2 − 1 < 0, and therefore 2^n h(1/2^n) → 0 with probability 1.
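The strong-law step is easy to illustrate numerically (a sketch with our own sample size and seed):

```python
import math
import random

random.seed(3)

# X_k = ln V_k with V_k uniform on (0, 1]; the strong law says the running
# average tends to E(X) = -1.
n = 100_000
total = 0.0
for _ in range(n):
    total += math.log(1.0 - random.random())  # 1 - U is uniform on (0, 1]

avg = total / n
print(avg)  # close to -1

# Hence (1/n) ln(2**n * h(1/2**n)) tends to ln 2 - 1 < 0,
# so 2**n * h(1/2**n) tends to 0: h'(0) = 0 with probability 1.
print(math.log(2) - 1)  # about -0.307
```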




Fig. 10. In the study of random homeomorphisms described in the text, an elementary event is defined as the set of all homeomorphisms h that pass through 2^n − 1 "gates" consisting of open intervals (a_i, b_i) over the grid points i/2^n (i = 1, 2, ..., 2^n − 1). The a_i's and b_i's are restricted by the conditions a_i < b_i ≤ a_(i+1). Shown here is one possible set of gates for n = 2 and a member of the corresponding elementary event.

