and is based on the idea of self-similarity or scaling.
Let's take the simplest example, the unit square. We could say that the dimension of the unit square is 2 for the following reason. Consider any scaling transformation f(x) = rx, where x is a point in the plane. The transformation f is called a similarity map of the plane, and the image of the unit square under f is a square whose area is r^2. The power to which we raise the scaling ratio r to obtain the measure of the image set is the dimension of the original set. Exactly the same reasoning shows that the unit cube in Euclidean n-space has dimension n.
The generalization to more complicated metric spaces is straightforward. Consider a general metric space X. A map f is a similarity map of a subset E of X if distances between points of E scale by a fixed factor r under the action of the map. In other words, there is a number r such that for all x and y in E, dist(f(x), f(y)) = r dist(x, y). Hausdorff defined for each number β > 0 a measure H^β on X that obeys the following scaling law.

Scaling law of Hausdorff measures: If E ⊂ X and f is a similarity map of E onto f(E) with similarity ratio r, then H^β(f(E)) = r^β H^β(E).
While the measures H^β are defined on the metric space for all values of β > 0, Hausdorff showed that there is one and only one value α at which a "jump" occurs. He called α the dimension of the metric space.

Hausdorff dimension theorem: For each metric space X, there is a number α such that if β < α, then H^β(X) = ∞, and if α < β, then H^β(X) = 0. The number α is called the Hausdorff dimension of X.
How do Hausdorff's definitions of measure and dimension compare with our ordinary notions in Euclidean space? It turns out that the Hausdorff dimension of n-dimensional Euclidean space is n (which it should be, of course) and the associated Hausdorff measure H^n is the same as our usual volume element. Thus H^α is a natural generalization to a space of dimension α of our ordinary notion of measure, or volume element, in Euclidean space. Once the Hausdorff dimension α of a space is known, we have a natural measure on the space, namely H^α. So the first problem is to determine the dimension of the space under consideration.
Hausdorff Dimension of Cantor's Middle-Third Set. As an example, we will show that the self-similarity properties of the middle-third Cantor set C define its Hausdorff dimension as log 2/ log 3. (In fact, Hausdorff proved this in his original paper.)
Consider the two similarity maps f_1(x) = x/3 and f_2(x) = x/3 + 2/3. Then f_1(C) = C ∩ [0, 1/3] and f_2(C) = C ∩ [2/3, 1]. So C = f_1(C) ∪ f_2(C). Since f_1(C) and f_2(C) are disjoint and H^α is a measure,
H^α(C) = H^α(f_1(C)) + H^α(f_2(C)).
By the scaling law, H^α(f_1(C)) = (1/3)^α H^α(C) and H^α(f_2(C)) = (1/3)^α H^α(C). Therefore
H^α(C) = (1/3)^α H^α(C) + (1/3)^α H^α(C) = (2/3^α) H^α(C).
Cancelling H^α(C), we have
Los Alamos Science Special Issue 1987
1 = 2/3^α, or α = log 2/log 3.
We conclude that the Hausdorff dimension of C is log 2/log 3. Of course, this is only a heuristic argument (because we cannot cancel H^α(C) unless H^α(C) is positive and finite), but it can be justified.
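The value log 2/log 3 can also be confirmed numerically. The following sketch (illustrative Python, not part of the original article) builds the intervals that survive k steps of deleting middle thirds and forms the box-counting ratio log N/log(1/ε), which agrees with the dimension just derived:

```python
import math

def cantor_intervals(k):
    """Closed intervals remaining after k steps of deleting open middle thirds."""
    intervals = [(0.0, 1.0)]
    for _ in range(k):
        refined = []
        for a, b in intervals:
            third = (b - a) / 3.0
            refined.append((a, a + third))       # keep the left third
            refined.append((b - third, b))       # keep the right third
        intervals = refined
    return intervals

# At scale eps = 3^-k the set C is covered by N = 2^k intervals, so the
# box-counting estimate log N / log(1/eps) equals k log 2 / (k log 3).
for k in (4, 8, 12):
    n = len(cantor_intervals(k))
    print(k, n, math.log(n) / math.log(3 ** k))

print(math.log(2) / math.log(3))  # 0.6309...
```

Here the self-similarity makes the estimate exact at every scale; for sets without exact scaling, the ratio only converges as the scale shrinks.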
Returning to our example T(x) = (3/2)(1 − |2x − 1|), we have shown that the invariant set M is Cantor's middle-third set and that the Hausdorff dimension of M is α = log 2/log 3. In fact μ = H^α, Hausdorff's volume element in dimension α, is an invariant measure on M.
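A small simulation makes the invariant set visible. The sketch below (illustrative Python; the map T comes from the text, but the code and the survival experiment are my own) iterates T on random starting points. Points outside the Cantor set eventually leave [0,1], and since each step removes the middle third of every surviving interval, the fraction surviving n steps is about (2/3)^n:

```python
import random

def T(x):
    """Tent map T(x) = (3/2)(1 - |2x - 1|): slope +-3 with peak 3/2 at x = 1/2."""
    return 1.5 * (1.0 - abs(2.0 * x - 1.0))

def survival_fraction(n_iter, n_points=100_000, seed=1):
    """Fraction of uniform random starting points still in [0, 1] after n_iter steps."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(n_points):
        x = rng.random()
        for _ in range(n_iter):
            x = T(x)
            if x < 0.0 or x > 1.0:   # escaped; the orbit now runs off to -infinity
                break
        else:
            alive += 1
    return alive / n_points

# About (2/3)^n of the starting points remain after n steps.
for n in (1, 2, 5):
    print(n, survival_fraction(n), (2 / 3) ** n)
```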
Our analysis of this example is typical of the analyses of many discrete dynamical systems. We found an invariant set M that is constructed by an algorithm that analyzes the behavior of points near M. The first application of the algorithm yields nonoverlapping closed regions J_1, ..., J_n; the second yields nonoverlapping subregions J_{i,1}, ..., J_{i,n} in each J_i; and so forth. Finally, the invariant set M is realized as the intersection over all levels k of the union of the regions J_{i_1...i_k} at level k.
In this example the construction is self-similar; that is, there are scaling ratios t_1, ..., t_n such that a region at iteration k, J_{i_1...i_k}, and a subregion at level k + 1, J_{i_1...i_k i_{k+1}}, are geometrically similar with ratio t_{i_{k+1}}. (In our example t_1 = t_2 = 1/3.) When such similarity ratios exist, one can use a fundamental formula due to P. A. P. Moran for calculating the Hausdorff dimension of the invariant set.
Moran's theorem: The Hausdorff dimension of M is α, where α is the solution of

t_1^α + t_2^α + ... + t_n^α = 1.

Moreover, 0 < H^α(M) < +∞. That is, α is the Hausdorff dimension of M, and H^α is a well-defined finite measure on M.
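Moran's equation rarely has a closed-form solution, but since each ratio lies strictly between 0 and 1, the left-hand side decreases in α and the root can be found by bisection. The sketch below (illustrative Python, not from the original article) does this and recovers log 2/log 3 for the middle-third ratios:

```python
import math

def moran_dimension(ratios, tol=1e-12):
    """Solve t_1^a + ... + t_n^a = 1 for a (Moran's formula) by bisection.

    Assumes every ratio lies strictly between 0 and 1, so the left-hand
    side is a decreasing function of a and the root is unique.
    """
    f = lambda a: sum(t ** a for t in ratios) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0.0:           # widen the bracket until f changes sign
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(moran_dimension([1 / 3, 1 / 3]))   # log 2 / log 3 = 0.6309...
print(moran_dimension([1 / 2, 1 / 4]))   # a pair of unequal ratios
```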
Random Cantor Sets. One of my current interests centers on analyzing the invariant sets obtained when the dynamical system experiences some sort of random perturbation. The perturbation introduces a corresponding perturbation in the algorithm used to construct the invariant set. Thus we randomize the algorithm, and the scaling ratios t_1, t_2, ..., t_n, instead of having fixed or deterministic values as before, are now random variables that have a certain probability distribution. One theorem of Williams and mine (Mauldin and Williams 1986) is that the Hausdorff dimension of the final "perturbed" set M is, with probability 1, the solution of

E(t_1^α + t_2^α + ... + t_n^α) = 1,

where E(t_1^α + t_2^α + ...) is the expected value of the sum of the αth powers of the scaling ratios. Note that this formula reduces to Moran's formula in the deterministic case.
As an example, suppose our randomly perturbed system produces Cantor subsets of [0,1] as follows. First, choose x at random according to the uniform distribution on [0,1]. Then between x and 1 choose y at random according to the uniform distribution on [x,1]. We obtain two intervals J_1 = [0, x] and J_2 = [y, 1]. Now in each of these intervals repeat the same procedure (independently in each interval). We obtain two subintervals of each interval, and so forth.
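For this construction the expectation in the dimension equation can be worked out: E(x^α) = 1/(α+1), and since 1 − y is uniform on (0, 1 − x) given x, E((1 − y)^α) = 1/(α+1)^2. The sketch below (illustrative Python, my own names and parameters) checks by Monte Carlo that the resulting equation is satisfied at the golden mean:

```python
import math
import random

def expected_power_sum(a, n=200_000, seed=2):
    """Monte Carlo estimate of E(t_1^a + t_2^a) for the random construction:
    x uniform on [0,1], y uniform on [x,1], with t_1 = x and t_2 = 1 - y."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.random()
        y = x + (1.0 - x) * rng.random()
        total += x ** a + (1.0 - y) ** a
    return total / n

# The dimension equation E(t_1^a + t_2^a) = 1 becomes
# 1/(a+1) + 1/(a+1)^2 = 1, i.e. (a+1)^2 = (a+1) + 1, whose positive
# root a = (sqrt(5) - 1)/2 is the golden mean.
golden = (math.sqrt(5.0) - 1.0) / 2.0
print(golden)                        # 0.6180...
print(expected_power_sum(golden))    # close to 1.0
```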
THE GOLDEN MEAN
Fig. 9. (a) Consider a rectangle with sides of length A and B, A < B. Let r denote the ratio of A to B. Divide this rectangle into a square of side A and a new rectangle. If the ratio of the lengths of the sides of the new rectangle, (B − A)/A, also equals r, then both the original rectangle and the new rectangle are golden rectangles and r is equal to the golden mean m. (The numerical value of m, (√5 − 1)/2, is obtained by solving the two simultaneous equations r = A/B and r = (B − A)/A.) (b) The process of dividing a golden rectangle into a square and a new golden rectangle can, of course, be continued indefinitely. It can be shown that the logarithmic spiral given in polar coordinates by log ρ = mθ passes through two opposite vertices of each successively smaller square. This fact may help explain why the Hausdorff dimension of the random Cantor sets described in the text is equal to the golden mean.
A problem left for the reader: Why should the golden mean (Fig. 9) arise as the dimension of these randomly constructed Cantor sets?
Problem 3. Computer Experiments and Random Homeomorphisms
One topic Stan and I discussed several times was whether one could "randomize" dynamical systems in some way. Is it possible to define a probability measure on a wide class of dynamical systems such that meaningful statements could be made, for instance, about the probability that a system would become turbulent or about the expected time to the onset of chaos? To get started on this very ambitious problem, we discussed how we would go about generating homeomorphisms at random. For simplicity, let us generate homeomorphisms of the unit interval [0,1] onto itself. Thus, we wish to build continuous, strictly increasing maps h with h(0) = 0 and h(1) = 1. One algorithm for doing this randomly follows.
Set h(0) = 0 and h(1) = 1. Choose h(1/2) according to the uniform distribution on (0,1). Continue by choosing h(1/4) and h(3/4) according to the uniform distribution on (0, h(1/2)) and (h(1/2), 1), respectively. In general, once the values of h(i/2^n) have been determined for i = 0, 1, ..., 2^n, choose h((2i + 1)/2^(n+1)) according to the uniform distribution on (h(i/2^n), h((i + 1)/2^n)). This simple algorithm is easily implemented on a computer. (It needs no more than fifty lines of FORTRAN.) If the computer's random-number generator is fairly good, general properties of these functions can be guessed.

However, to show that this algorithm defines an associated probability measure P on Ω, the set of all homeomorphisms of [0,1] onto [0,1], is no small task. First we need to define a class of elementary events and the probabilities associated with them. An elementary event in the sample space Ω comes naturally from the random algorithm. For a positive integer n, consider the dyadic grid on (0,1) given by the points 1/2^n, 2/2^n, ..., (2^n − 1)/2^n. Over each grid point i/2^n construct a "gate," an interval (a_i, b_i) such that a_i < b_i ≤ a_{i+1}. An elementary event consists of all elements h of Ω that pass through all the gates: a_i < h(i/2^n) < b_i for i = 1, 2, ..., 2^n − 1 (Fig. 10).
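The midpoint algorithm is indeed easy to code. Here is a sketch in Python rather than the FORTRAN mentioned in the text (function and variable names are my own); it returns the values h(i/2^n) on the dyadic grid:

```python
import random

def random_homeomorphism(n, seed=None):
    """Values h(i/2^n), i = 0..2^n, of a random homeomorphism of [0, 1].

    Midpoint algorithm from the text: h(0) = 0, h(1) = 1, and each new
    dyadic value h((2i+1)/2^(k+1)) is chosen uniformly between the
    already-chosen values at its two dyadic neighbors.
    """
    rng = random.Random(seed)
    h = {0.0: 0.0, 1.0: 1.0}
    xs = [0.0, 1.0]
    for _ in range(n):
        new_xs = []
        for left, right in zip(xs, xs[1:]):
            mid = (left + right) / 2.0
            h[mid] = h[left] + (h[right] - h[left]) * rng.random()
            new_xs += [left, mid]
        xs = new_xs + [1.0]
    return xs, [h[x] for x in xs]

xs, ys = random_homeomorphism(10, seed=42)
print(len(xs))  # 2^10 + 1 = 1025 grid points
print(all(a < b for a, b in zip(ys, ys[1:])))  # strictly increasing values
```

Plotting ys against xs for n = 10 reproduces pictures like those described in Figure 12.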
The probability assigned to an elementary event is defined by induction on n. For example, if n = 1, an elementary event consists of all h that pass through a single gate: a < h(1/2) < b. Since the random algorithm chooses h(1/2) uniformly, the probability assigned to this event is the length of the interval, b − a. If n > 1, the probability of an elementary event is determined from the conditional probabilities given by the algorithm. For example, the distribution function of the random variable h(3/4) is F(t) = P(h(3/4) < t). To calculate this distribution function, we first find the
conditional probability that h(3/4) < t, given that h(1/2) = s. It follows directly from the construction algorithm that

P(h(3/4) < t | h(1/2) = s) = (t − s)/(1 − s) if s < t, and 0 if t ≤ s.
Fig. 10. In the study of random homeomorphisms described in the text, an elementary event is defined as the set of all homeomorphisms h that pass through 2^n − 1 "gates" consisting of open intervals (a_i, b_i) over the grid points i/2^n (i = 1, 2, ..., 2^n − 1). The a_i's and b_i's are restricted by the conditions a_i < b_i ≤ a_{i+1}. Shown here is one possible set of gates for n = 2 and a member of the corresponding elementary event.
Integrating over s, which is itself uniformly distributed on (0,1), gives

F(t) = ∫_0^1 P(h(3/4) < t | h(1/2) = s) ds = t + (1 − t) ln(1 − t)

for 0 < t < 1 (Fig. 11). Probabilities defined in this inductive fashion must satisfy the conditions of Kolmogorov's consistency theorem. We have shown that these conditions are indeed satisfied and therefore a probability measure P is defined on the homeomorphisms of [0,1]. To see what these homeomorphisms look like, we used the computer. Figure 12 shows a few samples from our computer studies in which the values of h(i/2^n) are computed for n = 10.
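The distribution function of h(3/4) can also be checked by simulation. This sketch (illustrative Python, with names of my own choosing) draws h(1/2) uniformly and then h(3/4) uniformly above it, and compares the empirical distribution with t + (1 − t) ln(1 − t):

```python
import math
import random

def sample_h34(rng):
    """One sample of h(3/4): h(1/2) uniform on (0,1), then h(3/4) uniform on (h(1/2), 1)."""
    s = rng.random()
    return s + (1.0 - s) * rng.random()

def F(t):
    """Distribution function of h(3/4): F(t) = t + (1 - t) ln(1 - t), 0 <= t < 1."""
    return t + (1.0 - t) * math.log(1.0 - t)

rng = random.Random(7)
samples = [sample_h34(rng) for _ in range(200_000)]
for t in (0.25, 0.5, 0.75):
    empirical = sum(v < t for v in samples) / len(samples)
    print(t, round(empirical, 3), round(F(t), 3))
```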
S. Graf, S. C. Williams, and I studied this method in detail (Graf, Mauldin, and Williams 1986). For example, we examined a large number of the computer studies and guessed that with probability 1 the derivative of a random homeomorphism at the origin is 0. This conjecture turned out to be correct. The argument is essentially the following. First, since h is increasing and h(0) = 0, it is enough to show that
Fig. 11. As demonstrated in the text, F(t) = P(h(3/4) < t) equals 0 for t ≤ 0 and equals t + (1 − t) ln(1 − t) for 0 < t < 1. Shown here is the graph of that distribution function.
U_n(h) = h(1/2^n)/(1/2^n) → 0 as n → ∞.
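The drift toward 0 is easy to see. Along the leftmost dyadic branch the algorithm gives h(1/2^n) = u_1 u_2 ... u_n with the u_i independent and uniform on (0,1), so log U_n(h) = n log 2 + Σ log u_i; since E(log u_i) = −1 < −log 2, the law of large numbers forces U_n → 0 with probability 1. The sketch below (illustrative Python, computing in log space to avoid underflow) exhibits this drift:

```python
import math
import random

def log_U_n(n, rng):
    """log U_n(h), where U_n(h) = h(1/2^n) / (1/2^n).

    Along the leftmost dyadic branch h(1/2^k) = u_1 * ... * u_k with the
    u_i independent uniform on (0,1), so log U_n = n log 2 + sum(log u_i);
    working in log space avoids floating-point underflow for large n.
    """
    return n * math.log(2.0) + sum(math.log(rng.random()) for _ in range(n))

# (1/n) log U_n -> log 2 - 1 < 0, so U_n -> 0 with probability 1,
# which is why h'(0) = 0 almost surely.
rng = random.Random(3)
for n in (10, 100, 1000):
    mean = sum(log_U_n(n, rng) for _ in range(200)) / (200 * n)
    print(n, round(mean, 3))  # approaches log 2 - 1, about -0.307
```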