Each of the fifteen sets consisted of 39 combinations of slight variations approaching the actual widths of each of the four internal zones. Specifically, values of .195, .200 and .205 were used for the core's width (where ".195", for example, means ".195 of the distance from the center of the earth to the surface, falling at the boundary of the core with the outer core"); .335, .340, .345, .350 and .355 for the outer core; .4430, .4435, .4450, .4465, .4480, .4485, .4500, .4515, .4530, .4535, .4550, .4565, .4580 and .4585 for the mantle; and .0020, .0035, .0050 and .0065 for the crust/atmosphere (the relatively large range for the crust/atmosphere allowed for the possibility of crust alone, crust plus water, or crust plus water plus atmosphere). So, for example, one of the zonations consisted of zones whose widths were .195, .355, .4435 and .0065 for the core, outer core, mantle, and crust/atmosphere, respectively. Once the zone limits were set and the sampling grid adjusted, class memberships of the sampled points were established as before, and the spatial autocorrelation measures were applied as before. The resulting matrices again served as the input to the double-standardization operation. The results were most instructive. For both of the two spatial autocorrelation constructions of interest here, all 39 variations on the zones passed the test, across all five of the imposed grid shifts and all five coarseness variations. This in itself is not all that exciting, since we already know from the previous set of simulations that many or most such zonations will pass the test of transformation into an unambiguous three-dimensional space. However, here is where we can begin to use the mean input matrix correlation statistics discussed earlier to good advantage.
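The example zonation given above (.195 + .355 + .4435 + .0065) sums to exactly 1.0, which suggests that the combinations tested were those whose four widths together span the full radius. Under that assumption (inferred from the example, not stated explicitly), the candidate combinations could be enumerated with a short sketch like the following; the function name is hypothetical, and whether this exact filtering rule reproduces precisely the thirty-nine combinations reported depends on details not given in the text.

```python
from itertools import product

# Candidate zone widths, as proportions of the earth's radius (from the text)
CORE   = [0.195, 0.200, 0.205]
OUTER  = [0.335, 0.340, 0.345, 0.350, 0.355]
MANTLE = [0.4430, 0.4435, 0.4450, 0.4465, 0.4480, 0.4485, 0.4500,
          0.4515, 0.4530, 0.4535, 0.4550, 0.4565, 0.4580, 0.4585]
CRUST  = [0.0020, 0.0035, 0.0050, 0.0065]

def valid_zonations(tol=1e-9):
    """Return every (core, outer core, mantle, crust/atmosphere) width
    combination whose four proportions sum to 1.0 of the radius."""
    return [combo for combo in product(CORE, OUTER, MANTLE, CRUST)
            if abs(sum(combo) - 1.0) < tol]

# The worked example from the text should appear among the results.
zonations = valid_zonations()
print((0.195, 0.355, 0.4435, 0.0065) in zonations)
```

Each tuple returned then fixes the four concentric boundary radii, after which the sampling grid can be adjusted and class memberships assigned as described.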
I already know, from both the first simulations reported here and the later ones, that low mean r values (of, say, .01 or less) are quite rare in the universe of matrices that may be input to double-standardization. For example, across the entire 25000 matrices of fully random numbers input in the first simulation reported, the mean r is about .176 (with a standard deviation of about .113). In fact (and based on some other data), of those 25000 matrices only about 6 have a mean r as low as .00075, and only about 155 have a mean r as low as .0075. This is simply a matter of chance: the vectors created from the random input values will usually have some non-minuscule amount of correlation with one another, both individually and in sum. As mentioned earlier, I would like to consider the possibility that such instances of internal correlation may be related to conditions of redundancy of function within a system, and therefore to the degree of disequilibrium in its operation. With respect to its throughput processing of energy, the earth is almost certainly in a state of dynamic equilibrium, or very close to it; this is indicated, if by nothing else, by the relative lack of trends in the abundances of its main surface elements, especially those comprising the oceans and atmosphere, over the past three hundred million years or so (yes, there have been some significant temporary ups and downs in, for example, oxygen, but no general trend over the whole of the period). If the model I have developed holds any water, therefore, one would expect zonal models of the earth such as the one I have described to reveal this in the form of a summary r statistic approaching zero in value.
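The exact definition of the mean r statistic is not spelled out in this passage. One plausible reading, in which it is the average absolute Pearson correlation among a matrix's column vectors, can be sketched as follows; the function names, the uniform random inputs, and the 10x10 matrix size are illustrative assumptions, not the author's actual procedure.

```python
import random
from itertools import combinations

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def mean_abs_r(matrix):
    """Average absolute correlation over all pairs of columns in the matrix."""
    cols = list(zip(*matrix))
    pairs = list(combinations(cols, 2))
    return sum(abs(pearson_r(x, y)) for x, y in pairs) / len(pairs)

random.seed(1)
# One 10x10 matrix of fully random numbers, as in the first simulation's inputs
m = [[random.random() for _ in range(10)] for _ in range(10)]
print(round(mean_abs_r(m), 3))
```

Repeating this over many random matrices would give the kind of distribution of mean r values described above, against which the near-zero values from the earth zonations can be compared.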
What we actually get is something close to this, but a bit different: for one spatial autocorrelation measure, the mean r values for the five sets of analyses are .003, .004, .004, .003, and .004; for the other, .034, .038, .038, .036, and .039. One suspects here, first of all, that one of the two measures is simply not as mathematically valid as the other; even with this assumption, however, there is still the matter of sampling error to consider. We may simply be at the limit of what a sample of this grain can identify in this instance; there are at least two reasons to think otherwise, however. First, as you will notice, the first five values are fairly consistent. This suggests that sampling error is not the dominant influence on the results. Second,
and more importantly, while it is true that only one of the thirty-nine configurations of zonal widths can be the correct one, all thirty-nine nonetheless produced comparably low values. These
facts present a good number of opportunities for model validation, some of which are taken up next. First, it can be predicted that a re-sampling employing, say, 200000 sample points will not produce much of a further reduction of those .003 and .004 mean r values, even if the best estimates of an "average" zone width of the crust/atmosphere are used. Second, it can be predicted that some small, but apparent, improvements in reducing those .003 and .004 mean r values
Copyright 2006-2014 by Charles H. Smith. All rights reserved. Feedback: charles.smith@wku.edu