One of the appeals of this line of reasoning is that the structures it foresees as constituting physical, extended, space can be modeled in the abstract with the aid of simulations. I claim no extraordinary level of skill in this kind of work, but even so have been able to make some progress in this direction through two simple approaches. The first, and more straightforward, involves directly manipulating the actual n by n matrix of scores undergoing double-standardization. Prior to the latter operation one can, of course, feed whatever numbers one wishes directly into each matrix, for whatever selective effect. The second ends with the same outcome (the double-standardization of an n by n matrix of values), but begins with the establishment of a limited surface or volume metric: perhaps, for example, the surface or interior of a sphere or spheroid. According to purpose, one then simply imposes a group pattern upon/within the space established--specifically, a pattern constituted of n classes, or zones, of fundamental elements. A sample grid is then superimposed over the pattern, with n sets of class-affined coordinate locations recognized and grouped. The overall relative spatial position of each class with respect to each of the others is then calculated via a spatial autocorrelation algorithm (in this case, the variety based on sums-of-squares methods, not those applying additive contiguities). This operation yields an index value for each pairing of classes--that is, an n by n matrix of the kind that can then be double-standardized. The second approach may be applied to an unlimited variety of geometries and class patterns, but one must be careful to avoid some potential pitfalls in attempting to relate such simulated results to real-world systems, as will become more apparent as we proceed.
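For readers who would like something concrete, here is a minimal sketch (in Python) of the double-standardization operation as described above: the rows and columns of the matrix are alternately converted to z scores until the values stop changing, or fail to. The routine's name and parameters (double_standardize, tol, max_iter) are illustrative only, and the use of the population standard deviation is an assumption rather than a specification.

```python
import numpy as np

def double_standardize(m, tol=1e-10, max_iter=100000):
    """Alternately convert the rows and columns of a square matrix to
    z scores until the values stop changing (or the iteration limit is hit).

    Returns (converged matrix, iterations used), or (None, iterations) if
    no convergence occurs."""
    x = np.asarray(m, dtype=float).copy()
    for i in range(max_iter):
        prev = x.copy()
        # z-score each row: subtract the row mean, divide by the row standard deviation
        x = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
        # then z-score each column the same way
        x = (x - x.mean(axis=0, keepdims=True)) / x.std(axis=0, keepdims=True)
        if not np.isfinite(x).all():          # a zero-variance row or column: give up
            return None, i + 1
        if np.max(np.abs(x - prev)) < tol:    # values have stabilized
            return x, i + 1
    return None, max_iter

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m = rng.random((4, 4))                    # a 4 x 4 matrix of arbitrary starting values
    z, iters = double_standardize(m)
    if z is not None:
        print(f"converged after {iters} iterations")
        print(np.round(z, 4))
```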
Before describing the sets of simulations I have carried out that employed the first approach described above, it is best to quickly review how the results are calculated and how they relate to the objective here. However the numbers that are actually double-standardized are arrived at, the operation will produce an n by n matrix of z scores. I have identified about a dozen distinguishable forms these results can take: for example, in some instances the iterative operation never produces a convergence to stable values, while in others it does, but to n times n different values. Or, there may be only n values, but arranged so as not to be perfectly symmetric about the diagonal--for example, for an n = 4 situation:
Or, again, there may only be n values, but so arranged as to be symmetric in that sense:
Now, inasmuch as what we are most fundamentally interested in is identifying outcomes that not only represent the entropy-maximized reformulation of the original numbers but whose internal relations project as a three-dimensional, Euclidean, space, it is these latter, symmetrically arranged, cases that we will be "on the lookout" for in what follows. The reason we will be looking for the last kind of matrix shown above is that this is the only one that can be reformulated (specifically, through a multidimensional scaling operation) into a set of relationships among the n elements that are: (1) equidistant from the origin on all three axes; (2) complementarily symmetric (the measure of ij is the same as the measure of ji); and (3) absolutely, not fuzzily, constituted (i.e., the multidimensional scaling operation produces a reformulation with a stress statistic of zero). It appears that only this combination of eventualities can be interpreted as an unambiguous three-dimensional space. I have looked at various statistical summaries of the results as they are output as n by n matrices of z scores--various correlation matrices, the number of iterations required to converge to stable values, etc.--but for the moment there is only one other (pair of) statistic(s) that seems fundamental to the discussion. From the original matrix of values that is fed into the double-standardization algorithm one can calculate a standard correlation matrix; the degree to which each of the n columns of flows, similarity measures, spatial autocorrelation measures, or the like is intercorrelated with the other columns may then be calculated by averaging each column of that correlation matrix. For example, if the input data are as follows:
the corresponding correlation matrix is:
with the means of the four columns being, respectively, 0.2285, -0.0233, 0.2219, and -0.2075, and an overall mean (once these four are averaged) of 0.0549. A second and related statistic is the total of the absolute values of the four averages, in this case 0.6812. Because the entropy-maximized (double-standardized) conversions of the original scores have zero intercorrelation, individually and in sum, these two measures of "deviation from independence" can be viewed as possible measures of the lack of equilibrium in a particular system. I realize that at this point some readers will be put off by the statistics, whereas others will be dismissive, or clamoring for a good deal more detail. I can only say to members of the first group that things will now get considerably easier, and to the latter that I am more interested here in describing the basic nature of the approach than in attempting to resolve every detail as it comes along, and that it is my hope that you will not pass judgment until you have viewed the whole presentation.
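A small sketch of how these two statistics can be computed follows. Since the input matrix shown above is not reproduced here, the example uses arbitrary stand-in data and will not recover the 0.0549 and 0.6812 figures; whether the diagonal of the correlation matrix (each column's correlation with itself) should be included in the column averages is also left unstated above, and excluding it is an assumption.

```python
import numpy as np

def independence_deviation(data, include_diagonal=False):
    """Return the column averages of the correlation matrix, their overall
    mean, and the sum of their absolute values, for an n-column input matrix.

    Excluding the diagonal (each column's correlation with itself, always 1)
    is an assumption; set include_diagonal=True for the other reading."""
    r = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)  # n x n correlation matrix
    if include_diagonal:
        col_means = r.mean(axis=0)
    else:
        n = r.shape[0]
        col_means = (r.sum(axis=0) - 1.0) / (n - 1)   # average the off-diagonal entries only
    return col_means, col_means.mean(), np.abs(col_means).sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.random((4, 4))                 # arbitrary stand-in for the input matrix
    means, overall, abs_total = independence_deviation(data)
    print("column averages:", np.round(means, 4))
    print("overall mean:   ", round(overall, 4))
    print("sum of |means|: ", round(abs_total, 4))
```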
We may now proceed to a quick summary of the simulations I performed using various kinds of random numbers as input for the double-standardization (entropy maximization) operation. The first order of business is to outline what kinds of findings we should anticipate, or at least initially be wary of. I have so far referred to the number of dimensions and classes we might be working with here only in terms of the algebraic value "n"; indeed, even if we accept the basic idea of the model it is not obvious up front what this value should or might be, or for that matter whether the model (and reality) functions under more than one value of n. So, the first thing I did was to examine double-standardizations performed on input matrices with dimensions ranging from n = 3 through n = 7. The results of each set of analyses could then be examined for either of two conditions that would make proceeding further impossible, or at best difficult: a large number of randomized matrices might be input for a particular dimensional solution and yield no results at all that pass the test just described, or, alternatively, results might pass it for several different values of n, leaving no single value identifiable as fundamental.

The first time I performed these simulations, around fifteen years ago, I made three minor errors. First, in producing summary charts of the results, I lumped together two categories of results I shouldn't have; second, instead of inputting wholly random matrices of n dimensions, I input only matrices of random values that were symmetric about the diagonal (thus, although the off-diagonal values were independently generated random values, each ij was set equal to the corresponding ji); third, I didn't eliminate the half of the "successes" whose diagonal values were (large) negative values. For each value of n I ran 2000 simulations (which on the PCs of that time took quite a while!). The results were striking. For dimensions 3, 5, 6, and 7 there was not a single outcome that passed the test. By contrast, for n = 4, about one percent (after recalculating to account for the mistakes noted above) of the double-standardized matrices passed the test: that is, they consisted of results that could be projected as a three-dimensional space.
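A rough sketch of such a simulation loop is given below, reusing the double-standardization routine sketched earlier. The pass/fail test shown is only a partial stand-in for the full criterion (it checks for symmetry about the diagonal and positive diagonal values, but not for a zero-stress three-dimensional MDS projection), so the percentages it yields should not be expected to match those just reported.

```python
import numpy as np

def double_standardize(m, tol=1e-10, max_iter=20000):
    """Same operation as in the earlier sketch, returning only the converged matrix."""
    x = np.asarray(m, dtype=float).copy()
    for _ in range(max_iter):
        prev = x.copy()
        x = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
        x = (x - x.mean(axis=0, keepdims=True)) / x.std(axis=0, keepdims=True)
        if not np.isfinite(x).all():
            return None
        if np.max(np.abs(x - prev)) < tol:
            return x
    return None

def symmetric_random(n, rng):
    """A random n x n matrix whose ij and ji entries are equal, as in the original runs."""
    a = rng.random((n, n))
    rows, cols = np.triu_indices(n)
    a[cols, rows] = a[rows, cols]             # mirror the upper triangle into the lower
    return a

def passes_test(z, tol=1e-6):
    """Partial stand-in for the full criterion: the converged matrix must be
    symmetric about the diagonal and have positive diagonal values (the
    large-negative-diagonal cases are excluded, per the text).  The zero-stress
    three-dimensional MDS requirement is not checked here."""
    return (z is not None
            and np.allclose(z, z.T, atol=tol)
            and np.all(np.diag(z) > 0))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    trials = 2000
    for n in range(3, 8):                     # n = 3 through 7, as in the text
        hits = sum(passes_test(double_standardize(symmetric_random(n, rng)))
                   for _ in range(trials))
        print(f"n = {n}: {hits} of {trials} passed")
```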
At that point, I gave up on further tests on matrices of other than four dimensions. Just recently (April 2009) I re-performed the random-numbers-input simulation on n = 4 data using two variations: (1) 4 x 4 matrices of fully random numbers and (2) 4 x 4 matrices of symmetrically random numbers (as before). (For the first analysis I ran 25,000 input matrices; for the second, 40,000. The maximum number of iterations allowed was increased to 9999998; this made it possible to more accurately assess and classify the type of convergence.)

These results are both intuitively and logically satisfactory. To begin with, the percentage of starting configurations that pass the test, while small, is perhaps just about right in terms of a universe whose capacity for self-organization faces difficulties of neither trivial nor impossible measure. More importantly, the n = 4 solution is in keeping with at least two expectations in this general arena. One is, as we all know, that it takes x + 1 coordinate points to specify an x-dimensional geometry: on this basis alone, in consideration of the style of model I am presenting, one might expect the magic number of interacting forces to be exactly four. And beyond this, we should also expect four to be the number because the matrix problem involved represents, in essence, the solution to an equation with four roots: the general equation with five roots is not solvable algebraically (not to mention those of sixth, seventh, or yet higher order), and one might expect real structures to be organized on the basis of such tradeoffs. In the case of n = 3, there is only one solution to the problem that sustains symmetry of the type required (and it is only rarely achieved); such a system would seemingly have no potential for change and thus becomes an equally unlikely candidate. As we shall see, the prior establishment of a form with limited dimensions in extended space leads to a greater ability to "pass the test." We now proceed to some simulations of this type, meant to mimic directly (though in a very general way) real-world conditions of spatial extension.

_________________________
Copyright 2006-2014 by Charles H. Smith.
All rights reserved. Feedback: charles.smith@wku.edu