The Once and Future Wallace

 

Simulations: Introduction, and Random Numbers.


Introduction

     One of the appeals of this line of reasoning is that the structures it foresees as constituting physical, extended, space can be modeled in the abstract with the aid of simulations. I claim no extraordinary level of skill in this kind of work, but even so I have been able to make some progress in this direction through two simple approaches.

     The first, and more straightforward, approach involves directly manipulating the actual n by n matrix of scores undergoing double-standardization. Prior to the latter operation one can, of course, feed whatever numbers one wishes directly into each matrix, for whatever selective effect.
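
     For concreteness, here is a minimal sketch of the core operation in Python. It assumes one common form of double-standardization--alternately converting rows and columns to z scores until the matrix stops changing--and the tolerance and iteration cap are illustrative choices, not settings taken from the text.

import numpy as np

def zscore(a, axis):
    """Convert each row (axis=1) or column (axis=0) of a to z scores."""
    mean = a.mean(axis=axis, keepdims=True)
    std = a.std(axis=axis, keepdims=True)   # assumes no constant rows/columns
    return (a - mean) / std

def double_standardize(m, tol=1e-10, max_iter=100000):
    """Alternately z-score rows and columns until the matrix is stable.

    Returns (converged matrix, iteration count), or (None, max_iter)
    if no convergence occurs within the cap.
    """
    m = np.asarray(m, dtype=float)
    for it in range(1, max_iter + 1):
        prev = m.copy()
        m = zscore(m, axis=1)   # rows to z scores
        m = zscore(m, axis=0)   # then columns to z scores
        if np.max(np.abs(m - prev)) < tol:
            return m, it
    return None, max_iter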

     The second ends with the same outcome (the double-standardization of an n by n matrix of values), but begins with the establishment of a limited surface or volume metric: perhaps, for example, the surface or interior of a sphere or spheroid. According to purpose, one then simply imposes a group pattern upon/within the space established--specifically, a pattern constituted of n classes, or zones, of fundamental elements. A sample grid is then superimposed on/over the pattern, with n sets of class-affined coordinate locations recognized and grouped. The overall relative spatial position of each class with respect to each of the others is then calculated via a spatial autocorrelation algorithm (in this case, the variety based on sums of squares methods, not those applying additive contiguities). This operation yields an index value for each class i to class j relation, and overall an n by n matrix. The latter matrix is then subjected to the double-standardization procedure.
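
     The sketch below conveys the flavor of this second approach. Its details are hypothetical--the text does not specify the exact sums-of-squares index--so the mean squared distance between class members stands in here for the actual spatial autocorrelation measure: points belonging to n classes are scattered over the surface of a sphere, and an index is computed for each class pair.

import numpy as np

rng = np.random.default_rng(0)

def sphere_points(k):
    """k points distributed uniformly over the unit sphere."""
    v = rng.normal(size=(k, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def class_relation_matrix(points, labels, n):
    """Mean squared distance between the members of each class pair.

    A hypothetical stand-in for the sums-of-squares autocorrelation
    index; it yields the n by n matrix to be double-standardized.
    """
    m = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            pi = points[labels == i]
            pj = points[labels == j]
            diffs = pi[:, None, :] - pj[None, :, :]
            m[i, j] = np.mean(np.sum(diffs ** 2, axis=-1))
    return m

points = sphere_points(400)
labels = rng.integers(0, 4, size=400)   # four classes, assigned at random
relation = class_relation_matrix(points, labels, 4)
# 'relation' would then be passed to double_standardize() above.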

     The second approach may be applied to an unlimited variety of geometries and class patterns, but one must be careful to avoid some potential pitfalls in attempting to relate such simulated results to real world systems, as will become more apparent as we proceed.

Random Conditions Simulations: Initial Matters

     Before describing the sets of simulations I have carried out that employed the first approach described above, it is best to quickly review how the results are calculated and related to the object here. However the numbers that are actually double-standardized are arrived at, the latter operation will produce an n by n matrix of z scores. I have identified about a dozen distinguishable forms of result: for example, in some instances the iterative operation never produces a convergence to stable values, while in others it does, but to n times n different values. Or, there may be only n values, but arranged so as not to be perfectly symmetric about the diagonal--for example, for an n = 4 situation:

 1.2344 -1.0426 -0.9238  0.7320
-0.9238  1.2344  0.7320 -1.0426
-1.0426  0.7320  1.2344 -0.9238
 0.7320 -0.9238 -1.0426  1.2344

     Or, again, there may only be n values, but so arranged as to be symmetric in that sense:

 1.2344 -0.9238 -1.0426  0.7320
-0.9238  1.2344  0.7320 -1.0426
-1.0426  0.7320  1.2344 -0.9238
 0.7320 -1.0426 -0.9238  1.2344
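
     A rough way to sort converged results into these forms in code (rounding to a fixed number of decimals before counting distinct values is an illustrative tolerance choice):

import numpy as np

def classify_outcome(z, decimals=4):
    """Sort a double-standardized result into the forms described above."""
    if z is None:
        return "no convergence to stable values"
    r = np.round(z, decimals)
    n = r.shape[0]
    if len(np.unique(r)) > n:
        return "more than n distinct values"
    if np.allclose(r, r.T):
        return "n values, symmetric about the diagonal"
    return "n values, not symmetric about the diagonal"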

     Now, inasmuch as we are most fundamentally interested in identifying outcomes that not only represent the entropy-maximized reformulation of the original numbers but whose internal relations also project as a three-dimensional, Euclidean, space, it is these latter, symmetrically arranged, cases that we will be "on the lookout" for in what follows.

     The reason we will be looking for the last kind of matrix shown above is that this is the only one that can be reformulated (through a multidimensional scaling operation, specifically) into a set of relationships among the n elements that: (1) are equidistant from the origin on all three axes; (2) represent complementary symmetry (the measure of ij is the same as the measure of ji); and (3) are absolutely, not fuzzily, constituted (e.g., the multidimensional scaling operation produces a reformulation with a zero stress statistic). It appears that only this combination of eventualities can be interpreted as an unambiguous three-dimensional space.
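
     Condition (3) can be checked computationally, for instance with scikit-learn's metric MDS. The conversion from z scores to dissimilarities used below--subtracting every score from the diagonal value, so that identical elements lie at distance zero--is an illustrative assumption, not a prescription from the text.

import numpy as np
from sklearn.manifold import MDS

z = np.array([[ 1.2344, -0.9238, -1.0426,  0.7320],
              [-0.9238,  1.2344,  0.7320, -1.0426],
              [-1.0426,  0.7320,  1.2344, -0.9238],
              [ 0.7320, -1.0426, -0.9238,  1.2344]])

d = z.max() - z   # diagonal becomes zero; lower scores, larger distances

mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)   # four points in three dimensions
print(mds.stress_)              # near zero if the projection is exact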

     I have looked at various statistical summaries of the results as they are output as n by n matrices of z scores--various correlation matrices, the number of iterations required to converge to stable values, etc.--but for the moment there is only one other (pair of) statistic(s) that seems fundamental to the discussion. From the original matrix of values that is fed into the double-standardization algorithm one can calculate a standard correlation matrix; the degree to which each of the n columns of flows, similarity measures, spatial autocorrelation measures, or the like is intercorrelated with the other columns may then be calculated by averaging each column of that matrix. For example, if the input data are as follows:

1.730 1.697 1.750 1.678
1.697 1.761 1.676 1.679
1.750 1.676 1.772 1.659
1.678 1.679 1.659 2.056

the corresponding correlation matrix is:

  1.0000000 -0.3124276  0.9889021 -0.7624700
 -0.3124276  1.0000000 -0.4073546 -0.3735961
  0.9889021 -0.4073546  1.0000000 -0.6938935
 -0.7624700 -0.3735961 -0.6938935  1.0000000

with the means of the four columns being, respectively: 0.2285, -0.0233, 0.2219, and -0.2075, and an overall mean (once these four are averaged) of 0.0549. A second and related statistic is the total of the absolute values of the four averages, in this case 0.6812. Because the entropy-maximized (double-standardized) conversions of the original scores have zero intercorrelation, individually and in sum, these two measures of "deviation from independence" can be viewed as possible measures of lack of equilibrium in a particular system.
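
     These figures are straightforward to reproduce; note that the column means include the diagonal 1.0 entries, which is how the values above were arrived at:

import numpy as np

data = np.array([[1.730, 1.697, 1.750, 1.678],
                 [1.697, 1.761, 1.676, 1.679],
                 [1.750, 1.676, 1.772, 1.659],
                 [1.678, 1.679, 1.659, 2.056]])

corr = np.corrcoef(data, rowvar=False)   # correlations among the 4 columns
col_means = corr.mean(axis=0)            # 0.2285, -0.0233, 0.2219, -0.2075
overall_mean = col_means.mean()          # 0.0549
abs_total = np.abs(col_means).sum()      # 0.6812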

     I realize that at this point some readers will be put off by the statistics, whereas others will be dismissive, or will clamor for a good deal more detail. I can only say to members of the first group that things will now get considerably easier, and to the latter that I am at this point more interested in describing the basic nature of the approach than in attempting to resolve every detail as it comes along, and that it is my hope that you will not pass judgment until you have viewed the whole presentation.

Random Conditions Simulations

     We may now proceed to a quick summary of the simulations I performed using various kinds of random numbers as input for the double-standardization (entropy maximization) operation. The first order of business is to outline what kinds of findings we should anticipate, or at least be wary of at the outset.

     I have so far referred to the number of dimensions and classes we might be working with here only in terms of the algebraic value n; indeed, even if we accept the basic idea of the model it is not obvious up front what this value should or might be, or for that matter whether the model (and reality) functions under more than one value of n. So, the first thing I did was to examine double-standardizations performed on input matrices with a range of dimensions, n = 3 through 7.

     The results of each set of analyses could then be examined for either of two conditions that would make proceeding further impossible, or at best difficult. If a large number of randomized values were input for a particular dimensional solution and none of the double-standardized z scores took the symmetric form with the largest value along the diagonal, then the model fails immediately (or at the least requires some dramatic re-formulation!). Conversely, if all the outcome z scores took this form, we would have a situation which immediately makes any form of test much more difficult, or even technically or philosophically impossible.
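
     In code, the test just described might look like the following sketch ("passes" here meaning symmetric, only n distinct values, and a constant, positive, largest value along the diagonal; the positivity requirement anticipates a correction described below):

import numpy as np

def passes_test(z, decimals=4):
    """True if z is symmetric about the diagonal, carries only n distinct
    values, and its largest (positive) value fills the diagonal."""
    if z is None:
        return False
    r = np.round(z, decimals)
    n = r.shape[0]
    diag = np.diag(r)
    return (np.allclose(r, r.T)
            and len(np.unique(r)) == n
            and np.allclose(diag, diag[0])
            and diag[0] > 0
            and diag[0] == r.max())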

     The first time I performed these simulations, around fifteen years ago, I made three minor errors. First, in producing summary charts of the results, I lumped together two categories of results I should not have; second, instead of inputting wholly random matrices of dimension n, I input only matrices of random values that were symmetric about the diagonal (thus, although the i ≠ j values were independent random values, each ij was equal to each ji); third, I did not eliminate the half of the "successes" whose diagonal values were (large) negative values. For each dimension = n condition I ran 2000 simulations (which on the PCs of that time took quite a long time!). The results were striking. For dimensions 3, 5, 6, and 7 there was not a single outcome that passed the test. By contrast, for n = 4, about one percent (after a recalculation to account for the mistakes noted above) of the double-standardized matrices passed the test: that is, they consisted of results that could be projected as a three-dimensional space.

     At that point, I gave up on further tests of matrices of dimensions other than four. Just recently (April 2009) I re-performed the random numbers input simulation on n = 4 data using two variations: (1) 4 x 4 matrices of fully random numbers and (2) 4 x 4 matrices of symmetrically random numbers (as before). (For the first analysis I ran 25,000 input matrices; for the second, 40,000. The maximum number of iterations allowed was increased to 9999998; this made it possible to assess and classify the type of convergence more accurately.) The percentage of operations that passed the test in each instance was: (1) 0.516 % and (2) 1.73 %.
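
     A driver along the lines of this re-run, building on the double_standardize() and passes_test() sketches above (trial counts are reduced here for illustration, and uniform random inputs are an assumption--the text does not state the distribution used):

import numpy as np

rng = np.random.default_rng(0)

def random_matrix(n, symmetric=False):
    """A random n x n input matrix; optionally symmetric about the diagonal."""
    m = rng.random((n, n))
    if symmetric:
        iu = np.triu_indices(n, k=1)
        m[iu[1], iu[0]] = m[iu]   # mirror the upper triangle into the lower
    return m

def run_trials(trials, n=4, symmetric=False):
    """Percentage of random inputs whose double-standardized form passes."""
    passed = sum(passes_test(double_standardize(random_matrix(n, symmetric))[0])
                 for _ in range(trials))
    return 100.0 * passed / trials

print(run_trials(1000, symmetric=False))   # cf. the 0.516 % figure
print(run_trials(1000, symmetric=True))    # cf. the 1.73 % figure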

     These results are both intuitively and logically satisfactory. To begin with, the percentage of starting configurations that pass the test, while small, is perhaps just about right in terms of a universe whose self-organization capacity faces difficulties of neither trivial nor insurmountable measure. More importantly, the n = 4 solution is in keeping with at least two expectations in this general arena. One, as we all know, is that it takes x + 1 coordinate points to specify an x-dimensional geometry: on this basis alone, on consideration of the style of model I am presenting, one might expect the magic number of interacting forces to be exactly four. And beyond this, we should also expect four to be the number because the matrix problem involved represents, in essence, the solution to an equation with four roots: general equations of degree five (not to mention six, seven, or yet higher orders) are not solvable by radicals, and one might expect real structures to be organized on the basis of such tradeoffs. In the case of n = 3, there is only one solution to the problem that sustains symmetry of the type required (and it is only rarely achieved); such a system would seemingly have no potential for change and thus becomes an equally unlikely candidate.

     As we shall see, the prior establishment of a form with limited dimensions in extended space leads to a greater ability to "pass the test." We now proceed to some simulations of this type, meant to directly mimic (though in a very general way) real world conditions of spatial extension.
