emergence

Clayton Gillespie clayke at DELPHI.COM
Sun Aug 1 05:06:35 UTC 1999


Heya FunkNeteers,

I hope you'll find enough usefulness in this comment from the math folk
to forgive the introduction of vocabularies rarely seen on this list:

Introduction
--------------
Signals can be viewed as simple or complex by virtue of how compressible
they are.  High compressibility corresponds to simple signals.

Compression is a process of looking for patterns and replacing them with
shorter pointers to constructors for those patterns.  It might be useful
to think of linguistic interpretation and generation in these terms:
compression into salient features and ultimately into concepts,
construction of symbols/representations and ultimately of speech acts.

So far a very simple analogy, but formal compression[1] has some very
useful properties.
Chiefly, the compression algorithm self-modifies, so if you accept the
premise of this analogy it might be said to learn and thus crudely model
speech acquisition.
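The pattern-replacement and self-modification ideas above can be illustrated with a toy LZ78-style dictionary compressor (a standard textbook scheme chosen here for illustration, not anything specific to this discussion; all names are mine): the phrase dictionary starts nearly empty and grows as the signal is scanned, so the algorithm "learns" the signal's patterns, and a highly repetitious (simple) signal collapses into few tokens.

```python
# Toy LZ78-style compressor.  The dictionary of known phrases grows
# ("self-modifies") as the signal is scanned, crudely modelling the
# acquisition analogy in the text.

def lz78_compress(signal):
    """Return a list of (dictionary_index, next_symbol) pairs."""
    dictionary = {"": 0}          # phrase -> index; starts nearly empty
    output = []
    phrase = ""
    for symbol in signal:
        if phrase + symbol in dictionary:
            phrase += symbol      # keep extending a known pattern
        else:
            # emit a pointer to the longest known prefix, then *learn*
            # the new, one-symbol-longer pattern
            output.append((dictionary[phrase], symbol))
            dictionary[phrase + symbol] = len(dictionary)
            phrase = ""
    if phrase:
        output.append((dictionary[phrase], ""))
    return output

# A repetitious ("simple") signal compresses into fewer tokens than a
# patternless one of the same length:
print(len(lz78_compress("abababababababab")))   # repetitious signal
print(len(lz78_compress("abcdefghijklmnop")))   # patternless signal
```

Note how every emitted token also extends the dictionary: the compression rules are acquired from the signal itself rather than fixed in advance.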

"Unexpected" Emergence
------------------------------
In this model, the compression process, the process of acquiring the
specific rules that best shorten the given signal, builds from highly
repetitious localized phenomena toward more widespread "rules of
thumb".  As it progresses toward larger and larger frames of data
comparison, the algorithm may recast earlier rules.  This recasting
could be considered a formal equivalent to the "unexpected" kind of
emergence.

Emergence is very much associated with the vocabulary of complexity
theory, and there is some additional value to be gained by exploring
that territory.  Besides having the characteristic of self-modification,
formal compression is also quantifiable.  This quantification is a
direct measure of complexity in the formal sense; and if Church's Thesis
is correct, then this is a universal measure of complexity as well.
(Since the resulting measure of compressibility is uniformly comparable
to other such measures, we need not worry about the particulars of the
kernel algorithm at this time.  Note that measuring compressibility is
distinct from determining a compression ratio.)
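Kolmogorov complexity itself is uncomputable, but any real compressor yields an upper bound, so compressed size can stand in as a practical proxy for the formal measure. A minimal sketch using Python's zlib as the kernel algorithm (echoing the point above that the particular kernel doesn't matter for comparison purposes; the signals here are invented):

```python
import os
import zlib

def complexity_proxy(signal: bytes) -> int:
    """Compressed size: an upper-bound proxy for formal complexity."""
    return len(zlib.compress(signal, 9))

simple = b"ab" * 512          # highly repetitious signal
noisy = os.urandom(1024)      # incompressible signal of the same length

# The simple signal measures far less complex than the noisy one:
print(complexity_proxy(simple) < complexity_proxy(noisy))
```

Since such measures agree up to an additive constant, swapping zlib for another compressor changes the numbers but not the comparisons.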

Since the analogy of compression introduces the whole of complexity
theory, we can now alter our analogy.  We may view the steps of the
generated compression rules as a system of constraints (e.g., the
constraints on what constitutes a grammatically acceptable utterance).
Going backwards, we may also consider our signal to be a larger
constraint network and our process of compression a process of
propagating the constraints toward a normalized form.
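"Propagating the constraints toward a normalized form" can be sketched with a minimal AC-3-style propagator: each pass prunes values no neighboring constraint can support, and the process iterates to a fixed point. The variables, domains, and the x < y constraint below are invented purely for illustration.

```python
# Minimal constraint propagation (arc-consistency style): prune each
# variable's domain to the values supported by its constraints, and
# repeat until nothing changes -- the network's "normalized form".

def propagate(domains, constraints):
    """domains: var -> set of values; constraints: (x, y) -> allowed pairs."""
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for (x, y), allowed in constraints.items():
            supported = {a for a in domains[x]
                         if any((a, b) in allowed for b in domains[y])}
            if supported != domains[x]:
                domains[x] = supported  # prune unsupported values
                changed = True
    return domains

# Example: enforce x < y over small domains.
domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
lt_pairs = {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b}
constraints = {("x", "y"): lt_pairs,
               ("y", "x"): {(b, a) for (a, b) in lt_pairs}}
print(propagate(domains, constraints))  # x loses 3, y loses 1
```

Each pruning step yields a smaller but equivalent network, which anticipates the observation below that every propagation step is isomorphic to the original.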

Constraint networks are convenient because they lend themselves nicely
to topological descriptions.  We may use textural terms to describe
small subsets of constraints:  they may be called "compatible" if their
a/d-ratio is high; we may say they are "fluffy" if their b-values tend
to be high; and we may say they are "dense" (not opposed to "fluffy") if
their g-values (a.k.a. secondary b-values) are low.  As we move to
larger subsets of constraints we may find that they are well-described
by nice aggregate topological terms such as "ropey", "looped", "lobed",
"cusped", "foamy", etc.[2]  Our terms are now scoped by the size of the
frame we are viewing.

But having gone backwards to acquire this vocabulary descriptive of the
behavior-structure we are interested in, it becomes necessary to try to
reestablish the "emergence" component of the analogy.  Immediately we
find this highly problematic.  Every step of constraint propagation
renders a new network representation that is isomorphic to the original,
the step has no scoping frame size, and no step shows radical
differences in formulation (or perhaps we might say all of them do).
Consequently, there is no longer a dramatic interpretive shift as we
saw before.  The closest we may come is that we may observe that the
rate at which the size of the network representation is reduced may
increase sharply at certain points in the process.

I now emphasize the phrase "at certain points" because I think it begins
to reconcile the compression and constraint network analogies that seem
to represent the "event" ("unexpected") and "state" notions of
emergence.  The two formulations seem to indicate that the perception
that something emerges is dependent on three things:  the structure of
the signal/system itself, the order in which operations are applied to
the signal/system, and the threshold of unexpectedness being looked
for.  So the "unexpected" kind of emergence is a perceptual event, and
insofar as perception is always perception of something it is dependent
on the properties of the thing-in-itself and the perceptual/generative
process.

We are limited in what we can test about perceptual events.  Because we
are talking about a learning structure, experimentation amounts to
alteration of the structure either directly or via alteration of the
signal on which it operates.  (For example, if we interrogate the
behavior of an interviewee to determine whether s/he has a linguistic
concept of "subject", we may, by virtue of the clues we give during the
interrogation, lead him/her to create such a concept in order to respond
to our questions.)  Passive observations can only effectively explore
what is common.  So it is very hard to make statements about the limits
of possibility (and therefore about the nature of any regularity we
find) based on behavioral data alone.

But the fact that it is difficult to make clear statements about the
limits of possibility doesn't preclude useful study of perceptual
events.  I suspect that at some level emergence events are marked as
pleasurable:  we seem to gain more satisfaction from epiphany than from
gradual accumulation of knowledge (which, I think, explains the
fascination people have with this subject).  So as long as we aren't
trying to predict emergence outside of what might be considered
well-explored paths we may still find this kind of emergence a useful
concept.

Emergent State
------------------
We may, instead of looking at momentarily high rates of compression (the
"emergence event"), want to look at the total compression at a certain
point in the process (the supposedly emergent state) and try to predict
if the then-current set of compression rules will largely persist and
why.

Again it is necessary to scope the question before attempting to
answer.  A single connection is sometimes enough to change the
topological class of a network, so any statements about structure
implying function must be proximally bounded.  However, this kind of
proximity is not necessarily spatial in the usual sense.  Constraint
networks are literally N-dimensional, so the operative proximity may not
correspond to what we consider R-space; and Liz's observations point out
that not only is this possible but it actually seems to be the case for
some brain functions.  However, since neurons don't physically stretch
from ear to ear, the order of any rhizomic qualities of our
learning/learned networks is strongly bounded by the order of neuronal
(R-space) proximity.  (Which, perhaps, went without saying; but when
introducing new vocabularies sometimes it's a good idea to show that
you're not a complete nut.)

But assuming that we have scoped our question, what can we say about the
persistence of emergent states?  Unfortunately, again not much.  Either
we are considering the fully normalized network within our scope, at
which point the structure is the same as the implication, or else we are
making a scoping error.

Another way of saying that is that all constraints are potentially
dependent on every other constraint, and they cannot be partitioned off
until one can prove that the constraints in other putative partitions
have no local constraining effect.  This cannot be proved without
normalizing the network up to the boundaries of the proposed partition.
By examining the interaction of two such partitions we change our scope
and so resolve any sense of emergence we previously had.  So in speaking
this way we are always talking about a structure that will persist
unchanging, and the dynamic concept of emergence is completely lost -
except as a historical or evolutionary note (kind of like when you carry
the 1 in addition).

So that's my little story about emergence.  I'd be curious to know if
people find it interesting or merely bizarre.

Thanks for your time,
   - Clayton Gillespie
     E-lectra


[1]  By formal compression I mean what is usually addressed in
algorithmic complexity theory, about which Kolmogorov, Chaitin, and
Solomonoff have written extensively.  The primary source I'm using is:
Li, Ming & Vitányi, Paul. 1993. "An Introduction to Kolmogorov
Complexity and its Applications". New York: Springer-Verlag.

[2]  Of course, since constraint networks are N-dimensional we are
likely to run out of useful topological vocabulary anyway, but at least
this will get us a little further.
