Relativity versus Reality 2

Rob Freeman rjfreeman at email.com
Fri Jun 18 10:39:25 UTC 2004


On Friday 18 June 2004 02:07, Salinas17 at aol.com wrote:
> ...
> Think about this.  Can the mystery of why that pet kitten does not have a
> cell phone, wear designer clothes or eventually manufacture a nuclear bomb
> also only be resolved by postulating an innate Cell Phone Acquisition
> Device, etc -- given essentially the same experience?  Or is it that the
> products of human technology are not "an open and infinite set" like the
> endless string of sentences a human language can produce?  Obviously, our
> biology plays a significant role in our ability to make cell phones, but
> obviously cell phones were not implicit in our biology.
>
> Why do we see language as somehow different than airplanes or cellphones --
> other things that cats and chimps don't do?  Clothes-wearing is almost as
> unique and universally human as language.  Why don't we postulate a Clothes
> Acquisition Device?

:-)

Steve,

I agree with just about everything you write. I don't think there is a
Language Acquisition Device much beyond what we use to learn fashion either.
But I also don't believe that language (and cognition, and quite a few other
systems which are fundamentally a product of generalizations and don't have a
single, central, motivating logic) can be "universally learned".

I don't agree with Chomsky's conclusions, but I am willing to believe he made
some powerful observations. In particular I'm interested in this nexus
between "learning by structural generalizations based on contrasts" (i.e. the
"post-Bloomsfieldian structuralists"), and the "selection from observed
regularity according to innate principles" (Universal Grammar) he was moved
to propose in its place.

I'm interested in it not because I'm convinced we need an innate device to
select universally applicable rules (it seems like it was such an article of
faith on Chomsky's part that such a single, central, motivating logic should
be there, that when he didn't see it he felt we had to hypothesize it, but at
least he realized he couldn't see it!). I'm interested because, in the absence
of an innate device, he concluded that universally applicable rules for
language could not be learned.

This is relevant just because people are still trying to learn such rules. The
silly thing is that they seem to be coming up against the same facts without
drawing many conclusions at all. Machine learning theory is quite active now
that the work can all be automated. Lots of learning experiments have been
carried out. What I hear is they have no trouble learning grammars from
observed regularities in texts. The "problem" is not so much that they cannot
learn a grammar from texts, it's that they can learn too many of them, for any
given language!

Machine Learning theorists seem to conclude from this that they "haven't got
it right yet", but isn't it more likely that Chomsky saw the issues more
clearly 50 years ago. He at least outlined the problem clearly, even if he
drew the wrong conclusion. Maybe the correct conclusion is that such a single
"universal" grammar can't be learned because the requisite single, central,
motivating logic is not there (and mark, we're talking about the structure of
language now, Steve, not its function, which _is_ central and motivating and
logical...)

Maybe the Machine Learning guys have "got it right"; they just haven't asked
the right question! Something like: "is there a single relevant set of
regularities, and if not, how do we find any given regularity we need, when we
need it?"

In the face of contrary evidence, why do we stick to this absolutist conviction
that there can only be one best way of describing everything? Do we argue about
whether populations of people are most fundamentally characterized according to
their height or their intellectual ability? No, we realize that one
characterization or another can be most relevant, depending on the issue at
hand. It is not possible to find a single generalization (a single ranking, for
instance) which simultaneously captures both regularities: in general, ordering
with respect to one completely mixes the population with respect to the other.
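
To make that concrete (the figures are made up purely for illustration): sort a
small invented population by one attribute and you scramble it with respect to
the other, so no single ordering captures both.

    # Invented population: (name, height_cm, test_score).
    people = [
        ("A", 185, 95),
        ("B", 160, 140),
        ("C", 175, 110),
        ("D", 190, 100),
        ("E", 165, 130),
    ]

    by_height = [p[0] for p in sorted(people, key=lambda p: p[1])]
    by_score  = [p[0] for p in sorted(people, key=lambda p: p[2])]

    print(by_height)  # ['B', 'E', 'C', 'A', 'D']
    print(by_score)   # ['A', 'D', 'C', 'E', 'B']
    # The two rankings are completely mixed with respect to each other:
    # no single ranking captures both regularities at once.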

Couldn't the same be true for language? More importantly, is there evidence
that the same might be true for language? (That sometimes one way of
regularizing the sequence of tokens is more important, and sometimes another,
and that in general regularizing one way completely mixes the data with respect
to the other -- or at least "completely" mixes it to within a degree of
agreement consistent with the function of getting to the fridge and back...)

That's why I'd like to hear more about what the arguments were when Chomsky
decided language could not be universally learned. Exactly why did he decide it
could not be universally learned? Is my information right that it
was (at least partially) because such learning resulted in "inconsistent or
incoherent representations"?

Do any currently working Machine Learning people have a comment? Do they tend
to find single regularities or multiple, contradictory regularities which
wash out and create impoverished "wannabe universal" representations? Is
there a conflict between any single generality and clusters of particular
detail which is difficult to explain?

Is anybody from the Memory-Based Learning fraternity (which advocates making
decisions by ad hoc generalization over raw data for certain systems) listening
who would care to comment?
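
For anyone unfamiliar with the idea, here is a minimal sketch of the
memory-based style of decision (the feature scheme and data are invented, and
real systems such as TiMBL are far richer): keep the raw instances, and
generalize ad hoc at decision time by analogy to the most similar stored case,
with no single precompiled rule.

    # Invented instance base: (left neighbour, right neighbour) -> category of
    # the word in between.
    memory = [
        (("the", "sat"), "NOUN"),
        (("a", "ran"), "NOUN"),
        (("cat", "on"), "VERB"),
        (("dog", "home"), "VERB"),
    ]

    def overlap(f1, f2):
        # Similarity = number of matching feature positions.
        return sum(a == b for a, b in zip(f1, f2))

    def classify(features):
        # Decide by the single most similar stored instance (k = 1);
        # the generalization is made ad hoc, at decision time.
        best = max(memory, key=lambda ex: overlap(features, ex[0]))
        return best[1]

    print(classify(("the", "ran")))  # NOUN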

We know such systems exist: we know there are systems based on generalizations
over collections of facts (like the generalizations we make about human
populations) which fundamentally cannot be described in a single universally
correct way, and so cannot be "universally learned". Why should that not be the case
for language (and cognition in general)?

Best,

Rob


