Criticizing Linguistics/Shared Cognitions

Salinas17 at aol.com
Wed Oct 3 15:32:09 UTC 2007


Wolfgang - Thanks for the reply.  Sorry I couldn't answer sooner.
I wrote: 
"Looking at it from this point of view -- yes, cognition gave and gives rise 
to language.  But not as an added bonus.  Rather, language was and is a 
solution to the PROBLEM of the private nature of individual cognition.  As a matter 
of evolutionary survival value, SHARED cognitions -- and information about 
their consequences -- supply the individual with much more useful information 
than the much smaller set of cognitions he might have on his own."

You answered, in a message dated 9/29/07 3:51:24:
<<This model reminds me a bit of the Multiple Instruction Multiple Data 
architecture of super-computers. Maybe that such a network of shared cognitions had 
an evolutionary survival value. But this type of network presupposes that all 
its members are marked for the same basic properties that enable the 
functioning of the network.>>

You've surprised me here.  First of all, "shared cognitions" -- shared 
through language -- would obviously tend to have some plain survival advantages over 
entirely individual cognitions -- just from the point-of-view of all the 
second-hand information that would not otherwise be available to an individual. 

You've defined cognition to include perceptual processes that "guarantee the 
individual's 'orientation' in the Outer World."  

The Outer World is a big place and there's an obvious advantage to not being 
limited to one's individual perceptions about external events.  

A sign that says that the bridge is out saves me from going forward and 
plunging to my death.  That sign contains someone else's perception and cognition.  
And when I read it, it becomes a "shared cognition" -- one that I would not 
have generated on my own and is not based on my own experience about the bridge.

Likewise, I've never been to Munich, but I'm confident that I know where it 
is and how to get there -- but not based on my individual perceptual processes 
or my personal orientation in the Outside World.  In fact, I'm totally 
dependent on the "shared cognitions" of others for any such "orientation".  Without 
them, I wouldn't even know there was such a place, much less know that there 
was an airplane that could take me there.  

I also do not understand why you would say that humans are not "marked for 
the same basic properties that enable the functioning of the network."  The 
"network" of shared cognitions functions primarily through language -- and at 
least humans who can speak the same language certainly share properties that 
enable its functioning.

I'm not sure that one needs to go to super-computers to find an analogy.  
We're not talking about many processors linked to make one big processor.  The 
better analogy is a network of individual computers that are capable of sharing 
processed data.  An example is what we are doing here on Funknet.
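
To make that analogy a bit more concrete, here is a minimal sketch of my own 
(nothing from your message, and every name in it is hypothetical): independent 
agents that keep their private "cognitions" but can pass the processed result 
along to one another, the way a sign or an email does.

    class Agent:
        """An individual with private cognitions plus cognitions shared by others."""

        def __init__(self, name):
            self.name = name
            self.private = set()   # cognitions from the agent's own experience
            self.shared = set()    # cognitions received from others via "language"

        def perceive(self, fact):
            self.private.add(fact)

        def tell(self, other, fact):
            # Sharing passes along the processed result, not the raw perception.
            if fact in self.private or fact in self.shared:
                other.shared.add(fact)

        def knows(self):
            return self.private | self.shared

    a, b = Agent("A"), Agent("B")
    a.perceive("the bridge is out")
    a.tell(b, "the bridge is out")
    print(b.knows())   # B acts on A's cognition without ever seeing the bridge

Each agent remains a separate "processor"; only the finished information moves 
between them -- which is the point of the Funknet example.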

I wrote:
"In this view, language would have arisen as an answer to the disadvantage of 
individual information gathering and storage -- individual cognition, if you 
prefer."

You replied:
<<Well, this would be the 'standard' model, I think. Accordingly, language 
would have emerged from processes within the network, but not from processes 
within its 'components'.>>

I'm also surprised that you call this the standard model.  

If language is in fact a solution to the poor information gathering and 
storage capabilities of individual "cognitions" -- then we might expect 
individual cognitions to contribute very little to the overall information 
embedded in language.  And, vice versa, we might expect language to contribute 
a huge portion of the information used by individual cognitions.  

The entire Generativist movement is certainly a rejection of this idea -- 
because it locates the structure of language in the individual and not in the 
Outside World, where the great bulk of the information that needs to be 
gathered and structured (processed) is found -- and where communication, which 
demands a certain structure, occurs.  The pressures that would shape language 
would come from the Outside World -- otherwise language would be very 
ineffective at -- in your words -- guaranteeing orientation -- or at least 
accurate orientation.  

And it's not accurate to call this model something like "interactionist" 
either, because interacting with an environment does not necessarily account 
for the unique flow of information that occurs with language.  I interact with 
my toaster and its dials, but the flow of information is quite limited. 
 
You also wrote:
<<But my point is that many (if not most) properties of language can be 
(in)directly related to the architecture of the components that establish this 
network, that is to those basic properties of cognition that are universally 
present in any cognition (such as structuring via perception/experience, 
symbolization routines, (re)presentational strategies, metaphorical potential 
and so on).>>

I'm wondering, Wolfgang, whether some of the items you mention here aren't a 
backwash from language into "raw cognition."  Consider your own definition of 
cognition which I quoted above -- it does not mention most of these, only 
neural and perceptual processes and orientation in the Outer World.  When we bring 
in such things as "metaphorical potential," doesn't it possibly suggest that 
some of these attributes might be the gift of language to cognition, rather 
than the other way around?

The processes that make a pile of stones are not the same processes that 
build a stone house.  But in the process of building a stone house, we might cut 
the stones into blocks, so they serve that purpose better.  In the same way, we 
might re-form our raw cognitions to make a better fit for language.

This comes into focus more clearly, I think, when we consider non-human 
cognition.  Your definition of cognition ("the continuum of those neuron-based 
and perception-guided processes that guarantee the individual's 'orientation' 
in the Outer World") does not appear to exclude non-humans.  And yet so much 
research and common knowledge suggests a strong separation between human and 
non-human cognition.  Perhaps the difference lies in the enormous quantitative 
difference in the use of language.  

Perhaps a human who has never acquired a human language will only have 
non-human cognitions.

Also you wrote:
<<just a few words (keeping in mind that we discuss this issue on Funknet, 
but not on CogLing, we shouldn't go into all the details)>>

Wolfgang, I'm not understanding why this forum would be less appropriate than 
that forum.  Is there something in our subject matter that is inappropriate 
for FUNKNET?

Regards,
Steve Long
