Politics : Evolution

To: average joe who wrote (27785), 6/30/2012 11:46:26 AM
From: Solon
 
"A failed Wall Street banker following his example would look not merely for a government bailout but would insist on taking over both the Fed and Treasury."

Over and over again that writer reveals such damning evidence against Dennett's paper! (HA!) I was not surprised to find the entire article a rather deliberate attempt to invent "errors" by selectively misrepresenting context and focus. Typical weasel approaches when fighting for life while pinned in a corner (yes, this is exactly how Science makes these survivors of a disappearing species feel).

But back to being serious for a moment, and quoting directly from Dennett (so much more of a real read than that other guy, who was trying to shred truth rather than reveal it):

theatlantic.com

"To this day many people cannot get their heads around the unsettling idea that a purposeless, mindless process can crank away through the eons, generating ever more subtle, efficient, and complex organisms without having the slightest whiff of understanding of what it is doing."

and...

"The very idea that mindless mechanicity can generate human-level -- or divine level! -- competence strikes many as philistine, repugnant, an insult to our minds, and the mind of God."

But now Darwin, Turing, and many many others have shown how competence leads to comprehension!

"Turing, like Darwin, broke down the mystery of intelligence (or Intelligent Design) into what we might call atomic steps of dumb happenstance, which, when accumulated by the millions, added up to a sort of pseudo-intelligence. The Central Processing Unit of a computer doesn't really know what arithmetic is, or understand what addition is, but it "understands" the "command" to add two numbers and put their sum in a register -- in the minimal sense that it reliably adds when called upon to add and puts the sum in the right place. Let's say it sorta understands addition. A few levels higher, the operating system doesn't really understand that it is checking for errors of transmission and fixing them but it sorta understands this, and reliably does this work when called upon. A few further levels higher, when the building blocks are stacked up by the billions and trillions, the chess-playing program doesn't really understand that its queen is in jeopardy, but it sorta understands this, and IBM's Watson on Jeopardy sorta understands the questions it answers.

Why indulge in this "sorta" talk? Because when we analyze -- or synthesize -- this stack of ever more competent levels, we need to keep track of two facts about each level: what it is and what it does. What it is can be described in terms of the structural organization of the parts from which it is made -- so long as we can assume that the parts function as they are supposed to function. What it does is some (cognitive) function that it (sorta) performs -- well enough so that at the next level up, we can make the assumption that we have in our inventory a smarter building block that performs just that function -- sorta, good enough to use.

This is the key to breaking the back of the mind-bogglingly complex question of how a mind could ever be composed of material mechanisms. What we might call the sorta operator is, in cognitive science, the parallel of Darwin's gradualism in evolutionary processes. Before there were bacteria there were sorta bacteria, and before there were mammals there were sorta mammals and before there were dogs there were sorta dogs, and so forth. We need Darwin's gradualism to explain the huge difference between an ape and an apple, and we need Turing's gradualism to explain the huge difference between a humanoid robot and hand calculator.

The ape and the apple are made of the same basic ingredients, differently structured and exploited in a many-level cascade of different functional competences. There is no principled dividing line between a sorta ape and an ape. The humanoid robot and the hand calculator are both made of the same basic, unthinking, unfeeling Turing-bricks, but as we compose them into larger, more competent structures, which then become the elements of still more competent structures at higher levels, we eventually arrive at parts so (sorta) intelligent that they can be assembled into competences that deserve to be called comprehending. We use the intentional stance to keep track of the beliefs and desires (or "beliefs" and "desires" or sorta beliefs and sorta desires) of the (sorta-)rational agents at every level from the simplest bacterium through all the discriminating, signaling, comparing, remembering circuits that compose the brains of animals from starfish to astronomers.

There is no principled line above which true comprehension is to be found -- even in our own case. The small child sorta understands her own sentence "Daddy is a doctor," and I sorta understand "E=mc²." Some philosophers resist this anti-essentialism: either you believe that snow is white or you don't; either you are conscious or you aren't; nothing counts as an approximation of any mental phenomenon -- it's all or nothing. And to such thinkers, the powers of minds are insoluble mysteries because they are "perfect," and perfectly unlike anything to be found in mere material mechanisms.

We still haven't arrived at "real" understanding in robots, but we are getting closer. That, at least, is the conviction of those of us inspired by Turing's insight. The trickle-down theorists are sure in their bones that no amount of further building will ever get us to the real thing. They think that a Cartesian res cogitans, a thinking thing, cannot be constructed out of Turing's building blocks. And creationists are similarly sure in their bones that no amount of Darwinian shuffling and copying and selecting could ever arrive at (real) living things. They are wrong, but one can appreciate the discomfort that motivates their conviction.

Turing's strange inversion of reason, like Darwin's, goes against the grain of millennia of earlier thought. If the history of resistance to Darwinian thinking is a good measure, we can expect that long into the future, long after every triumph of human thought has been matched or surpassed by "mere machines," there will still be thinkers who insist that the human mind works in mysterious ways that no science can comprehend. "

This was a great article. Immediately, one thinks of the competence of the silk worm--or indeed of any plant or animal in existence--whether it is building a nest or bringing down prey. Religious people will be the first to insist that these intricate creatures have no comprehension (no privilege of souls, as it were!)--but they cannot escape the obvious implications of gradualism. The truth is that the rabbit, the deer, the dolphin, and the girl in the dress all have the same building blocks operating at different levels of integration and sampling power.

That there is yet a difference between organic "comprehension" and inorganic "comprehension" is obvious. Owning our sensory apparatus (and thus controlling our programming through what we see, hear, touch, and so forth) is wonderfully sentient...but certainly not the exclusive playground of humans!
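
Dennett's point about the CPU that only "sorta" understands addition is easy to make concrete. Here is a toy sketch in Python (my own illustration, not anything from the article): addition assembled out of nothing but NAND gates. None of the pieces knows what a number is, yet stack them a few levels high and the whole thing reliably adds--exactly the kind of competence without comprehension he is describing.

# A rough, hypothetical sketch (mine, not Dennett's): addition built entirely
# out of NAND "Turing-bricks". No function below the top level knows what a
# number is, yet the stack reliably adds.

def nand(a, b):
    # Lowest level: a single dumb gate.
    return 0 if (a and b) else 1

def xor(a, b):
    # One level up: exclusive-or, built from nothing but NANDs.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

def full_adder(a, b, carry_in):
    # Adds three bits; returns (sum_bit, carry_out). Still just NANDs underneath.
    s1 = xor(a, b)
    sum_bit = xor(s1, carry_in)
    carry_out = nand(nand(a, b), nand(s1, carry_in))
    return sum_bit, carry_out

def add(x, y, width=8):
    # Top level: ripple-carry addition of two non-negative integers.
    result, carry = 0, 0
    for i in range(width):
        a, b = (x >> i) & 1, (y >> i) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result

print(add(19, 23))  # 42 -- the composite "sorta understands" addition

No single gate comprehends arithmetic, and yet each layer is a slightly "smarter" building block for the layer above it--Turing's gradualism in about thirty lines.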

Yes, this article is worth reading over several times:

theatlantic.com
