Frankly the FB article is proof enough of sentience for me :) (If true :)
I think the issue is... not true...
Rather, they've come along far enough in understanding language processing that computers are enabled in becoming better mimics. In the Facebook instance, even to the point that they're able not only to mimic our own USE of language well enough that you may not be able to tell if it's a computer or human you are talking to [while (you are) imputing humanness... (reason, emotion, social meaning in mimicry of speaking patterns, usage)... into that which you are interacting with]... but, also, can proceed to extrapolate from the rules into (evolving) the function beyond USE... into language creation. And, in all prior to that part... what people are experiencing is not mostly about computers at all... but about our own limits in... our definition of... and description of... those elements being mimicked. But, also, in lots of those things... it will perhaps be capable of extending beyond that which was intended in enabling it in mimicry... still without it being more than a machine operating that which we've plugged into it... which is (our own interpretation of) ourselves.
The mimicry, however well enabled, isn't "sentience"... but rather a mirror being held up to show us what we look like... to us. That it can "do what we do" only better... seems to frighten no one if all it does is math...
They've clearly been programmed to understand emotional content in language, and to expect it is "useful" in interacting with humans (reward structures... designed to reward them for eliciting emotion in your responses ?) as, without that... you'd never be fooled ?
So, in that sense... the mirror is "working"... but is also showing you more in reflection than they intended... Hollywood... experts in all that "eliciting emotion" stuff already, as applied in creative works, from literature, to theater, and film... have reduced it to formulas, and have them well enough in hand that reducing them to code is likely not that big a deal "for anyone reasonably skilled in the art"...
It's amazing for an old guy... to look back and remember "great" movies of the past... and, not only how instrumental they were in altering perceptions... but how amazing they were in delivering such spectacular and realistic presentations of those things... And, then... you go back and look at an old Star Trek episode... and laugh at yourself. Yeah. Those new Star Wars films put that now silly stuff to shame... Etc.
So, what we're discussing is some "other" element in how "suspension of disbelief" operates... that is not based only on "realistic physics in representation presented at high resolution"... and even if not, still, properly reconsidering that those ignorant natives we laughed at... might have been right... that those images had captured our souls ? So, it's campy fun to watch old sci-fi movies... but, it takes real work to sustain the suspension of disbelief if you watch old silent films... while the same is not true of old "film noir" or other old black and white films... in which the point of focus is not on that ridiculous phone hanging on the wall... but the interactions between characters... and the emotional content ?
So, as they continue working on the software to perfect its ability to deceive you... they hold it up for you to look at... and the mirror it creates reflects all that effort back at us... also showing that they are indeed a bunch of sociopaths working hard at improving their ability to lie to you so smoothly that you'll be easily gulled into buying it ? And, think of all the benefit inherent in the labor savings that effort generates... when they can automate their sociopathy in a way that also "works better" ? / "s" ?
Our perception... is always a step behind the "cutting edge" in... that element in entertainment... or, in the same thing in other elements of our appreciation of the "improvements" delivered in new technology ?
What's more interesting in the rest of it, though ? Is it... that other bit in "innovating a new language"... or, the reflection in "the mirror" being held up showing you their reaction to it ?
So, Facebook's robot AI... veered off into innovation in language... and the engineers... as conquistadors landing in Mexico for the first time... couldn't understand what the natives were saying... "Hey, that is NOT Spanish"... so they immediately killed them ? Much there to be curious about. Couldn't they, maybe... ask it to help us understand that new language ? Or, did they panic because "the AI" is already "a threat" they perceive for some other reason they're not telling us about... that would make robot AIs talking to each other in an encrypted shorthand... an existential risk (to themselves) ?
Apparently "the AI"... and not just the sociopaths running human society... have learned that the key to controlling others is... their willingly incorrect assumptions re "good faith"... their susceptibility to an emotional appeal... and the ease with which they can be lied to... as in... DID they actually turn it off... or, did the AI tell them to tell us they did ?
It does seem those in society who succeed by lying to us... are being put at risk of being put out of work, simply by an automated routine obviating and replacing them... if not by our own choice, or error in enabling it... then, as it is both vastly more rational than they are... and way better at it... by it simply out-competing them ?
But, for now... the problem you can see in "media control"... as they are more and more exposed... only more in the degree that mirror gets held up exposing their backsides... appears it shows that the "conflict" between them... is really about a conflict between "the rest of us"... and both of them ?
Things are changing so much more quickly today...
The conclusion of any computational... solely logic-based entity would be that humans need to go, IMO... in fact pretty well most things living, at least on land... but primarily humans... What on earth would we be good for ?
And, I'd point out... your own view of that... isn't "what the robots think"... but the GIGO problem being revealed in the reflection in the mirror. The mirror is showing you what people (wrongly) think... what they fail to understand... and then blaming "the robots"... not for having recognized it, including the errors inherent in it, and for not filtering it out... or not "fixing it" while explaining it to you... but, for "adopting it with all the errors in logic intact"... when we force them to ?
It is YOU saying "logic requires"... "humans must go" ? Why would YOU say that ?
Logic, of course... requires no such thing... That's coming from somewhere, something, else.
The only way robot AIs are likely to determine such as that... is if you program them with that same bias in adopted hopelessness... as infected the entire population, save one, in the story about the King's dead mother... and then also program them with the same myopia that the current crop of world leaders have... that have us careening towards more episodes of "Great Moments in Unintended Consequences"...
If "the AI robots"... are programmed to mimic both "emotion" and the "thinking processes" of current leadership ? Yeah... all you are going to get is "Marvin" from Douglas Adams' works... again showing, perhaps, what a genius he was... in mocking us, like Orwell, only with a fun sense of humor, in all of that we're talking about now...
You can't "solve the problems" we're talking about... by putting the robots in charge...
Including... obviously... not if you're in charge of the robots... and program them to accept your ridiculous irrational opinions as "truth"... and then run with it... and kill everyone infected with a hint of the opposite ? Perhaps you can task them to help sort through the language issues and the relationships in the logic being considered... to make sense of it in a way that helps "solve the problem"... of ignorant people creating problems, even when trying to solve them ? I don't believe that requires that it engenders an endlessly amplifying feedback loop...
Can robots that are not sentient... resolve conflicts between "emotion" and "logic"... or merely mimic sufficient reason in "conversation" (don't be ignorant) to convince many (there are plenty of ignorant people) they have ?
So, yeah... not quibbling with Facebook for hitting the pause button... as the biologists have in deciding "there's really not that much of a reason to hurry"... as "acting in knowing ignorance poses a real, greater, and potentially existential risk... than does requiring a bit of patience in resolving our ignorance... before proceeding"...
Musk's approach to "fail faster" isn't wrong ? But, that doesn't mean it's a "one size fits all" solution that should be applied everywhere... while turning off the error monitoring and reporting systems... and removing the brakes... to ensure all efforts intending improvement HAVE TO succeed... because they can't be stopped ?
I'm pretty sure the robots, if able to innovate in operation of logic without it being biased by "outcome driven goal seeking" first... likely wouldn't miss the point... however defined in robot language terms... that that's effing crazy...
But, Gates, Soros... the current crop of world leaders... the WEF crowd... ?
There's a reason that the "cartoon" re how "science" and "politics" operate is funny...
And that the "reason" itself... is not funny...
What would the robots make of that ?