Technology Stocks : George Gilder - Forbes ASAP

To: Dan B. who wrote (1323)
From: Clarksterh
Date: 4/22/1999 3:47:00 AM
 
Why? Creating uncontrollable machines isn't a laudable goal. Neural nets would only be useful if they are under our control, i.e. they do what WE see as useful.

Nobody sets out to build an uncontrollable system; they set out to make a system do something that they can't do themselves and, in fact, may not even understand once the machine they built is doing it. But if you don't understand the resulting system, you may have some spectacular and unexpected failures, especially since machines must be self-modifying (as self-training neural nets are, to some degree) to solve the hardest problems. Yes, it is scary, but it is also tempting for problem solvers, just as the spinning wheel was for weavers.
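To make the "self-modifying to some degree" point concrete, here is a toy sketch of a self-training system (my own illustration; the perceptron, learning rate, and data are arbitrary choices, not anything from a real application). The program rewrites its own weights from examples, so its final decision rule comes from the data it saw rather than from rules anyone wrote down:

def train_perceptron(examples, epochs=50, lr=0.1):
    """Learn weights for a tiny linear classifier from (inputs, label) pairs."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in examples:
            # The machine's current "decision rule" is whatever the weights say now.
            prediction = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = label - prediction
            # Self-modification step: the system adjusts its own parameters.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy data: the system learns the logical AND function from examples alone.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print("learned weights:", w, "bias:", b)

Nothing in that loop says "AND"; the behavior emerges from training, which is exactly why a much larger system of the same kind can be hard to predict from its code.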

If the computer learns and writes code that doesn't do what is appropriate, I don't believe we'd use it, or not for long.

Sure, that may be true, but if a plane crashes because of this inappropriateness there are still hundreds dead, and a mistake in a nuclear power plant would be even more severe. As the human race progresses, our machines' mistakes become ever more costly. I absolutely guarantee that self-modifying code already exists in limited (but useful) applications now, and it is almost inconceivable that it won't be much more prevalent in 20 years. People will try to control it and place limits on it, but if you don't really understand what it is doing, that becomes hard to do. My point here isn't that it is impossible to do this safely, but that it will require as much (or more) vigilance as genetic engineering to guard against the law of unintended consequences.
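As a sketch of what "placing limits on it" might look like (again a toy example of my own; the controller, names, and thresholds are made up), you can wrap the opaque, self-trained part in a hand-written safety envelope that overrides it outside a range humans have verified:

def learned_controller(sensor_reading):
    # Stand-in for an opaque, self-trained component whose internals
    # we do not fully understand. The formula here is purely illustrative.
    return sensor_reading * 1.7 - 3.0

SAFE_MIN, SAFE_MAX = 0.0, 10.0  # limits chosen by human engineers, not learned

def guarded_controller(sensor_reading):
    """Apply the learned controller, but clamp its output to the safe envelope."""
    raw = learned_controller(sensor_reading)
    return max(SAFE_MIN, min(SAFE_MAX, raw))

for reading in (1.0, 5.0, 12.0):
    print(reading, "->", guarded_controller(reading))

The catch, and this is the whole point, is that the envelope is only as good as your understanding of where the system can go wrong in the first place.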

It's easy to envision the appearance of self-awareness coming about. But if data never truly becomes self-aware, i.e. it never sets out to deceive, we will be in no danger save from failures within systems that are ordinarily well under control.

Is that the nature of self-awareness? The ability to deceive? If so, then I have met many a self-aware machine, since, in troubleshooting hardware and software, they often trick me. <g> My somewhat flippant (sorry about that) point here is that fooling a person does not require self-awareness (however it is defined); it just requires sufficient complexity. Complexity sufficient to simulate a human being is probably sufficient to be very difficult to control, whether the machine is truly 'conscious' or not. (BTW, there is no reason that a machine couldn't deceive as part of its strategy. A good war game computer might have to, as might a good psychoanalysis package or a computer making decisions for the Fed.)

Maliciousness (a conscious act) is not required to destroy the human race.

Clark