Technology Stocks : George Gilder - Forbes ASAP

To: Clarksterh who wrote (1319), 4/22/1999 2:11:00 AM
From: Dan B.
 
Clark,
Re: "Maybe, maybe not, but it is immaterial from a danger to humans standpoint."

I think it is material; in fact, it's the crux of the matter.
"If the
machines are sufficiently complicated it will become difficult, if not impossible, to
control them"

Why? Creating uncontrollable machines isn't a laudable goal. Neural nets will only be useful if they are under our control, i.e., if they do what WE see as useful. Until they are under our control, I think they will have limited value to us.

"Already it is true that many neural nets in use work in ways we don't
really understand (i.e. we don't really know when they will fail, how well they work
on new problems, or how well they will work in unforseen situations), and we have
computers writing their own code."

If a computer learns and writes code that doesn't do what is appropriate, I don't believe we'd use it, or not for long.

Any system that fails might be a danger, so reliability will be a paramount concern... hence I can't wait to replace Windows with something that doesn't crash.

It's easy to envision the appearance of self-awareness coming about. But if data never truly becomes self-aware, i.e., it never sets out to deceive, we will be in no danger save from failures within systems that are ordinarily well under control. Hence, the probability of danger should be no greater than it is now. In fact, I think continued improvement in controlling system failures is inevitable, so that even with a greater reliance on bits and bytes in the future, failures may become almost non-existent for practical purposes.

Dan