To: Fun-da-Mental#1 who wrote (62952 ) 1/25/2001 7:07:13 AM
From: long-gone
Respond to of 116753

Interesting read, and while not directly related to gold, it answers what is going on with gold: the growth of artificial intelligence may lead to the end of actual intelligence, just as artificial money has nearly killed actual money:

ZDNET
Intelligent machines threaten humankind
Tue, 23 Jan 2001 14:12:00 GMT
Will Knight

Dystopia or utopia: there may be a calamitous menace hidden behind the glorious possibilities of artificial intelligence.

Science fiction has portrayed machines capable of thinking and acting for themselves with a mixture of anticipation and dread, but what was once the realm of fiction has now become the subject of serious debate for researchers and writers.

Stanley Kubrick's groundbreaking science fiction film 2001 shows HAL, the computer aboard a mission to Jupiter, deciding by itself to do away with its human copilots. Sci-fi blockbusters such as The Terminator and The Matrix have continued the catastrophic theme, portraying the dawn of artificial intelligence as a disaster for humankind.

Science fiction writer Isaac Asimov anticipated a potential menace. He speculated that humans would have to give intelligent machines fundamental rules in order to protect themselves:

--------------------------------------------------------------------------------
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
--------------------------------------------------------------------------------

Later Asimov added a further rule to combat a more sinister prospect: "A robot may not injure humanity, or, through inaction, allow humanity to come to harm."
Will machines ever develop intelligence on a level that could challenge humans? While this remains a contentious question, one thing is certain: computing power is set to increase dramatically in the coming decades. Moore's Law, which states that processing power doubles roughly every 18 months, is expected to hold for at least the next ten years, and quantum computers, though poorly understood at present, promise to add new tools to AI that may bypass some of the restrictions of conventional computing.

What was once the realm of science fiction has mutated into serious debate. While the focus is currently on cloning and genetic engineering, few people have seriously considered being annihilated by a robot race. That changed early last year, when an article published in Wired magazine, titled "Why the future doesn't need us", by Sun Microsystems co-founder and esteemed technologist Bill Joy, introduced a wider audience to the possibility that recent technological advances could be a threat to the existence of man. Joy discussed the potential catastrophes that could result from tinkering with genetics, nanotechnology and artificially intelligent machines. Most disturbingly, Joy cites not technophobes or paranoid theorists, but some of the leading lights of AI research and academia who have voiced concern that machines might confront humans.

Steve Grand, artificial intelligence researcher and author of Creation: Life and How to Make It, says it would be impossible for humans to be totally sure that autonomous, intelligent machines would not threaten them. Perhaps more worryingly, he claims it would be futile to try to build Asimov's laws into a robot. Artificial intelligence researchers have long since abandoned hope of applying such simplistic laws to protect humans from robots.
Grand says that for real intelligence to develop, machines must have a degree of independence and be able to weigh up contradictions for themselves, breaking one rule to preserve another, something that would not fit with Asimov's laws. He believes that conventional evolutionary pressures would determine whether machines become a threat to humans. They will only become dangerous if they are competing with us for survival, in terms of resources for example, and can match humans' intellectual evolutionary prowess. "Whether they are a threat rests on whether they are going to be smarter than us," he says. "The way I see it, we're just adding a couple more species."

worldnetdaily.com