Clark, Re: "Maybe, maybe not, but it is immaterial from a danger to humans standpoint."
I think it is material; in fact, it is the crux of the matter. "If the machines are sufficiently complicated it will become difficult, if not impossible, to control them."
Why? Creating uncontrollable machines isn't a laudable goal. Neural nets will only be useful if they are under our control, i.e., if they do what WE see as useful. Until they are, I think they will have limited value to us.
"Already it is true that many neural nets in use work in ways we don't really understand (i.e. we don't really know when they will fail, how well they work on new problems, or how well they will work in unforseen situations), and we have computers writing their own code."
Even so: if a computer learns and writes code that doesn't do what is appropriate, I don't believe we'd use it, or at least not for long.
Any system that fails can be a danger, so reliability will be a paramount concern... hence I can't wait to replace Windows with something that doesn't crash.
It's easy to envision the appearance of self-awareness coming about. But if data never truly becomes self-aware, i.e. it never sets out to deceive, we will be in no danger save from failures within systems that are ordinarily well under control. Hence, the probability of danger should be no greater than it is now. In fact, I think continued improvement in controlling system failures is inevitable, so that even with a greater reliance on bits and bytes in the future, failures may become almost nonexistent for practical purposes.
Dan