Dale, I'm your man. What you are talking about is a technological singularity. Or, to be more precise, a runaway AI scenario involving "unfriendly AI". Read Kurzweil, Bostrom, et al.

Quote:

Runaway A.I.
Once strong AI is achieved, it can readily be advanced and its powers multiplied, as that is the fundamental nature of machine abilities. As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely. Each cycle not only creates a more intelligent AI but takes less time than the cycle before it, as is the nature of technological evolution (or any evolutionary process). The premise is that once strong AI is achieved, it will immediately become a runaway phenomenon of rapidly escalating superintelligence... Superintelligence innately cannot be controlled.
—Raymond Kurzweil (2005)
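
Just to make the "each cycle takes less time than the cycle before it" point concrete, here's a rough toy sketch (my own made-up numbers, not anything from Kurzweil): if every redesign cycle multiplies capability by some factor and each cycle takes a fixed fraction of the time of the one before it, the cycle times form a geometric series, so infinitely many cycles fit inside a finite span of time while capability grows without bound. That's the arithmetic intuition behind "runaway."

Code:

# Toy sketch of the runaway premise in the quote above.
# Purely illustrative assumptions: each redesign cycle multiplies
# capability by GAIN and takes SHRINK times as long as the previous
# one, so elapsed time stays bounded while capability explodes.

GAIN = 2.0        # capability multiplier per cycle (assumed)
SHRINK = 0.5      # each cycle's duration as a fraction of the last (assumed)

capability = 1.0  # arbitrary units: "human-level" = 1.0
cycle_time = 1.0  # arbitrary units: first redesign takes 1 time unit
elapsed = 0.0

for cycle in range(1, 21):
    elapsed += cycle_time
    capability *= GAIN
    cycle_time *= SHRINK
    print(f"cycle {cycle:2d}: capability {capability:12.1f}  elapsed {elapsed:.6f}")

# With SHRINK < 1 the total elapsed time converges to a finite limit
# (a geometric series), even over infinitely many cycles:
print(f"time limit for infinitely many cycles: {1.0 / (1.0 - SHRINK):.2f}")

Obviously the real argument doesn't hinge on those particular numbers; the point is only that accelerating cycles pile up in finite time, which is why the scenario is called a "runaway."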


Quote:
When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.
—Nick Bostrom (2002)


You might start with Wikipedia. Read the articles on Technological Singularity and The Singularity Is Near, and perhaps even Simulated Reality.
