Some pretty quirky ideas were posed at this year’s Singularity Summit, including whether advanced AI would want to keep us as pets, or turn all organic matter into “computronium.” One thing is for sure: the 600+ technocrats who attended the recent event at the Palace of Fine Arts left with their heads full of futuristic ideas (not that they weren’t already).
One prevalent theme seemed to be that in order to create truly independent AI, we have to first “raise” them like we would children.
"The only pathway is the way we walked ourselves," argued Sam Adams who headed the IBM's Joshua Blue Project, which attempted to create an artificial general intelligence (AGI) with the capabilities of a 3-year old toddler. Before beginning the project, Adams and his collaborators consulted the literature of developmental psychology and developmental neuroscience to model Joshua. Joshua was capable of learning about itself and the virtual environment in which it found itself.
Similarly, Novamente's Ben Goertzel is working to create self-improving AI avatars that can live on their own in virtual worlds like Second Life. They could exist as virtual kids or pets that humans on Second Life could teach and interact with. Eventually their virtual bodies could develop the “senses” that would allow them to explore and become socialized.
But here’s where it gets really weird: unlike real babies, AI “babies” have a potentially unlimited capacity for boosting their own intelligence. So what happens if an AI baby becomes super-smart but has the emotional and moral stability of a small child who thinks the world should revolve around it?
James Hughes pointed to the havoc currently carried out by the Storm worm. Storm has now infected over 50 million computers and has at its disposal more computing resources than 500 supercomputers. Perhaps more alarmingly, when Storm detects attempts to stop it, it automatically begins launching massive denial-of-service attacks to defend itself. Hughes also speculated that self-willed minds could evolve from primitive AIs already inhabiting the infosphere's ecosystems.
But the future forecast may be sunny rather than stormy, according to the founder of Adaptive A.I., Peter Voss. He outlined several advantages that super-smart AIs could offer humanity. For one thing, AIs could significantly lower costs, enable the production of better and safer products and services, and improve the standard of living around the world, including the elimination of poverty in developing nations. Voss asked those at the conference to imagine what could happen when the AI equivalent of 100,000 Ph.D. scientists was working on life extension and anti-aging research 24/7. Voss also believes that AIs could make us better people from a moral perspective: he imagines that each of us could have our own super-smart AI assistant to guide us in making smarter choices ourselves.
One underlying theme of the conference was the age-old idea that with great risk comes great reward. Yes, AI could be misused or allowed to get out of hand, like most other technologies, but if handled correctly it could make the planet a better place. That means putting measures in place to ensure that the AI of the future will be human-friendly. But is that even possible?
Computer scientist Stephen Omohundro doesn’t think so. He argued that self-improving AIs could easily become ultra-rational economic agents, essentially examples of Homo economicus. Such AIs would exhibit four basic drives: efficiency, self-preservation, acquisition, and creativity. The drive to acquire more resources means that AIs could be dangerously competitive with humans (as humans are toward each other) rather than helpful.
Given these grave concerns about how future AIs might treat humans, should we be trying to create them in the first place? Former Sun Microsystems chief scientist Bill Joy declared that they are indeed too dangerous, and that we would be wise to relinquish the drive to create them altogether. But something tells me he wasn’t preaching to the choir on that one.
Charles Harper, senior vice president of the Templeton Foundation, suggested there was a "dilemma of power." The dilemma is that "our science and technology create new forms of power but our cultures and civilizations do not easily create parallel capacities of stewardship required to utilize newly created technological powers for benevolent uses and to restrain them from malevolent uses."
However, not everyone is so pessimistic about our capacity to improve. Science writer Ronald Bailey, who wrote The Scientific and Moral Case for the Biotech Revolution, points out: “Actually the arc of modern history strongly suggests that Harper's claim is wrong. More people than ever are wealthier and more educated and freer. Despite the tremendous toll of the 20th century, even social levels of violence per capita have been decreasing. We have been doing something more right than wrong as our technical powers have burgeoned.”
Regardless of who’s right, the likeliest outcome is that some sort of independent AI will eventually arise. Ray Kurzweil, who joined the Summit via video conferencing, predicted that AIs will come into existence before 2030. Peter Voss was even bolder in his declaration. "In my opinion AIs will be developed almost certainly in less than 10 years and quite likely in less than five years."
I guess we won’t know for certain until 2017. In the meantime, I’m sure most of us are happy to support the Singularity Institute's efforts to ensure friendly AI. If I’m going to be some computer’s pet, I’ll at least want it to treat me…humanely?
Posted by Rebecca Sato
Related Galaxy posts:
A Post-Human Future -A Galaxy Insight
AI Singularity: The Next Stage of Human Evolution?
Dr Strangelove Two? -Cambridge Astrophysicist Gives Earthlings a 50/50 Chance of Making it Through the Century
Robot Evolution: A Parallel to the Origins of Life
Robots Rising -Scientists are Worried
Virtual Immortality -How To Live Forever
DepthX -Thinking Robot to Explore Jupiter's Moon, Europa
What do Robots Dream of?
Scientists Create Artificial Brain