"The Singularity Summit: AI and the Future of Humanity" recently brought together some of the world's most brilliant, tech-savvy and innovative minds. Hundreds of techies and scientists gathered to discuss a future of self-programming computers, brain implants that let humans think at computer-like speeds, and the possibility of humans and machines literally becoming one. Sound strange? It is.
"We and our world won't be us anymore," Rodney Brooks, a robotics professor at the Massachusetts Institute of Technology, told the audience. When it comes to computers, he said, "who is us and who is them is going to become a different sort of question."
Just as at the center of a black hole there is a theoretical point called a singularity, where the laws of physics no longer apply, many of the futurists gathered for the weekend conference believe that information technology is hurtling toward a point where the possibilities surpass our current ability to comprehend them. They say that once machines truly become smarter than their creators, it will irreversibly alter what it means to be human.
The idea is that intelligence is the foundation of human technology, and all technology is ultimately the product of our intelligence; it stands to reason, then, that technology can turn around and enhance intelligence, thereby "closing the loop" and creating a positive feedback effect. According to these futurists, these improved, "smarter" minds will be more effective at building still smarter minds. They point out that this "loop" is self-evident in the example of an artificial intelligence improving its own source code.
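The feedback effect the futurists describe can be sketched with a toy numerical model. Everything here is illustrative: the `gain` parameter and the multiplicative improvement rule are assumptions chosen only to show how self-improvement compounds, not a claim about how real AI systems behave.

```python
# Toy model of "closing the loop": each cycle, a self-improving system
# uses its current capability to build a slightly more capable successor.
# All numbers are illustrative assumptions, not measurements.

def next_generation(capability, gain=0.1):
    """Improvement is proportional to existing capability,
    which is what makes the feedback positive."""
    return capability * (1 + gain)

capability = 1.0
for generation in range(10):
    capability = next_generation(capability)

# Ten cycles of 10% self-improvement compound to (1.1)**10, about 2.59x.
print(round(capability, 2))
```

The point of the sketch is only that once improvement feeds on itself, modest per-cycle gains compound geometrically rather than adding up linearly.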
One of the themes of the summit was an idea that The Daily Galaxy has also espoused: there is a serious need to develop ethical guidelines for ensuring AI advances are intended to help rather than harm humans. According to artificial intelligence researchers, we can't afford to wait until the technology is readily available to decide what's within ethical boundaries, because by then it may well be too late to set enforceable standards.
Eliezer Yudkowsky, co-founder of the Palo Alto-based Singularity Institute for Artificial Intelligence, which organized the summit, researches the development of so-called "friendly artificial intelligence." His greatest fear, he said, is that a brilliant inventor creates a self-improving but amoral artificial intelligence that turns hostile. It may sound like science fiction, but that's exactly what has made it such a good story in the past: it's utterly believable and very possible. For the most part, though, futurists see the advancement of AI toward the singularity as harmonious and symbiotic with human advancement.
The first use of the term "singularity" to describe this kind of fundamental technological transformation is credited to Vernor Vinge, a California mathematician and science-fiction author. High-tech entrepreneur Ray Kurzweil made the term popular in his 2005 book "The Singularity is Near," in which he argues that the exponential pace of technological progress makes the emergence of smarter-than-human intelligence the future's sole logical outcome.
Kurzweil, a director of the Singularity Institute, is so confident in his predictions of the singularity that he has even set a date: 2029. In fact, many "singularists" can't imagine any plausible alternative, citing the dramatic advances in computing technology that have already occurred in just the last half century.
It wouldn't be the first time a techie was spot on. In 1965, Intel co-founder Gordon Moore accurately predicted that the number of transistors on a chip would double roughly every two years. Humans, by comparison, evolve at a snail's pace: according to Singularity Institute researchers, the entire evolution of modern humans from primates has produced only a threefold increase in brain capacity. That is an unimpressive result next to the exponential growth of computers. However, with advances in biotechnology and information technology, they say, there's no scientific reason why human thinking couldn't be pushed to speeds up to a million times faster.
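The comparison above is easy to check with back-of-the-envelope arithmetic. This is a sketch assuming an idealized, perfectly regular two-year doubling, which real chips only approximate:

```python
# Idealized Moore's Law growth over the ~50 years the article mentions:
# one doubling every two years gives 2**25 total growth.
years = 50
doubling_period = 2
chip_growth = 2 ** (years // doubling_period)  # 2**25 = 33,554,432x

# Versus the roughly threefold increase in brain capacity that the
# Singularity Institute researchers attribute to all of human evolution.
brain_growth = 3

print(f"Idealized chip growth over {years} years: {chip_growth:,}x")
print(f"Evolutionary brain-capacity growth: {brain_growth}x")
```

Even granting generous error bars on both figures, the gap is seven orders of magnitude, which is the contrast the researchers are drawing.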
But plenty of critics have scorned the futurists' ideas of "techno-salvation" or a potential "techno-holocaust." They point out that technology is still far from anything the futurists describe; for them, these ideas will never be more than fiction. Advocates counter that it is plainly irresponsible to ignore the possibilities with a "wait and see" attitude. Autonomous robots are already getting close to making life-and-death "decisions" on the battlefield.
"Technology is heading here. It will predictably get to the point of making artificial intelligence," Yudkowsky said. "The mere fact that you cannot predict exactly when it will happen down to the day is no excuse for closing your eyes and refusing to think about it."
Posted by Rebecca Sato