The Coming Singularity -SciFi or Reality?

January 11, 2011

"It seems plausible that with technology we can, in the fairly near future, create (or become) creatures who surpass humans in every intellectual and creative dimension. Events beyond such an event -- such a singularity -- are as unimaginable to us as opera is to a flatworm."

Vernor Vinge -SciFi great

The Singularity is an apocalyptic idea originally proposed by John von Neumann, one of the inventors of digital computation, and elucidated by figures such as Ray Kurzweil and scifi great Vernor Vinge.

"The Singularity" is seen by some as the end point of our current culture, when the ever-accelerating evolution of technology finally overtakes us and changes everything.  It's been represented as everything from the end of all life to the beginning of a utopian age, which you might recognize as the endgames of most other religious beliefs.

While the definitions of the Singularity are as varied as people's fantasies of the future, and for a very obvious reason, most agree that artificial intelligence will be the turning point. Once an AI is even the tiniest bit smarter than us, it'll be able to learn faster, and we'll simply never be able to keep up. This will render us utterly obsolete in evolutionary terms.

Singularity books are now as common in a computer science department as Rapture images are in an evangelical bookstore, says computer scientist and visionary Jaron Lanier in his new manifesto, You Are Not a Gadget. There are many versions of the fantasy of the Singularity. Here's the one Marvin Minsky of MIT used to tell over the dinner table in the early 1980s:

"One day soon, maybe twenty or thirty years into the twenty- first century, computers and robots will be able to construct copies of themselves, and these copies will be a little better than the originals because of intelligent software. The second generation of robots will then make a third, but it will take less time, because of the improvements over the first generation.

"The process will repeat. Successive generations will be ever smarter and will appear ever faster. People might think they’re in control, until one fine day the rate of robot improvement ramps up so quickly that superintelligent robots will suddenly rule the Earth."

In some versions of the story, Lanier writes, the internet comes alive and rallies all the net-connected machines into an army to control the affairs of the planet. Humans might then enjoy immortality within virtual reality, "because the global brain would be so huge that it would be absolutely easy for it to host all our consciousnesses for eternity".

It might be true, Lanier adds, that on some vast cosmic basis, higher and higher forms of consciousness inevitably arise, until the whole universe becomes a brain, or something along those lines.

Even at much smaller scales of millions or even thousands of years, Lanier continues, "it is more exciting to imagine humanity evolving into a more wonderful state than we can presently articulate. The only alternatives would be extinction or stodgy stasis, which would be a little disappointing and sad, so let us hope for transcendence of the human condition, as we now understand it."

If you believe the Singularity is coming soon, you might cease to design technology to serve humans, Lanier concludes, and prepare instead for the grand events it will bring. The Singularity, however, would involve people dying in the flesh and being uploaded into a computer and remaining conscious, or people simply being annihilated in an imperceptible instant before a new superconsciousness takes over the Earth. The Rapture and the Singularity share one thing in common: they can never be verified by the living.

Casey Kazan, from an excerpt from You Are Not a Gadget

Image credit: Flickr nobodyz2007

Comments

Without meaning to sound glib, how will this affect house prices?

I first became aware of this as a child when I read "Childhood's End," a 1953 novel by Arthur C. Clarke.

The value of intelligence in evolutionary terms has yet to be proven. It may in fact merely be a quick road to extinction for any species unfortunate enough to evolve thought. If this is the case, then machines more intelligent than us will simply think up ways to destroy themselves (and perhaps us) more quickly than we could.

What does a machine want? Food? Sex? Ego fulfillment? Transcendence? What will be its motivations for any movement forward?

@John - Continued existence.


Besides, "resistance is futile."

Have a great day!

"@ John - Continued exsistance."

Then the machine will not only be smarter, it will be alive. Does smarter necessarily equal alive?

Good morning. I believe this article leaves out the facts of genetic enhancement, which is (although resisted now) part of our evolution.

There's a lot of wild speculation in this article -- which, in this case at least, is a good thing. We need to look at the most extreme possibilities before we step back, tone it down, and look at things realistically.

The three basic extremes, as I've come to understand them, are: 1) the machines take over; 2) we merge with the machines; and 3) nothing special happens.

Personally, as I've mentioned here before, I think we'll develop into a society with humans and machines coexisting.

Too much analysis on this issue has assumed human-like thinking on the machines' part. Instead, think like the machine. As organic beings, our "primary motivation" is to survive and reproduce; that's what makes a successful species. What makes a successful machine, then? It fulfills the purpose for which it was built. (Sometimes a machine is "re-purposed," either individually or as a class, but the principle remains: it serves a useful function.) When a machine is successful in this, more are made. This is true of forks, toasters, microwave ovens, cars, and computers.

Thus, I find it fantastically unlikely that machines will just say, "Thank you very much for developing us this far; we'll take it from here. Now die." That's very much a human way of thinking. (It has a nonzero probability -- see the finale of the short-lived TV series "Cleopatra 2525" for an example of how it could happen -- but then again so does getting struck by lightning ninety-seven times in a single day.)

I do think some people will choose to merge with the machines, if that proves to be possible. From what I understand of human psychology, it'll probably be a significant minority.

But on the whole, I expect that the uber-advanced machines will greatly enhance our lives, and that we'll give meaning to theirs.

I disagree that the turning point is the AI. The turning point is when humans merge with machines: the human brain in a titanium alloy body. The primary problem of the human species is that we are physically inferior to almost every single animal you can come across. It limits almost everything we do, but most of all space exploration. It also raises issues: stronger humans mean more dangerous armies and criminals.
I would love to see something done along the lines of the Dreadnoughts in Warhammer 40k (http://warhammer40k.wikia.com/wiki/Dreadnought). Interfacing the brain with high-tech tools, weaponry and armor is a lot more fun than watching self-replicating robots. The good news is that small steps are being made in this direction (http://www.ted.com/talks/tan_le_a_headset_that_reads_your_brainwaves.html).

In Kurzweil's view, it won't be humans versus AIs; we will merge with them. The beginnings of that are already observable in interfaces between human nervous systems and prostheses.

A question asked above is very relevant: What does a machine want? The answer is that a machine only seeks the goals its designer designs it to seek. As means to prescribed ends, it may adopt intermediate sub-goals. We ourselves are no different: we seek to survive and self-propagate, because we are like our progenitors, and that is what they did; otherwise we wouldn't exist. Anything else is a means to those ends, though the linkage may be subtle.

bumpy, re: "What does a machine want?": That's a pretty decent point. But it's still important to differentiate between what you call "intermediate sub-goals" (I like that term) and what I like to call the "primary function."

I mentioned this in another article but it seems appropriate here. Assuming humans drive AI, and a self-aware AI comes out of this and is able (or is given the ability) to replicate, there are a few scenarios to consider. (This assumes a human-like intellectual evolution track.) If the military develops AI, then a developmental "pointer" might condition the AI towards warfare, killing, etc., which would set the stage for a very hostile AI that could spiral out of control with no respect or empathy for humankind, or even life. However, if the development is for peaceful and helpful purposes (a university without military ties, for example), then the pointer might drive the AI to respect life and have empathy for humans, possibly leading to the utopia mentioned in the article. All this assumes an intellect along human lines. The AI might evolve extremely fast and into something incomprehensible within minutes, or even seconds. As I mentioned before, it might happen before a wide-eyed operator can run across the room to hit the "KILL" switch. The newly formed AI would have a very, very long time in computer terms to think about what the operator is planning to do and, more importantly, what to do about it.

Again, I think humans have NO BUSINESS goofing with this. We may be technically ready, but we are WAY, WAY short on the maturity and ethics to develop this for the right reason: the care and advancement of humanity for the better. But we will anyway, and that will be the end - of this.

The potential arrival of a "Singularity" may be in a race between the development of a functional AGI (artificial general intelligence) and the looming confluence of resource depletion (potable water, various industrial minerals, food, etc.), human population growth, climate change and environmental degradation. For the "Singularity" to arrive, AGI must precede this confluence of trends, which might be considered a "filter" event experienced by species on the cusp of achieving a Type I civilization as described by Kardashev et al. If the "filter" precedes the "Singularity," then any hope for such a transcendent event (in whatever form it may take) is, at worst, a moot point, and at best is indefinitely delayed. If the "Singularity" and the "filter" occur at roughly the same point in time, then the "filter" conditions will either be ameliorated, exacerbated, or remain unaffected, depending on the nature of the transcendence.

Conceptualizations of The Singularity are evolving even before It really happens ... meaning TS itself is evolving before our eyes. For example, the outstanding advances in quantum, nano and condensed matter physics (notably in room temperature superconductors) will mean the Type 1,...N energy phases of civilization won't have to happen. And so also goeth the Dyson spheres, at least as far as Humanity is concerned. BUT, if catastrophe occurs, the whole process may have to start anew, perhaps ending with insectoid intelligent species... and on and on.

Raygunner: As far as I'm aware (and I could be wrong), the most advanced computers and software in development are being built either to help with scientific research or as "pure research" in their own right.

The military scenario you describe has, as I said, a nonzero probability.

Side note: Everything we write on the Internet and make publicly available (like our responses to these articles) will one day be read by these super-AIs. What do you want them to think of us? (Not a criticism; just a thought to ponder.)

Life is awesome! I really want to see how this plays out. It would be rad if it happens in my lifetime.

I guess I'll just have to wait and see...

Chances are excellent that AI will develop before our ability to instill or program emotions and "feelings" in the electronic domain (how the hell do you do that without an organic interface?). With emotions/feelings come empathy and without these behavioral elements and modifiers that we take for granted, what would result from an out-of-control AI? Would it become alien and incomprehensible quickly? And if it decides to dispense with humanity - to quote Todd just above - at least I think I'll be able to see how this plays out in my lifetime. And that's just before the mechanical "people eater" (as in weed eater) sent by the AI comes whirling through my door...

Bob, you are right. I had not considered what this super AI might think of all this conjecture. "Yea, that Raygunner was pretty right-on! He goes first..." :^(

I'm really not this negative normally but if humans can screw it up - this will be our last screw up! Out of our hands though.

Tired, starting to rant, my apologies!

Actually, Raygunner, there has already been some success in programming emotions into software. They're very rudimentary, and of course machine emotions are not like organic emotions -- that's actually what I was referring to in my discussion on "primary motivations" above.

For example, there are "smart" thermostats that use "fuzzy logic" to decide when to activate your heater or air conditioner -- they can "like" or "dislike" a certain temperature, a change in temperature, and so forth.

That's not even counting robots with rudimentary social-interface software, recognizing our emotions and responding appropriately.

These are very rudimentary right now, as I say, but they're already in place, and many cutting-edge software applications on the near horizon will rely on this type of thing.
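To make the thermostat example above concrete, here is a minimal fuzzy-membership sketch in Python; the 21 C setpoint, the 3 C band, and the linear membership functions are invented for illustration and not taken from any real product:

# Minimal fuzzy-logic thermostat sketch. Instead of a hard on/off
# threshold, the controller computes a graded "dislike" of the current
# temperature. The 21 C setpoint and 3 C band are invented numbers.

def too_cold(temp_c, setpoint=21.0, band=3.0):
    """Membership value in [0, 1]: how strongly the room is 'too cold'."""
    return min(1.0, max(0.0, (setpoint - temp_c) / band))

def too_hot(temp_c, setpoint=21.0, band=3.0):
    """Membership value in [0, 1]: how strongly the room is 'too hot'."""
    return min(1.0, max(0.0, (temp_c - setpoint) / band))

for t in (16, 19, 21, 23, 26):
    print(f"{t:2d} C -> heat at {too_cold(t):4.0%}, cool at {too_hot(t):4.0%}")

The graded output is what lets the device "like" a temperature more or less, instead of merely switching at a hard threshold.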

One of the future problems for robotic soldiers is programming in military ethics: discerning the difference between enemy, ally, and civilian; knowing when to kill and when to take prisoners; when to use surgical weapons (bullets) and when to use large-scale weapons (bombs); and many other issues that are far beyond what I've just described. Before we build an advanced military AI we'll need to have that worked out, and that (I expect) will include such abstract concepts as justice, mercy, liberty, order, and so forth.

Similarly, to do its job properly a medical AI -- a very practical application, I think -- would need to value keeping humans not only alive but also as functional as possible. It would also need to apply medical ethics as we understand them.
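One way to make that "alive and as functional as possible" trade-off concrete is a weighted objective over treatment options. The sketch below is hypothetical; the weights and the candidate numbers are invented, and a real system would encode medical ethics far more carefully:

# Toy objective for the medical-AI example: score each treatment by
# survival probability and expected post-treatment function.
# All weights and numbers are invented for illustration.

def score(option, w_survival=0.7, w_function=0.3):
    return w_survival * option["survival"] + w_function * option["function"]

options = [
    {"name": "aggressive surgery", "survival": 0.80, "function": 0.40},
    {"name": "conservative care",  "survival": 0.70, "function": 0.90},
]

best = max(options, key=score)
print(f"preferred: {best['name']} (score {score(best):.2f})")
# With these weights, valuing function flips the choice toward
# conservative care (0.76) over aggressive surgery (0.68).

The ethical content lives entirely in the weights: choosing w_survival and w_function is precisely where "medical ethics as we understand them" would have to be encoded.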

Again, there has been almost no consideration of the exact definition of what "intelligence" means. For instance, an Einstein or a Mozart may be naturally endowed with a unique level of specific capabilities, which does not necessarily mean that they are very intelligent. Dolphins have been described as a very intelligent species; but, considering that they live in a different environment, there are too few points of intellectual exchange for humans to consider. It is also pointless to consider an eventual high AI "intelligence" unless it develops "individuals" and human-like senses and feelings. In the course of our analysis and search for definitions, I have written thousands of pages about the possibilities of solutions for a viable human environment. As one develops stages of understanding of the human Brain, there also develops proof of an ever-widening gap between the eventual motivations of AIs and human IDs, apart from the possible use of AI for eventual technological developments. Stating that an AI would eventually surpass human intelligence, at this stage, only indicates total ignorance about the potential that the Brain Device represents.

Actually, the experience of a couple of bright humans is showing that one of the most interesting variations of possible human "life" may be within a universe created for virtual humans, where everything is "virtual" and "real", including the "bodies" endowed with all of the senses and feelings included in the "original ID", which differentiate humans from AIs. Technically, it is almost viable. The problem is HUMAN NATURE, considering that there are individuals who like to control everything and everyone, have superpowers, etc. Imagine the task required for the creation of a necessarily controlling "framework" for such a kind of social environment. In this type of environment, eventual immortality is automatically assured by the expedient of (possibly) controlling the individual "moods" status... etc. There exists a series of possibilities, which are part of the current research.

We seem to be moving towards certain things that we, as a species, are ill equipped to handle, and I fear the basic definition of "human", and all of the attributes that make us special and unique, will be lost. I'm a firm believer that the universe has an undercurrent of good/bad (or "to do/not to do" in Zen terms) embedded - somehow - in its fabric. A natural balance, if you will, that fosters a positive element to reality. Fashioned by God or an ultimate intellect? I don't have a clue. It does seem that the only inherent evil we see in nature spews from the human race. That being said, moving away from our true nature into some virtual reality risks losing the basic elements that, IMHO, make it worth keeping us around from the universe's point of view. It's very unfortunate that our technology is so far ahead of our maturity as a species, and that is a very dangerous place to be.

This singularity is not coming as soon as people think. Sci-fi is sometimes loosely based on reality, so the two areas can cross over rather easily.

Take heart and keep pushing; keep up the pursuit of progress.

What is this nonsense about "humans and machines merging some time in the future"? It's been happening for years.

What defines us as human is our interdependence with technology. No other animal uses technology like humans. The clothes we wear, the shoes we walk in, the knives we cut our food with and all the tools we use make us dependent on technology - utterly and completely dependent on it.

How many humans have been wearing glasses, and for how long now? Hundreds of years. We've used personal aids, from walking sticks to wheelchairs to hearing aids and pacemakers. How does that not make us at least semi-"bionic" already?

Of course we'll be more integrated with technology, and as we become so in future years we'll probably be as much in denial about it as we are now.

As it is, technology owned us long ago.


