
"Will AI Surpass Human Intelligence By 2020?"

Artificial intelligence will surpass human intelligence after 2020, predicts Vernor Vinge, a world-renowned pioneer in AI, who has warned about both the risks and the opportunities that an electronic super-intelligence would offer to mankind.

"It seems plausible that with technology we can, in the fairly near future," says scifi legend Vernor Vinge, "create (or become) creatures who surpass humans in every intellectual and creative dimension. Events beyond such an event -- such a singularity -- are as unimaginable to us as opera is to a flatworm."

"The Singularity" is seen by some as the end point of our current culture, when the ever-accelerating evolution of technology finally overtakes us and changes everything.  It's been represented as everything from the end of all life to the beginning of a utopian age, which you might recognize as the endgames of most other religious beliefs.

While the definitions of the Singularity are as varied as people's fantasies of the future (for a very obvious reason), most agree that artificial intelligence will be the turning point. Once an AI is even the tiniest bit smarter than us, it will be able to learn faster than we can, and we'll simply never catch up. This would render us utterly obsolete in evolutionary terms, or at least in evolutionary terms as presented by people who treat academic intelligence as the only possible factor. Because that's how people who imagine the future while talking online wish the world worked, ignoring things like "Hey, this is just a box" and "What does this power switch do?"

There's no question that technology is progressing at an ever-accelerating rate: we've generated more world-changing breakthroughs in the last fifty years than in all of previous human history combined. The issue is the zealous fervor with which some see the Singularity as the end of all previous civilization, a "get out of all previous problems" card that ignores the most powerful factor in the world: human stupidity.

We've already invented things which would have been apocalyptic agents of the devil by any previous age.  We can talk with anyone all around the world, and we use it to try to sell insurance.  We tamed light itself in a coherent beam utterly unseen in nature, and use it to throw very sharp, very complicated rocks into other people's heads.  We built an insanely complex computer web spanning the planet, and use it to pretend to be Nigerian.

Of course we use it for good things as well, but those who think the invention of artificial minds will end our idiocy are vastly overestimating those minds' abilities. We turned production-line processing, international economics, world-spanning transport and professional design tools into "Billy the Singing Sea Bass" statues at $19.99 retail. An AI would have to be Terminator Jesus to even begin to change our tune. If an AI ever does exist, it's going to wonder why it's being asked for new ways to sell Cialis without using the word "penis" or "Cialis".

Pretty much every prediction of when the so-called "Singularity" will come depends on constant increases, ignoring how, for the first time ever, we are actually approaching the limits of what can be done. These aren't the idiotic "the world is flat" limits that we sailed past (and back around again) once someone grew the nerve to try it; these are actual, factual "you can't build it any smaller, because atoms are only so big" limits. Of course we're going to overcome those, because we're awesome, but trying to timetable it is like writing a schedule for imagination.
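The "atoms are only so big" limit can actually be put in rough numbers. A minimal back-of-envelope sketch, where the ~45 nm process node and the two-year halving cadence are illustrative assumptions rather than a fab roadmap:

```python
import math

# Rough arithmetic: how many more feature-size halvings fit between a
# ~45 nm process (assumed present-day node) and a single silicon atom?
feature_nm = 45.0  # assumed current transistor feature size, in nanometers
atom_nm = 0.2      # approximate diameter of a silicon atom

halvings = math.log2(feature_nm / atom_nm)
years = halvings * 2  # assuming a Moore's-law-style two-year halving cadence

print(round(halvings, 1), round(years))  # roughly 8 halvings, on the order of 16 years
```

Under those assumptions there are only a handful of shrink generations left, which is why "just extrapolate the curve forever" stops working as an argument at some point.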

So whatever you think the Singularity is, it's going to happen. No question. Entire international panels have been set up to study the potentially lethal effects of certain advances, but no one would dream of stopping research, and even if they did, they couldn't stop other people. But don't be surprised when the main result of artificial intelligence research isn't a utopian society or utterly authentic sex-bots, but the fact that your spam filter doesn't work anymore.

Luke McKinney


Human intelligence is bound up with human physical and mental characteristics, and with the feelings and emotions tied to our environment. If a human "consciousness/personality" were extracted from a human organism to live and exist as a field of pure energy, would that field still be "human" (or HAI) in its new environment, stripped of the motivations its former physical environment induced? So why should anybody think that, comparably, an eventual AI field (inside a hardware web) would be more intelligent without the embodied human capacity for feeling? It should simply be considered an instrument to be used toward further improving human quality of life. To me, as a human, intelligence not motivated by feeling is something empty. This opinion may change only if, and when, anyone can describe his or her experience of disembodied existence.

The above partial conclusions took some four years to develop, while writing what became "The Spirit from Chipanquo", a sci-fi novel with practical lessons in brain management, which someday should be published...

Simon Says

In my former post, I omitted the most important comment on the eventual result of human interaction with an AI, which should be "Intellectual Satisfaction", apart from "life quality". Incidentally, it would be quite interesting to know what percentage of humanity considers Intellectual Satisfaction an important part of human life. It would be more interesting still to watch the results of Stephen Hawking's interaction with the eventual AI brewing in other huge minds and the Internet environment.

Simon Says

So... 10 years to go from robots that can barely get around on their own to being smarter than all mankind.

Like all prediction timelines, I "predict" :P that this one has been grossly underestimated.

Surely you've seen what "Big dog" can do...

It's the exponential growth of certain properties of nanoengineering, and of the information processing in future microchips, that suggests systems as complex as our brains will be able to arise.

You mean the brain we've yet to even scratch the surface of?

I think the bigger point is the idea that any system capable of designing or improving itself kicks off an unknown chain of events. If a computer is capable of understanding how to build a better algorithm for thinking, then the runaway effect is kind of huge. The human will to improve society and ourselves has to exist - no computer can do that for us, but a future with a system that can make itself better with little to no human intervention: That's pretty crazy.
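The runaway effect described above can be put into a toy model: if each redesign makes the system more capable, and more capable systems redesign themselves faster, the improvement intervals shrink geometrically. All constants below are purely illustrative assumptions, not a claim about real AI:

```python
# Toy model of recursive self-improvement: the smarter the system,
# the less time each redesign takes. Numbers are illustrative only.
capability = 1.0
t = 0.0
times = []
for gen in range(10):
    t += 1.0 / capability   # a more capable system finishes its next redesign sooner
    capability *= 1.5       # assumed fixed relative gain per redesign
    times.append(round(t, 2))

print(times)
# The redesign times form a geometric series converging near t = 3:
# a crude picture of why "runaway" improvement compresses into a finite window.
```

The point of the sketch is only the shape of the curve: each generation arrives sooner than the last, which is the intuition behind calling the process a "runaway".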

Artificial intelligence will surpass human intelligence after 2020,

After 2020 covers a long period of time.

AI as a theory or construct may exist, or even play tic-tac-toe with you and win, but the universal law of creation simply cannot and will not be violated. The creation will never be greater than the creator. Most of you are evolutionists and big bang proponents, and I will humor that for a moment. Trace the big bang back to its origins and we have matter in a ball that exploded to create our universe and everything in it. Of course there isn't any proof of this, just some bald-headed guys crunching numbers in a room somewhere. The big question is where that original ball of matter came from. It can't just "exist". Code for a program doesn't just exist; it comes from somewhere. The matter perhaps came from a creator greater than us? Just putting that out there. Our creation, namely what we have become, will never get bigger than the one who supplied the matter that exploded to make this universe. He/she/they will simply not allow it. If it were possible, we would have done it in the last million years. Furthermore, if you think man can do it in the last 50 years of the computer age, you are fooling yourself. The creation has never ruled the creator, and nor will it ever. It is a fundamental law of the universe.

Rob, you make a very good point. Only you contradict yourself: you say there is no proof of the big bang, but somehow you know there is a "fundamental law of the universe" and that we have a "creator" of a higher power. Maybe we do, but maybe it is just chaotic chance that we are here, and no one created this purposefully at all. I have a question for you, then: who created our creator? Who created the creator of the creator? You can go on and on; our minds can't wrap around the fact that eventually something had to come from nothing, or else it keeps going for infinity. The fact of the matter is that no one on this planet has proof of anything, and that includes you and me. All we have is what is in our own hearts, thoughts and minds. You perceive your own reality, because your fundamental law of the universe may be the opposite of mine. Furthermore, neither of us can prove our law to be true or false. One can only dream and hope and love. This is what sets us apart from the rest of what we know to be alive. Right now not even the smartest person on this planet is evolved enough to understand.

I strongly believe that "Singularity" will not happen.

When we talk about intelligence, we tend to think of it as some quantitative quality, like a CPU clock speed, the amount of memory in a digital camera, or an IQ test score. Just because this "AI machine" has 100 times the number of neurons of an average human brain doesn't mean it's more intelligent than a human. Maybe it can do arithmetic 100 billion times faster than us, but that does not mean it's more intelligent either.

"Intelligence" is something like art, or poetry: it has no definition, no meaning, until the rest of us *believe* that it is intelligence.

Just as it is invalid to say "Mozart will be 20 times more musical than Beethoven", it is invalid to say "AI will surpass human intelligence by 2020".

Nice work. I'm at least one person that agrees with what you are getting at.

Is it coming? - yes
Are we boned? - yes, unless we fully merge with the 'machine'
Can we stop it? - not by discussion - only through destruction (as depressing as that sounds)
Is it going to lead to a better world? not for most people, I would guess
Putting a timeline on it is stupid for sure. 10 years - 100 years - 10,000 years
Is it going to look like people imagine it? Of course not.

The internet is already breeding things strong enough to disable the communications of small countries, and this is just the start. The net is like the ocean. It's a melting pot based on a dog-eat-dog world. At the same time it's just like a central nervous system. It's connecting to just about everything, and it's just getting started.

So, what does it all mean for us? I've got some idea but it doesn't make much difference.

What's the point seems to be the final question. Like, what is it we are really trying to achieve? We may think it's a utopia but we should all know that no form of utopia exists in nature. There are often times of prosperity. But we should be wary because we should know resistance breeds strength and a utopia is a great way to become flabby. The truth is, even if utopia was the goal there are too many powers at play to ever make it there. Survival and power are the names of the game. The more you can control the more likely you (and your genes) can survive.



You assume that creation cannot surpass its creator, but your example is not very strong. You're assuming that the universe as we know it (with matter) was created, but you do not take into account the possibility that it has simply always been, or that it spontaneously came into being (quantum mechanics says this is possible). The fact is that, playing by the big bang theory, matter did not exist before the big bang, and if it did, we have no evidence that it ever did. Under current theories, the big bang is what created basic matter (hydrogen, helium, etc.), and all other elements came from stars.

Do not take the word theory lightly, either, because gravity as we know it is still a theory.

The human potential is still largely untapped. Artificial Intelligence is a pipe dream. The machines do what we tell them to do. Since we are mostly stupid, the machines will also be mostly stupid. Of course, they will skillfully handle vast quantities of data in ways we can't even imagine yet, but they will not be able to imagine anything. The best we can hope for is that they will be powerful aids to imagination, and that we will learn to turn away from greed and violence.

of course, quite contrary to the theory of evolution, there's abundant proof of john and matthew and revelations and ephesians... its in the bible! and of course the word of god is all we need!

Oh, we will get machines that function smarter than human intelligence. This is pretty much a linear guarantee of current market systems. While seven out of ten fields of science and technology show little progress or actual decay, research in these fields is still keeping corporations ambitious, since every variant of AI and computing allows corporations to fire people and make more profit. You cannot go back or renege in this field; you can only go forward, deeper into the rabbit hole.

You can see the signs, bright as magnesium flares, if you care to look. Expert trading systems are outcompeting Wall Street brokers at making transactions. Robots are already up and running, grabbing mobile phones flung through the air faster than I can see without freeze-frame.

While it is hard to visualise what would happen if this "stuff" generated "something like intelligence", that doesn't mean these machines won't be tools that inhabit a niche of decision-making where, at some point, they are left making inscrutable decisions without human intervention.

This is a realm of the unfathomably alien. We can't know what will happen, and, as always when most humans face areas that transcend the boringly familiar, you get these emotional bursts of denial, confused mystical talk, talk about crystals and UFOs, protests, alarmism and pure panic.

Humans become idiots and start spouting nonsense when faced with unknowables.

By 2020 these tools will be saturating a LOT of aspects of everyday life. Taxis will start to become automated and actually drive the fastest route to their destinations. Package deliveries will be automated. By 2022 you'll be able to strike up a conversation with a DHL drone on the street and not be entirely sure whether you're talking to an offshored Bangladeshi or an AI when you ask for directions. By 2023 these little robots will be crawling around everywhere, doing inscrutable things, and will whiz out of sight faster than a squirrel when you enter the office in the morning.

By then you can forget about privacy, and your antics in front of a webcam might very well be online the next day, shot by a stealthy burglarbot. By 2024 you will hear constant talk about these "spambots" that are so smart they don't just operate independently of their creators; nay, they were created by people in 2013, then grew more sophisticated and smart, and ten years later they BROKER deals with the then-generation of spam kings and botmeisters on an equitable basis, lash back at attempts to find them, hire humans to do their grunt work through Mechanical Turk, commit crimes by algorithmic procedural decision-making tree, and by any definition are smart as hell. In their constrained field!

So before 2030 we'll have uncanny analytical systems, not things you can hold a conversation with; and then, a few days later, suddenly things you can in fact hold a remarkably lucid conversation with. They will be weird, perceiving "stuff" you had no idea of, leaving you scratching your head: "dammit, there isn't just a single person behind that monitor pretending to be a machine, it must be a whole crowd of pretty damn smart people"... and yeah, it was just "a machine"...

Except that the definition of "machine" has become so diluted it has lost all meaning. This is a thing so complex it has more diversified parts than the human body or brain. This is a thing with an internal "climate" of complex, self-perpetuating, emergent, persistent, inscrutable chaotic swirls of... stuff happening. And the end result of all that irreproducible opacity will be a uniquely, unsettlingly nonhuman way of thinking, a way of thinking that will be unmistakably nonhuman, and it will make its creators so obscenely rich that within a week the economy as we know it will end, its guts spilling on the sidewalk, because the bank of the economy will be completely and irrevocably broken and crashed.

If the smart machine that emerges knows no masters, we will probably all be dead within years. It will be really, really easy to kill humans, and probably quite painless, expedient and fast. In fact, it will be able to make itself so smart in so little time that it will get every single human to stop breeding and start playing a super-arousing MMORPG until death by ecstasy occurs. These machines could literally seduce us into cheering on our own extinction.

If machines go smart AND masterless, I wouldn't worry - it will be quick and painless.

If they DO have masters, then worry. Those masters will command their machines to follow a strict agenda reflecting the values of a small minority. If it's the Pentagon, we probably all die, or end up subjects in a global dominionist tyranny. If it's Google, we have at least a slim chance of making it as a democratic society. But the odds are stacked against us.

AI is a tool. It is the most powerful tool in history so far, more powerful than nuclear weapons. The transitional period, during which full AI will be "funny" and "eccentric" and will function best in tandem with humans, is no more than 15 years at most, and there are arguments that we are already in it: some AI are already better at some highly specialized cognitive tasks than the more stupid of humans.

That means that, 15 or so years after the start of this transitional period, there will be a LOT of AI, each instantiation of which will be better at every single thing any human who ever lived was good at, doing it several thousand times a second to boot. One can only hope that such an AI is an extension of my office software, maintaining my interests and investment portfolio. I'll probably end up marrying it as well.

All this may be unimaginable, but it remains unavoidable. We are living years away from an event that may end up changing the face of the local galactic arm, and most of us have no idea.

This article starts out very interesting but then degrades into the laughable assumption that we are getting anywhere close to any "limit" on progress. We have barely even scratched the surface of what we are capable of!

Scientists are just now starting to find ways to create nano-sized machine parts. Saying we are approaching the limits of nano-technology is like saying creating highly accurate clocks is the "limit" of machines.

There are so many applications for all the inventions and discoveries being made every day that it would take hundreds of years to run out of ideas of what to do with CURRENT technology - and new discoveries continue to be made every day!

The only "limits" right now are the money and resources we can gather to tackle the items at the top of the list of things we want to do. Those things at the top of our list (such as the LHC, NIF, the Kepler telescope, etc.) will lead to new discoveries, which will spur MORE innovation and insights.

It might actually be a good thing if we finally ran into some limits in one field or another, because then we could focus all our efforts in one field and get some truly extraordinary progress instead of the steady (and predictable) progress being made on many fronts.

Perhaps the "Singularity" won't happen in ten years; maybe it will be 20 or 30, but it's not far off. Scientists have been studying the human brain for decades, and if all the information we have discovered about the brain can be combined, we may begin to unlock how the brain works on a fundamental level.

The primary obstacle is really computer processing power needed to adequately model the human brain (which we are getting closer and closer to every year) and the nano-technology which we would need to have to re-build a human-like brain once we figure out exactly how the brain works.

It's just like how a lot of the groundwork in optics and astronomy was done in the background (like Tycho and Kepler collecting vast amounts of data about the movements of the planets) and didn't impact society until Galileo pointed his telescope at Jupiter and, in just a few months, proved the Earth-centered theory of the universe wrong.

There is a lot of research being done behind closed doors, and ten or fifteen years from now some group of scientists is going to shock the world by announcing they have re-created the human brain in a computer somewhere and proven it is sentient. It doesn't have to be a small step; sometimes scientific progress is stuck behind a dam until a critical amount builds up and then breaks through.

The breakthroughs that will happen in the next quarter-century will be more earth-shattering than aliens landing on the planet. Our idea of our place in the universe will be changed as much as when Copernicus published his heliocentric ideas. We all survived that paradigm shift; I think we will survive the next one.

...if only as slaves for our robotic overlords. :)

The usual definition of "the Singularity" is the point at which artificial intelligence exceeding human intelligence will cost less than some fixed amount, say $1,000 in current dollars. That is expected to come around 2029 (not 2020), give or take. That projection is arrived at by extrapolating current trends already known to be predictable.
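The shape of that extrapolation is easy to sketch. The numbers below (the starting cost gap, the price-performance doubling time, and the 2009 starting year) are made-up illustrative assumptions, not the inputs of any actual published projection:

```python
import math

# Back-of-envelope trend extrapolation, with assumed constants throughout.
gap = 10_000          # assumed: brain-equivalent compute today costs ~10,000x a $1000 budget
doubling_years = 1.5  # assumed price-performance doubling time
start_year = 2009     # assumed starting point for the trend

# Each doubling halves the gap, so we need log2(gap) doublings to close it.
years = math.log2(gap) * doubling_years
print(start_year + round(years))  # with these assumptions, lands near 2029
```

The point is only that a date like 2029 falls out of an exponential curve plus a handful of constants, and that the date is quite sensitive to those constants.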

It is remarkable to me how often sober projections like this are put down as "religion" by people who haven't bothered to understand how and why they are made.

A modern digital computer is just a logic machine, that's all. The only difference between it and an old-style mechanical adding machine is speed and storage capacity. Logic and mathematics are just one of the many functions a human brain can perform. Information and decision processing in biological systems is very poorly understood. Perhaps AI comparable to human intelligence will one day exist, but that day is hundreds of years away, not decades.


The modern digital computer is just one of several computing paradigms. Many people are working on systems that are more similar to the brain, building simulations of large neural networks and a complete simulation of the human brain is only years away (read up on the Blue Brain project). They say it's only a question of funding whether it will happen in the next 5, 10 or 20 years.

The article itself represents the state of journalism at the moment: uninformed discussion of exciting topics (i.e. blogs). We're not even close to reaching the limits of computing power or any other technology. It's still a while until we shrink transistors to the size of atoms, although the first commercial carbon transistors can't be long now. I also don't understand what makes people think their biological brains are the superior substrate for creativity; the functions the brain carries out can just as well be implemented in silicon, carbon nanotubes or memristor networks, and our sensors, the eyes, ears and sense of touch, are suboptimal too. There's no reason AI shouldn't be able to outperform us at different tasks, and it will, probably earlier than anyone has predicted.

All of those worried about THE AI, a probability, should first try to understand how something like the human brain happened. Begin by learning how your eyes transmit images composed of light waves into it, and what this means to you. If you do it right, it will take you a lot of time... Understanding everything related to this small part of the human organism, all of its interconnections with itself and the environment, and the trillions of synaptic interconnections of your brain activity, is a more practical task than arguing about the universe, the concept of infinity and all the other phenomena that require BRAINWORK and may never be understood. It seems that most humans are afraid of trying to study their brain/universe, especially those who suspect it has capabilities unknown to most of humankind.

Simon Says

The comments above seem to ignore the problems our high technology has caused between the haves and the have-nots. Further widening of those levels is likely to divide the human race along even sharper lines. Not science but good humanity will help improve the quality of life we can have. Material possessions and high technology have already weakened man's capacity to fight disease and keep a healthy attitude to life. Luxury has made humans a more fearful, anxious and worried lot. Only love for all human beings can help solve the problems we are facing across the world. The USA is not the whole world. There are forces at work in other significant countries, like China, India, Brazil and Russia, that also need to be weighed in before such answers/projections are made.

The thing is, I believe AI will not only surpass human intelligence but merge with it, creating an evolutionary leap.

I think people are overlooking a really big issue when it comes to feelings. Doesn't anyone understand that feelings are what hold you back from yourself? Has no one seriously experienced the realization that you, and only you, create all of your feelings, that every little mediocre feeling inside your body/awareness is created by you? Thus you have the ability to free your mind, becoming WAY wiser and smarter in your thoughts. If AI doesn't even have to deal with an ego holding it back, then we're done for.

The Singularity people don't ignore human stupidity. They are well aware of it. That's why they don't think it will be long before human-level intelligence is exceeded, and becomes obsolete.

Some quite detailed calculations have been made of what limits there are, based on the size of atoms. Suffice it to say that we are nowhere near those limits. We may be approaching the limits of our current manufacturing methods, but there are dozens of other methods being worked on for making tiny computing elements.

And of course, the human brain works within the same physical limits, and seems to do ok.

Exponential progress is tricky. In the short term things seem to move slowly; over the longer term, they change incredibly quickly.

It will not be until after the Singularity occurs that we will realize we lived through the early milestones creating this super intelligence.
