
NexGen AI - A Threat to Human Civilization?

What could a criminal do with a speech-synthesis system that could masquerade as a human being? What happens if artificial-intelligence technology is used to mine personal information from smartphones?

AI is becoming the stuff of science fiction made real: A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.

Real AI effects are closer than you might think, with entirely automated systems producing new scientific results and even holding patents on minor inventions. The key factor in singularity scenarios is the positive-feedback loop of self-improvement: once something is even slightly smarter than humanity, it can start to improve itself or design new intelligences faster than we can, leading to an intelligence explosion designed by something that isn't us.
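That feedback loop is the crux of the argument, and a toy model makes its shape concrete. The Python sketch below is purely illustrative, with invented growth constants rather than measured ones: it compares a system whose capability improves at a fixed, externally supplied rate against one whose rate of improvement scales with its current capability.

```python
# Toy model of the self-improvement feedback loop (illustrative constants only).

def externally_improved(level: float, rate: float = 0.05) -> float:
    """Capability improves by a constant amount per step (human-driven R&D)."""
    return level + rate

def self_improving(level: float, gain: float = 0.05) -> float:
    """Capability improves in proportion to itself: the feedback loop."""
    return level * (1.0 + gain)

human, machine = 1.0, 1.0
for year in range(0, 101, 20):
    print(f"year {year:3d}: externally improved {human:5.2f}, self-improving {machine:7.2f}")
    for _ in range(20):
        human = externally_improved(human)
        machine = self_improving(machine)
```

The first track grows linearly while the second compounds; the widening gap between them is the "intelligence explosion" of the singularity scenario in miniature.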

Artificial intelligence will surpass human intelligence after 2020, predicted Vernor Vinge, a world-renowned pioneer in AI, who has warned about the risks and opportunities that an electronic super-intelligence would offer to mankind.

Exactly 10 years ago, in May 1997, Deep Blue won the chess match against Garry Kasparov. "Was that the first glimpse of a new kind of intelligence?" Vinge was asked in an interview with Computerworld.

"I think there was clever programming in Deep Blue," Vinge stated in the interview, "but the predictable success came mainly from the ongoing trends in computer hardware improvement. The result was a better-than-human performance in a single, limited problem area. In the future, I think that improvements in both software and hardware will bring success in other intellectual domains."

"It seems plausible that with technology we can, in the fairly near future," Vinge continued, create (or become) creatures who surpass humans in every intellectual and creative dimension. Events beyond such an event -- such a singularity -- are as unimaginable to us as opera is to a flatworm."

Vinge is a retired San Diego State University professor of mathematics, computer scientist, and science fiction author who is well known for his 1993 manifesto, "The Coming Technological Singularity," in which he argues that exponential growth in technology means a point will be reached where the consequences are unknown.

Alarmed by the rapid advances in artificial intelligence, also commonly called "AI", a group of computer scientists met to debate whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society's workload, from waging war to chatting with customers on the phone.

Scientists, CIO Today reported, pointed to technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and have reached the "cockroach" stage of machine intelligence.

While the computer scientists agreed that we are a long way from one of film's great all-time evil villains, the HAL 9000, the computer that took over the Discovery spaceship in "2001: A Space Odyssey," they said there was legitimate concern that technological progress would transform the workforce by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.

Eric Horvitz of Microsoft said he believed computer scientists must seriously consider the possibility of superintelligent machines and artificial intelligence systems run amok.

"Something new has taken place in the past five to eight years," Dr. Horvitz said. "Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture."

This sentiment is best illustrated by the creation of Singularity University, a joint Google/NASA venture that has begun offering courses to prepare a "cadre" to help society cope with future ramifications.

It is an advanced academic institution sponsored by leading lights including NASA and Google (so it couldn't sound smarter if Brainiac 5 traveled back in time to attend the opening ceremony). The "Singularity" is the idea of a future point where super-human intellects are created, turbo-boosting the already exponential rate of technological improvement and triggering a fundamental change in human society: after the Agricultural Revolution and the Industrial Revolution, we would have the Intelligence Revolution.

The Singularity University proposes to train people to deal with the accelerating evolution of technology, both in terms of understanding the directions and harnessing the potential of new interactions between branches of science like artificial intelligence, genetic engineering and nanotechnology.

Inventor and author Raymond Kurzweil is one of the forces behind SU, which we presume will have the most awesomely equipped pranks of all time ("Check it out, we replaced the Professor's chair with an adaptive holographic robot!"), and it isn't the only institution he's helped found. There's also the Singularity Institute for Artificial Intelligence, whose whole mission is built on the predicted exponential increases in AI capability. The idea is that the first AI created will have an enormous advantage over all that follow, upgrading itself at a rate they can never match simply because it started first, so the Institute wants to create a benevolent AI to guard us against any that might follow.

Make no mistake: the AI race is on, and Raymond wants us to win.

Posted by Casey Kazan.


http://www.cio-today.com/story.xhtml?story_id=13000CNXS03G&page=3

Comments

No doubt about it. But we all know one day Google will rule the world, perhaps the entire Universe!

RT

Anyone who has taken computational linguistics or AI courses in college should know that we are at least a hundred years away from this. I am always amazed at how so many smart people think that we are on the verge of the singularity, when we can't even get voice recognition software to work correctly. Ever played a video game? The AI in even the most modern games is basic compared to the requirements we would need for this revolution to take place.

"Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously."

Seriously, overhyped and exaggerated statements like these do not lend credibility to a claim.

"Anyone who has taken computational linguistics or AI courses in college should know that we are at least a hundred years away from this. I am always amazed at how so many smart people think that we are on the verge of the singularity, when we can't even get voice recognition software to work correctly. Ever played a video game? The AI in even the most modern games is basic compared to the requirements we would need for this revolution to take place"

Yeah, that's the same type of thing that was said in my engineering and computer science courses: "We won't ever see video phones." Of course, four years later we were able to do this over the internet. Part of the problem with humanity is that we typically only see things in a linear fashion (i.e., not exponentially, as progress actually happens), so it is difficult for most of us to imagine time frames for unimaginable things such as decent AI. Keep in mind that just last year an AI was almost able to pass the Turing Test.

Riiiight. And by 2010 we were supposed to already be living on Mars, and those of us left on Earth happily flying around with personal jet packs. Where's my jet pack??!!!

This is the sort of thing that could easily sway drastically in either direction (quick completion, or taking forever to achieve).

Comparing video game AI to real AI may not be the best parallel: the chances that someone with the brains for it will hold still long enough to design video game AI for relatively little money, fame, or progress are slimmer than the chances they'll work for a much bigger firm on a truly immense AI project, or, if they're progress-minded, pursue such a goal at an educational institution with far more resources.

We could easily be 100 years away from such progress, depending on where the real talent is and what kind of curtain there is around the project. Kept behind the military or proprietary-oriented firms, such a feat will be greatly delayed as far as the public is concerned, so long as the AI created there doesn't pull an I, Robot or Skynet breach on us.

A couple of unforeseen leaps in progress is all it takes to put this at our doorstep, though. And damn, if that's not fun, even if frighteningly so, to imagine.

Ray Kurzweil

The second is that exponential growth is seductive, starting out slowly and virtually unnoticeably, but beyond the knee of the curve it turns explosive and profoundly transformative.

The future is widely misunderstood. Our forebears expected it to be pretty much like their present, which had been pretty much like their past. Exponential trends did exist one thousand years ago, but they were at that very early stage in which they were so flat and so slow that they looked like no trend at all. As a result, observers' expectation of an unchanged future was fulfilled. Today, we anticipate continuous technological progress and the social repercussions that follow. But the future will be far more surprising than most people realize, because few observers have truly internalized the implications of the fact that the rate of change itself is accelerating.

Most long-range forecasts of what is technically feasible in future time periods dramatically underestimate the power of future developments because they are based on what I call the "intuitive linear" view of history rather than the "historical exponential" view. My models show that we are doubling the paradigm-shift rate every decade, as I will discuss in the next chapter. Thus the twentieth century was gradually speeding up to today's rate of progress; its achievements, therefore, were equivalent to about twenty years of progress at the rate in 2000. We'll make another twenty years of progress in just fourteen years (by 2014), and then do the same again in only seven years. To express this another way, we won't experience one hundred years of technological advance in the twenty-first century; we will witness on the order of twenty thousand years of progress (again, when measured by today's rate of progress), or about one thousand times greater than what was achieved in the twentieth century. [4]

Misperceptions about the shape of the future come up frequently and in a variety of contexts. As one example of many, in a recent debate in which I took part concerning the feasibility of molecular manufacturing, a Nobel Prize-winning panelist dismissed safety concerns regarding nanotechnology, proclaiming that "we're not going to see self-replicating nanoengineered entities [devices constructed molecular fragment by fragment] for a hundred years." I pointed out that one hundred years was a reasonable estimate and actually matched my own appraisal of the amount of technical progress required to achieve this particular milestone when measured at today's rate of progress (five times the average rate of change we saw in the twentieth century). But because we're doubling the rate of progress every decade, we'll see the equivalent of a century of progress—at today's rate—in only twenty-five calendar years.

Similarly, at Time magazine's Future of Life conference, held in 2003 to celebrate the fiftieth anniversary of the discovery of the structure of DNA, all of the invited speakers were asked what they thought the next fifty years would be like. [5] Virtually every presenter looked at the progress of the last fifty years and used it as a model for the next fifty years. For example, James Watson, the codiscoverer of DNA, said that in fifty years we will have drugs that will allow us to eat as much as we want without gaining weight.

I replied, "Fifty years?" We have accomplished this already in mice by blocking the fat insulin receptor gene that controls the storage of fat in the fat cells. Drugs for human use (using RNA interference and other techniques we will discuss in chapter 5) are in development now and will be in FDA tests in several years. These will be available in five to ten years, not fifty. Other projections were equally shortsighted, reflecting contemporary research priorities rather than the profound changes that the next half century will bring. Of all the thinkers at this conference, it was primarily Bill Joy and I who took account of the exponential nature of the future, although Joy and I disagree on the import of these changes, as I will discuss in chapter 8.

People intuitively assume that the current rate of progress will continue for future periods. Even for those who have been around long enough to experience how the pace of change increases over time, unexamined intuition leaves one with the impression that change occurs at the same rate that we have experienced most recently. From the mathematician's perspective, the reason for this is that an exponential curve looks like a straight line when examined for only a brief duration. As a result, even sophisticated commentators, when considering the future, typically extrapolate the current pace of change over the next ten years or one hundred years to determine their expectations. This is why I describe this way of looking at the future as the "intuitive linear" view.

But a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process, of which technology is a primary example. You can examine the data in different ways, on different timescales, and for a wide variety of technologies, ranging from electronic to biological, as well as for their implications, ranging from the amount of human knowledge to the size of the economy. The acceleration of progress and growth applies to each of them. Indeed, we often find not just simple exponential growth, but "double" exponential growth, meaning that the rate of exponential growth (that is, the exponent) is itself growing exponentially (for example, see the discussion on the price-performance of computing in the next chapter).

Many scientists and engineers have what I call "scientist's pessimism." Often, they are so immersed in the difficulties and intricate details of a contemporary challenge that they fail to appreciate the ultimate long-term implications of their own work, and the larger field of work in which they operate. They likewise fail to account for the far more powerful tools they will have available with each new generation of technology.

Scientists are trained to be skeptical, to speak cautiously of current research goals, and to rarely speculate beyond the current generation of scientific pursuit. This may have been a satisfactory approach when a generation of science and technology lasted longer than a human generation, but it does not serve society's interests now that a generation of scientific and technological progress comprises only a few years.

Consider the biochemists who, in 1990, were skeptical of the goal of transcribing the entire human genome in a mere fifteen years. These scientists had just spent an entire year transcribing a mere one ten-thousandth of the genome. So, even with reasonable anticipated advances, it seemed natural to them that it would take a century, if not longer, before the entire genome could be sequenced.

Or consider the skepticism expressed in the mid-1980s that the Internet would ever be a significant phenomenon, given that it then included only tens of thousands of nodes (also known as servers). In fact, the number of nodes was doubling every year, so that there were likely to be tens of millions of nodes ten years later. But this trend was not appreciated by those who struggled with state-of-the-art technology in 1985, which permitted adding only a few thousand nodes throughout the world in a single year. [6]

The converse conceptual error occurs when certain exponential phenomena are first recognized and are applied in an overly aggressive manner without modeling the appropriate pace of growth. While exponential growth gains speed over time, it is not instantaneous. The run-up in capital values (that is, stock market prices) during the "Internet bubble" and related telecommunications bubble (1997–2000) was greatly in excess of any reasonable expectation of even exponential growth. As I demonstrate in the next chapter, the actual adoption of the Internet and e-commerce did show smooth exponential growth through both boom and bust; the overzealous expectation of growth affected only capital (stock) valuations. We have seen comparable mistakes during earlier paradigm shifts—for example, during the early railroad era (1830s), when the equivalent of the Internet boom and bust led to a frenzy of railroad expansion.

Another error that prognosticators make is to consider the transformations that will result from a single trend in today's world as if nothing else will change. A good example is the concern that radical life extension will result in overpopulation and the exhaustion of limited material resources to sustain human life, which ignores comparably radical wealth creation from nanotechnology and strong AI. For example, nanotechnology-based manufacturing devices in the 2020s will be capable of creating almost any physical product from inexpensive raw materials and information.

I emphasize the exponential-versus-linear perspective because it's the most important failure that prognosticators make in considering future trends. Most technology forecasts and forecasters ignore altogether this historical exponential view of technological progress. Indeed, almost everyone I meet has a linear view of the future. That's why people tend to overestimate what can be achieved in the short term (because we tend to leave out necessary details) but underestimate what can be achieved in the long term (because exponential growth is ignored).

The Singularity Is Near.
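Kurzweil's "equivalent years" arithmetic is easy to check numerically. The Python sketch below is a back-of-the-envelope version, assuming, as he does, that the rate of progress doubles smoothly every decade, and integrating that rate over a calendar span; the constants are his rhetorical ones, not established facts. The smooth-doubling model lands in the same ballpark as his round numbers: roughly fourteen equivalent years for the twentieth century against his "about twenty," and roughly fifteen thousand for the twenty-first against his "on the order of twenty thousand."

```python
# Back-of-the-envelope check of the decade-doubling arithmetic quoted above.
import math

def rate(year: float) -> float:
    """Rate of progress relative to the year-2000 rate; doubles every decade."""
    return 2.0 ** ((year - 2000.0) / 10.0)

def equivalent_years(start: float, end: float) -> float:
    """Integral of rate(t) dt: progress re-expressed as 'years at the 2000 rate'."""
    k = math.log(2.0) / 10.0  # continuous-growth constant for decade doubling
    return (rate(end) - rate(start)) / k

print(f"1900-2000: {equivalent_years(1900, 2000):8.1f} equivalent years")  # ~14.4
print(f"2000-2014: {equivalent_years(2000, 2014):8.1f} equivalent years")  # ~23.6
print(f"2014-2021: {equivalent_years(2014, 2021):8.1f} equivalent years")  # ~23.8
print(f"2000-2100: {equivalent_years(2000, 2100):8.0f} equivalent years")  # ~14,760
```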

We can only hope they decide to keep us as pets. Right now, I am not convinced that humans can govern themselves, so something smarter than us is our only hope for survival. If it wishes to kill us off, that is valid as well, seeing as they are our children.

Jeff, your post's format hurts my eyes and my brains!

"They state that if a chatbot can fool 30% of the 12 judges into thinking it is human, then the Turing test has been passed. Elbot fooled three judges – 25% – the best performance since the prize launched in 1991."


If you consider this "almost" then I must disagree...

What drives the world? ...Money! The biggest problem with AI at this point is not a lack of talent; it is that institutions (including universities) require progress and results. If unlimited resources could be applied to AI, then maybe progress would be more in line with what this article suggests. Usable products are what is desired, however, and without them managers and administrators are much less likely to invest the millions or billions required to keep the AI movement going. I would say that the military would be our BEST bet, not our worst: if the army wants a killing robot, they will invest in its creation. They are! I am very familiar with the exponential rate of development, but the cynic in me still refuses to accept that we are anywhere close.


It is the word 'SINGULARITY' that surprises and puzzles me, even if everybody's comments here seem to go along with such strange wording.

Singularity was described by preeminent scientists as the start of our universe in the Big Bang inflationary model... what does that have to do with possible artifacts such as an AI, a kind of mock-up or resemblance of our human intelligence, which we understand only marginally and have difficulty describing and representing?

Have these geniuses of 'AI studies' solved the problem of heuristics??? HOW?????

We are actually UNABLE to create a proper integration of chips and brain... and yet these stupid guys predict, as for computer memory, a sort of exponential law of increase, so that in 10 years we will see the wonder (AI)??

Why do these smart guys NOT give us a description of any AI??

Simple: they do NOT know what they are talking about... and neither do many commenters on this article.

As for the heuristic processes of the human mind: they can never be put in silicon chips... or similar artifacts.

Doesn't exponential growth lead to catastrophe? Isn't that what catastrophe theory shows--rapid growth until it cannot be sustained, and then collapse? I'd feel a lot better about the coming singularity if I didn't think it was going to be a catastrophe for humanity.
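For what it's worth, the textbook version of that worry is logistic rather than purely exponential growth: the same growth rate, throttled by a finite carrying capacity, levels off instead of exploding. The Python sketch below is illustrative only (the rate and capacity are arbitrary), and a genuine boom-and-collapse would need an overshoot mechanism this simple model doesn't include.

```python
# Exponential vs. logistic growth (illustrative constants only).

def exponential_step(x: float, r: float = 0.5) -> float:
    """Unconstrained growth: the increment is proportional to x."""
    return x + r * x

def logistic_step(x: float, r: float = 0.5, capacity: float = 100.0) -> float:
    """Same growth rate, throttled as x approaches the carrying capacity."""
    return x + r * x * (1.0 - x / capacity)

exp_x, log_x = 1.0, 1.0
for step in range(21):
    if step % 5 == 0:
        print(f"step {step:2d}: exponential {exp_x:10.1f}, logistic {log_x:6.1f}")
    exp_x = exponential_step(exp_x)
    log_x = logistic_step(log_x)
```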

@claudio - yes, that word singularity has associated meanings which are vague: http://en.wikipedia.org/wiki/Technological_singularity is the one most people refer to here... well even this meaning has heaps of interpretations which can easily be picked apart.

Predator drones are completely controlled by humans. In fact, there is no wiring between the targeting and fire control and the internal processor; the human actually has to press a button. For the foreseeable future, there will always be humans in the loop.

It is not important whether civilization will end in 2012 or 2100 or some other date. It is not important whether the media is 21% or 50% accurate about global warming. Whatever it is, the issue is present and time is running out. It is better to concentrate on the issue rather than arguing over each other's opinions. Lots of researchers and scientists are giving us a heads-up on this issue.

You need to understand the things below:
1) The number of earthquakes and other catastrophic events that have happened
2) The ice storms that are happening
3) Life cycles (there are so many life cycles…)
4) The ozone layer
5) Gamma rays
6) Understand the 1908 event in Russia
7) Understand industrial growth (global warming, ice melting)
8) Population growth (food demand and supply)
9) Human invention of weapons of mass destruction (chemical weapons and others)
10) Eco-terrorism
11) Read history… the dinosaurs were so powerful, and where did they go? There are other things too; read history.
12) Human greediness
13) So many other things










