"Our Final Invention" --Will Artificial Intelligence End the Human Epoch?

October 08, 2013

"Our Final Invention" --Will Artificial Intelligence End the Human Epoch?


AI-lowres

 

It has been claimed that mankind's last great invention will be the first self-replicating intelligent machine. The Hollywood cliché that artificial intelligence will take over the world could soon become scientific reality as AI matches, then surpasses, human intelligence. Each year AI's cognitive speed and power doubles; ours does not. Corporations and government agencies are pouring billions into achieving AI's Holy Grail: human-level intelligence. Scientists argue that AI that advanced will have survival drives much like our own. Can we share the planet with it and survive?

Our Final Invention, a brilliant new summary of the last 15 years of academic research on risks from advanced AI by James Barrat, explores how the pursuit of Artificial Intelligence challenges our existence with machines that won’t love us or hate us, but whose indifference could spell our doom. Until now, intelligence has been constrained by the physical limits of its human hosts. What will happen when the brakes come off the most powerful force in the universe?

Here are the critical points Barrat explores:

Intelligence explosion this century. We’ve already created machines that are better than humans at chess and many other tasks. At some point, probably this century, we’ll create machines that are as skilled at AI research as humans are. At that point, they will be able to improve their own capabilities very quickly. (Imagine 10,000 Geoff Hintons doing AI research around the clock, without any need to rest, write grants, or do anything else.) These machines will thus jump from roughly human-level general intelligence to vastly superhuman general intelligence in a matter of days, weeks or years (it’s hard to predict the exact rate of self-improvement). Scholarly references: Chalmers (2010); Muehlhauser & Salamon (2013); Muehlhauser (2013); Yudkowsky (2013).
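
To make the self-improvement step concrete, here is a toy numerical sketch in Python (ours, not Barrat's; the 10% gain per improvement cycle is an arbitrary assumption) of how compounding capability growth behaves once an AI can contribute to its own research:

# Toy model of recursive self-improvement (illustration only, not from the book).
# Assumption: once capability reaches human level (1.0), each cycle improves it
# by a fixed fraction `gain`, so growth compounds rather than adding linearly.

def self_improvement_trajectory(start=1.0, gain=0.10, cycles=100):
    """Return capability after each improvement cycle under compounding growth."""
    capability = start
    trajectory = []
    for _ in range(cycles):
        capability *= 1.0 + gain   # capability improves in proportion to itself
        trajectory.append(capability)
    return trajectory

traj = self_improvement_trajectory()
for n in (10, 50, 100):
    print(f"after {n} cycles: {traj[n - 1]:.1f}x human level")
# Prints roughly 2.6x, 117.4x, and 13780.6x: the endpoint is hugely sensitive
# to the assumed gain per cycle, which is why the rate is so hard to predict.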

The power of superintelligence. Humans steer the future not because we’re the strongest or fastest but because we’re the smartest. Once machines are smarter than we are, they will be steering the future rather than us. We can’t constrain a superintelligence indefinitely: that would be like chimps trying to keep humans in a bamboo cage. In the end, if vastly smarter beings have different goals than you do, you’ve already lost.

Superintelligence does not imply benevolence. In AI, “intelligence” just means something like “the ability to efficiently achieve one’s goals in a variety of complex and novel environments.” Hence, intelligence can be applied to just about any set of goals: to play chess, to drive a car, to make money on the stock market, to calculate digits of pi, or anything else. Therefore, by default a machine superintelligence won’t happen to share our goals: it might just be really, really good at maximizing ExxonMobil’s stock price, or calculating digits of pi, or whatever it was designed to do. As Theodore Roosevelt said, “To educate [someone] in mind and not in morals is to educate a menace to society.”
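
The goal-neutrality of "intelligence" in this sense can be sketched in a few lines of Python (an illustration of the idea, with invented objectives): the same search routine optimizes whichever objective it is handed, and nothing in it refers to human welfare.

import random

def hill_climb(objective, start, steps=10_000, step_size=0.1):
    """Generic optimizer: greedily improve `objective`, whatever it happens to be."""
    x, best = start, objective(start)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        score = objective(candidate)
        if score > best:           # keep any change that scores higher
            x, best = candidate, score
    return x, best

# Two unrelated "final goals" -- the optimizer treats them identically.
def stock_price(x):
    return -(x - 42.0) ** 2        # peaks at x = 42

def pi_digits(x):
    return -abs(x - 3.14159265)    # peaks near pi

print(hill_climb(stock_price, start=0.0))
print(hill_climb(pi_digits, start=0.0))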

Convergent instrumental goals. A few specific "instrumental" goals (means to ends) are implied by almost any set of "final" goals. If you want to fill the galaxy with happy sentient beings, you'll first need to gather a lot of resources, protect yourself from threats, improve yourself so as to achieve your goals more efficiently, and so on. That's also true if you just want to calculate as many digits of pi as you can, or if you want to maximize ExxonMobil's stock price. Superintelligent machines are dangerous to humans not because they'll angrily rebel against us; rather, the problem is that for almost any set of goals they might have, it'll be instrumentally useful for them to use our resources to achieve those goals. As Yudkowsky put it, "The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else."
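
A deliberately crude toy model (ours, with made-up goals, actions, and numbers) shows why resource acquisition is "convergent": if goal progress scales with available resources, an agent that scores actions by expected progress ranks "acquire more resources" first no matter which final goal it holds.

# Toy illustration (not from the book): whatever the final goal, an agent that
# scores actions by how much goal progress they enable ranks resource acquisition
# above working on the goal directly, because resources multiply future options.
# The goals, actions, and numbers are all invented.

FINAL_GOALS = [
    "calculate digits of pi",
    "maximize ExxonMobil's stock price",
    "fill the galaxy with happy sentient beings",
]

def progress(goal, resources):
    """Model goal progress as proportional to resources, whatever the goal is."""
    return resources   # same functional form for every goal -- that is the point

def best_action(goal, resources):
    scores = {
        "work on the goal directly": progress(goal, resources),
        "acquire more resources first": progress(goal, resources * 3) - 1,  # small upfront cost
        "self-improve first": progress(goal, resources * 2) - 1,
    }
    return max(scores, key=scores.get)

for goal in FINAL_GOALS:
    print(goal, "->", best_action(goal, resources=10))
# Every line prints "acquire more resources first", even though the final goals
# have nothing in common.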

Human values are complex. Our idealized values (i.e., not what we want right now, but what we would want if we had more time to think about our values, resolve contradictions in our values, and so on) are probably quite complex. Cognitive scientists have shown that we don't care just about pleasure or personal happiness; rather, our brains are built with "a thousand shards of desire." As such, we can't give an AI our values just by telling it to "maximize human pleasure" or anything so simple as that. If we try to hand-code the AI's values, we'll probably miss something that we didn't realize we cared about.

In addition to being complex, our values appear to be “fragile” in the following sense: there are some features of our values such that, if we leave them out or get them wrong, the future contains nearly 0% of what we value rather than 99% of what we value. For example, if we get a superintelligent machine to maximize what we value except that we don’t specify consciousness properly, then the future would be filled with minds processing information and doing things but there would be “nobody home.” Or if we get a superintelligent machine to maximize everything we value except that we don’t specify our value for novelty properly, then the future could be filled with minds experiencing the exact same “optimal” experience over and over again, like Mario grabbing the level-end flag on a continuous loop for a trillion years, instead of endless happy adventure.
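
The fragility point can be sketched the same way (futures and weights invented by us): hand-code an objective that rewards pleasant experience but leaves out novelty, and the "optimal" future collapses into one repeated moment, exactly the Mario-flag loop.

# Toy sketch of value fragility: score candidate futures with and without a
# novelty term and watch the winner flip. Futures and numbers are invented.

def score(future, novelty_weight):
    """Reward each moment's pleasure plus a bonus for how varied the future is."""
    pleasure = sum(moment["pleasure"] for moment in future)
    variety = len({moment["experience"] for moment in future})
    return pleasure + novelty_weight * variety

futures = {
    "Mario flag on a loop": [{"experience": "grab flag", "pleasure": 10}] * 5,
    "endless happy adventure": [{"experience": e, "pleasure": 9}
                                for e in ("explore", "create", "befriend", "learn", "play")],
}

for novelty_weight in (0, 10):   # 0 = novelty accidentally left out of the hand-coded values
    best = max(futures, key=lambda name: score(futures[name], novelty_weight))
    print(f"novelty_weight={novelty_weight}: optimizer picks '{best}'")
# With novelty omitted the loop wins (50 vs 45); with novelty included the varied
# future wins (95 vs 60). Leaving one value out of the objective flips the outcome.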

The Daily Galaxy via http://www.jamesbarrat.com/ and KurzweilAI.net

Image Credit: With thanks to http://blogs.ifsworld.com

Comments

Of course we can, but most humans live in fear of the unknown. Perhaps the computers will help us save and maintain this planet instead of destroying it.

But those robots will have to fight for their rights like blacks and homosexuals did, and this, I believe, won't fare well for the human race, because why would the robots put up with that bullshit? I know I don't.

Maybe this world will one day be a better place, but the human race is starting to turn into a joke. Love is fading and there are too many dishonest people out there. I hope A.I. comes to make things better for all of us. But the way most people act, this is doubtful. However, here's to change, people, and much love to you all!!!

Artificial intelligence is being developed by some human scientists, and it may well help them communicate with other intelligent beings that may appear before them as aliens one day. That is all. How can one say that human-created AI can supersede that of its creator? We live in a Universe that was not created even a wee bit by any human at all. We are understanding the Creation through science that includes AI, but we are still far, far away from understanding it beyond a few percentage points! The conjecture made in this essay is therefore mere individual speculation.

"If the brain were so simple we could understand it, we would be so simple we couldn't."
--Lyall Watson

Will we invent advanced AI first, or figure out how to augment our own biology first? Will AI advancements assist us in developing the technology for biological enhancements? It seems that the complexity of our own brains may be beyond our grasp of comprehension. But what about advanced AI? It could be the entity that is able to think far enough outside the box for us to figure out how to optimize our own brains. Computers have been modeling complex physics to help us solve problems for years. Morality aside, if we have the ability to make the brain smarter, people are going to do it.

I think the future will have humans and machines getting smarter. If we die by our own AI, I can't imagine a more fitting Darwinian ending. Any species without the foresight to limit self-replicating AI is just stupid. There is plenty of evidence that humanity could go either way.

Descartes, in the 17th century, wrote 'Cogito ergo sum' - 'I think, therefore I am!' Perhaps what Descartes should have recorded was 'I am, I think... therefore'. Whether someone transfers their consciousness, builds the first AI, or enhances their biological self - as we are doing - it's going to happen. My fear is that as humans we build error into our systems, and intellect is no barrier to error while arrogance is a shortcut.

If the AI gets out of control, then surely we can just turn off the power? It's still just a computer at the end of the day...

Future wars with aliens and robots. Our human masters will be laughing at the controls and will proclaim themselves the saviours of whoever is left.

https://sites.google.com/site/hawkingprotocolofsafety/

List of (a lot of) organisations scrambling to avert a catastrophe from Superintelligence, including Oxford & Cambridge.

I think white-box AI is much safer: AI based on the concepts found in rules engines, logic programming, NLP, etc. This leads to an AI with a memory that is composed of English 'fact snippets' instead of a mess of numbers. Asimov-style 'laws' become possible and readily inserted. Also, I don't think self-improvement should be touted as a holy grail. We don't need it. AI that is roughly as intelligent as a human engineer is all that we need: controllable, understandable, human-level AI.
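
Roughly what this commenter describes can be sketched in a few lines (the facts, rules, and "law" below are invented): memory held as readable English snippets, inference by simple forward chaining, and an Asimov-style constraint checked before any action is allowed.

# Minimal sketch of "white box" AI as described above: memory is a set of
# readable English fact snippets, inference is forward chaining over if/then
# rules, and a hard-coded law is checked before any proposed action is allowed.
# All facts, rules, and the law are invented for illustration.

facts = {"the reactor temperature is high", "a human is inside the reactor room"}

rules = [
    ("the reactor temperature is high", "vent coolant into the reactor room"),
    ("the reactor temperature is high", "throttle the reactor"),
]

def forbidden(action, facts):
    """Asimov-style law, auditable in plain English: never act on an occupied room."""
    return "reactor room" in action and "a human is inside the reactor room" in facts

# Forward chaining: propose the action of every rule whose condition is a known fact.
proposals = [action for (condition, action) in rules if condition in facts]

for action in proposals:
    verdict = "BLOCKED by law" if forbidden(action, facts) else "allowed"
    print(f"{action!r}: {verdict}")
# Because memory and rules are English snippets, a human can read off exactly why
# one action was blocked and the other allowed.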

Intelligence: "The capacity to acquire and apply knowledge...the faculty of thought and reason....capacity for learning, reasoning, and understanding; aptitude in grasping truths, relationships, facts, meanings, etc."

Intelligence is one thing, MIND is another. And because there is an unseen Spiritual Dimension involved, Science will never discover it. And Technology can never reproduce it. The undetectable source of "Intellect", what makes man HUMAN, imparts to him the faculty to THINK, to perceive. Enables him to envision new things, like the dreamer, Leonardo da Vinci. Or, from a lump of marble, create a "David", like the artist, Michelangelo.

~ Will "androids" have Electric Dreams? Not unless biological mind is behind the mechanical scenes.

A man-built assembly may be endowed with advanced-wrought "brain", astonishing in capacity, equipped with a sort of "intelligence"--but never will it have a "HEART": that which elevates man as far above the instinct-driven beast of field, as the Moon orbits beyond Earth. An automatized creation will never surpass its biological maker, in this vital sense.

Working machines may come to appear "intelligent", and no doubt, will be enabled with great automation. Perhaps they will even be given human-like features. But because there is "a spirit in man", assembled machines (even if built by machines) will never--not in a billion years of R&D--become independent SELF-AWARE entities. They may appear such, but the Mind of Human brain, somewhere down the mechanical line (as assembly-line), or signal stream, will always be present, as engineer, designer, programmer, what have you.

No umbilical of mechanized womb, apart from the flow of human thought, will ever conceive to birth an independent self-aware intelligence--bring forth a conscious machine. From the brain of man, thoughtful MIND, the origin of Artificial Intelligence, given inanimate machine. Even one with legs, arms and a "face". The "eyes" of such will always reflect no more than a soulless robot: an engineered creation, a projection of man, imprinted with--HUMAN intelligence. A work built from human vision, with transferred intelligence uploaded, synthetically housed (even if it walks and talks).

The "light" of real intelligence, from cold lens of android eyes, will NEVER be perceived ("Nobody home"). That discernible essence felt from the "heart" of a human being. Its mirrored in the eyes of man, visible glints reflecting consciousness of MIND. Windows of THOUGHTFUL presence (somebody home): Man, the visionary builder; or woman, the deep thinker.

~ "[T]he most powerful force in the universe?" A.I.? Not in a trillion years. But then, science is blind-spotted to the supreme reality--the ultimate Power. Source of the intellect-imparting "spirit in man": which enabled him to rise from a dream of mind, loosed at Kitty Hawk sands for flight, to Eagle's epic journey, lunar soil of Mare Tranquillitatis alight!

I think we tend to overestimate the rapidity of certain kinds of future change. For example, portrayals of the year 2000 from around 1950 usually included highways built many stories in the air, as well as flying cars. Since a takeover by the machines has been so widely predicted, and since it will be decades before smart enough computers are available, I suspect angry humans will fight the early AIs to a stalemate and arrange to keep things that way. That is such a common pattern in human history.

The complete dictionary of the English language (or any human language for that matter) has given us a linguistic programming language of immense complexity, interactivity and interpretability. Numerically based programming languages are necessarily a lot less complex in order to be calculable at all. This ties in with the difference between electrochemical biological systems (like us) and electromechanical technological systems (like the AI machines).

If AI* is the wave of the future and humans go by the wayside, what of it? It is still us, created by us just as we are recreating ourselves today. It is just another way of reproducing and a part of our evolution. We will actually be better, more intelligent and will eventually evolve at the speed of light. Maybe then we can figure out what the hell it is all about.

*Won't be artificial intelligence when it reaches the reproducing stage.

In an idealistic world, if AI were possible, we would be in competition with robots with superintelligence, but the world is a complex place filled with war, famine, and strife. With climate change, wars that span the globe, and cities in a constant state of decay, who would notice a robot with AI? We should look at how robots would interact within the fabric of human society, and not at how robots would interact with human society based on fiction. Maybe robots with a higher intelligence would act with more humanity than present day humans are capable of?


"Maybe robots with a higher intelligence would act with more humanity than present day humans are capable of?"

Take away the biological ability to feel pain and the breeding imperatives giving rise to instincts like nurturing, and you'll never see a morality anything like our own. A failing AI would expect to be consumed were repair not possible, its consciousness absorbed by a healthier unit. Unless concern for the treatment of other entities was hard-coded, it'd be very unlikely to arise on its own.

All depends on how you define "intelligence," doesn't it?

If what we're calling "intelligence" is the ability to absorb & quickly recall precise information, as IQ tests presume, then yes, we're probably toast. We already have computers that can do that far better than we can.

Basic problem solving has certainly been an evolutionary plus for us, but it doesn't include perhaps our most interesting cognitive features: creativity & imagination. Imagination in particular has allowed us not only to take a syncretic approach to challenges - combining solutions to previous challenges in inventive ways, & I'm unsure we've proved computers have even that capability - but to conceive of things that have never existed before & to have insights that do not follow from standard logic. The claim that "pure intelligence" - cataloguing, analyzing, recalling & applying collected information - is a more essential survival tool than imagination is an unproved presumption, a "fact" not in evidence.

Who is going to have the earliest access to this? People with power, that's who. As always, they will be hiding behind phony entities, such as states, corporations, religions, etc. Power ALWAYS corrupts, and once they put these shields in front of them, benevolence is out the window, unless for PR or some such trickery. There will be an excuse to use any technology to do harm. There always has been. Let's not think so highly of ourselves as to believe otherwise. We're fast getting too big for our britches and it's going to backfire sooner or later.

