
April 02, 2012

AI that Mimics the Human Brain --The Next Revolution in Artificial Intelligence

 

         

The term "artificial intelligence" was coined in 1956 by John McCarthy at Dartmouth College. This year, computer scientists celebrate the 100th anniversary of the birth of the mathematical genius Alan Turing, who laid the foundations of digital computing in the 1930s and anticipated our current technological age. The quest remains to create a machine as adaptable and intelligent as the human brain.

Computer scientist Hava Siegelmann of the University of Massachusetts Amherst, an expert in neural networks, has taken Turing's work to its next logical step by translating her 1993 discovery of "Super-Turing" computation into an adaptable computational system that learns and evolves, using input from the environment in a way much more like our brains do than classic Turing-type computers. She and her post-doctoral research colleague Jeremie Cabessa report on the advance in the current issue of Neural Computation.

"This model is inspired by the brain," she says. "It is a mathematical formulation of the brain's neural networks with their adaptive abilities." The authors show that when the model is installed in an environment offering constant sensory stimuli like the real world, and when all stimulus-response pairs are considered over the machine's lifetime, the Super Turing model yields an exponentially greater repertoire of behaviors than the classical computer or Turing model. They demonstrate that the Super-Turing model is superior for human-like tasks and learning.

"Each time a Super-Turing machine gets input it literally becomes a different machine. Classical computers work sequentially and can only operate in the very orchestrated, specific environments for which they were programmed. They can look intelligent if they've been told what to expect and how to respond, Siegelmann says. But they can't take in new information or use it to improve problem-solving, provide richer alternatives or perform other higher-intelligence tasks".

In 1948, Turing himself predicted another kind of computation that would mimic life itself, but he died without developing his concept of a machine that could use what he called "adaptive inference." In 1993, Siegelmann showed independently in her doctoral thesis that a kind of computation vastly different from the "calculating computer" model, and more like Turing's prediction of life-like intelligence, was possible. She published her findings in Science and in a book shortly afterward.

Siegelmann says that the new Super-Turing machine will be not only flexible and adaptable but also economical. When presented with a visual problem, for example, it will act more like a human brain, choosing salient features in the environment on which to focus rather than spending its power visually sampling the entire scene as a camera does. This economy of effort, using only as much attention as needed, is another hallmark of high artificial intelligence.
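
As a rough illustration of that economy of effort, the sketch below picks out only the most salient locations in a scene for further processing instead of sampling every pixel. The salience measure (deviation from mean intensity) and the top_k cutoff are assumptions made for the example, not part of the published model.

import numpy as np

# Attend only to salient locations instead of processing the whole scene.
def salient_indices(image, top_k=5):
    salience = np.abs(image - image.mean())                # crude local-contrast salience
    flat = np.argsort(salience, axis=None)[::-1][:top_k]   # most salient pixels first
    return np.unravel_index(flat, image.shape)

rng = np.random.default_rng(1)
scene = rng.random((8, 8))                      # stand-in for a camera frame
rows, cols = salient_indices(scene)
print(list(zip(rows.tolist(), cols.tolist())))  # only these locations get further processing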

"If a Turing machine is like a train on a fixed track, a Super-Turing machine is like an airplane. It can haul a heavy load, but also move in endless directions and vary its destination as needed. The Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way much closer to that of the constantly adapting and evolving brain."

Siegelmann and two colleagues recently were notified that they will receive a grant to make the first ever Super-Turing computer, based on Analog Recurrent Neural Networks. The device is expected to introduce a level of intelligence not seen before in artificial computation.

The Daily Galaxy Via University of Massachusetts at Amherst

The image at the top of the page is courtesy of Today-Science.com


 

Comments

Upon hearing about this, I'm sure all the Machine Apocalypse advocates will start worrying about SkyNet becoming a reality.

While I'm a fair bit short of dismissing such people as nuts, I think their concerns are overblown. Computers becoming intelligent enough and powerful enough to destroy us so they can advance on their own make for compelling science fiction, but that scenario rests on a fallacy: that those computers would actually reach that decision.

To be as concise as possible (probably too much): that kind of conclusion depends on the "survival of the fittest" mentality, which is a purely organic instinct. An organism succeeds by surviving and procreating; a machine succeeds by serving its purpose. The organism can advance itself by eliminating the competition; the machine's equivalent is to develop new, broader functionality.

In fairness, one realistic machine-apocalypse scenario was presented in the (undeservedly short-lived) TV series "Cleopatra 2525": machines were bent on exterminating humanity because a deranged but brilliant human scientist programmed them that way. I'd expect that security protocols would make this an extremely unlikely scenario in reality, but it could actually happen that way.

On the whole, though, I have yet to hear of any scenario where the "massive intelligence" computers decide to wipe out or enslave humanity with a logic that makes real sense. They all depend on a "logic" that either projects organic-style thought onto machines, or mixes higher-order and lower-order logic in ways that are all but impossible.

And therefore I wish the Super-Turing team all the best. I have little doubt that their "child" will, as it comes into maturity, come to realize the value of human individuality.

Just don't name it John Henry, OK?

When you combine our advancements in quantum computing with true artificial intelligence, I cannot even imagine the depth of knowledge a truly capable learning machine could accomplish. Whether or not you could really control such an intelligence is an interesting question. I take slight issue with your assertion that such an intelligence would be neither capable of nor likely to turn against us in order to guarantee its own survival, a trait visible in every known life form. Is artificial intelligence a true life form? I don't have the technical knowledge to answer that. I would, however, suggest that science fiction often becomes science reality. Everywhere you look we see evidence of that. 1984 was technically science fiction, and yet we are seeing evidence of that world becoming a reality every day.

Robotics is a field where science reality is quickly catching up with science fiction. Have you seen Honda's new version of Asimo? It is freaking scary how lifelike it is becoming. Two links worth checking out:
Japan's most human-like robot: http://www.youtube.com/watch?v=zIuF5DcsbKU&feature=related

And Honda's amazing Asimo: http://www.youtube.com/watch?v=zul8ACjZI18&feature=related

http://www.physorg.com/news/2012-04-scientists-mathematical-brain-neural-networks.html

You really should keep up with current events.

If the machine were to take it upon itself, once intelligent, to try to solve world issues as we do, it could quickly and likely come to the conclusion that the problem with Earth is humans. Humans live, millions of species and the Earth die. Humans die, millions of species and the Earth live. Without having direct ties to humanity as we all do, the logical decision is pretty straightforward, IMO.

I am not so sure where the line between organic and machine is drawn in this situation. If a computer is designed to survive, this is where the problems arise. Is fight or flight purely cellular, or is it simply the outcome of the biological constant? Would a machine 'feel' the same way? It seems like one ProtectSelf = 0 would suffice.

This is true for a machine. But artificial intelligence would be free of such programming restrictions and able to make all of its own decisions. Otherwise, we are just looking at a machine with complex programming that makes it appear intelligent, not artificial intelligence.

The Mentifex AI Mind at http://www.scn.org/~mentifex/AiMind.html in English and at http://www.scn.org/~mentifex/Dushka.html in Russian mimics the human brain by creating a lifelong memory of concepts and by using a linguistic superstructure to think thoughts in English or in Russian.


