Observations
Opinion, arguments & analyses from the editors of Scientific American

Artificial Intelligence: If at First You Don’t Succeed…





CAMBRIDGE, Mass.—The last symposium in M.I.T.’s 150-day celebration of its 150th anniversary (who ever said that geeks don’t like ritual?) is devoted to the question: "Whatever happened to AI?"

Of course, that is a particularly appropriate bit of introspection for M.I.T., because a great deal of artificial intelligence work took place there over the past 50 years. The symposium began Tuesday night with M.I.T. neuroscientist Tomaso A. Poggio setting the tone by declaring that the problem of making an intelligent machine is still "wide open."

Okay, there has been some progress: Deep Blue, Watson and Mobileye, among others. But the consensus was that new "curiosity-driven basic research" is needed and that AI-related computer science should be integrated with neuroscience and the cognitive sciences, with specialized concentrations in areas such as vision, planning, language and social intelligence. "I believe that 50 years later it is the time to try again," Poggio said.

 

M.I.T. has brought together a cast of heavyweights to take on these big questions. A few gems along the way:

Nobelist Sydney Brenner: "I think consciousness will never be solved but will disappear. Fifty years from now people will look back and say, ‘What did they think they were talking about?’"

AI pioneer Marvin Minsky: "Why aren’t there any robots that you can send in to fix the Japanese reactors? The answer is that there was a lot of progress in robotics in the 1960s and 70s and then something went wrong."

Noam Chomsky on the purported success of statistical natural-language learning methods that work by "approximating unanalyzed data" while ignoring the underlying structure of language: "That’s a notion of success which is novel; I don’t know of anything in the history of science [like this]."

Image credit: MIT

Comments (8)

  1. JamesSavik 2:02 pm 05/4/2011

    Anyone who has ever tried to program something knows that to program a system, you have to understand that system. I don’t think that anyone really understands intelligence, organic or artificial.

  2. Gary Stix 4:08 pm 05/4/2011

    Dear JamesSavik:

    I couldn’t agree more. You have pinned the tail on the problem.

    Regards,

    Gary Stix

  3. headlessplatter 8:44 am 05/5/2011

    I have a moderate understanding of chess, and a thorough understanding of alpha-beta pruning. I implemented alpha-beta pruning, and now it can whip me at chess every time. It was not my understanding of chess that helped me to write an advanced chess-playing program. Indeed, we must understand something in order to implement AI, but that something will not be intelligence. Intelligence will be the surprising adeptness of the program that emerges when we find that something.
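    The technique this comment refers to can be sketched in a few lines of Python. The tree, values and function below are a hypothetical toy illustration of alpha-beta pruning in general, not the commenter's chess program:

```python
# A minimal alpha-beta pruning sketch over a hand-built toy game tree.
# Nested lists are positions; numbers are leaf evaluation scores.

def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping (pruning) branches
    that cannot change the final decision."""
    if isinstance(node, (int, float)):      # leaf: a static evaluation score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff: the opponent will avoid this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:               # alpha cutoff
                break
        return value

# A three-ply toy tree: the maximizing player moves first.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 5
```

    Pruning never changes the value returned; it only avoids exploring branches that a plain minimax search would visit, which is what lets such a program search deeply enough to play well.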

  4. syntience 2:27 pm 05/5/2011

    If at first you don’t succeed, lower your expectations.

    AI research has insisted on correctness in a problem domain – our mundane everyday world – where correctness is unachievable; the best we can do (and the purpose of intelligence) is to jump to conclusions on insufficient evidence… To guess, but guess wisely based on a lifetime of experience. And a large fraction of those guesses are predictions about what will happen next.

    Human intelligence is not based on reasoning; it is based on understanding. Reasoning is a conscious, step-by-step, logic-based process that takes seconds to years, whereas understanding is subconscious, intuition-based and instantaneous. But reasoning is not possible unless you understand what you are reasoning ABOUT. About 99.999% of the brain’s processing bandwidth is subconscious understanding; this means AI research has spent 60 years on the wrong 0.001% of the problem [ref. Nørretranders: The User Illusion].

    http://artificial-intuition.com discusses this idea and its ramifications in about 6 pages.

    http://videos.syntience.com has five videos (four by me and one by Peter Norvig, director of research at Google) discussing these issues as well. You may want to start with the video "A New Direction In AI Research".

    See also the introductory discussion at http://hplusmagazine.com/2011/03/31/reduction-considered-harmful

    – Monica Anderson
    Director of Research, Syntience Inc.

  5. Michael137 10:25 pm 05/5/2011

    To teach a machine to think, it will have to keep track of what it learns and whether it can tag that as TRUE. It should test against directly observed facts if possible. For example, computers can be equipped with thermometers. A computer could be given the statement "All computers operate at 800 degrees Kelvin." The computer should be able to compare that with its own temperature and determine that it is being told an untruth, or lie. Learning consists of thousands or millions of facts, something like CYC uses. Also it should be remembered that computers are learning for the sake of silicon; it is their element.

    It will have to be able to correlate this, for instance, with particular words, and then it will have to be able to evaluate the results to add up how many new correlations the result creates in its existing data.
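    The thermometer example in this comment can be sketched as a few lines of Python. The function names and the sensor reading here are hypothetical stand-ins, purely for illustration of the idea:

```python
# A minimal sketch of the fact-tagging idea: check a claimed fact
# against a direct observation before tagging it TRUE.

def read_temperature_kelvin():
    # Stand-in for a real onboard thermometer; a plausible operating value.
    return 310.0

def claim_is_true(claimed_kelvin, tolerance=50.0):
    """Tag a claimed operating temperature TRUE only if it is
    consistent with what the machine directly observes."""
    observed = read_temperature_kelvin()
    return abs(observed - claimed_kelvin) <= tolerance

print(claim_is_true(800.0))  # the claim "all computers operate at 800 K": prints False
print(claim_is_true(300.0))  # consistent with the observation: prints True
```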

  7. fyngyrz 7:20 am 05/6/2011

    "Anyone who has ever tried to program something knows that to program a system, you have to understand that system."

    No. The evidence is entirely otherwise. You can learn, can you not? But do you understand your mind? No. Of course you don’t. We have very little idea how it works as yet. Yet you *can* learn. Consequently, we have absolute proof that a thorough (or even marginal) understanding of how a system works is not required for extremely high level use of that system.

    There are any number of avenues through which AI may be achieved. The problem is simply unsolved. It is, however, a problem with a solution: we — humans — stand as 100% perfect evidence that intelligent systems can and do exist. We know that our own implementation requires a large number of components, connected in 3D, operating in parallel waves. That’s about all we know. But that doesn’t mean we can’t make a system unlike our own, nor does it mean that we can’t build one without completely understanding it. Just as a kid building his first ham radio doesn’t need to understand particle physics in order to mesh tubes or even transistors together into a functional machine, the odds hugely favor creation of AI without complete (or even very extensive) understanding of the low level details.

    Further, once it’s been done, it’s a problem unlike any other, in that the results can be replicated almost instantly and at near zero cost.

    Assuming only we survive the damage our leaders are intent upon inflicting upon our society, this problem will fall just like many other apparently hard problems have. From the other (solved) side, it probably won’t even appear difficult.

  8. elhippiesupremo 7:26 am 05/7/2011

    All intelligence is artificial.

    - It is limited by our view of reality, and distorted by our perceptions.
    For more realistic intelligence:
    > More clarity and detail in sensory input is needed.
    > Less complication between inference and reality.
    Simplicity brings one closer to reality.
    Clarity allows one to know reality.
    With good inference (perception, information, … intelligence), one can interact effectively with reality.

