Tuesday, April 27, 2010

“How Long Till Human-Level AI? What Do the Experts Say?”

On this topic there is a very good article entitled “How Long Till Human-Level AI? What Do the Experts Say?”, written by Ben Goertzel, Seth Baum, and Ted Goertzel, at http://hplusmagazine.com/articles/ai/how-long-till-human-level-ai

To me, its most important information is in the figure entitled “When Will Milestones of AGI be Achieved without New Funding”. It indicates that, of the 21 attendees at the AGI 2009 conference who answered the survey, 42% think AGIs capable of passing the Turing Test will be created within ten to twenty years.

Oddly, that is slightly more than the 38% who think AGIs would achieve the human-like capabilities of a third grader within the same time frame. This might reflect the fact that many of the attendees have been influenced by the famous ELIZA experiment, a quasi Turing Test that, using mid-1960s computers, actually managed to fool some people into thinking they were reading text generated by a human psychotherapist.
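It is worth remembering just how little machinery that took. ELIZA worked by simple keyword matching and canned “reflections”. The following is a minimal sketch of that general technique (not Weizenbaum’s actual DOCTOR script, and omitting details such as pronoun swapping); it is enough to produce superficially plausible replies with no understanding at all:

```python
import re

# A minimal ELIZA-style responder: keyword patterns mapped to canned
# reflections. Illustrative sketch only -- not the original 1960s script.
RULES = [
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching canned reflection, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

if __name__ == "__main__":
    print(respond("I am worried about my health"))
    # -> How long have you been worried about my health?
    print(respond("It rained today"))
    # -> Please go on.
```

A judge who does not probe the system can mistake these reflections for attentive listening, which is exactly why a casual imitation game sets such a low bar.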

I have always assumed the Turing Test would be administered by humans who understood human psychiatry, brain function, and artificial intelligence well enough to smoke out a sub-human intelligence relatively quickly.

In fact, I am the person quoted in that article giving reasons why I thought it would be more difficult to make a computer pass the Turing Test than to possess many of the other useful intellectual capabilities of a powerful human mind, as in the paragraph that follows:

“One observed that “making an AGI capable of doing powerful and creative thinking is probably easier than making one that imitates the many, complex behaviors of a human mind — many of which would actually be hindrances when it comes to creating Nobel-quality science.” He observed “humans tend to have minds that bore easily, wander away from a given mental task, and that care about things such as sexual attraction, all of which would probably impede scientific ability, rather than promote it.” To successfully emulate a human, a computer might have to disguise many of its abilities, masquerading as being less intelligent — in certain ways — than it actually was. There is no compelling reason to spend time and money developing this capacity in a computer.”


I thought the idea, suggested in one of the survey questions mentioned in the article, that AGI might be funded with 100 billion dollars was a little rich. I understand, however, that such a large figure was picked, in effect, to ask people how fast they thought AGI would be developed if money were virtually no obstacle.

I think AGI could be developed over ten years for well under 500 million dollars if the right people were administering and working on the project. (This does not count all the other money that is already likely to be invested in electronics, computer science, and narrower AI in the coming decade.) Unfortunately, it would be hard for the government to know who the right people, and what the right approaches, were for such a project. But I believe a well-designed project aimed at human-level AGI almost certainly could succeed in ten years with only 2 to 4 billion dollars of funding over that period. Such a project would fund multiple teams with, say, 10 to 30 million dollars each to start, and then increasingly allocate funding over time to the teams and approaches that produced the most promising results.
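One way to picture that “fund many teams, then concentrate on the leaders” idea is a simple proportional reallocation rule. The sketch below uses made-up team names, progress scores, and budget figures purely for illustration; a real program would of course need judgment about what counts as progress.

```python
# Illustrative sketch only: split a hypothetical annual AGI budget among
# teams in proportion to (assumed) progress scores, so funding shifts
# toward the most promising approaches over time.
def allocate(annual_budget: float, progress: dict) -> dict:
    """Return each team's grant, proportional to its progress score."""
    total = sum(progress.values())
    return {team: annual_budget * score / total
            for team, score in progress.items()}

# Hypothetical numbers: $300M in year three, three teams,
# higher score = more promising results so far.
year3 = allocate(300e6, {"team_a": 9, "team_b": 4, "team_c": 1})
for team, grant in year3.items():
    print(f"{team}: ${grant / 1e6:.1f}M")
# team_a: $192.9M, team_b: $85.7M, team_c: $21.4M
```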

2 to 4 billion dollars over ten years would be totally within the funding capacity of multiple government agencies.

Developing AGI in that time frame would be exceptionally valuable to America, because it would give us a tremendous chance to save our economy before it is bled to death by our trade imbalance with the rapidly developing world and by the many tens of trillions of dollars in health care and other unfunded benefits America owes its seniors and government workers.
