Scholar calls Artificial Intelligence back to its origins

By Roger Snodgrass

Not every goal in science and technology can be achieved by massive funding alone. Sometimes it takes more than integrating the components.

Sometimes, according to Pei Wang, a scholar in the field of Artificial General Intelligence, the answer can only be found by keeping the big picture in view.

“Artificial General Intelligence is the old Artificial Intelligence,” he said, describing a distinction invisible to most people outside the immediate field. “For several years, I refused to yield to that new term. Later, I began to see the use of this new label.”

The field of what is commonly known as artificial intelligence began in the middle of the 20th century expecting to make machines that could think. The first pioneers mistakenly thought the job could be done in relatively short order.

A proposal written by John McCarthy of Dartmouth College called for a two-month, 10-man study of artificial intelligence that was carried out in the summer of 1956.

“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it,” McCarthy wrote, expecting “significant advances could be made on several anticipated problems if a carefully selected group of scientists worked on it.”

A contemporary descendant of that tradition, Wang visited Los Alamos National Laboratory and other venues in northern New Mexico last week, where he talked about what had become of the quest since then and his own approaches to its riddles.

Wang is the editor of the “Journal of Artificial General Intelligence” and one of the leading voices calling for a return to the roots of the problem. He teaches programming, data structures and algorithms at Temple University in Philadelphia, but he is also working on his own blueprint for a thinking machine.

Computers may be the closest approximation we have to a thinking machine, but one of the most stunning realizations of the last century was that engineers can’t simply increase a computer’s speed and memory and expect it to start having thoughts.

Despite impressive performances by computers, especially in the last two decades, and some lavish investments, including a billion-dollar thrust by the Defense Advanced Research Projects Agency from 1983 to 1993, not a single computer has yet been caught in the act of thinking.

IBM’s Deep Blue was programmed to beat a human grandmaster at chess, but “can do nothing else,” Wang said.

The effort that was once unified has dispersed.

There are daily stories about new feats of autonomous battlefield vehicles, but critics of mainstream research in artificial intelligence say much of the effort is now trivial and disconnected.

A recent study from the Georgia Institute of Technology, for instance, developed a prioritized list of household objects for robots to retrieve for physically impaired patients.

The resulting inventory of 43 objects, headed by a TV remote, a medicine pill, a prescription bottle, eyeglasses and a cordless phone, was specified by maximum weight and size as a guideline for design and performance criteria.

That way, when a home-care robot does arrive, it will at least have reliable specifications for its retrieval duties.

Meanwhile, the robot that can learn and solve problems on its own is nowhere to be found.

In a colloquium at LANL’s Center for Nonlinear Studies, Wang defined intelligence as “the ability to adapt to the environment and work with insufficient knowledge and resources.”

Along with that, Artificial General Intelligence would “be open to novel experiences” and “respond in real time based on a finite processing capacity.”

His emphasis is less on an overstocked machine and more on the “reasoning system” that runs the machine by means of a formal language, some rules, a memory structure “to store the knowledge or tasks” and a control mechanism “to decide which inference to carry out.”

His system is called NARS, which stands for Non-Axiomatic Reasoning System, and is documented at http://nars.wang.googlepages.com/home.
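As a rough illustration of that architecture, and not Wang’s own NARS code, the sketch below spells out the four pieces in a few dozen lines of Python: a toy formal language, one example inference rule, a memory for beliefs and tasks, and a control loop that works under a finite budget of steps. All names and numbers in it are assumptions made for this example.

```python
# Illustrative sketch of a reasoning system with the four components Wang
# describes: a formal language, inference rules, a memory of knowledge and
# tasks, and a control mechanism that decides which inference to carry out.
# This is not NARS itself; everything here is simplified for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class Statement:
    """The formal language, reduced to 'subject is-a predicate' statements."""
    subject: str
    predicate: str


@dataclass
class Task:
    """A piece of knowledge or a question, carrying an urgency value."""
    statement: Statement
    priority: float


def deduction(a, b):
    """One example rule: from 'x is y' and 'y is z', derive 'x is z'."""
    if a.predicate == b.subject:
        return Statement(a.subject, b.predicate)
    return None


class Memory:
    """Stores the accepted beliefs and the queue of pending tasks."""
    def __init__(self):
        self.beliefs = []
        self.tasks = []


def control_loop(memory, steps=100):
    """The control mechanism: under a finite budget of steps, repeatedly
    pick the most urgent task, try the rule against existing beliefs,
    and queue whatever new statements are derived."""
    for _ in range(steps):
        if not memory.tasks:
            break
        memory.tasks.sort(key=lambda t: t.priority, reverse=True)
        task = memory.tasks.pop(0)
        for belief in list(memory.beliefs):
            for derived in (deduction(task.statement, belief),
                            deduction(belief, task.statement)):
                if derived and derived not in memory.beliefs:
                    memory.tasks.append(Task(derived, task.priority * 0.8))
        if task.statement not in memory.beliefs:
            memory.beliefs.append(task.statement)


# Example: feed it two statements and let it derive a third on its own.
m = Memory()
m.tasks.append(Task(Statement("robin", "bird"), 1.0))
m.tasks.append(Task(Statement("bird", "animal"), 1.0))
control_loop(m)
print(m.beliefs)  # now includes Statement(subject='robin', predicate='animal')
```

Given “a robin is a bird” and “a bird is an animal,” the loop concludes that a robin is an animal, which conveys the flavor of the inference Wang means, stripped of the machinery NARS uses to handle uncertainty and limited resources.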

An axiom is a “self-evident” or “universally recognized” truth. But an axiomatic reasoning system does not adequately reflect how much uncertainty there is in our perceptions or interpretations of the world.

“When it comes to human thinking,” said Wang, “almost nothing is certain.”

Although his explanation quickly becomes very technical, one of the interesting things about a “non-axiomatic” reasoning system is that nothing is taken for granted; every conclusion is instead grounded in, and revisable by, experience.
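A rough way to picture that, again as an illustration rather than Wang’s own code, is to store a statement’s truth as counts of evidence instead of a fixed true-or-false value. The frequency and confidence measures below follow the general evidence-counting pattern Wang describes for NARS, but the constant, names and numbers are assumptions made for this example.

```python
# Illustrative sketch of "experience-grounded" truth: a statement carries
# counts of supporting and total evidence rather than a true/false label.
from dataclasses import dataclass

K = 1.0  # assumed "evidential horizon": how much future evidence is allowed for


@dataclass
class Evidence:
    positive: float = 0.0   # observations supporting the statement
    total: float = 0.0      # all relevant observations, for or against

    def frequency(self):
        """Proportion of past experience that supports the statement."""
        return self.positive / self.total if self.total else 0.5

    def confidence(self):
        """How stable that proportion is: more evidence means more confidence,
        but it never reaches 1.0, so no belief ever hardens into an axiom."""
        return self.total / (self.total + K)


# "Birds fly": suppose the system has seen 9 flying birds and 1 penguin.
birds_fly = Evidence(positive=9, total=10)
print(birds_fly.frequency(), birds_fly.confidence())   # 0.9, ~0.91

# New experience simply revises the counts; the conclusion stays open.
birds_fly.total += 1  # another penguin
print(birds_fly.frequency(), birds_fly.confidence())   # ~0.82, ~0.92
```

Under such a scheme, confidence grows with experience but never becomes certainty, which is the point of calling the system non-axiomatic.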

In a second talk a few days later, at the Santa Fe Complex, Wang gave a more general presentation about Artificial General Intelligence, offering examples of a number of active alternative approaches and once again emphasizing that the problem needed to be solved as a whole.