What is AI?

Date: Tue, 14 Sep 93 21:22:56 -0400
From: Dan Schwartz <schwartz@iota.cs.fsu.edu>
Message-id: <9309150122.AA00320@iota.cs.fsu.edu>
To: interlingua@ISI.EDU
Subject: What is AI?

John McCarthy writes:

>Dan Schwartz wrote:
>
>     A definition I sometimes give to my students is: AI is that
>     field of endeavor which is concerned with understanding the
>     activities of the human mind and simulating those activities
>     on a computer.
>
>This is wrong for two reasons.
>
>1. AI is not limited to those methods of achieving goals that are
>used by humans.
>
>2. The main methods of research and development in AI involve
>investigating the kinds of intellectual problems that arise
>in the world in the achievement of goals.  The methods of
>psychology and neurophysiology are hardly used, although not
>for lack of trying.  In short, as Feynman said, the proper study
>of mankind is not just man, but the world.

Actually, I completely agree, and this comes out when I further clarify
and elaborate the meaning of the word "simulate."  An analogy I
sometimes give is that of a bulldozer.  Such machines are far more
effective at moving large piles of dirt than a group of humans using
their hands and fingernails would be.  This is why bulldozers were
invented.  They are extensions of the human body developed for
achieving a particular _goal_, and even though their manner of doing so
involves methods far different from those embodied in the human muscle
system, they can nonetheless be said to _simulate_ the same activity.

The same holds for simulating human reasoning on a digital computer.
Such machines clearly do not employ the same electrochemical processes
as the human brain, but they nonetheless serve as extensions of the
human brain that achieve a similar _goal_, e.g., deductive inference.

Thus construed, I think that my use of "simulate" meshes fairly well
with your terminology of goal achievement.

If there is an error in my definition, I think it stems only from the
deeper observation that, strictly speaking, one does not simulate human
reasoning directly, but rather one simulates certain _models_ of human
reasoning.  First-order logic is one example of such a model, and Prolog
provides a means to simulate a certain subset of that model.
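
To make the Prolog remark concrete, here is a minimal sketch, written
in Python purely for illustration, of the Datalog-like Horn-clause
subset that such systems capture: a few ground facts, two rules, and a
forward-chaining loop that derives new facts until a query is settled.
The predicates (parent, ancestor) and helper names (unify, match_body,
forward_chain) are hypothetical, invented just for this example.

# A toy deductive-inference engine over Horn clauses, sketching the
# Datalog-like subset of first-order logic that a Prolog-style system
# simulates.  Illustrative only: the predicates and helper names here
# are invented for this example.

facts = {("parent", "ann", "bob"), ("parent", "bob", "cat")}

# Rules written as (head, body): body_1 & ... & body_n  =>  head.
# Strings beginning with "?" are variables.
rules = [
    (("ancestor", "?x", "?y"), [("parent", "?x", "?y")]),
    (("ancestor", "?x", "?z"),
     [("parent", "?x", "?y"), ("ancestor", "?y", "?z")]),
]

def unify(pattern, fact, bindings):
    """Match one atom against a ground fact, extending the bindings."""
    if len(pattern) != len(fact):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if bindings.get(p, f) != f:
                return None
            bindings[p] = f
        elif p != f:
            return None
    return bindings

def match_body(body, facts, bindings):
    """Yield every binding set that satisfies all atoms in a rule body."""
    if not body:
        yield bindings
        return
    first, rest = body[0], body[1:]
    for fact in facts:
        extended = unify(first, fact, bindings)
        if extended is not None:
            yield from match_body(rest, facts, extended)

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new ground facts appear."""
    derived = set(facts)
    while True:
        new_facts = set()
        for head, body in rules:
            for bindings in match_body(body, derived, {}):
                conclusion = tuple(bindings.get(t, t) for t in head)
                if conclusion not in derived:
                    new_facts.add(conclusion)
        if not new_facts:
            return derived
        derived |= new_facts

if __name__ == "__main__":
    closure = forward_chain(facts, rules)
    print(("ancestor", "ann", "cat") in closure)   # prints: True

Running it prints True: the program deduces ancestor(ann, cat) from the
two parent facts by mechanical rule application, achieving the same
deductive _goal_ a person would, though by nothing resembling the
brain's electrochemistry.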

Nowadays such models abound, and more are being discovered every day.
This is why I agree with Pat Hayes and disagree with John Sowa on the
matter of standards.  Human knowledge is in a perpetual process of
evolution, including our understanding of our own cognitive abilities
and of the evolutionary process itself.  Thus it just doesn't make sense
to me to speak of standards for the totality of all present and future
Krep.  Unless, of course, one hopes to standardize a representation of
the entire general evolutionary process, and I wish anyone very good
luck with that!

The point about the _methods_ of psychology and neurophysiology is
well-taken, but I do believe that the _insights_ gained from these
fields have already had a significant impact, and I'm optimistic that
more will be forthcoming.  I view man as an aspect of the world, and I
like to study everything.

--Dan Schwartz