Message-id: <9110021720.AA14440@venera.isi.edu>
To: sowa@watson.ibm.com
Cc: neches@isi.edu, SRKB@isi.edu, INTERLINGUA@isi.edu, KR-ADVISORY@isi.edu,
        GINSBERG@t.stanford.edu, SKPEREZ@mcimail.com
Reply-To: neches@isi.edu
Subject: Re: ANSI X3H4 meeting next week 
In-reply-to: Your message of Wed, 02 Oct 91 05:39:57 -0400.
             <9110020941.AA01424@venera.isi.edu> 
Date: Wed, 02 Oct 91 10:18:17 PDT
From: Robert Neches <neches@isi.edu>

John,

If you haven't already considered doing so, may I suggest that you
also pass along your comments as a letter to the editor of AI Magazine?  I
think they raise some good points for discussion and ought to be shared
more broadly.

A few quick comments in response to some of your observations:
  
 > Basic points:  the human brain
 >does not contain a single, unified, consistent knowledge base; every
 >large computer system has evolved as a loose confederation of independent
 >modules with limited dependencies on one another; and any large knowledge
 >base that depends on global consistency is doomed to be too inflexible
 >to be usable for practical problems.

I strongly agree.  This is an extremely important point, and I'm afraid
we didn't address it clearly in our article.  In fact, what I personally
believe (I don't want to presume to speak for the other authors) is that
a large system will need to be built of modules that share ontologies in
the sense that the ontologies *overlap* with respect to issues about which
the modules must communicate.  This doesn't mean that the modules have to
have exactly identical ontologies, which the article unfortunately may have
implied.
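
To make the overlap idea concrete, here is a toy sketch in Python.  The
module names and concept sets are invented purely for illustration; nothing
below comes from the article, it is just one way of picturing the point that
two modules need overlapping, not identical, ontologies:

# Hypothetical sketch: each module declares the concepts it understands,
# and a message is intelligible only if every concept it uses lies in the
# overlap between sender and receiver.  Names and concept sets are made up.

class Module:
    def __init__(self, name, ontology):
        self.name = name
        self.ontology = set(ontology)   # concepts this module understands

    def shared_vocabulary(self, other):
        """Concepts both modules understand: the overlap, not the union."""
        return self.ontology & other.ontology

    def can_discuss(self, other, message_concepts):
        """True if every concept in the message lies in the overlap."""
        return set(message_concepts) <= self.shared_vocabulary(other)


planner = Module("planner", {"task", "deadline", "resource", "plan-step"})
scheduler = Module("scheduler", {"task", "deadline", "machine", "time-slot"})

# The ontologies differ, but they overlap on {"task", "deadline"}, which is
# enough for the messages these two modules actually need to exchange.
print(planner.can_discuss(scheduler, {"task", "deadline"}))   # True
print(planner.can_discuss(scheduler, {"plan-step"}))          # False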

A module may even need to internally maintain multiple, inconsistent or
conflicting ontologies.  (E.g., I could imagine a module that reasons in one
ontology for its own purposes and maintains another for the purpose of
understanding communications from other modules.)  I personally believe that
this will be a necessity for building intelligent systems at the level of
human performance.  The state of the art today is that we don't know how to
support that; it's clearly a research issue.  However, there is a broad range
of useful systems that can still be built without having solved that problem.
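
Again purely as an illustration (the names and the mapping below are made
up, not drawn from any existing system), here is one way such a module might
look: it keeps a private ontology for its own reasoning and a separate one
for communication, with an explicit and possibly partial mapping between
them for translating incoming messages:

# Hypothetical sketch of a module with two ontologies: an internal one for
# its own reasoning and an external one used only to interpret messages from
# other modules.  All names and mappings are invented for illustration.

class BilingualModule:
    def __init__(self, internal_ontology, communication_ontology, mapping):
        self.internal = set(internal_ontology)
        self.external = set(communication_ontology)
        # mapping: external concept -> internal concept (may be partial)
        self.mapping = dict(mapping)

    def interpret(self, message_concepts):
        """Translate an incoming message into internal terms, noting any
        concepts that cannot be mapped and so would need clarification."""
        translated, unmapped = [], []
        for concept in message_concepts:
            if concept in self.mapping:
                translated.append(self.mapping[concept])
            else:
                unmapped.append(concept)
        return translated, unmapped


diagnoser = BilingualModule(
    internal_ontology={"fault-mode", "component", "symptom"},
    communication_ontology={"error-report", "device"},
    mapping={"error-report": "symptom", "device": "component"},
)

print(diagnoser.interpret(["error-report", "device"]))   # (['symptom', 'component'], [])
print(diagnoser.interpret(["maintenance-schedule"]))     # ([], ['maintenance-schedule'])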

 >I don't believe that Ginsberg's points or the knowledge soup ideas
 >imply that standards are impossible.  But I believe that they require
 >that the overall framework must accommodate an open-ended number of
 >modules that may be inconsistent and incompatible with one another.
 >Different modules may have different assumptions, different languages,
 >different reasoning methods, and even different ontologies.

In line with my comments above, I'm in qualified agreement with you up to the
part about different ontologies.  Simon observed in "The Sciences of the
Artificial" that any complex system, and intelligent systems in particular,
must of necessity be constructed out of "nearly-decomposable" subsystems (i.e.,
modules whose interactions can be neglected except at their interfaces).  I view
your statement as pointing out the implications for knowledge-based systems of
the concept of nearly-decomposable systems.  However, to me, the overlap of
ontologies between two modules is what defines their interface.

In some sense, the notion of a shared ontology between two communicating
modules is circular by definition.  An ontology is what two modules must have
in common to understand each other's messages; therefore there must be one,
or they could not communicate.  The interesting question is the extent to
which it's possible to make those ontologies explicit and reusable...

Thanks for your comments.

  -- Bob