Re: clarifying clarifying ontologies

"Kenneth D. Forbus" <forbus@ils.nwu.edu>
Date: Mon, 7 Aug 95 21:55:49 CDT
Message-id: <9508080255.AA26171@aristotle.ils.nwu.edu>
X-Sender: forbus@ils.nwu.edu
X-Mailer: Windows Eudora Version 2.1.1
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
To: hovy@isi.edu (Eduard Hovy), "Kenneth D. Forbus" <forbus@ils.nwu.edu>
From: "Kenneth D. Forbus" <forbus@ils.nwu.edu>
Subject: Re: clarifying clarifying ontologies
Cc: phayes@cs.uiuc.edu (Pat Hayes), hovy@isi.edu (Eduard Hovy),
        fritz@rodin.wustl.edu (Fritz Lehmann), cg@cs.umn.edu,
        doug@csi.uottawa.ca, srkb@cs.umbc.edu
Sender: owner-srkb@cs.umbc.edu
Precedence: bulk
At 07:33 PM 8/7/95 -0700, Eduard Hovy wrote:
>This very strong statement might be true for qualitative physics, but it 
>certainly isn't for NLP.  Most large NLP systems, parsers and generators, 
>find it convenient to use taxonomies of high-level generalizations under 
>which their KR symbols are organized.  The most elaborate taxonomizations 
>are usually used for Objects, and the least elaborate for Qualities, with 
>Processes/Events somewhere in between.  
>
>No doubt if NLP systems performed detailed semantic inference as part of 
>their job they'd need the kinds of detailed axioms Ken and Pat are talking 
>about, but in practice, for many applications, NLP systems seldom need to 
>(instead being asked to operate over so wide a range that constructing 
>axioms to support such reasoning is impractical).  Mainly the systems 
>tend to need to know what general class of thing (syntactic or semantic, 
>depending on the system) a symbol belongs to, in order that it may be 
>properly handled.  That's what a taxonomy provides.  

These are good points, and I agree that this perspective makes sense.  
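If I understand the role you describe, the class lookup a taxonomy provides
amounts to walking up an ISA hierarchy to find what general kind of thing a
symbol is. A toy sketch (all symbols and class names here are hypothetical,
just for illustration):

```python
# Toy ISA taxonomy: each symbol maps to its direct superclass.
# Real NLP taxonomies are far larger (50,000+ symbols), but the
# lookup a parser or generator needs is essentially this walk.
ISA = {
    "dog": "Animal",
    "Animal": "Object",
    "run": "Process",
    "red": "Quality",
}

def general_class(symbol):
    """Walk up the ISA links to the most general class of a symbol."""
    while symbol in ISA:
        symbol = ISA[symbol]
    return symbol

print(general_class("dog"))  # -> Object
print(general_class("red"))  # -> Quality
```

The point being that such a lookup needs no axioms at all, only the links.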

Let me inquire further: for any particular application, surely there needs to
be some deeper semantics for at least some aspects of what the dialog is about?
Presumably the taxonomies in the NL system are reasonably consistent with
whatever axiomatization is used in those areas with the deeper semantics.
It would seem to me that the form of the deeper semantics would seriously
constrain the taxonomies, so mismatches between the two seem possible.  In
your experience, (a) do such mismatches occur, and (b) if they do, how are
they resolved?


>>I don't want to be too negative.  I think the growth of a community that
>>takes knowledge representation seriously, that is, is actually committed to
>>REPRESENTING KNOWLEDGE in ways that can be combined usefully to ultimately
>>create the kind of understanding of intelligence and very smart software
>>that we all ultimately want, is a wonderful thing.  But I ask that you
>>please don't postpone diving into the deep waters of actually fleshing out
>>the knowledge in some particular area(s) in favor of spending all your time
>>splashing around in the shallows of taxonomy creation.  
>
>It's not so easy to create useful taxonomies, actually.  Especially when 
>you have to worry about 50,000+ symbols.  

Okay, to stretch my metaphor, some folks are striving for distance rather
than depth, and it doesn't take very deep water to go a long way.  Fair enough.

        Ken