X-Sender: hovy@quark.isi.edu
Message-id: <v02120d2dac99b5e772b7@[128.9.208.191]>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Thu, 5 Oct 1995 12:04:30 -0500
To: phayes@ai.uiuc.edu (Pat Hayes)
From: hovy@isi.edu (Eduard Hovy)
Subject: Re: A simplistic definition of "ontology"
Cc: srkb@cs.umbc.edu
Sender: owner-srkb@cs.umbc.edu
Precedence: bulk
At 2:15 PM 10/5/95, Pat Hayes wrote:
>At 2:59 PM 10/4/95 -0500, Eduard Hovy wrote:
>>What does "specify a concept" mean, if not just to list the relationships
>>of the concept with other concepts?
>
>It means giving a structured *theory* of those relationships, not just
>listing relation names and assuming that the reader knows what they mean. I
>like my theories written as axioms, but everyone has their taste: but in
any case, it's a lot more than just a list. You can't infer anything from a
>list.
Words like "theory" (and especially "*theory*" :-) ) confuse me; may I
rephrase how I read your position? You define an ontology as a set of
symbols and relationships among them. Some of these relationships are
also named by symbols (like "cause" or "actor") and others are not (and
these you call inferences). (I know most KR people will think that
viewing inferences/axioms as relationships is crazy and stop reading,
but the result is a more creative, open-minded audience; great.) For
this latter kind of relationship (namely, inferences/axioms) you also
imagine(? have? specify?) an engine that "reads" them and "creates" new
relationships, such as:
- instantiation: "relating" a concept to the privileged state of
Being-an-Instance (of course this is not the standard way of looking
at it and all the remaining KR people will stop reading at this point,
but I hope you'll see what I am trying to say);
- Being-True (or Being-False): "relating" a relation to the privileged
state of Truth or Falsity (ditto to above);
- adding new aspects to existing instances: "relating" one instance to
another, such as inferring that X is a Mother because X is Female and
has Children -- the standard notion of inference (a toy sketch follows
this list).
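To make that last kind concrete, here is a toy sketch (the fact format,
the names, and the little "engine" are all invented for illustration;
no real KR system is implied):

    # Toy illustration: the "engine" is just a loop that reads one rule
    # and adds a new relationship to the fact base.
    facts = {
        ("isa", "x1", "Female"),
        ("has-child", "x1", "x2"),
        ("isa", "x3", "Female"),   # female but childless: not a Mother
    }

    def infer_mothers(facts):
        """Rule: Female(X) and HasChild(X, _) => Mother(X)."""
        new = set()
        for (rel, x, cls) in facts:
            if rel == "isa" and cls == "Female":
                if any(f[0] == "has-child" and f[1] == x for f in facts):
                    new.add(("isa", x, "Mother"))
        return new

    print(infer_mothers(facts))   # {('isa', 'x1', 'Mother')}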
What puzzles me is that you seem not to want to talk about your engine;
you just want the axioms to be there to support inference should some
engine happen to come along. This seems to me real close to what the
listifying 'glosser'-ontologizers do; they also don't worry about an
engine, and they do include in their "ontologies" such things as number
constraints (usual number of eyes for a typical human is two, etc.).
What you need from a 'gloss'-type ontology for NLP, for example, is not
just a list (i.e., a taxonomy). You need more. When an NLP parser
analyzes "both his eyes were blue" (versus "both his eyes and his nose
are runny"), it needs to look up the likely number of eyes that are blue
and find two, to allow "both" to bind in. The fact that glosser-ontolo-
gizers write their knowledge as `passive' default-type and put the
action in the parser/analyzer, while you write your knowledge in some
other form presumably(?) more(?) amenable to an `active'(?) engine seems
to me irrelevant. In both cases you can support the inference that
"his eyes are blue" means we are talking about two eyes.
The difference between the two (pseudo-)types of ontologizer to me lies
in the mode of use of the inferential knowledge. This seems to me simply
a reflection of the different kinds of tasks they are doing. NLP people
put their inferences into the lexicon, sometimes into the grammar, and
only when they get heavily semantic do they start putting them into the
KR system, but their inference engine is usually separate from the KR
system, buried inside the parser/analyzer/lexical item selector -- which
is why most NLP-ontologizers today are of the glosser ilk.
It's a mystery to me why people would want to use inferences in an active
mode within a KR system without having a clear task in mind. As soon as
you have a realistic number of inference possibilities you end up with
the inference explosion problem (as Rieger & Schank did even in the
early 70's). You must do inference on demand, and the demand must come
from an application system, and must therefore be governed somehow by
that system. Without a clear task, you end up writing passive inferences
without a particular engine, and hence you do the same thing the glosser-
ontologizers are doing, but focusing more of your attention on writing
the inferences and less on the engine.
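For contrast with the eager loop sketched earlier, here is what
inference on demand looks like in the same toy setting (again, every
name is invented for illustration): nothing is derived until something
asks.

    FACTS = {("isa", "x1", "Female"), ("has-child", "x1", "x2")}

    # Rule, indexed by the class it concludes:
    # Mother(X) if Female(X) and HasChild(X, _).
    RULES = {"Mother": ("has-child", "Female")}

    def holds(goal_class, x):
        """Derive ("isa", x, goal_class) only when asked -- on demand."""
        if ("isa", x, goal_class) in FACTS:
            return True
        rule = RULES.get(goal_class)
        if rule is None:
            return False
        rel, needed_class = rule
        return holds(needed_class, x) and any(
            f[0] == rel and f[1] == x for f in FACTS)

    print(holds("Mother", "x1"))   # True, derived only because we asked

Here the demand comes from the single print call; in a real system it
would come from the application, which is exactly the governing role
argued for above.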
>My worry about the glossaries is that they lack such a thing, and indeed seem
>not intended to ever have such a thing: they are organised repositories of
>information for human readers, and that's ALL.
While it is true that most glosser-type ontologies today lack a lot of
inferential/default/etc. knowledge, I think that lack is not, in
principle, a necessary outcome of the tasks they serve. One must not
let a too-brief historical perspective support overhasty
differentiation...
>I know this may not be in the West Coast
>spirit of Unification and Wholeness, but I think that in science it is
>better to actually seek out and catalog differences rather than obscure
>them.
Way cool, dude.
E
----------------------------------------------------------------------------
Eduard Hovy
email: hovy@isi.edu USC Information Sciences Institute
tel: 310-822-1511 ext 731 4676 Admiralty Way
fax: 310-823-6714 Marina del Rey, CA 90292-6695
project homepage: http://www.isi.edu/natural-language/nlp-at-isi.html