Re: A simplistic definition of "ontology"

Message-id: <199510052112.AA03241@a.cs.uiuc.edu>
X-Sender: phayes@tubman.cs.uiuc.edu
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Thu, 5 Oct 1995 16:27:53 -0600
To: hovy@isi.edu (Eduard Hovy)
From: phayes@ai.uiuc.edu (Pat Hayes)
Subject: Re: A simplistic definition of "ontology"
Cc: srkb@cs.umbc.edu
Sender: owner-srkb@cs.umbc.edu
Precedence: bulk

Ed, we are talking past one another. I think that you are understanding
'glossary' in a different way than I intend; perhaps this is my fault: I
used the word informally because I didn't have another one. I have no case
to make against anything that is done in NLP. I repeat, take a look at the
'business' glossaries that some of the more applied ontologizers wish to
include in the fold. They seem (?) to have nothing to do with NLP or any
other AI activity.

However, that having been said.....

At 12:04 PM 10/5/95 -0500, Eduard Hovy wrote:
>At 2:15 PM 10/5/95, Pat Hayes wrote:
>>At  2:59 PM 10/4/95 -0500, Eduard Hovy wrote:
>>>What does "specify a concept" mean, if not just to list the relationships 
>>>of the concept with other concepts? 
>>
>>It means giving a structured *theory* of those relationships, not just
>>listing relation names and assuming that the reader knows what they mean. I
>>like my theories written as axioms, but everyone has their taste: but in
>any case, it's a lot more than just a list. You can't infer anything from a
>>list.
>
>Words like "theory" (and especially "*theory*" :-) ) confuse me; may I 
>rephrase how I read your position?  You define an ontology as a set of 
>symbols and relationships among them.  Some of these relationships are 
>also named by symbols (like "cause" or "actor") and others are not (and 
>these you call inferences).  (I know most KR people will think that 
>viewing inferences/axioms as relationships is crazy and stop reading, 
>but the result is a more creative open-minded audience; great.) 

Probably just a more confused audience (like me at this point).
Congratulations. What kind of relationships are these? Relationships
between what? Even if they are relationships, they aren't the relations that
the KR language is talking about, since those are MENTIONED in the
inferences. (I will use uppercase for emphasis, since asterisk-brackets
confuse you; italics would be better.)

Example: an ontology uses the symbols 'before' and 'when', and allows such
constructions as 'before(when(end-of-WW2), when(Johnson-is-president))'. An
ontology might include such axioms as 'before(x,y) implies not
before(y,x)', and from this and the previous sentence we can conclude
'not before(when(Johnson-is-president), when(end-of-WW2))' by using familiar
logical principles. Call the first long sentence A and the second one B,
and we could say that the ontological axiom has defined a 'relationship'
between A and B, but THAT relationship isn't anything to do with
time-ordering (in particular).
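
Spelled out, B is just one application of the asymmetry axiom to A. Here
is a minimal sketch of the derivation in Lean (the type and the names of
the time-points are invented for this example; nothing here is tied to any
particular system):

    -- Hypothetical time-points and a primitive 'before' relation.
    axiom TimePoint : Type
    axiom before : TimePoint → TimePoint → Prop
    axiom endOfWW2 : TimePoint
    axiom johnsonIsPresident : TimePoint

    -- The ontological axiom: 'before' is asymmetric.
    axiom before_asymm : ∀ x y : TimePoint, before x y → ¬ before y x

    -- Sentence A.
    axiom A : before endOfWW2 johnsonIsPresident

    -- Sentence B follows by familiar logical principles (modus ponens).
    theorem B : ¬ before johnsonIsPresident endOfWW2 :=
      before_asymm endOfWW2 johnsonIsPresident A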

>For this latter kind of relationship (namely, inferences/axioms) you also
>imagine(? have? specify?) an engine that "reads" them and "creates" new 
>relationships, such as: 
>- instantiation: "relating" a concept to the privileged state of
>  Being-an-Instance (of course this is not the standard way of looking 
>  at it and all the remaining KR people will stop reading at this point, 
>  but I hope you'll see what I am trying to say); 
>- Being-True (or Being-False): "relating" a relation to the privileged 
>  state of Truth or Falsity (ditto to above); 
>- adding new aspects to existing instances: "relating" one instance to  
>  another, such as inferring that X is a Mother because X is Female and 
>  has Children -- the standard notion of inference.  

Yes, OK, see above. If you want to talk in this curious way, you are free
to do so: let's just be clear that we are not using language in the same
way. 

By the way, this ISN'T the standard notion of inference: to get that
conclusion, you also need something like '(has-children(x) & female(x))
implies mother(x)', which might well be one of the axioms of the motherhood
Ontology. Whether you call it an axiom or you call it built-in code
probably depends on whether the system is implemented in Prolog or C++, but
it has to be there somewhere.
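
To make the same point in running code: here is that axiom written once as
data and applied by a trivial one-step engine. A minimal sketch in Python
(toy fact base, invented predicate names, standing in for Prolog or C++
alike):

    # The motherhood axiom applied by a single forward-chaining step.
    # Whether the rule lives in an axiom list or is compiled into the
    # program, it has to be represented somewhere.
    facts = {("female", "x"), ("has-children", "x")}

    def motherhood_step(facts):
        """(has-children(x) & female(x)) implies mother(x)."""
        derived = {("mother", ind) for (pred, ind) in facts
                   if pred == "female" and ("has-children", ind) in facts}
        return facts | derived

    print(motherhood_step(facts))  # now also includes ('mother', 'x')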

>What puzzles me is that you seem not to want to talk about your engine; 
>you just want the axioms to be there to support inference should some 
>engine happen to come along. 

See the concept of 'epistemological adequacy' described in McCarthy and
Hayes 1969 and many papers written since.

>This seems to me real close to what the
>listifying 'glosser'-ontologizers do; they also don't worry about an 
>engine, and they do include in their "ontologies" such things as number 
>constraints (usual number of eyes for a typical human is two, etc.).  
>What you need from a 'gloss'-type ontology for NLP, for example, is not 
>just a list (i.e., a taxonomy).  You need more. 

Well then why did you define an ontology to be a list of relations? I have
been responding to that, but now you seem to have changed your mind.

>When an NLP parser
>analyzes "both his eyes were blue" (versus "both his eyes and his nose 
>are runny"), it needs to look up the likely number of eyes that are blue 
>and find two, to allow "both" to bind in. 

That term 'bind in' is where the actual inference is done, I suspect. You
want to call it binding or whatever, fine: the point is that there are some
(semantically principled) rules/systems/engines/whatever which manipulate
the formalism. 
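
Whatever one calls it, the check Ed describes is easy to write down. A
hypothetical sketch in Python (the table and the names are mine, not taken
from any actual parser):

    # Assumed world knowledge: typical counts for some body parts.
    TYPICAL_COUNT = {"eye": 2, "nose": 1, "finger": 10}

    def both_binds(noun):
        """'both' is licensed only when the expected count is exactly two."""
        return TYPICAL_COUNT.get(noun) == 2

    print(both_binds("eye"))   # True:  "both his eyes were blue"
    print(both_binds("nose"))  # False: "both his noses" does not bind

The comparison against 2 is exactly the (small) inference step I mean.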

>The fact that glosser-ontolo-
>gizers write their knowledge as `passive' default-type and put the 
>action in the parser/analyzer, while you write your knowledge in some 
>other form presumably(?) more(?) amenable to an `active'(?) engine seems 
>to me irrelevant. 

This is a misunderstanding. Yes, this is irrelevant to the point I was
making. The question is whether or not the 'Ontology' is intended,
ultimately, to be used by any kind of machine, or whether you need a degree
in management science even to read it; the latter is perfectly acceptable,
but it is a different enterprise.

.....
>
>It's a mystery to me why people would want to use inferences in an active 
>mode within a KR system without having a clear task in mind.  As soon as 
>you have a realistic number of inference possibilities you end up with 
>the inference explosion problem (as Rieger & Schank did even in the 
>early 70's).  You must do inference on demand, and the demand must come 
>from an application system, and must therefore be governed somehow by
>that system.  Without a clear task, you end up writing passive inferences 
>without a particular engine, and hence you do the same thing the glosser-
>ontologizers are doing, but focusing more of your attention on writing 
>the inferences and less on the engine.  

This is a different point, but for the record: you are missing the point
(as Doug did at the panel discussion!). Of course just the axioms don't
give a tractable search problem; the ontological issue is the extent to
which the LOGIC - the inference principles underlying the actual tactics of
any particular system - is able to define the concepts that the names are
supposed to refer to.

Suppose someone develops a 'Penrose engine' which uses quantum gravity to
just somehow magically produce a conclusion, without applying an inference
rule to anything. The expressive power of the KR language would still be an
issue, because if the language is incapable of expressing something, then
God himself wouldn't be able to produce the proper conclusions.
.....

Pat

------------------------------------------------------------------------------
Beckman Institute                                      (217)244 1616 office
405 North Mathews Avenue              (415)855 9043 or (217)328 3947 home
Urbana, Il.  61801                                     (217)244 8371 fax

Phayes@cs.uiuc.edu