From: Peter Clark <pclark@cs.utexas.edu>
Message-id: <199508091846.NAA01462@firewheel.cs.utexas.edu>
Subject: Re: clarifying clarifying ontologies
To: fritz@rodin.wustl.edu
Date: Wed, 9 Aug 1995 13:46:05 -0500 (CDT)
Cc: cg@cs.umn.edu, srkb@cs.umbc.edu
X-Mailer: ELM [version 2.4 PL24alpha3]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 3250
Sender: owner-srkb@cs.umbc.edu
Precedence: bulk
>> She makes three basic distinctions:
>> 1. A _hole_ is a description of an event that allows information to flow
>> through.
>> 2. A _filter_ restricts the flow.
>> 3. A _plug_ blocks the flow.
>> Based on these three distinctions (with all + and - combinations of them)
>> she derives her _aspectual cube_ with eight kinds of verbs or aspects of
>> verbs at the corners. [John Sowa]
> This example illustrates the important and generally unduly-
> neglected _product_structure_ of a taxonomy. Much of the large-scale
> structure of conceptual hierarchies can be analyzed into a lattice or
> other poset which is a direct-product of independent factor-hierarchies.
> Each factor is a "conceptual dimension" of the main hierarchy.
> We need to see more of this "factor analysis" of ontologies. [Fritz Lehmann]
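As a toy illustration of this product structure (my sketch, not anything from the posts above), the eight corners of the aspectual cube fall out of taking the direct product of the three binary factors:

```python
from itertools import product

# The three "conceptual dimensions" from the quoted post. Each is treated
# as a binary factor: "+" means the property holds, "-" means it doesn't.
FACTORS = ["hole", "filter", "plug"]

# The corners of the "aspectual cube" are just the direct product of the
# +/- choices along each factor -- 2 x 2 x 2 = 8 combinations.
corners = [dict(zip(FACTORS, signs))
           for signs in product("+-", repeat=len(FACTORS))]

for corner in corners:
    print(" ".join(f"{sign}{name}" for name, sign in corner.items()))
```

The point of the sketch is only that the large-scale structure (8 corners) is fully determined by the small factor hierarchies, which is what makes the "factor analysis" view attractive.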
I really like this notion of "factor analysis" too. Trying to represent
some computing concepts, we sometimes end up writing things which
look vaguely analogous to products of factors, e.g.
database = secure lockable virtual container
password = private virtual key for access
ticket = time-limited token of identity
Specific concepts seem like combinations of more general concepts.
Eg. I'd like to somehow combine models of a container, security,
a lock and virtual objects to arrive at a model of a database.
These more general concepts form the "factors" for the more specific
ones. (This is essentially reiterating the notion of "cliches" which
keeps resurfacing in AI under various names). These "factors", or
general components, are the building-blocks from which more specific
concepts can be built. And they can be combined in a whole variety of ways.
This is why I find the Compositional Modelling model of knowledge
representation and reuse so appealing: You have a library of components,
and you plug them together to form more specific descriptions (which could
then themselves be components).
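To make the plugging-together idea concrete, here is a minimal sketch (all names are mine and purely illustrative, and real compositional modelling components would carry constraints, not just attributes):

```python
# A "library of components": each component is a fragment of a model,
# and a specific concept is assembled by plugging components together.

def compose(*components):
    """Merge component fragments into a single model description.

    Composition here is just dictionary merging; a real compositional
    modeller would also check that the fragments are compatible.
    """
    model = {}
    for component in components:
        model.update(component)
    return model

# General "factor" components.
container = {"can_hold": True}
lockable  = {"has_lock": True, "locked": False}
virtual   = {"physical": False}
secure    = {"access_controlled": True}

# A specific concept as a combination of more general ones:
#   database = secure lockable virtual container
database = compose(container, lockable, virtual, secure)

# A composed model can itself serve as a component for further composition.
replicated_database = compose(database, {"replicas": 3})
```

The design choice worth noting is the last line: because the result of composition has the same shape as a component, specific descriptions can themselves go back into the library, which is the "components all the way down" property mentioned above.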
Anyway, the (maybe controversial) point is that taxonomic issues then become
secondary to the issue of building components. And how would you build a
taxonomy anyway? Every concept is a combination of more general aspects, so
do I have lots of ISA links from that concept each pointing to a more
general aspect? Where do you stop?
Consider: Is a person a container? Well, not really... but then sometimes
it's useful to think of a person as a container. My main point is
that this isn't a useful question. After all, almost everything isa
something-else in some sense, and you just end up with a mess if you
try the ontology exercise seriously. Rather, sometimes I may want
to model a person as a container (in which case I'd like to plug a
"container" component into my "person" model), other times I might not.
The comments about "factor analysis" and Ken Forbus's comments on
compositional modelling seem to point in this direction too.
(And no we haven't got a good idea of how to do any of this yet!)
Best wishes!
Pete
---
Peter Clark (pclark@cs.utexas.edu) Department of Computer Science
tel: (512) 471-9565 University of Texas at Austin
fax: (512) 471-8885 Austin, Texas, 78712, USA.
Project homepage: http://www.cs.utexas.edu/users/mfkb