Re: Ontology for EDI (was Frames...)
phayes@cs.uiuc.edu (Pat Hayes)
Message-id: <199409211915.AA00906@dante.cs.uiuc.edu>
X-Sender: phayes@dante.cs.uiuc.edu
Mime-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Date: Wed, 21 Sep 1994 14:20:11 +0000
To: fritz@rodin.wustl.edu (Fritz Lehmann), cg@cs.umn.edu,
edi-new@tegsun.harvard.edu, srkb@cs.umbc.edu
From: phayes@cs.uiuc.edu (Pat Hayes)
Subject: Re: Ontology for EDI (was Frames...)
Cc: agc@scs.leeds.ac.uk, pdoudna@aol.com
Sender: owner-srkb@cs.umbc.edu
Precedence: bulk
At 8:43 AM 9/21/94 -0500, Fritz Lehmann wrote:
> Patrick Hayes, answering my earlier message on Frames and EDI,
>wrote:
>----begin quote----
>However, long experience suggests that the idea of there being a single
>correct real-world ontology is overoptimistic. Almost any concept you can
>think of, even very 'basic' ones, can be described perfectly correctly in
>several different incompatible ways...........
>----end quote----
>
> I agree completely, but it is not too obvious what actually
>causes the problem. I have some candidate causes:
>
> 1. An obstacle to integrating differing ontologies
>(which I have long insisted is a necessary ongoing task) is
>the supposed need for precise logical equivalence between
>concepts as defined in both systems. We do not demand this
>of natural language translation; if I say "the table" to
>a Frenchman it is certainly likely that there will be some
>borderline cases where my concept diverges from his "la table",
>but almost all instances of one concept will also be instances
>of the other.
You don't need to go to French: if you say "table" to another English
speaker the odds are that your concepts will differ slightly somewhere.
(My carpet in/part-of a room example arose in a discussion between native
English speakers, and each found it hard to believe that the other could
possibly not see how OBVIOUS it was that they were right.)
I agree, it doesn't matter for human conversation: but that's precisely
because the communicators in such conversations are intelligent and are
able to repair minor gaps or faults in what they hear. But we cannot make
such assumptions of our ontologies. An ontology is just a set of axioms, to
a machine, and our logics care very much about whether their axiom sets are
consistent or not. They may not have DEFINITIONS of the concepts in them -
I agree that to give such exact definitions is almost always impossible -
but they do make exact assertions about them.
The time models indeed often are based on intuitions which differ about the
status of points. If we take a grand semantic overview of these axioms,
then what you say is true about the differences not really mattering in
almost all cases. But the axioms themselves don't - and CAN'T, in fact -
take such a grand view of their own models; they are just plain
INCONSISTENT with one another. Can a point be regarded as an infinitesimal
interval, for example, or not? Well, it really doesn't matter; you can take
it either way. But you do have to choose one way or the other, because in
one case two intervals with a point between them meet, and in the other
case they don't. If you just allow both sets of axioms, you can prove that
2+2=5. The fact that the issue is not of practical importance was exactly
my point. How can I write axioms (or something) which say, "I don't care
about this issue", without either saying nothing, or using impossibly large
disjunctions?
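
To make the clash concrete, here is a rough sketch of the two positions,
in my own ad hoc notation - these are illustrative predicates, not
anybody's official axioms:

   Ontology A (a point is a zero-length interval):
      (A1)  forall i,j . endpoint(i) = startpoint(j)  ->  meets(i,j)

   Ontology B (intervals are open sets of points):
      (B1)  forall i,j . endpoint(i) = startpoint(j)
                          ->  between(endpoint(i), i, j)
      (B2)  forall i,j,p . between(p,i,j)  ->  not meets(i,j)

Each set is perfectly sensible on its own. But take any i and j with
endpoint(i) = startpoint(j): A1 gives meets(i,j), while B1 and B2 give
not meets(i,j), and from that contradiction classical logic will
cheerfully prove anything you like, 2+2=5 included.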
This was exactly the point I was after. When we write axioms, we are forced
into taking decisions about matters that I would prefer to just not take a
decision about. And this doesn't happen because of writing definitions for
things, it happens because we need to say something useful and consistent.
>There is very large overlap, even if the precise,
>painstaking logical definitions (in terms of shared low-level
>primitives) are logically inequivalent.
Yes, exactly. You are restating my problem. How can we say the common
overlap without committing ourselves to one or the other inequivalent (and
often inconsistent) detailed description?
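
The only device I know of is to assert the disjunction of the rival
theories - in the notation of my earlier sketch, something like

   (A1)  or  (B1 and B2)

But nothing useful follows from a disjunction except what follows from
every disjunct, and reasoning by cases over k independent choice-points
of this kind means wading through 2^k combinations: exactly the
impossibly large disjunctions I was complaining about.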
.....
> Still, we manage.
>
Yes, I know we manage. How do we do it? Observing that people are smart
doesn't solve the AI problem.
> ........ Most
>differences which arise this way will again be borderline
>cases (e.g. the currency exchange rather than the candy store).
>In the field of knowledge acquisition, and in the CYC
>project, much thought has gone into reconciling different
>conceptualizations of a domain. It may be that the true
>disagreements on ontology are few, and that most of the
>problem is with the intended meanings of words.
This is probably correct in a sense. If people find they disagree on
meaning and that it matters in the conversation, they have all kinds of
ways of negotiating around the differences. Often they will agree to use a
word in a special way ("..a contract - in this sense - is of course
binding, so..."). I think that almost any disagreement on meaning can
therefore appear to be a difference in word meaning. But now this leads to
the following problem for the knowledge formaliser, which is that
apparently clear concepts divide up into a sort of cloud or family of
closely related, but not actually identical, concepts. The idea of
'context' has been put forward as a solution to this problem, but I haven't
yet seen a convincing account of how information can be moved or shared
between contexts.
> The current EDI Standards (X12 and EDIFACT) give
>no _definitions_ at all for most concepts. It is baffling
>to see, in EDIFACT for example, that 8249:1 (Equipment
>Status: Continental) is "self-explanatory". I do not
>find it so! In many cases a short word or phrase is
>given in English; sometimes it's obvious what is meant,
>sometimes not. In most of the understandable cases,
>it seems that a conceptual definition (using a good
>stock of formally defined concepts) is feasible and
>would be very helpful to the uninitiated just as a
>form of documentation, let alone for the machine-processing
>and automated integration capabilities.
>
Oh, I agree. But this is quite a different task from formalising the ideas
in ways that permit machine inference.
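
Even so, a documentation-level definition could be made precise enough to
be useful. As a purely invented illustration - this is not real EDIFACT
semantics, just the shape such a gloss might take, in the same ad hoc
notation as before - a delivery-date qualifier might be defined:

   forall m,d . date-qualified(m, DeliveryRequested, d) ->
      message(m) and date(d) and
      exists e . delivery(e) and subject-of(e,m) and requested-time(e,d)

That at least tells the uninitiated reader what the code is supposed to
relate to what, even if it falls short of supporting machine inference.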
By the way, it is surprisingly difficult to come up with short, neat
statements of what a concept means, even when you have them clear in your
own mind. The problem often seems to be that an explanation requires so
much background explanation of the presuppositions that make the concept
coherent. Just try doing it for a few ordinary words (sky, fight, message
and gas, say). I bet this is why so many of the explanations just give up.
Pat
----------------------------------------------------------------------------
Beckman Institute                        (217)244 1616 office
405 North Mathews Avenue                 (217)328 3947 or (415)855 9043 home
Urbana, IL. 61801                        (217)244 8371 fax
Phayes@cs.uiuc.edu
----------------------------------------------------------------------------