Practical effects of all this discussion
sowa <sowa@turing.pacss.binghamton.edu>
Date: Sat, 17 Apr 1993 17:49:44 -0700
Message-id: <9304180045.AA13176@turing.pacss.binghamton.edu>
Comment: List name: SRKB-LIST (do not use email address as name)
Originator: srkb-list@isi.edu
Errors-To: neches@ISI.EDU
Reply-To: <sowa@turing.pacss.binghamton.edu>
Sender: srkb-list@ISI.EDU
Version: 5.5 -- Copyright (c) 1991/92, Anastasios Kotsikonas
From: sowa <sowa@turing.pacss.binghamton.edu>
To: Multiple recipients of list <srkb-list@ISI.EDU>
Subject: Practical effects of all this discussion

The recent flurry of notes about possible worlds, models, etc.,
seems to have reached the point where all the discussants (Pat Hayes,
Len Schubert, Chris Menzel, Jim Fulton, and I) are too exhausted to
say anything more about the subject. But I received a couple of notes
from people who asked what it all has to do with the price of
cheese, or at least with AI programs and how we implement them.
For the benefit of skeptics who don't believe that philosophy has
anything to do with AI implementations, I'd like to summarize some
of the points that do have immediate relevance to programming issues:
1. All the discussion started from the issue of how we unquote
variables in quoted KIF statements. I had typed ,?x but Mike
said that I should have typed ,(name ?x), since the value of an
unquoted variable may be a physical object, which cannot be a
component of a purely lexical statement. I replied that if you
consider the individuals in a model to be purely lexical
"surrogates" for physical objects, then it is not necessary to
use the "name" function, since ,?x by itself would be a lexical
item. (A toy sketch of this distinction follows the list.)
2. Len Schubert responded by raising the question of whether it is
even necessary to use a comma in the form ,?x and whether ?x
by itself could be used in a quoted formula. That is a detail
that the KIF designers will have to resolve, but Len also
raised a question about the nature of propositions. I responded
with a definition of "proposition" as an equivalence class of
statements in two or more languages. This issue has several
immediate programming consequences:
a) Do we introduce the type "proposition" into our ontologies
that we express in KIF, CGs, and other AI languages?
b) If we do have a type "proposition", what are the criteria
for determining whether two different statements express the
"same proposition"? Provable equivalence is too coarse a
criterion, since it causes all tautologies to degenerate to
the single constant T; it is also too inefficient a criterion,
since testing it can be NP-complete or even undecidable. We would
like a criterion that (i) is easy to compute, (ii) makes finer
distinctions than provable equivalence, and (iii) corresponds
to our intuitive notions of what it means for two statements to
"say the same thing". (One crude but cheap criterion is
sketched after this list.)
c) We need such criteria when implementing systems that reason
about beliefs (either their own or other agents' beliefs).
If we tell Tom and his computer the proposition p (in some
AI language), we would like to assume that they "know" p.
But if they are using a different knowledge representation
language L, how can we be sure that the translation from our
language to L preserves the meaning that we intended?
3. Chris Menzel pointed out that the principal distinction between
the Kripke-Montague possible worlds and the Barwise-Perry situations
is totality vs. partiality. A possible world is supposed to include
sufficient information to give an answer T or F as the denotation for
every possible proposition. But a situation is never intended to
cover everything -- it is closer to the "open-world models" of
database theory, and in practice it can be very much smaller than
most implemented databases. This distinction makes a very big
difference in how we organize, implement, and use databases and
knowledge bases. (A small sketch of the partial vs. total
distinction also follows the list.)
4. Throughout the discussion, I maintained that all of our models
are approximations that abstract a limited number of features from
the world. For different purposes, we might construct different
models, all of which are "true" in some sense, even though they
might be inconsistent with one another. The possibility of having
different models for different purposes provides a great deal of
flexibility, but it requires an implementation that can accommodate
different models of the same situation or world (one way of holding
several models side by side is sketched after the list).
5. The philosophical position that names denote things in the world
leads to the conclusion that identity statements like "Dr. Jekyll
is Mr. Hyde" are semantically trivial. The position that names
are related only indirectly to things can accommodate different
models in the heads (or computers) of different agents. In such
an interpretation, an identity statement causes the listener to
merge or relabel his/her/its internal surrogates for external
entities. This too has major consequences for our implementations
(the last sketch below shows such a merge).
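
To make point 1 concrete, here is a toy Python sketch (my own
illustration, not KIF itself): a quoted statement is just a tuple of
strings, and the binding of a variable is either a lexical surrogate
(a string) or an arbitrary object standing in for a physical thing.
With surrogates, splicing ,?x directly still yields a purely lexical
statement; with objects, something like the "name" function is
needed first.

    # A toy analogy for the ,?x vs ,(name ?x) issue -- not KIF itself.

    class PhysicalObject:
        """Stands in for a real-world individual (not a lexical item)."""
        def __init__(self, label):
            self.label = label

    def name(value):
        """Analogue of the "name" function: map a value to a lexical token."""
        return value if isinstance(value, str) else value.label

    def splice(template, bindings, use_name=False):
        """Fill the ,?x slots of a quoted (purely lexical) statement."""
        result = []
        for item in template:
            if item.startswith(",?"):
                value = bindings[item[1:]]              # look up ?x
                result.append(name(value) if use_name else value)
            else:
                result.append(item)
        return tuple(result)

    statement = ("taller", ",?x", "Sam")

    # If the individuals in the model are lexical surrogates (strings),
    # splicing ,?x directly still gives a purely lexical statement:
    print(splice(statement, {"?x": "Jim"}))
    # -> ('taller', 'Jim', 'Sam')

    # If the binding is a physical object, the raw splice would put a
    # non-lexical item inside the quoted statement, so "name" is needed:
    print(splice(statement, {"?x": PhysicalObject("Jim")}, use_name=True))
    # -> ('taller', 'Jim', 'Sam')
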
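
For point 2b, here is one crude criterion, purely as an illustration
of what "easy to compute but finer than provable equivalence" might
look like (it is not a proposal): normalize a formula by sorting the
conjuncts of a top-level conjunction and renaming variables in order
of appearance, and call two statements the same proposition when
their normal forms are equal.

    # An *illustrative* criterion for "same proposition" (point 2b).
    # It is cheap, finer than provable equivalence (it does not collapse
    # all tautologies), and ignores conjunct order and variable names.

    def rename_vars(term, mapping):
        """Canonically rename ?-variables in a nested tuple/str term."""
        if isinstance(term, str):
            if term.startswith("?"):
                if term not in mapping:
                    mapping[term] = "?v%d" % len(mapping)
                return mapping[term]
            return term
        return tuple(rename_vars(t, mapping) for t in term)

    def normalize(formula):
        """Sort the conjuncts of an (and ...) and rename its variables."""
        if isinstance(formula, tuple) and formula and formula[0] == "and":
            formula = ("and",) + tuple(sorted(formula[1:], key=repr))
        return rename_vars(formula, {})

    def same_proposition(f, g):
        return normalize(f) == normalize(g)

    a = ("and", ("on", "?x", "table"), ("red", "?x"))
    b = ("and", ("red", "?y"), ("on", "?y", "table"))
    c = ("and", ("red", "?y"))

    print(same_proposition(a, b))   # True: only order and names differ
    print(same_proposition(a, c))   # False: different content
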
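
For point 3, the partial vs. total distinction in miniature: a
situation answers only the queries it was built to cover and says
"unknown" about the rest, while a total model (played here by a
closed-world fact base) is forced to answer T or F for everything.
The facts are invented for the example.

    # Point 3: partial situations vs. total worlds, as query behavior.

    class Situation:
        """A partial model: only the facts it was built to record."""
        def __init__(self, facts):
            self.facts = set(facts)
        def holds(self, fact):
            if fact in self.facts:
                return True
            return "unknown"        # open-world: absence is not falsity

    class ClosedWorld(Situation):
        """A total model (or a database under the closed-world assumption)."""
        def holds(self, fact):
            return fact in self.facts   # absence means False

    lunch = Situation({("at", "Tom", "cafe"), ("eating", "Tom", "soup")})
    print(lunch.holds(("eating", "Tom", "soup")))   # True
    print(lunch.holds(("raining", "outside")))      # 'unknown' -- not covered

    world = ClosedWorld(lunch.facts)
    print(world.holds(("raining", "outside")))      # False -- totality forces an answer
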
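
For point 4, one way an implementation might hold several models of
the same morning side by side, each abstracting only the features
relevant to its own purpose (the relation names are invented for the
example):

    # Point 4: different models of the same situation, kept for
    # different purposes, consulted by purpose.

    models = {
        # a route-planning model cares about positions and distances
        "navigation": {("distance", "home", "office"): 12.4,
                       ("road", "home", "office"): True},
        # a scheduling model of the same morning cares about times
        "scheduling": {("leaves", "Tom", "home"): "08:15",
                       ("arrives", "Tom", "office"): "08:40"},
    }

    def ask(purpose, query):
        """Answer a query relative to the model built for that purpose."""
        return models[purpose].get(query, "not represented in this model")

    print(ask("navigation", ("distance", "home", "office")))   # 12.4
    print(ask("scheduling", ("leaves", "Tom", "home")))         # 08:15
    print(ask("navigation", ("leaves", "Tom", "home")))         # not represented
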
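
And for point 5, a minimal sketch of what an identity statement does
under the indirect view of names: it merges two internal surrogates,
so everything previously recorded about either name now attaches to a
single entity. (The union-find bookkeeping is my own illustration.)

    # Point 5: "Dr. Jekyll is Mr. Hyde" as an instruction to merge
    # internal surrogates, not a semantic triviality.

    class SurrogateTable:
        def __init__(self):
            self.parent = {}        # surrogate -> representative surrogate
            self.facts = []         # (predicate, surrogate) pairs

        def surrogate(self, name):
            self.parent.setdefault(name, name)
            return self.find(name)

        def find(self, s):
            while self.parent[s] != s:
                s = self.parent[s]
            return s

        def assert_identity(self, a, b):
            """Process an identity statement by merging two surrogates."""
            ra, rb = self.surrogate(a), self.surrogate(b)
            if ra != rb:
                self.parent[rb] = ra

        def tell(self, predicate, name):
            self.facts.append((predicate, self.surrogate(name)))

        def known_about(self, name):
            rep = self.surrogate(name)
            return [p for (p, s) in self.facts if self.find(s) == rep]

    kb = SurrogateTable()
    kb.tell("respected physician", "Dr. Jekyll")
    kb.tell("violent criminal", "Mr. Hyde")
    print(kb.known_about("Dr. Jekyll"))            # ['respected physician']

    kb.assert_identity("Dr. Jekyll", "Mr. Hyde")   # the identity statement
    print(kb.known_about("Dr. Jekyll"))            # both facts, one entity
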
Any of these five points can be broken down into subpoints and related
issues that could provide fuel for an indefinite number of discussions.
But I just wanted to show that such philosophical issues can make
important differences in how we implement AI systems.
John