Contexts and quantifiers in KIF
Date: Fri, 9 Apr 93 01:31:41 EDT
From: sowa <sowa@turing.pacss.binghamton.edu>
Message-id: <9304090531.AA04335@turing.pacss.binghamton.edu>
To: cg@cs.umn.edu, interlingua@ISI.EDU, srkb@ISI.EDU
Subject: Contexts and quantifiers in KIF
Cc: sowa@turing.pacss.binghamton.edu
First, some replies to Pat Hayes, who had the longest comments on
my note about the discussion with Mike Genesereth:
> Data structures are stored inside computers and so must be finite....
Yes, but I wanted to emphasize the point that Tarski-style models
are artificial constructions (usually set theoretic, but they could
be based on mereology as well) that have a stronger affinity with
data structures than with physical objects.
> Treating model domains as consisting of lexical objects, amounts
> to treating the quantifiers as lexical....
No. A model consists of a set of elements and a set of relations
over those elements. The interpretation of the quantifiers does not
depend on what those elements are made of. But it does simplify the
treatment of quoted formulas if both the formulas and the things that
the quantifiers range over happen to be made of the same kind of
lexical stuff.
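The point can be made concrete with a small sketch (my own illustration in Python, not KIF): the quantifier code below never inspects what the domain elements are made of, so a model over integers and a model over lexical items behave identically.

```python
# A Tarski-style model reduced to bare bones: a domain (a set of
# elements) plus relations (subsets of tuples over that domain).
# Hypothetical illustration -- the names are mine, not from KIF.

def forall(domain, pred):
    """Universal quantifier, interpreted over an explicit finite domain."""
    return all(pred(x) for x in domain)

def exists(domain, pred):
    """Existential quantifier over the same kind of domain."""
    return any(pred(x) for x in domain)

# Model A: elements are integers.  Model B: elements are lexical items.
domain_a, pyramids_a = {1, 2, 3}, {1, 2}
domain_b, pyramids_b = {"p1", "p2", "p3"}, {"p1", "p2"}

# Same truth values in both models, because the interpretation of the
# quantifiers depends only on the element/relation structure.
print(exists(domain_a, lambda x: x in pyramids_a))  # True
print(exists(domain_b, lambda x: x in pyramids_b))  # True
print(forall(domain_a, lambda x: x in pyramids_a))  # False
print(forall(domain_b, lambda x: x in pyramids_b))  # False
```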
> ...it leaves no way to refer to the actual world...
In Chapter 1 of _Conceptual Structures_, I reproduced Ogden and
Richards' meaning triangle, which shows a direct relationship
between words and concepts, another direct relationship between
concepts and things, and only an indirect relationship between
words and things. That meaning triangle has a long and honorable
history going back to Aristotle's three-way distinction between
words, "experiences in the psyche", and things.
The lexical items in my models are the things that correspond to
Aristotle's "experiences in the psyche" and to Ogden & Richards'
concepts. This approach, like Aristotle's and O&R's, makes the
relationship between words and things indirect, but you can still
use words to talk about things. The indirect approach does, however,
give you a lot more flexibility in changing your models for different
purposes. And I believe that changing models is a much better way
of doing nonmonotonic reasoning than changing logics. However,
that comment could easily get us off into another endless series
of notes.
> KIF... wasn't intended to be all that readable.
I grant that readability by humans is less important for KIF
than ease of processing by computers. But I completely disagree
with the following point:
> Thus your suggestion for extended quantifiers are not really
> relevant to KIF.
One of the main reasons why I want the extended quantifiers is to
simplify the translations to and from KIF. I can translate a
CG concept like [PYRAMID: {*}@3] into a complex expression in KIF
without too much trouble. But the hard part is to take a complex
expression in KIF and guess whether it came from a simple or a
complex expression in CGs (or other language that might have
extended quantifiers).
> I would rather write three pyramid quantifiers.... How can we
> automatically translate between this way of expressing the
> situation and any of your exists-set-count-three methods?
I grant that you don't need set notation if you are only talking
about two or three things. But if you have a trailer truck,
it is a lot easier to write (and reason about) [WHEEL: {*}@18]
than to write out a long string of separate variables.
Automatically translating from the set notation to the string of
variables is straightforward. Translating back to sets, however,
is much harder.
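The forward translation can be sketched mechanically; the function below is my own illustration, and its KIF-style rendering of the expansion is an assumption, not the official encoding.

```python
def expand_counted_referent(type_name, n):
    """Expand a CG counted set referent like [PYRAMID: {*}@3] into a
    KIF-style formula with n distinct existential variables.
    Illustrative rendering only; the official KIF form may differ."""
    vars_ = [f"?x{i}" for i in range(1, n + 1)]
    typed = " ".join(f"({type_name.lower()} {v})" for v in vars_)
    # pairwise distinctness clauses: n(n-1)/2 of them
    distinct = " ".join(f"(/= {a} {b})"
                        for i, a in enumerate(vars_)
                        for b in vars_[i + 1:])
    return f"(exists ({' '.join(vars_)}) (and {typed} {distinct}))"

print(expand_counted_referent("PYRAMID", 3))
```

Going the other way, a translator must recognize this whole pattern -- n variables, n type atoms, n(n-1)/2 distinctness clauses, possibly reordered -- before it can collapse the formula back to the single quantifier [PYRAMID: {*}@3], which is why the reverse direction is the hard one.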
> Perhaps the most useful facility would be a quantifier defining
> ability.
Mike and I discussed that possibility. Both of us have flexible
metalanguage facilities for KIF and CGs, but neither of us can
use them to define new kinds of quantifiers. If anyone can suggest
a simple, elegant, computable, consistent, nonparadoxical way of
doing that, we might consider it.
In any case, there is a precedent for introducing such quantifiers.
In Section 37 of the Principia, Whitehead & Russell introduced
relational operators E! for exactly one and E!! for unique.
They didn't have a metalanguage for introducing them, so they
just used brute force -- i.e. English.
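For readers who have not seen it, the "exactly one" quantifier is definable from the ordinary quantifiers; this is the standard textbook definition in modern notation, not Whitehead & Russell's own symbolism:

```latex
% "There is exactly one x such that phi(x)":
\exists!\, x\, \varphi(x) \;\equiv_{\mathrm{df}}\;
  \exists x \bigl( \varphi(x) \wedge \forall y\, (\varphi(y) \rightarrow y = x) \bigr)
```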
Jim Fulton also had a number of comments:
> Formally, the questions you are asking about quantifiers are
> the subject matter of modal logic....
Yes, but both Mike and I believe that the modal constructs can
be defined by means of the metalanguage capabilities that are
already present in CGs and KIF.
> The extension of the predicate 'dog' is the set of all dogs...
I agree that Frege and many others would say so. Tarski's
famous paper on model theory, however, was explicitly titled
"On the concept of truth in formalized languages." In that
paper, he started out with the example "Snow is white", but
his whole formal treatment addressed the issue of mapping
classical first-order logic to set-theoretic models. Tarski
never addressed the question of mapping natural languages to
the real world. That problem was addressed by Tarski's
student Richard Montague. However, I don't believe that
Montague really solved the problem, since his so-called
"fragment of English" was a highly artificial language
with a total vocabulary of only 37 words; and his so-called
possible worlds were just as surely set theoretic constructions
as Tarski's.
> Our beliefs... are about things in the real world...
Yes, but as I said above, I don't believe that the relationship
between word and object is a direct one. I would say that no
sentence has an absolute mapping to the world. Instead, the
mapping is always dependent on some model. Depending on the
model, the same sentence could be either true or false of the
same situation. In Chapter 1 of _Conceptual Structures_, I gave
the example "This steak weighs 12 ounces." If your purpose is
to select something for dinner and the standard of weight is a
butcher scale, that sentence may very well be "true". But if
your purpose is to compute protein and fat content and your
standard of weight is a precision balance, that same sentence
could be false when applied to exactly the same situation.
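The steak example can be caricatured in a few lines of Python; all numbers and tolerances below are made up for illustration, and the "models" are reduced to nothing but a standard of measurement.

```python
def weighs_12_oz(actual_oz, tolerance_oz):
    """Truth of 'this steak weighs 12 ounces' relative to a model whose
    standard of weight admits the given tolerance.  Hypothetical sketch."""
    return abs(actual_oz - 12.0) <= tolerance_oz

actual = 11.7          # hypothetical true weight in ounces

butcher_scale = 0.5    # model 1: a butcher scale, half-ounce tolerance
precision_balance = 0.01  # model 2: a lab balance, hundredth-ounce tolerance

# The same sentence, the same situation, two models, two truth values.
print(weighs_12_oz(actual, butcher_scale))      # True
print(weighs_12_oz(actual, precision_balance))  # False
```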
> The unification of models is significantly facilitated by
> sorted quantifiers.
Yes, I wholeheartedly agree. If languages A and B both have
types or sorts, the mapping A -> KIF -> B is much simpler if
KIF recognizes types syntactically. I'm willing to accept a
semantics that blurs the distinction between types and monadic
predicates as long as the syntax clearly marks them.
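The semantic blurring in question is the standard relativization of a sorted quantifier to an unsorted one guarded by a monadic predicate. The sketch below is my own; the concrete KIF renderings in it are assumptions for illustration.

```python
def relativize(quantifier, var, sort, body):
    """Rewrite a sorted quantifier into an unsorted one guarded by a
    monadic predicate.  The semantics is the same either way; the point
    is that translation is easier if the sorted syntax is preserved.
    KIF renderings here are illustrative, not official."""
    if quantifier == "forall":
        # sorted universals relativize with an implication
        return f"(forall ({var}) (=> ({sort} {var}) {body}))"
    if quantifier == "exists":
        # sorted existentials relativize with a conjunction
        return f"(exists ({var}) (and ({sort} {var}) {body}))"
    raise ValueError(f"unknown quantifier: {quantifier}")

# Sorted form (hypothetical syntax):  (forall ((?x Pyramid)) (red ?x))
print(relativize("forall", "?x", "Pyramid", "(red ?x)"))
# (forall (?x) (=> (Pyramid ?x) (red ?x)))
```

A translator doing A -> KIF -> B can keep the sort marker as a syntactic unit on the way through, instead of having to fish the guard predicate back out of the body.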
Bob MacGregor raised some points about the nesting of parentheses
in the lists of variables for typed KIF, and Pat Hayes and
Mike Genesereth commented on the issue. Although I have a preference
for a simple syntax, I'll leave the problem to the KIF designers to
determine what is "simple".
John Sowa