The meaning of "meaning"
Message-id: <9202071939.AA19813@cs.umn.edu>
Reply-To: cg@cs.umn.edu
Date: Fri, 7 Feb 92 14:38:40 EST
From: sowa@watson.ibm.com
To: hayes@sumex-aim.stanford.edu
Cc: srkb@isi.edu, interlingua@isi.edu, cg@cs.umn.edu
Subject: The meaning of "meaning"
Pat,
I think that this discussion is getting at some crucial issues,
as Karen Jensen (from Microsoft, formerly with IBM) commented
in a note to me:
> John, I find this entry from Pat Hayes fascinating and
> also astounding. I'm grateful that you (a NL person) are
> in there slugging it out with the standards.
Since I believe this discussion touches on some of the central problems
in KR, I would like to continue it in the open forums (with apologies
to people who prefer skinny mailbox lists).
Continuing with your comments:
> You celebrate the idea of having surface syntactic
> differences in English mirrored in differences in the formalism, which is
> anathema for me since it would mean two different sentences could never have
> the same meaning.
The linguist Dwight Bolinger coined a famous slogan, "Every difference
makes a difference." He made that point in answer to Chomsky's claim
that certain transformations (e.g. active-passive) preserved "meaning".
Bolinger agreed that such transformations preserve the truth value of
the sentence, but he claimed that there is much more to meaning than
just truth value. Two such components are presupposition and focus,
by which the speaker signals "This is what I assume we are talking about"
and "This is what I want to add to the knowledge base in your head."
> You talk of the mapping to and from natural language as being
> a matter of semantics, while I regard it as largely a matter of linguistics.
I regard semantics as the study of meaning. The denotation of a sentence
in a model-theoretic sense is a very important component of meaning, but
I agree with Bolinger that there is a lot more to meaning than that.
> You go from Quine's bound-variable dictum directly to talk of verbs and nouns
> in a way which to me is puzzling since I see no reason to impose categories
> which arise in the grammatical theory of one of the world's many spoken
> languages into the basis of a Krep discussion. And so on. We are playing
> different games. I wish you luck with your game, but I don't see what natural
> language has to do with the KIF effort.
I agree that natural language is not a major issue for the KIF designers,
but the Knowledge Sharing Effort also includes ontology, for which some
guidance from NL is of vital importance. I just used English as one
example. In many of my papers, I have emphasized the need to look for
common principles underlying as many languages as linguists are able to
analyze. Following is a citation and brief comment about one such study,
which I sent to the cg list last year:
> R. M. W. Dixon, A New Approach to English Grammar on Semantic
> Principles, Oxford University Press, 1991.
>
> Unlike many linguists who work only with English, Dixon has been
> doing field work with many different languages, especially among the
> Australian aborigines. He just returned from work on the native
> languages of Brazil. He developed his semantic categories on the
> basis of large numbers of examples from various languages, but this
> book applies the principles mainly to English.
Dixon, being a linguist, uses the term "semantics" in a broad sense.
From a logician's point of view, his book might be considered a study
in the underlying ontologies of various natural languages.
Following are some comments on your more detailed comments on my comments:
>> ... For the English phrase "a red ball",
>> I would normally use the following conceptual graph,
>>
>> [BALL]->(ATTR)->[RED],
>>
>> which has the translation
>>
>> (Ex:ball)(Ey:red) attr(x,y),
>>
>> which means "There exists a ball x, there exists a patch of red y,
>> and x has attribute y."
> That you think in terms of starting with an English phrase illustrates the
> difference between the ways we think. However, I find this sort of way of
> expressing the idea quite reasonable, since of course we can think about the
> color of the ball. ( Not the patch of color, by the way, but the color itself:
> perhaps we differ in our intuitions here. A 'patch of color' is a very tricky
> concept, especially when applied to a 3-d object. And it doesn't seem quite
> right, since "a red ball" isn't an assertion, so doesn't have any existential
> import. One can say, for example, "I never had a red ball", and one would not
> want to translate this into an existential. )
In that context, you would definitely want to translate "a red ball"
into an existential. The negation would come from the word "never",
which would govern the context containing that existential.
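To illustrate with a rough first-order paraphrase (my own gloss, ignoring
tense and treating the verb as a predicate have and the speaker as a
constant I):

    ~(Ex:ball)(Ey:red)( attr(x,y) & have(I,x) )

The phrase "a red ball" still contributes its existential; the word "never"
simply places that existential inside a negated context.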
> But, more substantively, since this has a perfectly good way of being written
> in first-order logic, why introduce (yet) another odd syntactic form? We have
> too much odd syntax in computer science already.
The question is which syntax is odd. Predicate calculus is based
on C. S. Peirce's first effort to represent full FOL (his notation
of 1883). By 1897, he had scrapped that notation in favor of his
existential graphs, which he called "The logic of the future".
I prefer Peirce's revised, improved logic to his first attempt.
>>...which corresponds to the English phrase "a ball of color red"....
>
> I am really at a loss to see what the difference in meaning between these two
> noun phrases could possibly be. I believe you have hallucinated it.
The question of how to represent colors, shapes, etc., is an extremely
serious issue for databases and knowledge bases. You will find all kinds
of examples in the literature where some people write "red(x)" and others
write "color(x,red)". You need some criteria for deciding when to use
one form or the other as well as formal transformations for mapping one
into the other. I would say that both forms are truth-functionally
equivalent, but that "red(x)" uses a first-order expression, while
"color(x,red)" uses a second-order expression. The second-order form
makes it easier to quantify over colors and store them in a database.
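For example (the ball names here are purely illustrative), the relational
form lets you ask whether two things have the same color,

    (Ec)( color(BALL-27,c) & color(BALL-28,c) ),

a query that has no direct counterpart when each color is a separate
one-place predicate like "red(x)".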
>>.. If the English phrase varies, you
>> get different representations, as in the examples "a red ball" vs.
>> "a ball of color red".
>
> Exactly, this seems obviously wrong. I can say the same thing in English in
> all sorts of ways. ( e.g. I may not be able to recall the exact form of words
> that was used to tell me that Jim has measles, but I will recall the fact very
> clearly.) If you take it as axiomatic that surface differences must map onto
> representational differences, then you will be obliged to import all sorts of
> unnecessary confusion into the representational system, which is difficult and
> complicated enough already!
The reason for the current confusion is that there are no clear
guidelines for deciding how to choose the predicates. What I have
been trying to do is to state guidelines for choosing predicates based
on linguistic criteria and then use transformations based on lambda
calculus for translating one choice of predicates into another.
> But this can be done simply with axioms which define one in terms of
> the other. For example, we might write
>
> (Forall x)( color(x,red) iff red(x) )
>
> or, more ambitiously, something like
>
> color-predicate(x) implies (Forall y)( x(y) iff color(y,x) )
>
> perhaps suitably sugared with some kind of "apply" relation if one is
> scared of a little second-order quantification.
That's true. But then you need a separate axiom for each color.
And you need more axioms for relating "block(x)" to "shape(x,block)".
What I was recommending is the ATTR (attribute) relation for linking
an entity to a concept derived from an adjective and the CHRC
(characteristic) relation for linking an entity to a second-order type
like COLOR, SHAPE, etc. Then you can do all the transformations with
one lambda definition that relates CHRC to the ATTR plus KIND relations.
(Of course, you also need a dictionary that says which words in English
are second-order terms, etc. Nothing is ever free, but at least there
is a way to put some order in the process.)
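As a sketch of the kind of lambda definition I have in mind (illustrative
only, not a worked-out proposal):

    chrc = (lambda x,t) (Ey)( attr(x,y) & kind(y,t) )

That single definition covers colors, shapes, and every other kind of
characteristic, so there is no need for a separate axiom per color, and
expanding or contracting it is an ordinary lambda conversion.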
>> Whether a philosophical difference is semantic depends on your definition
>> of semantics. I agree that it makes no truth-functional difference, but
>> it does make a difference in your mapping to and from natural language
>> and in what you consider the principal categories of your ontology.
>
> This idea of 'principal categories' is something that you insist on being
> important, but concede has no operational significance. You seem to me to just
> be repeating yourself. Look, I agree it has an intuitive significance. But in
> order to exploit this intuition we need to somehow come up with an
> operationally significant way in which these intuitive differences matter. It's
> not enough to just assert they make sense and invent arbitrary syntax to encode
> them, if those syntactic differences don't affect the meaning.
Yes, I keep repeating again and again that it has no truth-functional
difference. But I also keep showing that it has operational significance
in how you do knowledge engineering, how you choose your axioms, how
you translate one choice of predicates into another, etc.
The design of KIF is one important part of the Knowledge Sharing Effort,
but many, if not most, of the participants have been saying that the issues
of ontology are just as important, if not more so.
>> I prefer to make verbs into quantifiable types,
>> since you can say things like "A cat chased a mouse. The chase
>> lasted 14 seconds," where you refer to an action by a verb in one
>> sentence and refer back to it with a noun in another sentence.
>> Since you can never be sure when someone is going to reify one of
>> your verbs, I like to reify them all by default.
>
> Well, I think I agree but in different terminology. I think actions are
> entities and should be accessible to quantification, largely for the reasons
> discussed by Donald Davidson, i.e. one can go on qualifying an action report
> forever: he did it at midnight, in the kitchen, slowly, with a knife.... But I
> wouldn't talk about reifying verbs here: 'verb' is a syntactic category, not a
> semantic one.
Yes, I agree. When I am using the formal apparatus of conceptual graphs,
I say "Verbs should be represented by concepts, not relations."
> We also agree that this is an ontological issue, not a logical one, right?
Yes.
>> But if you have
>> a knowledge representation that treats verbs as relations, you can
>> use the lambda expressions to map back and forth.
>
> I agree, and think the lambda-expressions are a good way to encode such
> translations. You have to watch out, though, because if you are too free with
> lambda then the language becomes impossibly unreasonable for machine reasoning,
> eg one finds things like infinite lattices of unifiers with no top element,
> etc. Types help: again, a computational issue.
Yes, but it is also an issue in ontology and lexicography, since most
type hierarchies that one finds implicit in NL dictionaries are very
bushy, but not very deep -- the top is usually not very far away.
>>>> Therefore, they are truth-functionally
>>>> equivalent. But ontologically, they make different assumptions
>>>> about the nature of reality.
>>
>>> But then it must be that the way in which they differ in meaning is not
>>> accessible to the inference system. So it is merely philosophical
>>> decoration.
>
>> I agree that they are truth-functionally equivalent, but there is still
>> an ontological difference: In one case, you are quantifying over balls
>> and saying that the one you picked is red. In the other case, you are
>> quantifying over patches of redness and saying that the one you picked
>> is a ball.
>
> Again you seem to me to be just repeating yourself. You see a difference where
> I see none. What justifies your claim that these statements should be
> considered in any way different in meaning?
The choice of predicates may not make a difference in truth value, but
it is one of the most critical aspects of knowledge engineering. We might
agree that describing a house in polar coordinates or rectangular coordinates
makes no difference in meaning. But just try giving your local contractor
a set of house plans in polar coordinates.
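(The two coordinate systems are interdefinable, of course -- x = r cos(theta),
y = r sin(theta) -- so nothing in the truth conditions distinguishes them;
the only difference is in how usable each choice is for the people and
programs that have to work with it.)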
>> According to Quine's dictum, that makes a difference in
>> what categories you admit into your ontology.
>
> No, Quine's dictum won't work, because when you translate the first sentence into
> logic (or, as I would prefer to say, when you write down in logic what it
> means), both the balls and the redness are quantified over, as you have
> pointed out. ( In fact, if you 'reify' sufficiently, Quine's dictum becomes
> almost vacuous.) You have used the dictum on the surface English form, but
> that is not where it should be used.
>
> I don't expect us to agree on this, by the way, but can you see why I am not
> persuaded by what you say?
Yes, but again I would say that our differences result from what we
consider important for knowledge engineering.
>> I have been looking for logical
>> structures that have a simple and direct mapping to natural languages.
>> That is certainly not a guideline that motivated Russell & Whitehead,
>> but I believe that it can lead you to logical forms that are just as
>> sound, just as computable, and a lot easier to map to and from NL.
>
> For me, the use of a krep language is to support appropriate inferences. Again,
> I think this illustrates why we differ: our goals are different. The mapping to
> NL is of secondary interest to me, and can often lead in misleading
> directions.
I agree that we need a clear statement of goals and requirements for the
entire Knowledge Sharing Effort. Supporting inference is certainly
important. But two people who are using the same KR language can't share
a KB unless they choose the same types and relations or have some rules
for translating one choice into another.
When I was teaching a course at Stanford in 1987, Iris Tommelein was
working on her PhD in civil engineering and needed some guidelines for
representing spatial relationships. I suggested some references to
the linguistic literature on representing prepositions for spatial
relationships. She found that very helpful in choosing her set of
primitives.
Natural languages are a goldmine of information about the implicit
ontologies that generations of people have found useful for representing
everything that is important in their lives. Just because some naive
prospectors may have settled on iron pyrites is no reason why trained
prospectors should stop looking for gold.
>> If you have two typed languages A and B,
>> and you try to map A -> KIF -> B, it can be very difficult to determine
>> which implications in KIF came from universal quantifiers in A and should
>> go back into universal quantifiers in B. In my example, it was hard
>> enough for "Every cat is on a mat" but just look what happens when you
>> try to represent "Every chicken coop has a chicken that is pecked by
>> every other chicken in the coop."
>
> I agree: if one wants types, it should be possible to 'push them through' KIF,
> so KIF needs to be able to somehow encode them.
>
>> And while we are talking about quantifiers, I would like to see the
>> Kleene E!x for exactly one and the Whitehead & Russell E!!x for unique.
>
> Good idea. I would also like to see some way of writing exclusive-OR.
I'm glad that we agree on these points, because they are the most
significant ones for the design of KIF. The other issues are ones that
I would have to take up with the SRKB people who are working on ontology.
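For reference, two of those abbreviations expand into ordinary FOL as
follows (the standard definitions, not proposed KIF syntax):

    (E!x) P(x)   iff   (Ex)( P(x) & (Forall y)( P(y) implies y=x ) )
    p xor q      iff   (p or q) & ~(p & q)

Building them into the language just keeps the formulas short and makes
the intended reading easier to recognize when translating.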
>> Nothing that can be computed or stored on a computer is ever infinite.
>
> Look, we must be misunderstanding one another. My point was only that the
> domains of models are typically not computational entities. In the case of
> databases with a closed-world assumption, things can be interpreted
> differently, but most of what we know about the world and communicate to one
> another in NL is not linked to exhaustive lists of ground facts in this
> manner. I think we agree here, in fact.
We agree pretty much in principle, but I would say evaluating denotations
in terms of models is of immense economic importance. Since it can
be done in polynomial time, it is not as challenging an issue for AI
research, but it is the essence of DB processing -- open world as well
as closed world. So I would want to keep it high on the list of goals
and requirements.
>> But as I said earlier, a sorted logic simplifies the formulas and can
>> make KIF a better intermediate language, especially for translations
>> to and from other sorted or typed languages.
>
> Yes, I agree. Glad you didn't mention natural language...
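For the record, the simplification I mean is just the standard sorted
abbreviation -- nothing KIF-specific -- as in

    (Forall x:cat)(Ey:mat) on(x,y)

for

    (Forall x)( cat(x) implies (Ey)( mat(y) & on(x,y) ) ).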
If it helps to end on a point of agreement, I'll stop here.
John