Re: Points of agreement & disagreement

Pat Hayes <hayes@sumex-aim.stanford.edu>
Date: Thu, 13 Feb 1992 19:29:02 PST
From: Pat Hayes <hayes@sumex-aim.stanford.edu>
Subject: Re: Points of agreement & disagreement
To: sowa@watson.ibm.com
Cc: INTERLINGUA@ISI.EDU, SRKB@ISI.EDU, CG@CS.UMN.EDU,
        Pat Hayes <hayes@sumex-aim.stanford.edu>
In-reply-to: Your message of Wed, 12 Feb 92 16:57:17 EST
Message-id: <MacMS.30590.5758.hayes@sumex-aim.stanford.edu>


John,

OK, last message from me on this stuff. 

Most of what you say is fine and I don't disagree with it: you are simply
expounding the general cognitive science research program, so of course I
largely agree.  There is just one little nagging disagreement about
the right way to go. I don't think I will be able to persuade you (or would
even want to, in fact), but I would be happier if you would acknowledge that
this difference is possible. 

Let me try to state it as clearly as I can. We agree that we are interested in
computational modelling of cognition in some broad sense, and that things said
in language are a valuable, perhaps the most valuable, source of insight into
what the structure of conceptual space is. The issue is how closely the
representational formalism used to express conceptual structure must be
constrained by the details of surface syntactic structure. You make a
methodological assumption that every surface variation must be somehow mirrored
by a difference in conceptual form. I take it as far less clear, and indeed
unlikely. I find your conclusion here somewhat ridiculous, for example: 

>> I can't help noting, for example, that if we take Bolinger's dictum
>> seriously, then EVERY difference Dixon has found in surface syntax in ANY
>> language must somehow be mirrored in a distinction in the semantic language.
>
>That's true.  I would expect that a fully detailed representation that
>tried to capture every nuance of meaning would have to do that. 

I don't think that every poetic variation in how an idea is expressed does in
fact constitute a nuance of meaning. If we are to look for every nuance of
meaning, I would be more inclined to look at, for example, the kinds of visual
form that can be perceived and about which conclusions can be drawn. (I will
place a bet that you think we have different internal representations for
things seen and for things heard?)

In passing, I am actually more confused about your stance on this issue after
your last long note, since you also deny it:
>> We were disagreeing about whether the
>> knowledge representation language should, as a matter of doctrine, have a
>> representational distinction corresponding to every surface distinction of
>> English. You say, essential: I say, unnecessary and potentially misleading.
>
>No, I didn't say that.

(You didn't? It sure seemed like that. If you really didn't, then we are
probably arguing past each other.)

I don't think that you are looking ONLY at syntax, by the way, just that you
are not giving yourself the freedom to ignore some syntax as irrelevant. I
think the representational language is probably not too much like natural
language: or at least, that we should be free to hypothesise representational
languages which are different in important respects from the languages that
have evolved for communication between agents.

-----------

Now a few last replies to parts of your last message and comments back.

1. - 7.  Sure, mostly I agree, of course. However...

> I'll certainly give you an exclusive-OR if you give me a lambda.

Not a fair exchange! If I give you lambdas, will you find a way to stop them
breeding?

> Many people are so put off by the syntax of predicate calculus that
> they invent those "odd syntaxes" that you dislike.

No, you miss the point. Let syntaxes blossom if they make users happy. What
I dislike are claims, explicit or implicit, that new syntaxes for logic are
something more than just improved user interfaces. I don't love FOL, believe
me, it's thoroughly inadequate. But I haven't seen many improvements, and I've
seen lots and lots of syntactic variations being touted as improvements or
alternatives. We have some really hard problems in KR, and we need real
advances, not syntactic fancy dress for familiar ideas.  Let's not confuse
issues of user acceptability with those of representational language design.

> If you give me one hour for a tutorial
>    on Peirce's existential graphs, I will make you dissatisfied with
>    predicate calculus.

I already am dissatisfied, but I bet Peirce's graphs won't solve any of my
problems. Changes like this strike me as like the difference between infix and
postfix notation: interesting, but they don't affect what I can and can't say
in the notation.

> 5. Expressive power:  Since my ultimate goal is to develop a full
>    fledged semantic representation for all of natural language, I
>    need the ultimate in expressive power, including higher-order
>    indexical intensional modal temporal logic with presuppositions,
>    focus, and contexts.  But I would never try to infect the basic
>    core of KIF with such an unmanageable beast.  Even though I might
>    need all that apparatus in the intermediate stages of analyzing
>    language, the final stage that I would send off to a shared KB
>    would be a rather tame sorted FOL with the indexicals & such
>    resolved to simple constants and variables.

I find this very interesting. If you can get the 'final stage' of linguistic
analysis expressed in a tame sorted FOL, why do you need all that other stuff?
That's what I mean by the conceptual content: what the sentence means AFTER one
has understood it, after all the linguistic analysis is over with. It sounds to
me like what you are saying is that you need all this elaborate apparatus not
to express meaning but to mirror surface syntax, in the spirit of Montague, so
that the process of linguistic comprehension can itself be modelled by
inference.  That is an interesting way to go, and I wish you luck, but it is a
very special way to think of natural language. 

> 6. Metametalanguages:  As I've said in other notes, I prefer to do
>    my reasoning in a fairly conventional FOL.  In order to reconcile
>    ....
>     I would handle
>    defaults, etc., as recommendations for things to add in a belief
>    revision or theory revision stage; but the actual reasoning would
>    be purely first-order.

I largely agree with your intuitions about nonmonotonicity: however, it is not
easy to see quite how it can be made to work in practice. The process of
context change is not first-order and needs to be integrated with the
conventional inference in an uncomfortably close way. (I only mean to emphasise
that talk of context change is not yet a solution to the problems of
nonmonotonicity.)

------
>> If we agree with
>> the Stanford philosophers that NL is essentially indexical in nature, then
>> LofT, since it is the vehicle for memory, cannot be similarly constructed or
>> our memories would have the same indexical quality and they would all seem
>> to be about 'now'.
>
>No.  Let me give an example of how I would handle indexicals. ...

Yes. Your example illustrates my point exactly. To be placed in memory, the
indexicals must be replaced ('resolved') by 'antecedents for the # markers'. 
Exactly. Let me just emphasise that this had better be done pretty promptly for
the time # marker, or it will get the wrong value: the proposition will be
associated with the time you decided to record it rather than when it happened.
If you record a temporal indexical and wait, it becomes impossible to retrieve
its appropriate value. And in general, one had better record, in the same
context in which the indexical proposition is stored, something which will
enable the indexicals to be resolved correctly.

By the way, I have to remark that there is nothing in your example which needs
all the conceptual graphs stuff you cite (formula operator, situation box,
etc.). Just extend FOL by allowing indexical markers in argument places.
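
To make that concrete, something like this (purely illustrative; the predicate
and constant names are invented, not John's):

   hungry(#speaker, #now)    -- as heard, with indexical markers in argument places
   hungry(John, t42)         -- as stored, markers resolved to constants at recording time

The #now marker must be resolved to the time of the utterance when the
proposition is recorded; resolve it later and it denotes the wrong moment,
which is the promptness point above.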

---

Thanks for the history. I didn't know that Russell picked up the notation from
Peano and only later 'rediscovered' Frege. I will go and see where in his
autobiography he mentions this.

--- 
>>> ...
>>> "color(x,red)" uses a second-order expression.  The second-order form
>>> makes it easier to quantify over colors and store them in a database.
>>
>> They both look first-order to me. The second is second-order only if a color
>> is taken to be a property, which does not seem very plausible.
>
>But I do take color to be a property, ...

I think we are confused. You take COLOR to be a property, but not a particular
color, right?  After all, you..

>... would represent the phrase "a dazzlingly
>bright red shirt" with only first-order types:
>
>   [SHIRT]->(ATTR)->[RED]->(ATTR)->[BRIGHT]->(ATTR)->[DAZZLING].
>
>Ontologically, I would say that this graph means there exists a
>shirt, which has as attribute an instance of red, which has as
>attribute an instance of bright, which has as attribute an
>instance of dazzling...

..so evidently, the color of the shirt is just a first-order object like a
shirt or a degree of brightness.  That's my point above: in writing
"color(x,red)" one has transformed "red" from a property into a real thing,
has first-orderised it.
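
To make the contrast explicit (invented names, just for illustration):

   red(shirt1)                               -- "red" as a monadic predicate, i.e. a property
   color(shirt1, red)                        -- "red" reified as a first-order individual
   (exists c)(color(shirt1, c) & warm(c))    -- quantifying over colors, still plain FOL

Once "red" is an individual in the domain, quantifying over colors (John's
stated motivation for the second form) needs nothing beyond ordinary
first-order machinery.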

By the way, I agree we should be ontologically promiscuous in this way when it
is convenient.

----

> ...Iris Tommelein,
>who used primitives derived from linguistic research as a basis for
>knowledge representation in civil engineering.  The basic types used
>in talking about electricity are also derived by a metaphorical extension
>of terms used to describe flowing water -- current, flow, resistance,
>pressure, etc. 

Qualitative physics is full of such metaphors, and many have been
systematically investigated by psychologists. But these are good examples of
the 'wisdom of the ancients' you mentioned: they have nothing to do with
language as such but with the conceptual structure of thought. I could just as
well (indeed, more correctly, in my view) have said "..derived by a metaphorical
extension of ideas used to think about flowing water..".  We often - although
not always - access them by talking to people, but that does not seem to be of
any deep significance.  The (now controversial) work which found many people
having an Aristotelian intuition about dynamics, for example, used interviews
but was not concerned with the exact form of words people used to express
themselves.

---

> ...the choice of predicates may not change the truth-functional
>denotation of a formula, but it could make a very big difference in the
>ease of knowledge engineering, clarity of communication, volume of
>storage, efficiency of inference, etc.

Of course it might. But that is not what we were disagreeing about.  And again
I would suggest that there is no a priori reason to suppose that the choice
which is good for communication need be that which leads to efficient
inference.

--
>You need a strong, interdisciplinary background that includes logic,
>linguistics, AI, and philosophy.

I thought linguistics would be in your list somewhere. I would include
cognitive psychology, and rate linguistics as useful and interesting but not
essential (unless one is planning to work on language, of course).

> ...we are
>not being helped by linguists who denounce AI researchers as hackers
>or AI researchers who denounce linguists as irrelevant.

Oh come, I don't denounce anyone. And I slightly resent your 'we'. I don't
think that the field of cognitive science is helped by people building armed
camps and insisting that unless people agree with them on everything they must
somehow be the enemy.  

Years ago I wrote a little article on deadly sins in AI, and one of them was
called "rally round the flag": insisting that everything must be stated in an
idiosyncratic ad-hoc formalism which is essentially equivalent to all the other
formalisms.  It is exasperating because in some sense it doesn't matter, since
they ARE all intertranslatable: but people feel that the use of this or that
formalism somehow solves a real problem; and that is, well, misleading to new
students.

---

>> John, you seem to be shifting around. On the one hand, we were concerned with
>> what I took to be a scientific issue to do with the representation of (models
>> of) human knowledge.  Now we are concerned with the detailed pragmatics of
>> engineering. Both are worthy of care, but they may not push in the same
>> directions. In particular, most 'common sense' reasoning is not concerned with
>> models which can be made into databases.
>
>The multiple meanings of the word "model" have caused some semantic
>drift in this discussion.... And now you are
>talking about models of scientific knowledge.

No, there is no drift, and I am not talking about that. My point is a simple one
which has been made many times.  There is a difference, I suggest, between two
enterprises. One is essentially scientific: trying to model intuitive human
thinking, the 'conceptual analysis' of common-sense reasoning and of everyday
language. The other is essentially engineering: making efficient reasoning
systems which have useful applications.  These are not the same enterprise, they
may sometimes point in different directions, and this is a plausible example of
such a divergence. Many economically important databases are complete in the
closed-world sense, so universal quantifiers can be interpreted as ranging over
explicit closed worlds accessible to direct computational inspection; but that
is almost never true for cognitive modelling of intuitive human thinking.
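
For instance (an invented illustration, not John's example):

   (forall x)(employee(x) -> has-badge(x))   -- checkable by scanning a finite EMPLOYEE table
   (forall x)(person(x) -> has-mother(x))    -- not checkable by inspecting any closed, finite world

The first universal can be verified by exhaustive inspection of the database;
the second is the sort of claim common-sense reasoning trades in, and no such
inspection is available for it.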

--

OK, end of public arguments between us. The last word is yours, John....

Pat