Re: Knowledge languages vs. programming languages
Pat Hayes <hayes@sumex-aim.stanford.edu>
Date: Tue, Mar 10, 1992 1:52:33 PM PDT
From: Pat Hayes <hayes@sumex-aim.stanford.edu>
To: sowa@watson.ibm.com
Subject: Re: Knowledge languages vs. programming languages
Cc: INTERLINGUA@ISI.EDU, SRKB@ISI.EDU
Message-id: <Mailstrom.B40.28577.26357.hayes@sumex-aim.stanford.edu>
In-reply-to: Your message <199202291224.AA01640@quark.isi.edu> of Sat, 29
Feb 92 07:19:36 EST
Content-Type: TEXT/plain; charset=US-ASCII
John-
I liked your message, but have a few comments. The distinction between knowledge languages and programming languages is intuitively compelling (although rejected by some) but notoriously hard to make exact, and unfortunately the basic idea which you expound so nicely in this message runs into problems under close examination.
For a start, Herbrand showed us that 1PC (first-order predicate calculus), and probably pretty much any knowledge language with a model theory, CAN be interpreted as talking about its own symbols: i.e., if it has a model at all, it has one made of the symbols themselves. 'Grounding' is motivated in part by the felt need to somehow guarantee that such symbolic interpretations are ruled out. Your switch from the variables in L being "intended to refer to" something, to the variables "refer(ring) to things in T" in the next sentence, illustrates the problem nicely: we might intend with all our might, but that doesn't guarantee actual reference.
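To make that concrete, here is a rough sketch of my own (in Python, and no part of your message): take one constant a, one unary function f, one predicate P, and the theory { P(a), forall x. P(x) -> P(f(x)) }. The Herbrand universe is just the set of ground terms, and a Herbrand model lets every term denote itself, so the "world" the theory describes is built entirely out of its own symbols.

    from itertools import islice

    def herbrand_universe():
        # Enumerate the ground terms a, f(a), f(f(a)), ...
        term = "a"
        while True:
            yield term
            term = "f(" + term + ")"

    # The least Herbrand model of { P(a), forall x. P(x) -> P(f(x)) } is the
    # set of atoms P(t) for every ground term t; here are the first few.
    model = ["P(" + t + ")" for t in islice(herbrand_universe(), 5)]
    print(model)   # ['P(a)', 'P(f(a))', 'P(f(f(a)))', ...]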
Second, why should a knowledge language not have some reflexive abilities to
refer to its own expressions? You say that English is a knowledge language,
but it can certainly refer to English expressions: there is a word for "word",
for example. More technically, the CYCL system, which I think would safely be
put on the Knowledge side of the fence, has categories for all the
data structures which are used to implement it. Richard Weyhrauch and Frank
Brown have both developed systems which are clearly assertional but can
refer to their own structures. So the distinction in terms of
subject-matter doesn't really hold together.
Third, you refer to strong typing as characteristic of knowledge languages.
But surely this is almost completely orthogonal to the distinction being
discussed. Many programming languages are highly typed, with strong runtime
type-checking and no freedom to violate the boundaries; Prolog is in fact a
rather unusual programming language in this respect. And there are plenty of
knowledge-representation languages which have no special type structure,
although they often give the user the ability to create a sort structure,
because this is often useful. And I think it is arguable that most natural
languages are not strongly typed in this way: I can mix categories in English
with results which might be unusual but are not ill-formed, and which are in
fact often used, e.g. in such phrases as "a hairy problem" or "fat cash". You
say that "this is why" you believe that strong typing is an essential part of
a knowledge language, but you don't say why; you just tell us that it's part
of your definition.
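Just to pin down the "orthogonal" point with an example of my own (Python again, with a made-up function name, not anything from your message): an ordinary programming language will happily enforce type boundaries at run time, which it could hardly do if strong typing were what set knowledge languages apart.

    def area(width, height):
        # Many programming languages refuse to cross category boundaries at
        # run time; that has nothing to do with whether the language is
        # assertional or procedural.
        return width * height

    print(area(3.0, 4.0))          # fine: 12.0
    try:
        area(3.0, "hairy")         # mixing categories is refused
    except TypeError as err:
        print("rejected:", err)    # can't multiply a float by a string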
Let me suggest that what makes Prolog a programming language is chiefly that
it comes with a fixed interpreter, and this interpreter's behavior is an
essential part of the meaning of the language. That is, it is a language part
of whose meaning has to do with the way a machine manipulates its expressions,
whereas there isn't any machine in a knowledge-representation language's
semantic story, unless of course it happens to be partly about a machine: but
that's a different kind of relationship.
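A toy example of what I mean (my own sketch, in Python, standing in for a Prolog-style engine): a backward-chaining interpreter that tries clauses top to bottom and subgoals depth-first. Two programs with exactly the same logical content, differing only in clause order, behave quite differently under it, which is just the sense in which the interpreter's behavior is part of the meaning.

    def prove(goal, rules):
        # rules maps a propositional goal to its alternative clause bodies,
        # tried in the order listed; subgoals are pursued depth-first.
        for body in rules.get(goal, []):
            if all(prove(subgoal, rules) for subgoal in body):
                return True
        return False

    # Both programs assert exactly the same thing: p is a fact, plus the
    # logically idle clause "p :- p".
    fact_first   = {"p": [[], ["p"]]}
    circle_first = {"p": [["p"], []]}

    print(prove("p", fact_first))      # True
    try:
        prove("p", circle_first)       # same logic, but the fixed search
    except RecursionError:             # strategy never reaches the fact
        print("loops under depth-first, clause-ordered search")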
Any thoughts on this?
Pat