Date: Tue, 11 May 93 00:27:35 -0400
From: schubert@cs.rochester.edu
Message-id: <9305110427.AA14548@ash.cs.rochester.edu>
To: interlingua@ISI.EDU, sowa@turing.pacss.binghamton.edu
Subject: the great model debate

Pat Hayes has been arguing the view (shared by me) that models of
logical knowledge bases (such as KIF knowledge bases) can be based
on a domain of real-world objects, such as cats or trees; i.e.,
the individual constants may be interpreted as denoting such objects,
predicates may be interpreted as sets of such objects (or sets of
tuples of them), etc.

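To make the view concrete, a direct, Tarski-style model of this kind can be sketched in a few lines of (modern) Python. This is only an illustration of the general idea; the domain elements and names below ("felix", "rug1", "Felix", "TheMat", etc.) are hypothetical stand-ins for real-world objects, not anything drawn from KIF or from John's or Pat's messages.

```python
# A minimal sketch of a Tarski-style model whose domain consists of
# (stand-ins for) real-world objects.  All names here are hypothetical,
# chosen purely for illustration.

# Domain: stand-ins for actual objects out in the world.
domain = {"felix", "tom", "rug1", "oak3"}

# Interpretation of individual constants: each denotes one object.
constants = {"Felix": "felix", "TheMat": "rug1"}

# Interpretation of predicates: a unary predicate is a set of objects;
# an n-ary predicate is a set of n-tuples of objects.
predicates = {
    "Cat":  {"felix", "tom"},
    "Tree": {"oak3"},
    "On":   {("felix", "rug1")},   # binary: a set of pairs
}

def satisfies(pred, *const_names):
    """Check whether the atomic sentence pred(c1, ..., cn) is true
    in this model, under the interpretations above."""
    args = tuple(constants[c] for c in const_names)
    ext = predicates[pred]
    return (args[0] in ext) if len(args) == 1 else (args in ext)

print(satisfies("Cat", "Felix"))           # True
print(satisfies("On", "Felix", "TheMat"))  # True
```

Nothing in this sketch requires a cat-recognition procedure: the interpretation simply *assigns* objects and extensions, which is the point at issue below.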
In opposition to this view, John Sowa has been arguing that
1. Such models are impossible to set up, since they presuppose cat-
or tree- (etc.) recognition procedures; and
2. Therefore, to set up a semantic model of a logical knowledge base
we need to interpose something more sharply delineated, something
unambiguously individuated and structured, between the logic and
the world: a "depiction" consisting of either image-like or db-like
data structures. (When we're not talking about computer
implementations, then according to John these depictions can
also be abstract mathematical structures -- again involving no
objects from the rough-and-tumble world out there, and he contends
that that's what the founders of model theory had in mind.)
(Sorry if I'm oversimplifying, John.)

Apart from occasional interjections aimed at a small subgroup of
"interlingua", I've tried to stay out of this, since I thought Pat
was doing a great job of defending the view I subscribe to, and because
I was well aware of John's unparalleled zest and endurance in
email-debating.

However, some of John's messages went out to all of "interlingua",
including some alluding to some of my interjections. I had made the
point that there is a use of the term "model" which is current in AI,
meaning something like John's "depictions" -- namely, auxiliary
representations used to support efficient inference (or in some
cases to serve as a surrogate world, as in Winograd's BLOCKS world).
I infer that John took this observation as evidence that conventional
practice in AI favored his own position (2).

However, my whole point had been to distinguish these "models" in AI
from the logicians' models. From a logical perspective, they are
themselves REPRESENTATIONS, and as such in need of a semantics (if we
want to regard them as encoding knowledge ABOUT the world). This
semantics can either be supplied in a direct, Tarski-like fashion, or
indirectly through a specification of the set of logical sentences
implicitly encoded by any given "depiction". Though I've taken much
interest in hybrid systems using multiple representations for efficient
inference, I'm with Pat in rejecting (1) and (2). Concerning (1), my
most recent message said (referring back to earlier exchanges on
logic, physics, and just about everything else),
>> If you want to put cats and mats in your models, then you must
>> formalize the process of recognizing cats and mats. [John Sowa]
>
> Absolutely not. This is true neither in logic nor in physics. I think
> when Newton wrote F = ma, he was proposing this as applying to actual
> objects subject to actual forces (he seemed to think, for instance,
> that this law applied to the motions of the planets). Do you fault him
> for not having supplied formalized apple- or planet-recognition
> procedures?
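As for the "indirect" route I mentioned above (supplying a semantics for a depiction by specifying the set of logical sentences it implicitly encodes), that too can be sketched briefly. Again, the db-like depiction and the relation names below are my own hypothetical illustration, not anyone's actual system.

```python
# A sketch of the "indirect" semantics: a db-like depiction is given
# meaning by specifying which atomic sentences it implicitly encodes.
# The depiction and relation names are hypothetical illustrations.

# A db-like "depiction": each relation name maps to its rows.
depiction = {
    "Cat": [("Felix",), ("Tom",)],
    "On":  [("Felix", "TheMat")],
}

def encoded_sentences(dep):
    """Enumerate the atomic sentences implicitly encoded by a
    db-like depiction, one per stored row."""
    for rel, rows in sorted(dep.items()):
        for row in rows:
            yield f"{rel}({', '.join(row)})"

for s in encoded_sentences(depiction):
    print(s)
# Cat(Felix)
# Cat(Tom)
# On(Felix, TheMat)
```

On this view the depiction is itself a representation: its semantics is inherited from the Tarskian semantics of the sentences it encodes, rather than the depiction serving as the semantics of the logic.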
Len Schubert
P.S. I'll do my best to keep quiet henceforth, esp. to the interlingua
group at large.