Re: models and depictions

Date: Sun, 16 May 93 22:08:21 EDT
From: sowa <sowa@turing.pacss.binghamton.edu>
Message-id: <9305170208.AA06292@turing.pacss.binghamton.edu>
To: cg@cs.umn.edu, interlingua@ISI.EDU
Subject: Re:  models and depictions
Cc: sowa@turing.pacss.binghamton.edu
Dan,

A couple of comments on your note:

> ... To my mind, when
> Tarski gave his famous ``Snow is White'' example, he was clearly
> allowing that formal symbols refer directly to real world objects.

Tarski's example came from Section 1 of his famous paper "The Concept
of Truth in Formalized Languages" (reprinted in _Logic, Semantics,
Metamathematics_, by Alfred Tarski, Hackett Publishing Co., Second 
Edition, 1983, pp. 152-278).  He explicitly says that his only reason
for considering examples in ordinary language is to serve as an
introduction:  "I wish especially to emphasize the various difficulties
which the attempts to solve this problem [defining truth] have
encountered."  (p. 154)

Then at the end of Section 1, he states his conclusion about all these
difficulties:  "If these observations are correct, then the very
possibility of a consistent use of the expression 'true sentence'
which is in harmony with the laws of logic and the spirit of everyday
language seems to be very questionable, and consequently the same
doubt attaches to the possibility of constructing a correct definition
of this expression."  (p. 165)

Then at the beginning of Section 2 (also p. 165), he says "For the
reasons given in the preceding section, I now abandon the attempt to
solve our problem for the language of everyday life and restrict
myself henceforth entirely to _formalized languages_."  [his italics]

This is why I have insisted that Pat stop claiming that Tarski is
responsible for what he calls TMT.  Other mathematical
logicians have followed Tarski's lead in using model theory only for
mathematical systems.

> ... I honestly don't believe I used any depictions or other
> set-theoretic entities in relating the words I had just uttered to the
> real-world objects I perceived.

The question of how the perceptual mechanisms work is a very active
research topic that is still not well understood.  There certainly are
some processes going on in the brain that relate the image projected
on the retina to the symbolic mechanisms of language.  But I wasn't
claiming that the things I was calling "depictions" necessarily
correspond to any of them.  And even if they did, it's unlikely that
the mechanisms of the correspondence would be accessible to
introspection.

> This is not to say, though, that depictions are of no use at all.  On
> the contrary, as one moves up to the higher levels of abstraction as
> required for modeling cognitive processes, it does seem that something
> like depictions can play an important role.  In doing so, however, one
> is now merely expanding the formalism to include both the original
> formal language AND the depictions serving as interpretations of the
> linguistic elements---so that the relation between language and
> depictions becomes a formal model of the relation between the perceiver
> and the world (material or abstract) being perceived, to wit:
> 
>          Formal Logical System <---> Depictions/Interpretations
>                    |                             |
>                    |                             |
>                    |                             |
>                   \/                            \/
>            Human Perceiver     <--->      Things Perceived

I'm quite happy with this diagram of yours.  The top line is a
purely formal system with a formal language interpreted in terms of
a mathematical construction -- both of which can be implemented in
an AI system.  The bottom line brings in all the complexity of the
human cognitive system and the real world.  Tarski limited himself
exclusively to the top line.  If we want to extend AI systems to
deal with the relationship to the bottom line, we must begin by
recognizing that the formal logical system is not identical to
the human cognitive system, and that the formal depictions are at
best similar to, but not identical to, the things perceived.
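
To make the top line concrete, here is a minimal sketch (in Python,
purely my own illustration -- the toy "snow" model and all the names
in it are invented, not anything from Dan's note or from Tarski) of a
formal language whose sentences are evaluated against a purely
set-theoretic construction in the Tarskian style:

# Minimal sketch of Tarski-style evaluation: a tiny formal language
# whose sentences are checked against a set-theoretic "depiction".
# Everything here (names, the toy model) is invented for illustration.

# The model: a domain of individuals plus an interpretation assigning
# each predicate the set of tuples of which it holds.
MODEL = {
    "domain": {"snow", "coal"},
    "predicates": {
        "White": {("snow",)},   # White holds of snow only
        "Black": {("coal",)},
    },
}

def substitute(formula, var, value):
    """Replace every occurrence of a variable with a domain element."""
    if isinstance(formula, str):
        return value if formula == var else formula
    return tuple(substitute(part, var, value) for part in formula)

def satisfies(model, formula):
    """Truth in the model, defined by recursion on the form of the
    sentence, in the spirit of Tarski's definition for formalized
    languages.  Formulas are tuples: ("White", "snow"), ("not", f),
    ("and", f, g), or ("forall", "x", f)."""
    op = formula[0]
    if op == "not":
        return not satisfies(model, formula[1])
    if op == "and":
        return satisfies(model, formula[1]) and satisfies(model, formula[2])
    if op == "forall":
        var, body = formula[1], formula[2]
        return all(satisfies(model, substitute(body, var, d))
                   for d in model["domain"])
    # Otherwise an atomic formula: a predicate applied to constants.
    return tuple(formula[1:]) in model["predicates"].get(op, set())

# "Snow is white" is true here -- a relation between two formal objects
# (a sentence and a model), not between a sentence and real snow.
print(satisfies(MODEL, ("White", "snow")))                  # True
print(satisfies(MODEL, ("forall", "x", ("White", "x"))))    # False

Nothing in that little program touches the bottom line of the diagram;
that is exactly the restriction Tarski imposed on himself.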

John