No "natural" ontologies
sowa@watson.ibm.com
Message-id: <199204191713.AA09549@venera.isi.edu>
Date: Sun, 19 Apr 92 13:10:15 EDT
From: sowa@watson.ibm.com
To: SRKB@ISI.EDU, CG@CS.UMN.EDU
Subject: No "natural" ontologies
Brian Smith sent a note to Stevan Harnad's symbol grounding mailing list.
In it, he mentioned the problem that you can't assume that there is a
fixed or "natural" type hierarchy in nature. That is one of the central
issues that any system of ontology must address. I sent the following
response and elaboration of Brian Smith's comments:
> I have come to believe, however, that far and away the most important
> [distinction] is whether people assume that the TYPE STRUCTURE of the
> world can be taken as explanatorily and unproblematically given, or
> whether it is something that a theory of cognition/computation
> /intentionality/etc. must explain. If you believe that the physical
> characterisation of a system is given (as many writers seem to do), or
> that the token characterisation is given (as Haugeland would lead us to
> believe), or that the set of states is given (as Chalmers seems to), or
> that the world is parsed in advance (as set theory & situation theory
> both assume), then many of the foundational questions don't seem to be
> all that problematic.
This is a fundamental issue that creates difficulties for any theory that
assumes a fixed relationship between words and things. That includes
Frege's notions of Sinn und Bedeutung (sense and reference), Wittgenstein's
early philosophy as stated in the Tractatus, and model-theoretic
interpretations of
natural language ranging from Montague to situation semantics.
People who formulate clean-cut theories of denotation prefer examples
of detachable things like tables and chairs. I prefer to consider the
Russian word "ruka", which is usually listed as the equivalent of the
English "hand". But "ruka" includes the wrist and forearm, which are
not within the scope of the word "hand". Does that mean that English
speakers have more body parts than Russian speakers?
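A toy sketch may make the point concrete (the segment inventory and the
word extents below are my own illustration, not drawn from any dictionary):
the anatomy is identical; only the boundaries that the words carve out
of it differ.

    # Illustrative sketch (my own, not from any dictionary): the same
    # inventory of body-part segments, carved up differently by two words.
    SEGMENTS = {"fingers", "palm", "wrist", "forearm"}

    # Each word denotes a subset of the same underlying segments.
    EXTENT = {
        ("en", "hand"): {"fingers", "palm"},
        ("ru", "ruka"): {"fingers", "palm", "wrist", "forearm"},
    }

    # Both vocabularies describe the same anatomy; the difference lies
    # entirely in where the word boundaries fall.
    assert EXTENT[("ru", "ruka")] > EXTENT[("en", "hand")]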
A related issue appears in Lao Tzu's _Book of the Tao_:
The Tao is the mother of heaven and earth,
Names are the mother of all things.
I interpret this passage to mean that nature has no
intrinsic or "natural" type structure. Instead, the differentiation
into individuals and types is imposed upon the world by our words
and the conceptual system associated with them.
> Some of us, however, worry a whole lot about where these type
> structures come from. There is good reason to worry: it is obvious,
> once you look at it, that the answers to all the interesting questions
> come out different, if you assume different typing. So consider the
> discussions of physical implementation. Whether there is a mapping of
> physical states onto FSA states depends on what you take the physical
> and FSA states to be. Not only that, sometimes there seems to be no
> good reason to choose between different typings. I once tried to
> develop a theory of representation, for example, but it had the
> unfortunate property that the question of whether maps were isomorphic
> representations of territory depended on whether I took the points on
> the maps to be objects, and the lines to be relations between them, or
> took the lines to be objects and the points to be relations (i.e.,
> intersections) between *them*. I abandoned the whole project, because
> it was clear that something very profound was wrong: my analysis
> depended far too much on my own, inevitably somewhat arbitrary,
> theoretic decisions. I, the theorist, was implicitly, and more or less
> unwittingly, *imposing* the structure of the solution to my problem
> onto the subject matter beforehand.
This is the essential problem that led Wittgenstein to abandon his
earlier, model-theoretic approach. His later theory of language games
permits words to have one denotation for one purpose (or language game),
and a very different denotation for another purpose.
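To make that duality concrete, here is a toy reconstruction (my own
sketch, not Brian Smith's actual formalization): one and the same map
admits two typings, and nothing in the data itself decides between them.

    # Toy reconstruction of the map duality (my sketch, not Smith's
    # formalization). The same map, typed in two incompatible ways.

    # Typing 1: points are objects; lines are relations between points.
    points = {"A", "B", "C"}
    lines_as_relations = {("A", "B"), ("B", "C")}  # a line = a pair of points

    # Typing 2: lines are objects; points are relations (intersections).
    lines = {"ab", "bc"}
    points_as_relations = {("ab", "bc"): "B"}      # a point = a meeting of lines

    # Both structures describe the same territory, yet even a question
    # as basic as "how many objects are on the map?" gets different answers:
    assert len(points) == 3 and len(lines) == 2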
> Since then, I have come to believe that explaining the rise of ontology
> (objects, properties, relations, types, etc.) is part and parcel of
> giving an adequate theory of cognition.
In _Conceptual Structures_, I tried to reconcile a Wittgensteinian
position of language games with a computational approach that depends
on formal symbol manipulation. I had to abandon the traditional
model-theoretic approach, which assumes a fixed mapping between words
and things:
    names         <-->  individuals
    common nouns  <-->  sets of individuals
    etc.
In the model-theoretic literature, logicians often talk as if their
models consisted of sets and relations in the world. But when they
actually do logic, their models turn out to be abstract data structures.
Tarski was not guilty of that confusion, since his original paper was
titled "On the concept of truth in *formalized* languages". But Carnap,
Montague, and others made what I believe to be an illegitimate
extension of Tarski's approach -- they identified their abstract models
with the real world.
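To see the point, consider what a model looks like when one actually
does logic with it. A minimal sketch (the domain, names, and predicates
are invented for illustration): the model is an abstract data structure
through and through.

    # A toy Tarski-style model as an abstract data structure (the domain,
    # names, and predicates are invented for illustration).
    domain = {"socrates", "fido"}                     # individuals
    names = {"Socrates": "socrates", "Fido": "fido"}  # names <--> individuals
    extensions = {                                    # common nouns <--> sets
        "Man": {"socrates"},
        "Dog": {"fido"},
    }

    def true_in_model(noun, name):
        # Truth in the model: is the named individual in the noun's set?
        return names[name] in extensions[noun]

    assert true_in_model("Man", "Socrates")   # true in the model...
    # ...but nothing here says how well the model matches the world.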
Instead of identifying the models of model theory with the real world,
I prefer to identify them with "mental models" or with unabashedly
abstract data structures in the computer. That gives us a two-stage
mapping from language to models to the world:
Language <--> Models <--> World
The mapping from language to models can be done with a model-theoretic
approach in the spirit of Tarski, Carnap, and Montague. But we then
have a separate mapping to consider, which depends on how well the model
approximates the world. I discussed those mappings in _Conceptual
Structures_, p. 20:
The relationship between language and the world is indirect:
a sentence must be interpreted in terms of a conceptual model,
and rules of perception must relate that model to a situation.
Errors may arise either in mapping language to the model... or
in mapping the model to the world. Whether a sentence is true
or false depends on the criteria for interpreting the sentence
in terms of a model and for applying the model to the real world.
The sentence "This steak weighs 12 ounces" may be true in terms
of a model where the standard of weight is a butcher scale, but
it is probably false in terms of a precision balance.
With this approach, you can play different language games with the same
words. For each game, there is a fixed mental model with a particular
way of mapping the model to the world. But different games may have
very different models, even though they may use the same words (cf.
Brian Smith's example where he had two radically different models
depending on whether he considered points or lines to be "objects").
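As a final sketch, the steak sentence can be run through two such games
(the measured weight and the tolerances below are invented; only the
moral comes from the passage quoted above): the same words, interpreted
against two models of measurement, come out with different truth values.

    # Illustrative sketch of the steak example (the weight and tolerances
    # are invented; only the moral comes from the passage quoted above).
    ACTUAL_WEIGHT_OZ = 11.9    # what the world delivers

    def weighs_12oz(tolerance_oz):
        # One language game: "weighs 12 ounces" relative to a standard.
        return abs(ACTUAL_WEIGHT_OZ - 12.0) <= tolerance_oz

    on_butcher_scale = weighs_12oz(tolerance_oz=0.5)         # coarse model
    on_precision_balance = weighs_12oz(tolerance_oz=0.01)    # fine model

    assert on_butcher_scale and not on_precision_balance
    # Same words, different models, different truth values.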
For more on this issue, see _Conceptual Structures_ and my more
recent papers on "Knowledge Soup".
John Sowa
________________________________________________________________________
References:
J. F. Sowa, _Conceptual Structures: Information Processing in Mind
and Machine_, Addison-Wesley, Reading, MA, 1984.
J. F. Sowa, "Crystallizing theories out of knowledge soup," in Z. W. Ras
& M. Zemankova, eds., _Intelligent Systems: State of the Art and
Future Directions_, Ellis Horwood Ltd., London, 1990, pp. 456-487.
J. F. Sowa, "Finding structure in knowledge soup," _Proceedings of Info
Japan '90_, Information Processing Society of Japan, Tokyo, Nov. 1990,
vol. 2, pp. 245-252.