Date: Sat, 18 Dec 93 08:06:16 CST
From: fritz@rodin.wustl.edu (Fritz Lehmann)
Message-id: <9312181406.AA12190@rodin.wustl.edu>
To: cg@cs.umn.edu, interlingua@ISI.EDU
Subject: Higher Order KIF & Conceptual Graphs
To: norman@karl.cs.su.OZ.AU (Norman Foo)

Dear Norman,
You wrote:
>Fritz and John
>Formally, I think Fritz is right about the necessity of higher order
>concepts in semantics. In a 1st-order logic, compactness forces on us the
>results that finiteness, standardness for numbers, etc. are not expressible.
>If one tries to circumvent these by inventing predicates for them, these new
>predicates will in their turn have non-standard interpretations.
Pat Hayes said: well, that's the price you have to pay! (For
completeness, that is.) The notion of non-standard numbers lying
"outside" the real number line is so peculiar to most people, and to
most potential users of a Knowledge Interchange language, that I think
users who are aware of the issue will simply refuse to go along with a
merely First-Order semantics. Worse, the other users will ignore it,
only to have nonstandard numbers pop up unexpectedly during attempted
mathematical inferences. I imagine their baffled expressions: "HUH?..."
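(For readers who have not seen why compactness forces nonstandard
numbers on us, the textbook argument is short; nothing here is specific
to KIF, it is the standard construction:

    Let $T = \mathrm{Th}(\mathbb{N}) \cup \{\, c > \underline{n} \mid n \in \mathbb{N} \,\}$,
    where $c$ is a fresh constant and $\underline{n}$ is the numeral for $n$.
    Every finite subset of $T$ holds in $\mathbb{N}$ once $c$ is read as a
    large enough standard number, so by compactness $T$ has a model; in
    that model $c$ denotes an element above every standard numeral, i.e.
    a nonstandard number. No first-order axiomatization, however strong,
    excludes such models.)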
Certain logicians are aesthetically attracted to completeness,
compactness, Lowenheim-Skolem, and 0-1 properties, so they are willing
to pay a high price for them. The problem is that these properties are
not necessarily good. For practical knowledge interchange, they are
bad. At least, no-one has come forward in this debate with any cogent
arguments in their favor (for knowledge interchange purposes, as opposed to
Hayes' vision of complete cognitive grounding). At some very deep
philosophical level the first-order world-view could conceivably be the true
picture (one without true identity for example), but for practical
purposes it is a bizarre one. You make a good case against it here.
>On the other hand, for computation, there is no need to go beyond
>definable properties -- as long as ontology revision or creation is not
>part of the game. I will say a little about the latter in the sequel, but
>let me try to justify the former informally.
Even without explicit, dynamic revision of the ontology, many
(most) static ontologies will be at least partly approximative; that is,
there is a static underlying _assumption_ that some knowledge _could_ be
further refined with more information (one kind of revision). Zdzislaw
Pawlak's Rough Sets, Jerry Hobbs' granularity theory, Antti Hautamaki's
Point-of-View Logic, Dana Scott's approximation lattices, and common-
sense predicate systems for the real world are examples. Such theories
do need to allow for undefinable properties, particularly in defining
identity of individuals --- even properties like Pat Hayes' example
VERY-BIG. I may be unable to define VERY-BIG, but if Tweedledum is just
like Tweedledee, except one might be VERY-BIG, I don't want to have to
identify them as one individual. (Parker-Rhodes' theory of pure
indistinguishables is a striking consequence of assuming the opposite --
omniscience -- in the sub-physical world.)
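To make the Tweedledum/Tweedledee point concrete, here is a minimal
Python sketch of Pawlak-style rough approximation; the individuals,
attributes and the VERY-BIG set are purely hypothetical illustrations,
not drawn from any actual ontology:

    # Rough-set sketch (hypothetical data): individuals indiscernible on
    # the *known* attributes may still differ on an undefinable property
    # like VERY-BIG, so we should not be forced to identify them.
    from itertools import groupby

    individuals = {
        "tweedledum": {"shape": "round", "mood": "quarrelsome"},
        "tweedledee": {"shape": "round", "mood": "quarrelsome"},
        "alice":      {"shape": "slim",  "mood": "curious"},
    }

    def indiscernibility_classes(objs, attrs):
        """Partition objects by equality on the given attributes."""
        key = lambda name: tuple(objs[name][a] for a in attrs)
        ordered = sorted(objs, key=key)
        return [set(group) for _, group in groupby(ordered, key=key)]

    def approximations(target, classes):
        """Pawlak lower/upper approximation of target w.r.t. a partition."""
        lower = set().union(*[c for c in classes if c <= target])
        upper = set().union(*[c for c in classes if c & target])
        return lower, upper

    classes = indiscernibility_classes(individuals, ["shape", "mood"])
    very_big = {"tweedledum"}          # suppose only Tweedledum is VERY-BIG
    lower, upper = approximations(very_big, classes)
    print(lower)   # set(): nothing is certainly VERY-BIG on known attributes
    print(upper)   # {'tweedledum', 'tweedledee'}: both possibly VERY-BIG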
>First, there is a well-known
>encoding of Turing Machine computations into 1st-order logic by Buchi.
I haven't read Buchi. In the secondary Model Theory literature,
however, I've seen statements to the effect that Buchi automata come
in many species, corresponding to monadic second-order formulae (i.e.,
with no dyadic or higher-valence higher-order relations), and that to
distinguish one class of Buchi automata from an inequivalent one
requires strongly higher-order logic. Since the conventional wisdom
is that First-Order logic is (semi)decidable and that Turing Machine
computations are not, your way of describing Buchi's result calls for
a bit of elaboration.
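I should add, still hedging since I haven't read the paper, that the
textbook shape of such encodings (not necessarily Buchi's own
construction) is roughly this, and it is compatible with the
conventional wisdom:

    One writes a first-order sentence $\mathrm{Descr}_{M,w}$ describing
    $M$'s transition table and its initial configuration on $w$
    (predicates for tape contents, head position and state at each time
    step), and a sentence $\mathrm{Halt}$ saying that a halting state is
    eventually reached. Then $\mathrm{Descr}_{M,w} \rightarrow \mathrm{Halt}$
    is valid iff $M$ halts on $w$. Validity thus stays semidecidable
    (there is a complete proof procedure) while being undecidable; the
    encoding captures individual halting computations, not a decision
    procedure for all computations at once.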
>Second, if logic programming is the paradigm, then least fixed points
>are precisely the terminating positive computations, and the closure
>ordinal for this (it is Tarski's result) is omega. The possible
>anomalies are the greatest fixed points (corresponding to finite
>failure) that have closure ordinals greater than omega. But for
>these, my dear friend Joxan Jaffar and his ex-student Peter Stuckey
>have shown that there exist (conservatively) equivalent programs with
>greatest fixed points having closure ordinal omega. I think these are
>sufficient to convince most people that 1st-order properties are
>sufficient for computation. So, here I support John.
Declarative semantics for programs (e.g. Scott-Strachey-style
denotational semantics and the like) was one of the applications I
listed which reportedly require strongly higher-order logic to
distinguish among defined classes. I don't know what
"conservatively" means in your usage here. If it's not the same
as the "conservative" you used above, it would raise the red flag
of suspicion.
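For concreteness, here is a minimal Python sketch of the least-fixed-point
computation in question: Kleene iteration of the immediate-consequence
operator T_P for a toy Datalog-style program. The program itself is
hypothetical, and for a finite Herbrand base the iteration closes after
finitely many rounds, i.e. at or before omega:

    # Least fixed point of T_P for a toy Datalog-style program.
    facts = {("edge", "a", "b"), ("edge", "b", "c")}

    def immediate_consequence(interp):
        """One application of T_P: everything the rules derive from interp."""
        derived = set(facts)
        # path(X, Y) :- edge(X, Y).
        derived |= {("path", x, y) for (p, x, y) in interp if p == "edge"}
        # path(X, Z) :- edge(X, Y), path(Y, Z).
        derived |= {("path", x, z)
                    for (p, x, y) in interp if p == "edge"
                    for (q, y2, z) in interp if q == "path" and y2 == y}
        return derived

    def least_fixed_point():
        current = set()
        while True:
            nxt = immediate_consequence(current)
            if nxt == current:         # reached lfp(T_P)
                return current
            current = nxt

    print(sorted(least_fixed_point()))
    # includes ('path', 'a', 'c'): the terminating positive computation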
>Now, a few words about ontology. One of the things that has occupied
>my attention in the past year is the migration of ideas on belief
>revision to the stronger domain of revision taking into account the
>coherence of theories, and how abductive revision can be axiomatised.
>In the longer term, my research group wants to consider problems of
>ontology revision as well. For the latter, it is of no use in
>principle to consider as "new" any predicate P that can be defined
>in terms of existing ones, for then the language L + {P} is reducible
>to L, and the extension is conservative.
Well, yes, this is true "in principle". The definition
structure of the semantic lexicon may not affect the model theory
but, practically speaking, it's a crucial part of an "ontology"; it
has its own mathematical (higher-order poset) structure and
independent interest. Logical but random definitions are of no
interest. Buildings are made of bricks, but there's more to
buildings than brick types --- there's architecture. For this
reason strict property inheritance, for example, is interesting and
important even though, as Peter Patel-Schneider has emphasized, it
is superfluous to model-theoretic semantics (it's "conservative").
Foundational model theory is preoccupied with bricks' capabilities.
Despite my having started this KIF/CG HOL/FOL thread, I'd rather talk about
architecture than about bricks, and kinds of theories rather than kinds
of logics.
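As a toy illustration of the brick/architecture point (the hierarchy
below is hypothetical): strict inheritance adds nothing model-theoretically,
since every inherited property is derivable from the explicit definitions,
yet the definitional poset is where the interesting structure lives.

    # Strict property inheritance over a concept-type poset (toy data).
    subtype_of = {                     # immediate supertypes
        "Cat":    ["Mammal"],
        "Mammal": ["Animal"],
        "Animal": [],
    }
    own_properties = {
        "Animal": {"animate"},
        "Mammal": {"has-hair"},
        "Cat":    {"purrs"},
    }

    def supertypes(t):
        """All supertypes of t, including t (reflexive-transitive closure)."""
        seen, stack = set(), [t]
        while stack:
            current = stack.pop()
            if current not in seen:
                seen.add(current)
                stack.extend(subtype_of[current])
        return seen

    def inherited_properties(t):
        """Strict inheritance: union of own properties along the poset."""
        props = set()
        for s in supertypes(t):
            props |= own_properties[s]
        return props

    print(sorted(inherited_properties("Cat")))
    # ['animate', 'has-hair', 'purrs']: nothing beyond the definitions,
    # but the poset (the "architecture") carries its own information.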
[I feel that my own work on "skeleton product" and "fret
product" of posets related by graphs demonstrates the existence of a
"structural dimension" which is truly ontological, although strictly a
priori. The relational structure (directed hypergraph) of a
description is an ontological ingredient of the description along with
the constituent concepts and relations and their hierarchies, and has
a somewhat similar algebraic effect on the poset of all descriptions
ordered by subsumption. It's as though a basic pattern of bricks
amounts to an extra kind of brick.]
> True ontology revision is
>not conservative, possibly not even monotonic. We have been playing
>around with "dynamic" generalizations of Beth-type theorems to
>establish situations in which such revisions are provably necessary.
>OK, what's the deal here? Such processes may require higher-order
>semantics to argue for even soundness.
O.K. Soundness is good, and worth a high price. Revision
may be nonmonotonic, but that needn't make the logic nonmonotonic;
only the "facts" about fallible beliefs are. But I surmise, from your
statement, that nonconservative monotonic refinement is reason
enough for your interesting-sounding conclusion, without bringing
in nonmonotonic revision logics.
>There is no sense in talking
>about completeness unless we are prepared to adopt the ideas that
>Erik Sandewall has been promoting about ontology descriptions (these
>are done outside the object language of course). Moving from one
>model class to another is the essence of ontology revision.
I don't know what Sandewall is saying these days about ontology
descriptions; I'd like to know.
>I hope I haven't injected irrelevancies into your discussion!
>Regards.
>Norman
Hardly.
Yours truly, Fritz Lehmann
fritz@rodin.wustl.edu
4282 Sandburg, Irvine, CA 92715 USA 714-733-0566
====================================================