Standardizing FOL

Reply-To: cg@cs.umn.edu
Date: Tue, 21 Sep 93 08:26:50 EDT
From: sowa <sowa@turing.pacss.binghamton.edu>
Message-id: <9309211226.AA10294@turing.pacss.binghamton.edu>
To: cc:%turing.pacss.binghamton.edu@cs.umn.edu, cg@cs.umn.edu,
        interlingua@isi.edu, phayes@cs.uiuc.edu
Subject: Standardizing FOL
Pat,

When I talk about standardizing FOL, I mean the KIF and CG coalition,
which supports metalanguage and context mechanisms that go far beyond
plain vanilla first-order logic.  But KIF & CGs go beyond it in a way
that allows you to define any kind of ontology or system of inference
that you prefer.  You can define varieties of temporal logic, modal
logic, intensional logic, or even fuzzy logic.

I didn't have a chance to catch up with my email last week, so here
are some comments on your note of September 14th:

> ... And I don't think we need 'full-blown' 
> higher-order logic. But consider the possibility of a second-order 
> logic which is full-blown in this sense but restricted in other ways;
> a linear second-order logic, say; or a second-order logic whose
> quantification over predicates is restricted by some kind of context 
> mechanism but otherwise full-blown (full-blown in a box, it might 
> be called airbag logic). I'm making these up as I go along, but my 
> point was that we need to be experimenting, not standardising.

Yes, that is what we are doing with KIF and CGs.  What they provide
is a hierarchy of metalevels.  Start with a conventional domain of
discourse D -- say a typical first-order model -- and a first-order
language L0 that talks about individuals in D.  (You can assume your
favorite syntax for L0, since we are standardizing the semantics
and providing two starting syntaxes -- KIF and CGs -- but you can
define any linear, graphic, or other syntax you prefer.)
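
To make the bottom of that hierarchy concrete, here is a minimal
sketch in Python (the individuals and predicate names are invented
for illustration, and this is not KIF or CG syntax): a domain D of
individuals and an interpretation of the L0 predicate vocabulary
over D.

    # Domain of discourse D: a set of individuals.
    D = {"Clyde", "Fido", "Felix"}

    # Interpretation of the L0 predicate vocabulary as relations over D.
    interpretation = {
        "Elephant": {("Clyde",)},
        "Dog":      {("Fido",)},
        "Cat":      {("Felix",)},
    }

    def holds(pred, *args):
        """True if the atomic L0 sentence pred(args...) is satisfied in D."""
        return tuple(args) in interpretation.get(pred, set())

    # A quantified L0 statement: every individual is a Dog or is not a Dog.
    assert all(holds("Dog", x) or not holds("Dog", x) for x in D)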

Then we have our next language L1, which is also first-order.  Its
syntax is identical to L0, except for a little more vocabulary that
lets us talk about the language L0, the domain D, and the relationship
between L0 and D.  In L1, we can extend L0 by giving new definitions.
We can even define what it means for a statement in L0 to be "true"
-- i.e. we can define a Tarski-style model theory or we can define
a Hintikka-style language game for L0.  We can also talk about the
statements in L0 and we can divide them into different groups, say
a bunch L that we will call "laws" (that L should be a curly script L
to distinguish it from the language names L0, L1, ...) and another
bunch that we call "contingent" facts.  Then we define "necessarily
true" as "provable from the laws" and "possible" as "consistent with
the laws."
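
As a rough sketch of that L1 layer (Python again; the toy
forward-chaining "prover", the example sentences, and the Horn-rule
format are all my own invention, standing in for whatever real prover
you attach):

    # L0 sentences are treated here as plain strings that L1 talks about.
    # Laws (the curly-L bunch) are Horn rules: (list of premises, conclusion).
    laws = [
        ([], "elephant(Clyde)"),                  # an unconditional law
        (["elephant(Clyde)"], "gray(Clyde)"),     # elephants are gray
    ]
    contingent = ["likes(Clyde, Fido)"]

    def provable(p, rules):
        """Toy forward chaining: is sentence p derivable from the rules alone?"""
        known, changed = set(), True
        while changed:
            changed = False
            for body, head in rules:
                if head not in known and all(b in known for b in body):
                    known.add(head)
                    changed = True
        return p in known

    def necessarily(p):
        return provable(p, laws)                  # provable from the laws

    def possibly(p):
        return not provable("not " + p, laws)     # consistent with the laws

    assert necessarily("gray(Clyde)")
    assert possibly("likes(Clyde, Fido)")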

Then our next language L2 is also first-order.  Its syntax is
identical to L0 and L1, and it doesn't need much new vocabulary,
since we already had to put the metatalk into L1.  But its domain
of discourse is the union of D and L0 and L1.  In L1, we can say
such things as "Proposition p has certainty 0.92," where p is a
statement in L0.  Then in L2, we can define rules of fuzzy inference
that specify how the certainty factors in L1 about statements in L0
are to be combined to derive new certainty factors for the results.
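
A small sketch of that split (Python; the 0.92 and 0.85 numbers and
the minimum rule for conjunction are just illustrative choices, not
anything mandated by KIF or CGs):

    # L1 assertions: certainty factors attached to L0 statements.
    certainty = {
        "elephant(Clyde)": 0.92,
        "gray(Clyde)":     0.85,
    }

    # An L2 rule of fuzzy inference: how L1 certainty factors are combined.
    # Here the conjunction of two statements takes the minimum certainty.
    def conj_certainty(p, q):
        return min(certainty[p], certainty[q])

    # Derive a new certainty factor for a result built from L0 statements.
    certainty["elephant(Clyde) and gray(Clyde)"] = \
        conj_certainty("elephant(Clyde)", "gray(Clyde)")    # 0.85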

Then in L3, we can say "Remember all that junk we defined in L2?
Well, forget it!"  Then we take the whole hierarchy of languages
L0, L1, L2, L3, L4, L5, L6, ..., and scrunch them into one big
language called L* -- and that is what you get with KIF and CGs.

As for airbags, we've got 'em.  The context mechanism in CGs draws
boxes around collections of propositions (theories, if you like).
Then outside the boxes, you can talk about what's inside, what it
means, how it relates to some other box, etc.  The CG boxes map
to the KIF quotes, and thence to any other language that maps from KIF.
If your other language isn't as expressive as KIF and CGs, then you
can say things in KIF or CGs that limit what can be said in certain
boxes.  Then you send those impoverished little boxes to the
impoverished little communities that speak those little languages.
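
Here is a hypothetical sketch of that kind of box mechanism (Python;
the Box class, its fields, and the metalevel relation names are
invented for illustration and are not the CG or KIF constructs
themselves):

    class Box:
        """A context: a named collection of propositions, quoted rather than asserted."""
        def __init__(self, name, propositions):
            self.name = name
            self.propositions = list(propositions)

    physics  = Box("physics",  ["(= (force ?m ?a) (* ?m ?a))"])
    folklore = Box("folklore", ["(flies Pegasus)"])

    # Outside the boxes we can talk *about* them: what is inside,
    # what it means, and how one box relates to another.
    meta_statements = [
        ("more-reliable-than", physics.name, folklore.name),
        ("describes", folklore.name, "Greek mythology"),
    ]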

> I guess my most recurrent worry is this. One of the reasons we feel 
> 'safe' with FOL is that while acknowledging that we will need extensions 
> to it, we all feel that they will indeed be extensions, ie they can 
> hardly be weaker than FOL. (After all, what would you take out? 'NOT', 
> say? Nah...) But maybe this intuition is mistaken. Maybe in order to 
> get the comfort back we will need to sacrifice some expressive power 
> of FOL for something else, and do so in a systematic way which forces 
> us to alter the semantic base significantly. 

Yes.  But you can use the metalanguage to limit the expressive power
of what you can say in certain boxes.
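
For instance, one might write a metalevel check like the following
(a Python sketch; forbidding explicit quantifiers is just one example
of a restriction you could impose on a box destined for a less
expressive language):

    FORBIDDEN = ("forall", "exists")

    def within_sublanguage(propositions):
        """True if no proposition in the box uses a forbidden construct."""
        return all(not any(tok in p for tok in FORBIDDEN)
                   for p in propositions)

    # A box restricted to Horn-style rules with no explicit quantifiers.
    horn_box = ["(=> (dog ?x) (animal ?x))", "(dog Fido)"]
    assert within_sublanguage(horn_box)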

> This is related to a worry I expressed in earlier correspondence. We 
> seem to need ways of describing sets of sentences which are different 
> from any of the usual semantic frameworks, because these always give 
> us (sub)languages, ie are always closed under some kind of inference. 
> What could these be? If we find them, perhaps they will be more like 
> what we need for the semantics of a representational language. Ie, 
> perhaps thinking of assertional logics (of any kind) as the appropriate 
> vehicles for representing thought may be misleading us. This has often 
> been expressed before, but the replacements offered have usually turned 
> out to be just FOL in disguise. I don't have any replacements, 
> just a feeling that we may need a major conceptual leap, and a concern 
> that talk of standardisation in Krep is more likely to smother than 
> encourage the strength of imagination required.  If we feel too 
> comfortable we might doze off :-)

What we are trying to do with KIF and CGs is to stop uselessly
reinventing the wheel and to start doing some productive kinds of
experimentation.  Once we have a standard defined, we can start
building tools that people can use to begin the real fun of
experimenting with the definition of new theories, without the dog
work of implementing a new parser, theorem prover, graphics display,
debugging package, etc.  The real work is in defining new ontologies,
not in working for two years to build the tools before you can even
begin.  And the tools require standards.

John Sowa