KIF version 2 review from Gruber

Date: Fri, 8 Jun 90 09:31 PDT
From: Richard Fikes <pwtc!fikes@labrea.stanford.edu>
Subject: KIF version 2 review from Gruber
To: interlingua@venera.isi.edu
Cc: pwtc!Fikes@labrea.stanford.edu, Gruber@sumex-aim.stanford.edu
Included-Msgs: <2853789930-894085@KSL-Mac-69>,
               The message of 7 Jun 90 16:25 PDT from labrea!Gruber@sumex-aim.stanford.edu,
               The message of 7 Jun 90 16:25 PDT from Tom Gruber
Message-id: <19900608163137.7.FIKES@PESO>
FYI --

    Date: Thu, 7 Jun 90 16:25 PDT
    From: Tom Gruber <labrea!Gruber@sumex-aim.stanford.edu>
    To: Michael Genesereth <mrg@sunburn.stanford.edu>, Narinder Singh <singh@hudson.stanford.edu>
    cc: Richard Fikes <fikes@tc.pw.com>, Ramanathan V. Guha <guha@sumex-aim.stanford.edu>
    Subject: KIF version 2 review

    Here are some comments on the KIF version 2 document dated June 1990.
    Overall it reads very well, much more scholarly than most standard
    documents and reference manuals.  I especially liked including a
    discussion on conceptualization up front.

    p5:  What about packages and case?  Do shared KBs come with a Common
    Lisp prologue that does a bunch of make-package calls and hacks on
    *readtable*?
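
    For instance, a minimal sketch of such a prologue (the package name
    is invented):

        ;; Establish a known package before reading a shared KB, so
        ;; that two sites intern the same symbols.
        (unless (find-package "SHARED-KB")
          (make-package "SHARED-KB" :use '("COMMON-LISP")))
        (in-package "SHARED-KB")
        ;; ...plus whatever *readtable* adjustments the KB format needs.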

    p22: bottom of page: "that begins with {\em list} refers to..."
    Shouldn't it be {\tt list}?

    p24:  Need the example to be about possibility.  Necessity doesn't
    clarify it for me, and most people who use modals -- whether they know
    it or not -- are using them for some flavor of possible worlds.
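
    Something along these lines, say (the operator spelling and the
    relation names are just my guesses):

        (possible (exists ?x (and (coin ?x) (heads ?x))))

    i.e., a statement true in some accessible world even if false in the
    actual one.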

    What is the universe of discourse for a shared KB?  Can one pick up a KB
    in declarative form and determine the space of objects it might refer
    to?

    If somebody writes metalevel statements that are universally quantified,
    over what space do they range (deduced facts included)?

    p24: typo: "equal TO 1+1"

    p25:  This needs more elaboration.  After being very clear about the
    semantics of the language, the document hand-waves on terms.
    I think I understand why, but let's try to do a good job on this.
    "relate a concept to any sentence that define it" isn't clear to me.
    Might as well say that definitions are special kinds of meta information
    that enable efficient inference on a restricted class of statements.
    I would include in the discussion that the clauses in a definition can
    also be thought of as *constraints* (on well-formedness, or whatever).
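
    For instance, in a definition like this (my guess at the defrelation
    syntax; bachelor, man, and married are invented names),

        (defrelation bachelor (?x) :=
          (and (man ?x) (not (married ?x))))

    the defining clause can be read either as a biconditional that
    licenses inference or as a constraint that any assertion about
    bachelor must satisfy.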

    Can these definitional expressions be interpreted?  In other words, can
    some metalevel program generate a defrelation at runtime or does it
    accomplish this task directly with the "defines" relation?

    "defines" isn't in the appendix yet.

    p25: typos:  "the second argument of a list" -> "is a list".
	    "letalevel information"
	    "teh facts about"
    p26: "primtive"

    p28: "the probability that the prior probability of"

    p28: "a two argument version".  Can a function take a optional
    arguments? If so, what are the semantics?  It seems to me they are two
    functions, or there is an implicit default value for the missing second
    term.
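
    To make the two readings concrete (prob and background are invented
    names): either these are two distinct functions,

        (prob ?s)        ; unary: prior probability of sentence ?s
        (prob ?s ?e)     ; binary: probability of ?s given evidence ?e

    or the unary form abbreviates the binary one with a default, as in

        (= (prob ?s) (prob ?s background))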

    p28: I don't think that's Bayes' Rule.  I thought it involved
    conditional probabilities and a summation.
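
    For the record, the form I remember is

        P(H_i | E) = P(E | H_i) P(H_i) / sum_j P(E | H_j) P(H_j)

    with conditional probabilities in the numerator and a summation over
    the competing hypotheses in the denominator.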

    What about skolemization?
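
    For example, can a KB ship a sentence like

        (forall ?x (exists ?y (parent ?y ?x)))

    in its skolemized form

        (forall ?x (parent (sk1 ?x) ?x))

    where sk1 is a new function symbol (a made-up name here) outside the
    shared vocabulary?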

    p30: "In the context of Lisp, actions are individual Lisp subroutine
    calls and percepts are the results..."  More precisely, in keeping with
    the previous paragraph, it might be said "actions are destructive memory
    operations and percepts are memory fetches" - maybe make the
    shared memory distinction with global variables.
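
    In Lisp terms, something like this (the variable name is invented):

        (setq *cell* 1)   ; an action: a destructive write to shared memory
        *cell*            ; a percept: a fetch of the current value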

    p31: "Theory" crept in there around page 31.  It needs to be a formal
    concept, since it is an argument to the function called "interpret".

    Sections 16 and 17 don't seem like what one would expect in a standards
    document.  However, they do motivate the meta level.  Maybe there ought
    to be some signage earlier in the paper, possibly reflected in a
    different section organization, that says "and in this part we'll talk
    about all the things one might want to do with QUOTE and PROVABLE.
    These things are not part of the standard, but..."   
    Should they be?

    Are there alternate paradigms for using KIF to share knowledge AND
    programs?  For instance, what would it be like to just use the KIF
    standard to talk to databases that don't do any inference?  Are there
    conditions on soundness in this case?  Or what about using KIF to build
    a portable version of something like Forbus' QP theory?  Imagine that
    Forbus provides a big KB full of process models and abstract views of
    objects and the like.  In there are some statements that manipulate
    expressions in an algebra, with operators outside of the KIF basis set.
    Can the meta level machinery be used to help make the interpretation of
    the algebra shareable?

    THE APPENDICES

    "denotation" - why return 0, why not NIL?
    "cond" - ...otherwise return "false"
    "and" - might want to add that one can't count on order of evaluation.
    "defunction" - PLEASE, let's not continue with this nerdly habit of
	    eliminating doubled consonants.  We get more than 6 chars these
	    days.
    "exists" and "forall" - was there a reason discussed why these don't
	    take scoping arguments as in CycL?  
	    (e.g., (forall x (allInstances Someclass) ...)
	    Wouldn't it be nicer to have a relation that was true of all
	    objects and force the user to say that, yes, I really mean to do a
	    global search.  (Cyc uses (allInstances #%Thing) which I
	    wouldn't recommend.)
    "cut" - uh oh.  Doesn't look like knowledge interchange format anymore.
	    This needs more of an explanation in the text.
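
    Here is the contrast I have in mind under "exists" and "forall"
    (the relation spellings are my own):

        ;; CycL-style, with an explicit scoping argument:
        (forall x (allInstances Person) (mortal x))

        ;; Unrestricted equivalent, encoded with an implication:
        (forall x (=> (instance x Person) (mortal x)))

    The restricted form bounds the search up front instead of filtering
    the whole universe of discourse.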

    What about variable scoping rules?

    Where is "entails" defined?  It looks like it might be part of the
    standard.  If it is not, the example should say so.  

    Same for "in" and "support".

    "Owns is the inverse o (typo) owned-by."
                                                                                                
    VOTES

    I prefer '?'.  '$' reminds me of assembly language.  I write formatters
    to make it look good on the page.  (Mike, you might be interested to
    note that Jim Rice has written a printer/reader interface to CycL that
    puts things in infix notation and turns things like #%LogImplication
    into nice symbols.  This is rather trivial.  I gave him your book to use
    as an example.)

    setof - I agree that the nonmonotonic version makes more sense
    computationally.  However, until we see the nonmon part of the proposal
    it's hard to know what this costs.

    restricted quantification - I want it for efficiency and human
    comprehension. See the note above under "exists".

    STUFF NOT IN THERE

    What about macro processing?  Should we just agree that we can ship
    files full of Lisp macros and their definitions that, when evaluated,
    produce pure KIF?  One reason to include a macro processing convention
    in the standard is to allow people to write portable programs that
    analyze KIF KBs, like cross referencers for Common Lisp.  These are
    really useful for ensuring KIFdom, if nothing else.  One could deliver a KIF
    verifier in the public domain, as one of the sharing tools.  (I wrote
    such a thing when CL was just finished.  It helped flush out a lot of
    nonportable stuff in the DEC and TI implementations; DEC published it
    with their product.)
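
    As a sketch of the convention (the macro and relation names are
    invented):

        ;; A Lisp macro whose expansion is a pure KIF sentence, so a
        ;; portable analyzer need only macroexpand the file.
        (defmacro def-symmetric (rel)
          `(forall (?x ?y) (<=> (,rel ?x ?y) (,rel ?y ?x))))

        (macroexpand-1 '(def-symmetric connected-to))
        ;; => (FORALL (?X ?Y) (<=> (CONNECTED-TO ?X ?Y) (CONNECTED-TO ?Y ?X)))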

    Also, there needs to be some mechanism for distinguishing meta level
    statements, so that people not sharing identical interpreters have some
    chance at figuring them out.  If they are just thrown in there with the
    domain models, I'm not sure whether fancy KBs can be shared.

    What about the order of sentences?  It seems to matter in the EpiKIT
    implementation of KIF.