Full-Name: Tom Gruber
Message-id: <2939660540-3509467@KSL-Mac-69>
Sender: Tom Gruber <Gruber@KSL.Stanford.edu>
Date: Thu, 25 Feb 93 12:22:20 PST
From: Tom Gruber <Gruber@SUMEX-AIM.Stanford.EDU>
To: ontolingua@SUMEX-AIM.Stanford.EDU
Subject: what is translated by ontolingua?

A frequently asked question about ontolingua concerns
completeness: exactly which commitments specified in a common ontology
are made operational in the target system? The short answer is: only
some, because target systems usually don't support the full expressive
power of KIF. A medium answer is: only those commitments having to do
with class, slot, and instance relationships specified using
frame-ontology vocabulary. A more complete answer is given in the
following exchange:

> I have just finished reading your paper "A Translation Approach to
> Portable Ontology Specifications" that is going to be published in
> Knowledge Acquisition. I have a question about what you mean when
> you say that you have written a translator from a common ontology to
> some implementation language such as LOOM, Epikit, or KEE. Haven't
> you really just implemented a translator from this ontology to a
> particular use of the implementation language?

What is translated is an ontology, written in a common format, into a
knowledge base interpretable by a target system. There is no
guarantee that the resulting knowledge base will be sufficient to
perform some inference task or other computation in that language. It
may even be information-losing; for instance, LOOM and KEE can't
understand equations, even though they may be stated in a portable,
declarative way in KIF. Thus, the translation is inherently
incomplete. Also, the ontolingua translator is not specific to one
ontology; it translates any ontology that is consistent with the frame
ontology (with the caveat about incompleteness).
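
To make the incompleteness concrete, here is a hypothetical definition
in the style of the examples from the paper (the class, function, and
slot names are invented for illustration):

  (define-class RESISTOR (?r)
    "A two-terminal electrical component obeying Ohm's law."
    :def (and (component ?r)
              (= (voltage ?r) (* (resistance ?r) (current ?r)))))

The subclass relationship (a RESISTOR is a COMPONENT) is frame-level
information that ontolingua can render in LOOM or KEE.  The Ohm's-law
equation is a perfectly legal KIF sentence, but those systems have no
construct for it, so that part of the definition does not survive the
translation.
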
> For example, let's say I have implemented a knowledge system
> that manipulates arithmetic formulas, such as Y = A + B, in KEE. In my
> system I choose to represent these formulas using text strings and
> parse the text strings every time I need to manipulate the formulas.
> Now, someone else could implement a different knowledge system in
> KEE that shares my ontology, but represents the formulas differently.
> They could choose to represent the formulas by building a parse tree.
> The nodes in the parse tree would be instances of KEE classes that
> represented the operators and variables in the formulas.
> In this example, two translators would have to be written, right?
> The first would translate from the common ontology to my string
> representation in KEE and the second translator would translate from
> the common ontology to the parse tree representation in KEE. So in
> fact, you have to write a translator for each use of a knowledge
> representation language, not per knowledge representation language.
> Is this right or am I confused?

Again, only some of the things you can say in KIF are sayable using
the primitives provided by these frame systems. That set of things is
represented by the frame ontology (that's the intent). Thus, if I say
(subclass-of A B) in KIF, ontolingua will translate it into something
that means the same thing (i.e., has the same deductive consequences)
in LOOM and KEE. If your ontology includes things like constraint
equations, ontolingua doesn't know how to "translate" them into LOOM
or KEE.
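
Concretely (the LOOM form below is only meant to suggest the kind of
output produced, and the cost vocabulary is invented for illustration):

  ;; frame-level KIF: translatable
  (subclass-of A B)
  ;; a plausible rendering in LOOM
  (defconcept A :is-primitive B)

  ;; a constraint equation in KIF: legal, but not translatable,
  ;; because the frame primitives have no way to say it
  (= (total-cost ?x) (+ (parts-cost ?x) (labor-cost ?x)))

The first statement has the same deductive consequences before and
after translation; the second is the kind of sentence that simply has
no target in LOOM or KEE.
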
Instead, one could write a constraint satisfaction engine that could
use LOOM or KEE for its underlying representation of objects. This
engine might have its own language for the constraints themselves, but
this is independent of KEE or LOOM. If you represented all the
equations as strings and stored them as slot values, then ontolingua
would pass them on as strings to LOOM or KEE. In this case, you're
sharing a common ontology of classes and slots, but the actual
constraint language is tool-specific.
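
A sketch of that arrangement (the relation name, instance, and
equation text are all invented for illustration):

  ;; a slot whose values are uninterpreted constraint strings
  (define-relation CONSTRAINT-TEXT (?part ?text)
    "The constraint governing ?part, kept as an opaque string.")

  ;; instance-level data: the equation travels as a string
  (constraint-text PUMP-37 "FLOW = AREA * VELOCITY")

Ontolingua can translate the class and slot structure, and the string
value is passed through to LOOM or KEE untouched; parsing and
enforcing "FLOW = AREA * VELOCITY" is up to whatever constraint engine
the application supplies.
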
Alternatively, the language of constraint expressions could be defined
in the common ontology. In that case, each implementation of an
application that commits to the ontology would have some way to
interpret these expressions. How an application represents the
expressions internally (strings or parse trees) is definitely NOT part
of the ontological commitment. This is the approach we are taking
with the "agent-based" engineering work at Stanford (PACT, SHADE,
SHARE, DesignWorld), where "agents" are engineering tools/services
that commit to a common ontology and exchange their data and requests
using KIF and KQML. Ontolingua does NOT translate from KIF into
specific tool formats, like CAD file formats or Mathematica
statements. Those translations are done by the agent "wrappers".
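
As a rough sketch of such an exchange (the performative and parameter
names follow common KQML usage; the agent names, ontology name, and
vocabulary are invented for illustration):

  (tell
    :sender   design-agent
    :receiver analysis-agent
    :language KIF
    :ontology shared-device-ontology
    :content  (= (total-mass assembly-1)
                 (+ (mass part-a) (mass part-b))))

The :content is a declarative KIF sentence built from the shared
vocabulary; how the receiving agent's wrapper turns it into, say, a
Mathematica expression or a CAD update is its own business.
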
The one-ontology, multiple-agents architecture is the one proposed for
the VT experiment as well. Each participant solves the same design
problem, which is specified using a constraint vocabulary that is
defined in the common ontology. Each implementation can do whatever
it wants with the data (e.g., turning them into bit strings for
genetic algorithms if that's how their system works). For the
"terminological" information, such as subclass relations and slot
constraints, ontolingua can be used to translate the common ontology
into representation systems like LOOM. But the constraint expressions
themselves will need to be parsed by a tool-specific wrapper. Perhaps
a generic constraint translator could emerge from this exercise, but
that is not what ontolingua does.

tom