Re: Roles, again
Don Dwiggins <dwig1@village.ios.com>
Date: Thu, 21 Sep 1995 00:23:48 -0400
From: Don Dwiggins <dwig1@village.ios.com>
Message-id: <199509210423.AAA10577@village.ios.com>
To: phayes@cs.uiuc.edu
CC: cg@cs.umn.edu, fritz@rodin.wustl.edu, guarino@ladseb.pd.cnr.it,
srkb@cs.umbc.edu, carrara@cs.umbc.edu, giaretta@ipdunidx.unipd.it
In-reply-to: <199509140037.TAA28478@eris.ai.uiuc.edu> (phayes@cs.uiuc.edu)
Subject: Re: Roles, again
Reply-to: Don Dwiggins <dwig1@village.ios.com>
Sender: owner-srkb@cs.umbc.edu
Precedence: bulk
Like Nicola, I've had trouble lately finding time to compose a coherent
response to Pat's reply. One compensation is that others have had time
to chime in with other viewpoints.
Pat writes:
    What I meant was only that I'm suspicious of relations that have too
    many argument places. In this case for example I'd be inclined to
    hypothesise that there were things called 'sellings' which have
    various properties. The problem with putting everything into
    arguments of a relation is that there is no way to know how many you
    will need. For example, consider the relation of sold-to-for-attime,
    which has 5 arguments, and sold-to-for-attime-atplace, with 6, etc.
    If these are all different relations, then one needs axioms to
    establish the connections between them.
And it can get worse; my example of selling was a pretty simple one.
Consider what's involved in selling a house, or some of the complex
products in the stock market. I suspect that relations of 10 or 20
places could be found in many sufficiently large real world domains.
That's one reason I'm interested in principled methods of breaking them
down and dealing with their internal structure (or more commonly in
design, building them up incrementally so as not to have to backtrack
and start over too many times).
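To make the contrast concrete, here is a minimal sketch (the names and
notation are mine, purely for illustration) of the same sale expressed
once as a single wide tuple and once as a reified 'selling' with one
binary assertion per role:

    # Flat n-ary form: each new aspect (price, time, place, ...) forces
    # a new, wider relation, plus axioms connecting it to the narrower ones.
    sold_to_for_attime = ("alice", "bob", "lawnmower", 40, "1995-09-21")

    # Reified form: one event individual, one role assertion per aspect.
    # New information is just another (role, event, value) triple.
    facts = [
        ("type",   "e17", "selling"),
        ("seller", "e17", "alice"),
        ("buyer",  "e17", "bob"),
        ("item",   "e17", "lawnmower"),
        ("price",  "e17", 40),
        ("time",   "e17", "1995-09-21"),
    ]

The point is only that the second form grows by adding triples, not by
multiplying relation names.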
Actually, you've touched on one of the basic issues that a designer has
to face: where do you draw the boundaries of your domain? Since you're
not trying to capture all of reality, the task at hand provides the
guidelines (i.e., the "firm extensional ground"), but there are still
difficult cases (particularly when trying to anticipate the evolution of
requirements).
Peter Clark:
    The main point: the role x in relation R is defined with respect to
    some schema about what R means. (If you don't have some schema, then
    saying the relationship exists is largely a meaningless statement).
Yes, but when trying to come up with that schema, it can be useful to
isolate the roles (individually or in small groups) to get a simpler
view of the relation, as I discuss above.
Dwig:
    relations wind up looking a lot like compound objects.
Pat:
    Just say, looking like objects; that's what they do, indeed. But
    these objects aren't any more compound than any other. The 'roles'
    aren't PART of the relation like the wheels of my car are part of it.
Why not? They can certainly be usefully viewed that way, just as it
might be useful to view a component of a compound as a "role filler".
BTW, when I distinguish a compound object from an atomic one, it's in
the context of a particular modeling task, and taking certain binary
relations to be "compositional". Atomic objects don't have components
_in the subject domain_. What I'm doing here is a conceptual analogue
of the logical reification of relations. For example, as John points
out, there are obligatory and optional arguments to relations;
similarly, compounds may have obligatory and optional components. I
sometimes think the only difference between a relation and a compound is
how you look at it.
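Here is a purely illustrative sketch (the class and field names are
mine) of the 'selling' viewed instead as a compound object, with
obligatory and optional components playing the part of obligatory and
optional arguments:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Selling:
        # Obligatory components: every selling has these fillers.
        seller: str
        buyer: str
        item: str
        # Optional components, just as a relation may have optional
        # arguments that are sometimes left unstated.
        price: Optional[float] = None
        time: Optional[str] = None
        place: Optional[str] = None

    # Seen one way, this is a compound with parts; seen another, it is
    # a (partially filled) row in the selling relation.
    sale = Selling(seller="alice", buyer="bob", item="lawnmower", price=40.0)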
Dwig:
    In a sorted logic, yes. But logic -- even sorted logic -- hides just
    the distinctions and associations I'm trying to get at. It's like
    looking at an assembly language program and trying to recover the
    design.
Pat:
    No, it's not. That's a common mistake. Representational languages
    aren't like implementation languages in this way. There isn't any
    notion of 'level' or compilation, etc., because there isn't any
    machine code. Is ZF set theory like assembler? Or Carnap's
    axiomatisation of geometry? Or Cyc? All you can say is that some
    axioms talk about more familiar ideas than others, but that depends
    on the reader.
    ...
    Again, I think you are making a conceptual error. Logic isn't at any
    'level', it's a very, very general-purpose notation for expressing
    things.
I believe there is a very real notion of level that applies here. A
rough characterization: given two conceptual tools, A and B, and an
intellectual task to be accomplished, A is higher level than B for the
purpose if it better facilitates thinking in terms natural to the task
and requires less effort to encode the task into the terms of the
tool. Some examples may help:
- I used to tease my colleagues by stating that IBM360 assembly language
was higher level than Fortran. After getting a reaction, I'd explain
that the 360 had single instructions for some character string
operations (copying, comparing, etc.), which required more than one line
of Fortran. In fact, it was a bit more than a tease: if you had some
serious string manipulation to do, you were better off in assembler than
Fortran (of course Snobol was more at the right level... am I showing my
age?).
- While sorted predicate logic can be formally reduced to standard form,
it's easier to deal with, both for humans and for theorem provers (a
small illustration follows this list).
- Most analysis and design methods offer separate notations for dealing
with structure and behavior. It's important for designers to be able to
focus on different views, using tools appropriate to the view. People
have experimented with integrated notations that capture all aspects,
but they tend to be cluttered and awkward to work with -- lower level,
in this sense.
- Imagine that you have to express the grammar of a fairly complex
language. Would you do it directly in predicate logic, or use a grammar
notation appropriate to the purpose (CFG, attribute grammar, ...)?
- Your "tables of relation names" is a mini-notation for a specific
design purpose, to "refresh my own memory about which argument is
which".
Pat:
    You might just TRY logic for doing design. I bet it will do just as
    well as anything you've seen so far; and if not, I'd be very
    interested to know exactly where it fails.
People do use logic for this purpose, more in Europe than in the US.
Languages like Z and VDM have found serious use. However, they aren't
the languages people conceptualize in. In fact, typical presentations
of Z designs intersperse the formalism with English descriptions,
graphical notations, etc., to help people understand the logic. The
benefits of translating a design into a formal notation are that the
exercise forces you to analyze carefully what you thought you meant,
and, of course, that you can then reason about the resulting
specification. Formalization is also becoming increasingly important
for the design of concurrently executing systems, simply because
they're damnably difficult to understand, and almost impossible to
test thoroughly enough.
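As a toy illustration (my own, not drawn from Z or VDM): a single
temporal-logic formula can state a requirement of a concurrent design
(say, mutual exclusion) that must hold over every possible
interleaving, where testing can only sample a few of them. Here \Box
reads "at every point of every execution":

    % Mutual exclusion, a hypothetical property for illustration: the
    % two processes are never both in the critical section.
    \[ \Box\, \neg\bigl(\mathit{in\_cs}(p_1) \wedge \mathit{in\_cs}(p_2)\bigr) \]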
Don Dwiggins               "Things should be made as simple as possible,
dwig1@village.ios.com       but no simpler"
                                -- Albert Einstein