26 Aug 1997. Length about 5900 words (38000 bytes).
Temporal and situated aspects of data
Data as held by standard data management software may be richly
connected in a variety of ways, and yet the relationship of the stored data to
the world it is supposed to model is assumed to be very simple. There are
aspects of each stored instance value, such as its individual interactions with
time and context, which require a more sophisticated treatment of that
relationship. Representing time in a database is currently done by extending
the schema to include time values; yet a person's salary and its increment date
have a temporal behaviour that is an attribute of the salary value, not of
the person entity. Furthermore, it is often useful to record the time of
events happening to data values themselves (not to anything in the world being
modelled) e.g. a data update event.
This research proposes to tackle many such issues systematically, thus avoiding
the indirect and hence troublesome modelling otherwise necessary. Furthermore
we have found an apparently very simple domain -- basic accountancy -- which
exhibits them in profusion, and which we propose to use to develop and test
solutions. Currently the most common tool for accountancy is a standard
spreadsheet. A spreadsheet is a data management tool which has a simplified
data model, but otherwise exhibits the same oversimplification as do more
complex applications, e.g. database systems. We plan to extend the data model
of spreadsheets to cope with the problems mentioned above in the expectation
that similar extensions will be applicable to the more complex data models
underlying database systems. The user interface aspects of spreadsheets
similarly turn out both to need re-analysis and to illuminate fundamental
issues in HCI. The resulting improvement is important partly to maintain
usability and partly to exploit the gains from the new approaches.
Although computer spreadsheet applications are explicitly derived from
the paper practices of accountants, when analysed they are quickly seen to be a
rather poor fit to the task of even very basic accountancy. They support
arithmetic, but not most of the tasks that accountants are using arithmetic to
accomplish. Calculating and reporting totals and other figures is only part of
the job. Accounts must also show what they represent, how the figures are
derived, what assumptions they are based on, and at what time the assumptions
and figures were made. This is important so that accountants can check their
own workings, but is equally important in presenting accounts to others even
though less detail may be appropriate then. It is well known that spreadsheets
turned out to be used for financial planning as much as for accounts
-- for reasoning backwards from totals as much as for calculating totals
from known items -- yet this requirement has not been well reflected in
spreadsheet programs. Even though the relationship between totals and
components is the same in both cases, in a spreadsheet you have to rewrite the
formula completely and move it to a different place, so the connection is
represented only in the user's head. We believe that the reason that
spreadsheet design has stabilised, but in forms that are a poor match to
accountants' tasks, is that substantial progress depends on developing and
applying solutions to the relatively profound data modelling problems referred
to earlier, and discussed below.
We believe that our opening claim -- that the relationship of data values to
the world they are meant to model requires a more sophisticated treatment --
applies to many of the issues that litter database practice: issues that have
been noted and discussed piecemeal but not systematically dealt with, including:
- Calendar time: functions from durations to e.g. money
- Time sequence constraints
- Recording data entry times
- Data acquisition failures (data faulty or missing)
- Presupposition failures: where the world entity has a state not describable
by the data schema
- Recording justifications for values
These could be viewed as related by a single underlying issue: the trend
towards analysing computer programs in the context of the human and social
systems in which they are used -- towards dealing with "situated computation".
A database designer not only has to design a data model to describe the
relevant parts of the external world, but also to take account of the common
retrievals (e.g. to provide indexes for fast retrieval), and the method for
data entry and how to approach the problem of validation. Issues about null
values are issues about how to handle errors in this wider system which
includes human actions of data entry.
What is proposed here has three major aspects which we believe will be
much better tackled in conjunction, as interleaved aspects of a single research
activity, than any would if studied alone. The aspects are (1) theoretical
problems in data modelling and programming (e.g. temporal aspects of data,
supporting reasoning about calculations), (2) a simple and limited domain
-- basic accountancy -- that is nevertheless of great practical importance
so that we can quickly build a working program and so test the adequacy of our
ideas against real users and their concerns, and (3) developing a novel user
interface of high usability partly because the problem domain demands it, and
partly because the domain also brings out some important and under-explored
issues in user centred system design. This research strategy has the weakness
that the theoretical solutions developed will not be demonstrated to be general
within this grant (though we expect to generate arguments supporting their
claim to generality); but it has the strength that the solutions' adequacy for
one real application will be tested on users. We believe that this is
appropriate, especially for relatively early work. In contrast, many
demonstrator systems only demonstrate the possibility of construction, and do
not test the performance and usefulness of the design in (any) practice. We
plan to gain real (perhaps harsh) feedback on our ideas, at the price of
leaving to the future some questions about generality and scaling up. However
the maturity of the field of data modelling as a whole gives us a much better
chance of filtering ideas for probable generality than we would have in a newer
field. Our research strategy, then, is in effect one of rapid prototyping of
solutions in one domain to some theoretical problems which we believe to be
very general.
- Gathering requirements and other task information from interviews
with domain-expert users e.g. accountants and others without computing
backgrounds, but whose job involves financial management. (We have verbal
expressions of willingness from several people, and a letter from the director
of a small company, currently with 3 employees, agreeing to be interviewed, to
try out and to critique our software, and to use it as far as possible in
managing the company's accounts should it prove sufficiently usable.) 1
person-month, including writing up accounts of this.
- Developing the theoretical ideas outlined below, and applying them to the
domain. 6 person-months.
- Building an application program on the Macintosh for basic accountancy.
To users it will appear as a specialised development of spreadsheet programs.
Technically however it will have far fewer mathematical functions, but far more
complex data objects, together with extensive use of (and multiple visual
representations of) the graph structure implied by the formulas and the
relationships they specify between items (cells). 3 person-months for design;
12 person-months for implementation and in-house testing, of which perhaps 8
for the first version, allowing 4 for later versions after user testing has
begun.
- Testing the program with domain users, and iterating the design as necessary. 1
person-month interacting with the users, gathering their feedback.
- Writing documentation and reports. 1 person-month.
The simplest view of what databases do is to provide a replica of the
world, albeit partial and abstracted. This is what "model" means to many
people. The ideal case is when the model is entirely accurate in those aspects
it describes: like an extension of direct perception, or remote sensing by
instruments. In many cases considerable effort is expended on the human
systems in which databases are embedded to ensure that data is complete,
accurate and up to date, at least by the time it is consulted. Because of this
effort, the symptoms resulting from deviations from this ideal are usually
regarded as minor problems of advanced database practice. We, on the contrary,
attempt to view these as a systematic and related cluster of theoretical
shortcomings. We begin by discussing developments to modelling the world, and
then go on to issues of modelling states of the recorded data itself.
Some objects to be modelled have intrinsic temporal behaviour such as a
person's age. If this is stored as a length of time, it will need continual
updating, while if birth date is stored then age can be exactly calculated for
any time. The accountancy domain has a large number of such cases. Salary
costs, rentals, insurance payments etc. all have regular though more complex
temporal behaviour. A first pass at a design for these items might use amount,
recurrence frequency (e.g. monthly), and number (not all such payments recur
for ever e.g. hire purchase agreements). The first and last payments are often
a different amount and need to be describable. The next consideration is to
describe any known rules for changes e.g. annual salary increments or national
pay awards. In general, an item needs to be treated not as a single scalar
value but as a function from duration to a value. Functions of time are
particularly important because of the interaction with issues of data status:
whether the data is up to date.
If all values (e.g. money amounts) are a function of time duration, then
producing accounts for any arbitrary sub-period becomes little more than a
simple retrieval. At present similar calculations have to be set up by every
user without support: typically time spans (e.g. in months) are calculated in
the head and typed into an overt calculation of the money amounts (together
with a proportion of errors). Obviously it is faster, more flexible, and more
likely to be accurate if a general rule using a complete description of the
real data (the behaviour of the world) is used. A similar point applies to
examples such as salaries: if all items have this kind of information, then we
avoid the problems of their treatment on ordinary spreadsheets where you must
allocate extra cells per item, thus cluttering the spreadsheet. (This is
really the familiar point that when there is a repetitive sequence of tasks,
these should be supported in a constrained way.)
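As a sketch of this idea (the class and its field names are ours, purely illustrative, and Python is used only for concreteness), a recurring item such as a hire purchase payment can be stored as a description and evaluated for any sub-period, rather than as a scalar:

```python
class MonthlyPayment:
    """A money amount described as a function of time rather than a scalar:
    a fixed amount recurring monthly from a first month, for a limited
    number of payments (e.g. a hire purchase agreement)."""

    def __init__(self, amount, first_month, count):
        self.amount = amount            # pence per payment
        self.first_month = first_month  # (year, month) of the first payment
        self.count = count              # total number of payments

    def total_for(self, start, end):
        """Sum of the payments falling in months start..end inclusive,
        each month given as a (year, month) pair."""
        def index(ym):  # months elapsed since the first payment
            return (ym[0] - self.first_month[0]) * 12 + (ym[1] - self.first_month[1])
        lo = max(0, index(start))
        hi = min(self.count - 1, index(end))
        return self.amount * max(0, hi - lo + 1)

# A 24-payment agreement of 150.00 per month starting January 1997:
hp = MonthlyPayment(15000, (1997, 1), 24)
print(hp.total_for((1997, 4), (1997, 6)))   # 45000: three payments fall in that sub-period
```

Producing accounts for an arbitrary sub-period then reduces to a retrieval over such descriptions, with no mental month-counting by the user.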
A different kind of time behaviour depends not on calendar
(quantitative) time, but on sequence. For instance a dental surgery might have
a regular sequence of sending a reminder to a patient, making a checkup
appointment, performing the checkup, arranging and performing a session with
the hygienist, receiving payment. Similar sequences are common in processing
students, and in selling goods and services of all kinds: it thus also strongly
overlaps with accountancy. The point is that the same entity goes through a
partially or fully ordered set of states. The data model should reflect the
continuity of identity of the record, and check data entry transactions for
conformity with the sequence. What is required is support for an attribute
type of this kind, representing a sequence (or in general perhaps a directed
graph) of alternative states. The particular graph must be designed
specifically for each domain, but the general behaviour and its interaction
with data entry events is standard. There may of course also be an interaction
between this sequence behaviour and calendar time e.g. to generate reminders,
or warning invoices if payment is not received after some fixed interval.
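A sketch of how such a sequence attribute might behave (the dental-surgery graph and all names here are ours, purely illustrative):

```python
# The allowed state transitions form a small directed graph.  The graph
# itself is domain-specific; checking data-entry events against it is the
# generic behaviour the data model would supply.
TRANSITIONS = {
    "reminder sent":     {"appointment made"},
    "appointment made":  {"checkup done"},
    "checkup done":      {"hygienist session"},
    "hygienist session": {"payment received"},
    "payment received":  set(),
}

def advance(current, proposed):
    """Accept a data-entry event only if it conforms to the sequence."""
    if proposed not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move from {current!r} to {proposed!r}")
    return proposed

state = "reminder sent"
state = advance(state, "appointment made")    # accepted
# advance(state, "payment received")          # rejected: out of sequence
```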
The developments discussed above extend a data model's basic ability to
describe aspects of the world (in this case aspects of its temporal behaviour).
At times, however, there are breakdowns of various kinds between what the data
model is designed to describe and the information actually present. Such
information about the state and status of the recorded information is often
stored only in human heads (e.g. "don't look at that, it's out of date", "I
haven't put in today's figures yet", or "I know it gives an address, but
letters there get returned 'addressee unknown'"). It seems desirable to extend
computer support to representing and recording this information-status
information. There are several major kinds.
If all data entry and update actions are recorded as events, then the
data will itself always hold complete information about when it was last
updated, and by implication whether it is now up to date. In fact if data
actions are recorded as time-stamped events, then not only can the normal data
view (the current resulting
state of the data model) always be computed but all past states for any
specific time can be recalculated. This is useful for satisfying legal
requirements for record keeping, making the database not only a record of the
current state of affairs but also a complete record of past business, and as a
resource for correcting errors in data entry detected much later. The latter
can be a serious problem with databases, many of which pin all their faith on
validation at entry, leading to disasters if a few errors nevertheless creep
past. This single mechanism can also provide a version mechanism if the user
may define "super-events" as sets of atomic events. For instance in the basic
accountancy domain, a typical use is to go back to a spreadsheet whose layouts
and formulae are established, and type in a set of new values: treating this
set as a super-event allows the resulting state to be treated as a new
"version", while, if the data entry events were systematically stored, the old
versions would still be available. This mechanism would have no costs for the
user: they would still perform the same input actions, while the fact that new
events were being added to the store rather than data values being overwritten
would be invisible. Although this could cause space problems for very large
databases, in many small applications this would not be a challenge for typical
configurations. Thus adding temporal aspects to information status recording
should bring considerable benefits directly.
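A minimal sketch of the mechanism (the representation is ours, not a committed design):

```python
import datetime

class EventStore:
    """Data-entry actions are appended as time-stamped events, never
    overwritten; the normal data view -- or the view at any past moment --
    is computed by replaying the log."""

    def __init__(self):
        self.events = []   # (timestamp, item, value) in entry order

    def enter(self, item, value, when=None):
        self.events.append((when or datetime.datetime.now(), item, value))

    def view(self, as_of=None):
        """State of all items as of a given time (default: now)."""
        state = {}
        for when, item, value in self.events:
            if as_of is None or when <= as_of:
                state[item] = value
        return state

store = EventStore()
store.enter("rent", 500, datetime.datetime(1997, 1, 10))
store.enter("rent", 550, datetime.datetime(1997, 6, 10))
old = store.view(datetime.datetime(1997, 3, 1))   # {'rent': 500}: the old state survives
now = store.view()                                # {'rent': 550}
```

A "super-event" version would then be nothing more than a named subset of this log.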
Designing a data model for an application amounts to deciding what data
should be captured and stored. Nevertheless in practice there are occasions
where these data values are not acquired according to plan e.g. a customer
fails to fill in a form, a keyboard operator cannot read a document or form.
If the entity is not to be omitted entirely from the database, then the
deficient status of this data instance needs to be represented. This is often
attempted by means of special null values e.g. "John Doe" for names, perhaps a
negative integer for unknown ages. Conceptually this obscures two key points.
Firstly, the real need is to record two things about each value: the value
itself in its proper range, plus a separate representation of the status of the
data value e.g. valid, invalid, not yet acquired, etc. Secondly, the set of
information states and especially distinct failure types is domain specific: it
does not depend on the type of the data value e.g. integer, or ages, and so
there can never be a clean and general type system embracing null values. It
is not a question of a boolean flag (valid or invalid), but usually of
recording in what way the system of data acquisition failed. For instance if
you are recording the exam marks of students, it is important to the
institution to distinguish between marking the answers and giving zero,
receiving a blank answer sheet, knowing the student attended the exam but did
not hand in an answer sheet, knowing the student did not attend (check to see
if a medical certificate exists), not finding any record (did we lose the exam
script?). These differences are important to many users of record systems, not
least because each state implies different followup action, and so should be
addressed in data modelling. The distinctions worth recording are really
possible error states of the (probably human) process of acquiring the data.
As with other error messages, they will be more useful to the extent not only
that they give more information, but that that information is matched to
standard recovery actions. That is why they are application specific, but it
also implies that with care they could be designed systematically by the data
modeller, whose role is by implication extended from designing a description of
some external world, to designing a standard data acquisition process, to
designing recovery procedures for the ways in which it can fail.
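The exam-marks example might be represented along these lines (the status names and class are ours, illustrative only):

```python
# The value in its proper range, plus a separate, domain-specific status --
# rather than overloading the value range with special nulls.
EXAM_STATUSES = {
    "marked", "blank script", "attended, no script", "absent", "script missing",
}

class MarkEntry:
    def __init__(self, mark=None, status="marked"):
        if status not in EXAM_STATUSES:
            raise ValueError(f"unknown status {status!r}")
        self.mark = mark        # meaningful only when status is "marked"
        self.status = status

zero = MarkEntry(0, "marked")       # a genuine zero mark
away = MarkEntry(status="absent")   # follow-up: check for a medical certificate
```

Each status, being a distinct failure mode of the acquisition process, carries its own standard follow-up action.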
Recording acquisition failures is also important in accountancy. An auditor
for instance will need to note
whether records are missing, pending, or satisfactorily checked. In managing
customer accounts it is increasingly important for service industries to be
able to handle phone queries and complaints, and to correct errors and
omissions by either customer or the business as smoothly as possible. Data
models designed with only the case where everything works correctly in mind
will not be as helpful here as ones that model types of data acquisition
errors. Note that sequential temporal descriptions could play a part here: for
instance any data value has an at least partially ordered set of states. An
attempt ending in failure might go: unknown and unsought, first enquiry method
tried and failed, .... tried everything and do not expect ever to find out. An
attempt ending in success in a system requiring careful validation (e.g. of
credit status) might go: starting an enquiry, entering a value, independent
validation, entering into the online database.
Data acquisition failures are not the only kind of problem that null
values are used to describe. The other major subclass is what might be called
presupposition failures. While a data instance describes the actual attribute
values of an entity, a data model describes the attributes and presupposes that
all instances of an entity have such a value for data acquisition processes to
seek. Few such assumptions are wholly proof against exceptions. Most records
of people assume they have a name, but a few infants escape birth registration
and so have none, and some people, particularly women, change their name and so
have more than one. Suchman (1983) describes a problem with addresses for a
company supplying photocopier paper. The form assumed that one address could
be used for delivering the paper and sending the invoice for payment, a
presupposition that broke down when a ship in transit sent in an order from one
port for delivery to meet them at another. Thus there may be no normal value
to record, not because the value is not known, but because there is no such
value: this is a failure (or error) in the data schema design in the sense that
it assumes something about the world that turned out not always to be true.
However the way forward is probably not to try to design schemas for all
possible worlds, but to have a way of recording exceptions.
Again, it is not enough in general to have a single "not applicable" null value
added to the range of all types to flag the failure of a data schema
presupposition, although this may be enough in many specific applications. For
instance an employee record might have a field for their PAYE code because this
is normally very important. However there may be cases where there is no
value, not just because it has not yet been discovered, but because they have
none (the tax office has not yet assigned them one), or because it is not
relevant because they have a non-remunerative position (e.g. honorary fellow
at a university).
Besides recording the time of data entry, in general it would be
desirable to record the reason or authority or source on which a data entry was
based. While some applications are in fact designed to use only one source of
information, in many cases recording justifications would greatly enhance the
usefulness of the data: in historical, literary, criminal, and military
information for instance. In accountancy this is potentially very important
too. Many calculations are projections or plans, and recording the basis of
the assumptions made will make them both more useful and more re-usable. Even
in auditing accounts (of the past), assumptions must sometimes be made, and the
basis of estimates needs to be recorded. Current systems seldom do this.
Justifications would probably be implemented as fields of data entry event
records, but their use could be not only in helping to track down and correct
data entry errors but in representing alternatives in the computer. In
accountancy, the structure of both data and calculations is common between
records of the past, current management which uses a mixture of known figures
(e.g. overheads) and projections, and future planning which may require the
development of several alternatives. These could be stored in a single uniform
system, using justifications as well as entry events to label and distinguish
the alternative versions.
So far the discussion has been of data modelling issues that apply to
many application domains. The basic accountancy domain however obviously
involves attention not only to data structures, but also to setting up
arithmetic calculations, which are most widely supported at present by
spreadsheets. Spreadsheets are sometimes loosely described as having a direct
manipulation (DM) interface. This however is to miss a very important
difference between them and true DM interfaces. In the latter, e.g. in
graphics editors, objects are referred to by users by extensional descriptions:
by pointing. Such interfaces are organised around the idea of displaying all
the objects of interest, but having the user refer to objects by pointing to
each individually, and refer to sets by enumerating the objects belonging to them
(e.g. by dragging over a set of icons). This is in contrast to intensional
descriptions as used in language, where the user describes an object or set,
and the machine then retrieves objects matching that description (if any). DM
is by (strict) definition unable to deal with domains requiring intensional
description e.g. database retrieval, or programming. A description is not
direct reference, let alone manipulation, and an arithmetic expression for
example describes properties of the number you want, rather than naming it.
The whole point of typing an arithmetic expression or a database retrieval
command is not to get back what you see and manipulate (WYSIWYG) but to
get back what it turns out to mean (a number, a set of items) -- something
which you do not know and are using the machine to discover.
Spreadsheets are the best common interface design that approaches the usability
of DM while tackling intensional descriptions. The crucial aspect of their
design seems to be that they display both intensional descriptions (the
formulae) and their effects (the resulting values); and furthermore allow
compound descriptions to be broken down as little or as much as the user wishes
i.e. you can split a formula into parts, and inspect the intermediate results
in separate cells. This is valuable whether the user is developing their
program (intensional description) iteratively, or attempting to verify its
correctness by inspection and reasoning.
Spreadsheet design has evolved very little, but this may not be because it is
optimal but because of a lack of analysis of its crucial tasks and properties.
Spreadsheets can be seen as fundamentally about supporting the creation of
simple programs to do arithmetic calculations, and they could be improved
by a systematic attempt to improve that support. We know from the field of
programming languages, that languages need to be easily read not only by
machines but by the programmer while developing a program, and by other
programmers later. This is equally true of spreadsheet users, and accountants
in particular. People use spreadsheets not only to do calculations for private
use, but to produce documents (e.g. accounts) to hand to other people, and to
support their inspection and checking. Thus, for example, we should expect
that this aspect of their function would be helped by better support for
labels, meaningful variable names, comments i.e. justifications. Spreadsheets
often support this only by allowing text values to be typed into adjacent
cells, but when items are moved around the failure of the program to understand
the structure that is important to the users shows up. The development of
ideas for more complex data items, and particularly of justifications, should
help directly.
The other way in which ordinary spreadsheets fail to represent the structure
important to their users as well as they might is in treating formulae as
one-way functions, not as equations. In accountancy, the same relationship
holds between totals and subtotals (e.g. for staff and equipment) in accounts
of past spending and planning for the future. The difference is that in the
latter, the total is often fixed and unspent amounts under each heading need to
be calculated. What is wanted is not an algorithm for resolving circular
references, but an ability to support the user in a very rapid reversal of some
formula e.g. a popup dialogue asking which subtotal is now to contain the
residue (e.g. "Reserve fund", "borrowing requirement"). This can be done by a
simple extension of the use made of the relationship between items implicitly
specified by any formula. In so doing, it will in effect be a step towards
making such programs support a less imperative style of programming. In fact
arithmetic formulae specify implicitly but unambiguously a directed graph: in
database terms, a relationship between entities. By deriving and using this
relation more flexibly, a program can offer many useful features to its user,
one of which is that ability to reverse the direction of the relationship, and
so embody the connection between projections, financial planning, and accounts
of past periods.
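The reversal can be sketched as follows (the function and the budget figures are ours, purely illustrative): given the relationship total = sum of items, hold the total fixed and solve for whichever item the user nominates as the residue.

```python
def reverse_total(total, items, residue):
    """Treat total = sum(items.values()) as an equation, not a one-way
    formula: hold the total fixed and solve for the item named residue."""
    others = sum(v for name, v in items.items() if name != residue)
    solved = dict(items)
    solved[residue] = total - others
    return solved

budget = {"staff": 30000, "equipment": 8000, "reserve fund": 0}
# Forwards: the total would be computed as sum(budget.values()).
# Backwards: the total is fixed at 45000; "reserve fund" takes the residue.
print(reverse_total(45000, budget, "reserve fund"))
# {'staff': 30000, 'equipment': 8000, 'reserve fund': 7000}
```

The same relationship between cells serves both directions; only the choice of unknown changes.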
As suggested in the introduction, all the above issues can be seen as
aspects of a pervasive issue in computing -- that of "situated computation"
i.e. the consequences for program design of the situation in which it is used.
This concerns not only the highly variable behaviour of human users of personal
computer applications and the more carefully designed human procedures
associated with data entry and validation, but in principle also the whole
environment e.g. file servers and networks. In the data modelling field, it
appears as issues of modelling information status. In the field of programming
practice, even of writing arithmetic expressions in spreadsheets, it appears as
the issue of how best to support people in writing correct expressions,
checking their validity, and being able to allow others to understand and
verify them easily. That is, the context is that of human program creation.
This research obviously does not aim to solve the whole issue of situated
computation, but to the extent that it does establish solutions in a particular
domain, we can expect them to contribute to the general topic.
The suitability of the domain of basic accountancy was sketched in the
introduction. It turns out to involve temporal aspects to a considerable
extent. Financial planning and past accounts form a spectrum with many
intermediate points. Half way through a project or fixed price contract, the
managers have some fixed figures (income and expenditure so far), some firm
predictions (fixed overheads like rental and salaries), and some much looser
predictions. As a project progresses, items move in status from guesses, to
forecasts, to actual transactions. This continuity has usually been
represented only in human heads, but the theoretical ideas discussed should
allow us to improve on that. The requirement for planning also implies a need
for a version mechanism (for supporting alternatives) and for recording
justifications and associating them with items. Accountancy, like other data
domains, also has to deal with data that is missing, incomplete, or otherwise
deficient with respect to the designer's expectations. All in all, the domain
exhibits good examples of all the issues discussed, while being characterised
by small data sets and simple (arithmetic) calculations, and so seems an ideal
research domain.
The above outlines some of the major analytic developments planned in
order to support the task domain of basic accountancy better through more
complex data objects, plus the use of the relationships between items specified
by the formulae linking them. To be useful, the user interface must be at
least as good as that of current spreadsheet programs, in spite of any increase
in conceptual complexity. A considerable part of the work will therefore be in
developing this aspect. The starting point is to analyse what spreadsheets'
real task is, how they currently do it, how those techniques could be improved,
and how both current and improved techniques could be applied to the domain of
basic accountancy.
Spreadsheets embody several techniques for displaying the implicit
structure relating items. 1) formulae: "=A+B+C" not only specifies an
arithmetic calculation, it is also a way of representing a relationship or
structure between the cells holding A,B,C and the cell holding the formula and
its resulting value. 2) Most people use spreadsheets like paper in that they
will type in items in a vertical column, with the total at the bottom. This is
using spatial position, grouping, layout to represent a relationship. 2b) Our
program will also allow the generation and display of diagrams of these
structures e.g. drawing lines with arrows linking cells that are linked by
formulae. Furthermore, grouping, shading, and boxes can be used as visual
display cues to indicate these relationships, instead of using the uniform grid
layout of standard spreadsheets where proximity is the only indication of
relationship between non-blank items.
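The directed graph implicit in a set of formulae can be derived mechanically; a sketch (with a deliberately simplified cell-name syntax, not a committed design):

```python
import re

def dependency_graph(formulas):
    """Derive the directed graph a set of spreadsheet formulas implies:
    an edge from each referenced cell to the cell holding the formula."""
    return {cell: set(re.findall(r"\b[A-Z]\d+\b", formula))
            for cell, formula in formulas.items()}

graph = dependency_graph({"C1": "=A1+B1", "D1": "=C1*2"})
# {'C1': {'A1', 'B1'}, 'D1': {'C1'}} -- the one structure from which arrow
# diagrams, grouping displays and formula reversal can all be driven
```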
Basic spreadsheets have items with three implicit fields: a value, a
formula, and a format (e.g. treat this value as a date). They also allow their
users to type in text to cells, and this is often used to put text next to
figures allowing the proximity to represent the association of labelling. Our
program will associate labels with items in the underlying data, and then allow
users to control whether and how the labels are printed in any particular
display. Furthermore, we shall of course apply all the ideas discussed earlier
for temporal and status attributes, plus an underlying layer of data
manipulation event records.
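An item record extended in this way might look like the following (the field set is ours and illustrative only):

```python
class Item:
    """A spreadsheet item extended beyond its three implicit fields
    (value, formula, format) with a label and a justification held in
    the data itself, not in a neighbouring text cell."""

    def __init__(self, value, formula=None, fmt=None,
                 label=None, justification=None):
        self.value = value
        self.formula = formula
        self.fmt = fmt                      # e.g. "currency", "date"
        self.label = label                  # printed or suppressed per display
        self.justification = justification  # basis or authority for the value

rent = Item(500, fmt="currency", label="Office rental",
            justification="lease agreed 3 Mar 1997")
```

Because the label belongs to the item, moving the item cannot separate the two, and each display can decide independently whether to show it.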
The proposed program will thus support a variety of displays of both the
structure and the details of the data and calculations it holds, and provide
the user with control over the detail shown both on screen and on printouts.
This will allow very detailed versions for the accountant's use, with summary
printouts as required. However these printouts can, at the user's request,
include not just figures but associated labels, and justifications. It thus
will aim to maintain and improve spreadsheets' ability to serve both as a very
personal calculation tool, and a tool for producing printed reports to be
handed to others for various purposes.
We request 2 years of an RA, as explained in the person-month
allocations in the work plan above. We shall hire a competent programmer, and
use an in-house training course to make them fluent in Macintosh Toolbox
programming (which will also require us to buy reference books). We hope to
hire a newly qualified PhD, so that the theoretical analysis will not be too
dependent on ourselves. The RA will need a medium power Macintosh for
development, and a suitable programming environment e.g. THINK C. Printing and
backup will need to be covered by a consumables charge from the department
services. In addition, we shall wish to buy a selection of low-end spreadsheet
and accountancy software to analyse for domain-relevant features: if our own
prototype does not match the market norm of the time in relevant functional
respects, we may not be able to secure adequate user cooperation.
10% of a technician and 5% of a secretary are required to support this project,
over and above that provided by a well-founded laboratory. Technical support
includes purchasing, installing and configuring dedicated equipment, software
maintenance, filestore support, troubleshooting and technical consultancy.
Secretarial support will include handling reports, travel and correspondence
for the project.
The main output will be development of the theory and practice of data
modelling, together with the links between that and user interaction. We
therefore expect to publish papers, attend the named database and HCI
conferences, and perhaps to distribute our prototype software as a
demonstration. With the modest resources requested here, we cannot produce and
support a commercially robust product. We hope however that the ideas will
contribute to developments in the practically and commercially important area
of database and accountancy software.
Steve Draper is a lecturer, teaching HCI. He entered the HCI field
under Don Norman in San Diego, with whom he edited "User Centered System
Design". His aim is to carry through an inter-disciplinary approach to HCI,
and his recent research grants include one from SERC with a computer science
emphasis on building a toolkit for iconic interfaces, and one from SERC/DTI
with a psychological emphasis on adapting psychological methods to the
measurement of user interface performance. At present he has a grant from JCI
with three others on Temporal Aspects of Usability, researching the effects of
response delays on users. Some publications are in the reference list.
Richard Cooper is a lecturer, and has worked extensively in the area of
usability of database systems, concerning himself with both data modelling and
user interface issues. To this end he has been a technical leader with ESPRIT
projects COMANDOS (Projects 834 and 2071) and FIDE (Project 6309) and has run
the SERC-funded Configurable Data Modelling (Grant HR17671). He has also
initiated a series of International Workshops on Interfaces to Database
Systems.
Cooper, R.L., Atkinson, M.P., Dearle, A. and Abderrahmane, D. (1987)
"Constructing Database Systems in a Persistent Environment" Proc. of 13th
International Conference on Very Large Databases, Brighton, September.
Cooper, R.L. (1990) "Configurable Data Modelling Systems", Proc. of 9th
Conference on the Entity Relationship Approach, 35-52, Lausanne, October.
Cooper, R.L. (1993) (ed.), Proc. of the International Workshop on
Interfaces to Databases, Springer Verlag.
Draper, S.W. & Waite, K. W. (1991) "Iconographer as a visual programming
system" in HCI'91 People and Computers VI: Usability Now! (eds.)
D.Diaper & N.Hammond pp.171-185 (CUP: Cambridge).
Draper S.W. (1993) "HCI and database work: reciprocal relevance and
challenges" In R.Cooper (ed.) Interfaces to database systems (IDS92),
Glasgow 1-3 July 1992 pp.455-465 (Springer-Verlag: London)
Draper, S.W. (1993) "The notion of task in HCI" pp.207-208 Interchi'93
Adjunct proceedings (eds.) S.Ashlund, K.Mullet, A.Henderson, E.Hollnagel,
T.White (ACM)
D.A.Norman & S.W.Draper (1986) (eds.) User Centered System Design
(Erlbaum: London).
Paton, N., Cooper, R.L., Williams, H. and Trinder, P. (1994) Database
Programming Languages, Prentice Hall.
Suchman, L.A. (1983) "Office procedures as practical action: models of work
and system design" ACM Transactions on Office Information Systems vol.1
pp.320-328.