Last changed: 10 Nov 1997. Length: about 5000 words (31000 bytes).
This is a WWW document by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/fund/whyfund.html.
You may copy it.
This is a document liable to frequent update as a prelude to
a debate.
It is intended to be debatable, and reflects at most only my own views
as of the "last changed" date shown above. On this understanding, you are not
only allowed but positively invited to copy, refer to, and comment on it.
Why should taxpayers fund HCI research?
Stephen W. Draper
GIST (Glasgow Interactive Systems cenTre)
University of Glasgow
Glasgow G12 8QQ U.K.
email: steve@psy.gla.ac.uk
WWW URL: http://www.psy.gla.ac.uk/~steve
A few months ago at a committee meeting I was put in the position of
being asked to justify funding for HCI relative to other fields. I was not
prepared for this and did a poor job. At HCI'97, a panel (Birch, 1997)
discussed the issue, but its arguments did not seem to me convincing: I may
myself believe it is worthwhile, but I wouldn't have been convinced by the
arguments offered there. Others in the field seemed poor at this too.
Why should we be good at it? There are other academic fields that deal
with this -- with the history of technology, its origins and economic impact --
but that is not our specialism. The people at the Science Policy Research Unit
at Sussex University, for instance, certainly know more than we do about the
theory and evidence on the effects of research. The unsystematic collection of
points I offer in this document shows how much there is to understand about the
history of technology and its interactions, and how ill-qualified I, and I
think most HCI researchers, are to do so. This explains our incompetence at
producing good arguments ...
... but it probably does not excuse us. If we don't justify the money
we take, why should anyone else? I certainly think most other parts of society
should be accountable for the public money they spend, so I guess we should be
accountable for HCI research funding.
Government is asking the research councils to demonstrate the relationship
between research funded and utility to other parts of society ("users"), and
they in turn are asking their researchers to supply them with ammunition. If
we don't do it, then at best someone else will invent a rationale, and
we will have to work to it. So we had better practise this until we have a
satisfactory line, preferably with specific cases and evidence. Note that it
is open to us to identify kinds of users (beneficiaries of HCI research) and
kinds of utility (benefit to such users): doing so is part of establishing the
kind of HCI research we think best.
However, as will be seen, our first
reactions reveal what amateurs we are at this argumentation because we think
first of simple ideas from popular culture (often called "commonsense": the
"common" bit at least is right) that do not stand up to scrutiny. Just because
Thatcher was unaware of the existence of society, thought "good" = "money", and
"benefit" = "profit to industry" does not mean we have to work with those
invalid notions. We should do two things: work on existing arguments to
reject the invalid and refine the valid, and reflect on our experience to
identify what kinds of things are valuable and do work, however unfamiliar some
of that may be in terms of what we read in the newspapers as common
arguments.
The first thing everyone thinks of as justifying scientific research is
the invention of something directly exploitable. On this model, government
pays for research, the researchers patent something, and industry pays huge
royalties to use the invention.
Everyone can understand this, and it fits the spontaneous conception of
how research is useful.
Who should control patents is one problematic issue, and who should profit
from them is another. But without patents it is often hard to get much
acknowledgement that there was an idea that benefitted people.
- There is only one area in which this model clearly works:
pharmaceuticals. The model is so well understood there that drug companies pay for
research, even though each particular chemical is a very long shot. The
trouble is that government funding is unnecessary here because the model is so
clearly applicable. It is untrue that industry is incapable of taking the long
view, of funding low probability research. In this area, the statistics are
clear, and private funding works.
- As far as I know, there are no important patents in HCI. Furthermore, the
inventors have gone unrewarded, e.g. Engelbart's invention of the mouse. The
GUI (Graphical User Interface) has changed everyone's daily life, but
Apple's lawsuit against Microsoft for its poor imitation of it failed; and Apple
took a lot of it from Xerox work on the Star and maybe Smalltalk. (But then
again, getting it fast and getting it cheap are very important to users: is it
true that someone "invented" GUIs if they didn't achieve that? Would you use
menus if they took 10 seconds to open?) Doing world-changing research that
improves everyone's life has not resulted in wealth for the researchers and
their sponsors. The WWW (World Wide Web) is another example: the key
breakthrough was adding a GUI to the existing gopher network; but there was no
patent, and little credit.
- Would we want this anyway? Organising an area around patents leads to one
possible social setup. Drugs are organised that way, which means governments
don't have to budget for the huge research costs; but it also means that people
are dying now in South Africa because they can't afford the drugs whose high
prices reflect drug companies' need to exploit their patents while they can. A quite
different social organisation exists around the web, with the policy of
commercial companies giving away free browsers for now.
There is a huge amount of work in turning novel demonstrator software
into a product robust enough for widespread use. That most software is
copyrighted rather than patented corresponds roughly to the fact that most of
the person-years of work lie in expressing a good idea as robust code for
particular platforms and integrating it into software suites, not in the
novelty of the original idea. The way the scientific community writes its history (giving all
the credit to one early worker in the history of each idea) is not a neutral,
accurate, or even sensible way of writing history (Winston, 1986), but an
artificial convention of academic science. Technology transfer requires its
own genius, effort, and probably funding. Take penicillin, for instance.
Fleming did about two days' work, if that, and that work mouldered in a journal
for more than a decade. Probably millions of people died unnecessarily while
it mouldered. Then a couple of people decided to work not on the question
"what does this fungus do?" but on "are there any bactericidal substances
worth turning into drugs?". They searched the literature, found the Fleming paper,
did pilot studies, found American factories to take on production, and made it happen.
This took orders of magnitude more time and money, and it changed the world.
But you probably don't know their names, do you?
It is true that discoveries or inventions of this type do not benefit end users
unless someone works on technology transfer. The question is how this should
be organised and who should pay for it. It also requires someone with an
eye for what makes a worthwhile application. In HCI, that would require
reasoning about what HCI ideas could make a real practical difference. There
may not be academic rewards for this (no papers, no promotion). And there are
probably no commercial rewards normally. Commercial reward depends on getting
commercial funding, and that is unlikely to happen for new kinds of
application. Penicillin was not a drug company project, but it may have
started the current structure because it gave an example of how a drug could go
from the academic literature into profit-making production: if you were to offer a
drug company an idea of this kind now they would recognise it. There are no
such precedents in HCI.
Getting this to work in an HCI context might require:
- Funding people (probably academics) to research needs first, not techniques.
- Then funding people to develop application software for that need.
- Demonstrating its value to the target users in their lives (not
just in demos or short lab experiments).
After that, companies would probably build imitative software into their own
product suites, and the idea would be launched into practice, with the takeup
as evidence of value to society. It would take a lot of time and money, and would
not generate new ideas (only establish the transfer of an existing one).
In summary: Fleming's academic research did benefit end users, but only after
a delay of decades. Furthermore this benefit was wholly dependent on a
different kind of work by others. On the one hand, funding researchers like
Fleming is cheap and eventually brings rewards. On the other hand, who is
attending to the rest of the chain of benefit-delivery and paying for it?
Another case: it is fundamentally unattractive, on a short-term view, for
industry to fund basic research, because everyone (including their competitors) gets to
use it. A paradigm case of this (as Pavitt, 1996, mentions) is agriculture. A
technological advance in crop varieties, pesticides, or farm machinery can
produce, and has produced, big gains in productivity for all the farms in a country;
but it is not in any one farm's interest to fund it. Actually some of this
research is now done privately, relying on the patent system to make it
worthwhile for individual researchers / inventors rather than farmers.
Essentially, if the benefits of research are quantifiable, then industry is
likely to fund it provided a patent system allows them to profit from
inventions. If someone is convinced some research would be profitable but
can't make a convincing argument to anyone else, the patent system still allows mad
inventors who turn out to be right to profit eventually, though mostly they
can't afford to wait. But the essence of research is that you don't know what will be
discovered until after you have discovered it. That is why publicly funded
research is necessary, but also why its payoff is unpredictable. Furthermore,
there is very substantial work to be done in discovering applications: that is,
linking needs to what happens to have been discovered for other reasons.
As Whitby (1991) reminds us, this is closely related to the classic argument
for universities by Newman.
The above discussed the classic model in which the benefit to society of
research lies in direct applications. What is true is that a good argument for
state or international funding of pure research in all areas including HCI is
that things have been and presumably will be discovered that later lead to
valuable applications. The essential feature is that research allows for, and
indeed seeks out, surprises: discoveries that were not planned. Thus Fleming
discovered penicillin in a botched experiment; an attempt to improve practical
terrestrial microwave communication led to the discovery of the universe's
background microwave radiation, now taken as primary evidence for the big bang;
and research in number theory, often seen as the purest of pure (i.e. useless)
maths research, later came to underpin the encryption methods that are
increasingly vital to the widespread use of the internet.
But you cannot judge the value of research on these grounds on
timescales much less than a century. You can't count patents or tell pure
researchers to go out and make an unexpected discovery with vital commercial
potential in another field. You just have to fund pure or curiosity-driven
research, and it probably has to be state funded. This is a general argument,
and will apply to HCI as well.
Furthermore, developing those applications (from pure research) is a
completely different business, requiring research of a different kind: applied
research that begins by analysing or otherwise identifying practical problems
to be solved, then tries to solve them. Next comes an attempt to develop the
arguments for this kind of applied research, which is probably a quite
different story. To transfer a technique (e.g. rapid
prototyping, design rationale) does not require fighting with an organisation
about the product or what product could possibly be a worthwhile thing to bet
the company on. It only requires infiltrating teams. Furthermore, there are
fairly soft ways in: for example, having a student do something in parallel with the
team's normal practice and comparing the end products, or feeding back a report in the new
notation.
It might be aided by HCI researchers developing ways of measuring benefits
(what is a better design? a better design process?). Benefits to society
would presumably be justified in terms of uptake of the techniques. In fact there may
be considerable evidence of this kind already.
This aspect is supported by evidence that employers want to hire graduates with
the latest training, as their way of tapping into the advancing research field.
On this view, research (in the HCI area) is justified by keeping the
researchers at the cutting edge, and the benefit lies in keeping the
courses they teach up to date and in a flow of trained students moving into
industry. Evidence would then be the association of researchers with teaching,
and the uptake of their students by industry.
Kealey (1996) offers an argument I think interesting and important. In
fact he was arguing for the abolition of state funding for research, but,
without accepting his cause, I think his paper contains one important argument.
He says that even though the published academic research literature is open to
all (i.e. not classified or censored), in practice only researchers can use it.
So (he argues) companies need to pay for research units so that they have their
own people with access to it. (Furthermore Pavitt (1996) in a reply to Kealey
essentially agrees with this point, when he too remarks on close links with
academia in proportion to a company's own research strength, in order to obtain
know-how not in the literature.) This is partly because reading the
literature is a job in itself (otherwise it takes decades for published reviews
and textbooks to extract the few really important ideas and present them in a way
that does not rely on having already read the rest of the literature). But it
is even more because to find stuff out you need to talk to other researchers,
and on the whole they aren't interested in talking to those who can't
contribute in their turn. It's boring talking about your research to someone
who has nothing to talk about either because they don't do it or because it's
secret.
If you look at HCI conferences, this seems right. Quite a lot of industry
people come, because they want to check out if there is anything new they
should know about. But the conferences are not all that well organised for
that. And there is limited informal talk with academics. Perhaps a more
interesting "industry day" would feature alternate talks on "what you need to
know" and "what we want to find out". Actual talks about research work in
industry could go in other parts of the conference, as it is interesting in its
own right.
On this view, transfer of anything from research to industry relies on these
human channels. On the no-state-funding model, companies pay for research
units even though those units are very unlikely to generate patents that directly
benefit the company, because doing so gives them a group of people in contact with
world research who can interpret it for the company, alerting it to the
things that might be useful. (A prototypical "knowledge worker" job.) On a
state-funding model, academics should spend some of their time as advisers,
performing this function. They would need to spend some time in the company to
pick up its needs and so to ask themselves the right questions, besides being
available for questions asked of them.
Note how this is different from the idea of doing joint research or joint
applications projects: it is giving advice, and creating a conduit from the
research literature into other organisations.
Note too that hanging out together over long periods of time is probably very
important. Research is, as it should be, unpredictable. It is not organised
for applications. You have to build up a lot of common ground to find the
worthwhile communications: neither side knows in advance the right questions
and answers. Jon Oberlander's remarks at the HCI'97 panel about his experience
seemed to support this. And several people made remarks about the gulf of
understanding between industry and academic people: time and prolonged contact
seem necessary, not precipitate attempts at common practical activities.
- Joint research: probably not a good idea because goals are too
different.
- Prolonged contact and advisory roles: a good idea.
- Academics doing case studies: a good kind of collaboration: it provides
evaluation, which probably interests the company directly, and publishes it so
more people benefit.
- Data sharing (Oberlander's favourite): mutual benefit and low conflict of
interest.
I think one implication is that each academic should become a
specialist, not just in a research area, but in an application area. It takes
a lot of effort to know an area well, and until you do you won't be in a good
position to spot applications etc. Louis Pasteur is my role model here.
Developing socially valuable applications is a quite different type of
activity from pure research, even though it may be best done by more or less
the same people. It requires people who know or study what is wanted, but who
also know the pure research literature, which is organised quite differently;
and who think about what the latter might offer the former. Penicillin gives
one graphic example of the two roles. The argument about
researchers as interfaces to the literature is another. It would be promoted
by having pure researchers (whether academic or privately funded) spend some
of their time on pure research and some on industrial applications.
How do and don't these general arguments apply to HCI? Does it have any
special features that mean different arguments should be developed for it?
HCI is an applied area, in fact an area defined by an application, not a theory.
So justifications and suggestions for interactions between research and
practice may differ here from those in many other areas of science. At first
it might seem that it is an application of cognitive science and computing
science (and perhaps sociology). That was the view of many in the 1980s. In
my opinion that view has been pursued and has almost entirely failed.
Applying existing pure theories has been tried and has almost always been found to
lead nowhere. For instance, most famously, the GOMS model attempts to use
cognitive theories to predict user execution times. It can do this (at great
effort), but in my view it is still mostly an irrelevant failure, because in only a
few applications is it true (a) that user execution time is an important aspect
of overall usability, and (b) that users carry out a single predictable
procedure: usually the procedure adopted varies greatly both within and between
users. This is typical, as Landauer has commented.
Psychological theories are true in the lab, and almost always turn out to be
unimportant in practice because other factors swamp those lab effects.
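To make concrete the kind of prediction at issue, here is a minimal illustrative sketch (in Python, not from this essay) of a keystroke-level calculation of the sort the GOMS family produces; the operator times are the commonly quoted approximate values, and the menu task is invented for illustration.

    # Illustrative only: a keystroke-level (KLM-style) estimate of the kind of
    # execution-time prediction that GOMS-family models produce. Operator times
    # are the commonly quoted approximate values; the task is hypothetical.
    OPERATOR_SECONDS = {
        "K": 0.28,  # press a key or button (average typist)
        "P": 1.10,  # point at a target with a mouse
        "H": 0.40,  # move hands between keyboard and mouse
        "M": 1.35,  # mental preparation before an action
    }

    def predicted_time(operators):
        """Sum the operator times for a sequence such as 'MHPKPK'."""
        return sum(OPERATOR_SECONDS[op] for op in operators)

    # Hypothetical task: reach for the mouse, open a menu, and choose an item.
    print(round(predicted_time("MHPKPK"), 2))  # about 4.5 seconds

Even granting the effort of building such a model, the point above stands: the resulting number matters only in the few cases where execution time dominates usability and users really do follow one predictable procedure.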
Although it is less clear, I think the same is true in computer science. When
HCI started up, software engineering was dominated by top-down or "waterfall"
models of design, and programming languages by functional theories. It is the
area of HCI which, whether acknowledged or not, has moved software engineering
towards prototyping and iterative models, and underpinned the current fashion for
object-oriented languages.
If HCI were simply an application area, then it would not need its own pure
research, but would simply benefit from spinoffs from cognitive and computer science.
But if, as I believe, the above view is correct, then it is the applications
that are driving the theories by raising problems that existing theories cannot
address. That means it is worth funding HCI research independently, although
it also means we should especially promote those who attempt to develop
theories purely of HCI, like the cognitive dimensions of Thomas Green, rather
than applications of existing theories to HCI.
My own experience and views are of this kind. Firstly I believe attempts to
apply prior theory from other fields have not proved useful. Secondly,
following Landauer (1987, 1991), I believe that associated pure disciplines do make
a big contribution, but from their techniques not their theories. Thirdly, the
salient (if uncomfortable) feature of HCI is that it is driven by technology
and invention, rather than driving it. GUIs and direct manipulation are
phrases from the HCI field to describe what had already arrived, and which
researchers were reacting to. The WWW is an even more striking example: it
happened to HCI researchers; they didn't invent it. But in my view, we do not
understand it: the role of HCI research is perhaps to try to understand the
interaction of humans with computers and other machines that were invented
largely without that understanding. This may seem embarrassing, but it is the
position that biologists are in: studying what someone else invented, yet which could
have been invented differently. Not to fund any academic research on it is simply to
surrender more completely to an engineering driven by other concerns.
Harold Thimbleby offered a totally different kind of reason for
supporting HCI research. Basically, HCI researchers should train the public to
complain more about the quality of machine interaction, i.e. public education
and consciousness-raising. So it is a direct social benefit, none of this helping
industry. On the contrary, researchers should train consumers to harass
industry à la Ralph Nader, forcing it to raise standards (or rather,
benefitting industry by training consumers to give it better feedback about
its products).
I think there is a lot in this. It is a quite different kind of benefit; it
has a plausible story relating research, taxpayers, and industry; it relates
to the fact that Microsoft and others really do get very few complaints, and so
are not motivated to improve (unlike, say, the railways).
It might imply funding rather different research from Harold's. It should mean
paying a lot more attention to social contexts of use, and doing a lot of basic
(but interesting) empirical studies of what the real costs to users are. Still,
I guess Harold could try to promote attitudes like those in food hygiene, where
tiny lapses that affect hardly anyone cause enormous uproar.
It should also be pointed out that although almost all the money and almost all
the glory in medical research goes, and always has gone, into doctors and hi-tech
remedies, almost all the life saving historically has come from public
health measures: sewage systems, clean mains water supply, public health
education, vaccination. Thimbleby may be more right than we would wish to
recognise.
Many people remark on resistance to uptake of HCI ideas by companies.
This does not make it different from other areas of science: long lead times
are the norm. And there is a good reason for this: any commercial project must
satisfy many constraints about time and resources, and HCI is at most only
one aspect. The point of academic research is to find what can be discovered
when those other constraints are ignored; but applications are about finding
cases where findings can be re-integrated with other constraints. This
normally takes time. The start of a project, not the middle, is obviously a
better time to introduce new ideas: when team members are being re-trained, not
when they have been redeployed for a quick new job with no training time. Just like the stock
market, commercial success in the short term has huge fluctuations unrelated to
underlying quality, but in the long run it is the other way round: companies
with some real competitive advantage tend to win out. If we are looking for
benefits from HCI research, we can only expect a long time scale to show
them.
However there may still be a problem in identifying credit. Because
most practical results of benefit are the effect of many factors and
constraints, how can we show that HCI research was an important one of those
factors? If someone says they don't use HCI research, that doesn't prove they
haven't benefitted: just that they are not aware of any such influence. But
then most people getting on a plane for a holiday would not say that it is
all thanks to Frank Whittle's research, even though it is.
Assigning credit for innovation is done badly, probably because it is hard to
understand the interplay of all the contributions. Academic conventions make
assigning some credit obligatory in science, but it is probably not done well
there either. Winston (1986) shows just how complex the story is for a few
major inventions such as the telephone.
In the end I have dealt with only five arguments.
1) The pharmaceutical model of direct invention, which I reject as a useful
justification for HCI research.
2) The classical trickle-through model, which I support. It is hard to gather
evidence of success here, and we must expect long time scales (and HCI is
still very young). It justifies the support of pure research.
2b) Researchers as human interfaces to research knowledge. I think this is
important. In the end it is mainly another argument for supporting both pure
and applied research.
3) Training: researchers create methods, and train others in them. OK, but
methods are only one kind of research output, although an important one in HCI.
4) Educating the public: Thimbleby's assertiveness training model, or a
"public health" version of it.
5) HCI is the study of technology it did not itself invent. We should get used
to this, and think of it as a kind of cognitive biology: the study of someone
else's designs.
Birch, N. (1997) "What's the use of HCI?" in HCI'97 Conference Companion,
(eds.) P. Thomas & D. Withers, pp.8-10 (University of the West of England: Bristol).
Kealey, T. (1996) "You've all got it wrong" New Scientist vol.150 no.2036,
29 June, pp.22-26.
Landauer, T.K. (1987) "Relations between cognitive psychology and computer
system design" ch.1, pp.1-25 in Interfacing Thought,
(ed.) J.M. Carroll (MIT Press: Cambridge, Mass.).
Landauer, T.K. (1991) "Let's get real: A position paper on the role of
cognitive psychology in the design of humanly useful and usable systems" ch.5,
pp.60-73 in Designing Interaction: Psychology at the Human-Computer Interface,
(ed.) J.M. Carroll (Cambridge University Press: Cambridge, UK).
Pavitt, K. (1996) "Road to ruin" New Scientist no.2041, 3 Aug., pp.32-35.
Whitby, B. (1991) "Alice through the high street glass: A reply to John
Barker" Intelligent Tutoring Media vol.2 no.2, May, pp.71-74.
Winston, B. (1986) Misunderstanding Media (London: Routledge and Kegan Paul).