Last changed 7 Feb 1999 ............... Length about 9,000 words (53,000 bytes).
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/design.html.


Notes on Design


Preface

This is mainly just chunks written as emails, collected with a view to some time putting them together into some notes or even an essay.

Previous documents on design

My list of headline points

Science / theory

Doing science

My view of science is that it is essentially an appeal to experience external to oneself, which pushes back and so may confirm or contradict one's ideas. The first point is that an explicit appeal to "science" is itself an appeal to authority rather than to evidence, and so suspect and indeed almost self-contradictory. Furthermore, in this context it raises the question of why, if science is good for instructors, it isn't good for learners: why don't we expect them to connect what they are told to their own experience? If we do, then we are adopting a constructivist approach, and must be ready to deal with some learners who have (for example) had experience of disconnecting a live air line, and others who have not and will therefore respond differently to the same instruction.

Which brings me to the next point. The main finding in studies of instruction (both "scientific" and in my own experience: and perhaps yours too?) is that of variability. The same instruction has varied effects on different learners. That for me is THE big finding. It means that if instruction is a science, it is certainly not an exact science. And I believe a major source of that variation is the different prior experience and conceptions of different learners: and that, in my view, is the key point of constructivism. (However it has to be said that it is not the only source of learner variability: time of day, nutritional state, and above all interest in learning this material are at least as important: so constructivism can only be one of the theories we need.) So it seems to me that goal-oriented approaches to instruction are in fact, contrary to Dave's claim, NOT scientific insofar as they ignore major scientific findings.

Not science but engineering

[Comments on Duchastel's paper]
Theories, if any, of instructional design are never going to be similar to theories in physics. They are not theories of the external world independent of humans, but proposals about methods for constructing artifacts. They are part of an enterprise which, if it is or becomes rational and methodical enough, we might call engineering.

I think we should call a spade a spade, and call ID engineering. I don't think it a wise move to use Herb Simon as an excuse to call it science:

  1. because Simon was and is most concerned with Artificial Intelligence and Economics: not with building bridges or instructional software. That is, with research enterprises and not with producing artifacts of immediate practical value.
  2. What would you call engineering? Rename it "mechanical science"?
  3. Philip Duchastel raises political and sociological aspects, so it will be no surprise to him to recognise some of the factors that underlie the terminological inflation I suggest we counter. Using the terms "science" and "theory" is rather laughably self-flattering, but it gains more respect from those outside the field and then more money for us academics who publish such things. Thus for fund-raising, it is a good move; but for clear thinking, it is not.
  4. So, most importantly, the analogy with science (e.g. physics) is a false analogy leading to wrong conclusions. A science theory, at any rate a physics theory, aims to identify something (e.g. a force) that is universally applicable: true everywhere, however insignificant in some situations. It makes no attempt to give a complete account of a single situation, but rather a single account of part of every situation. In engineering, we need exactly the opposite: we need to know all but only the factors that have a significant causal impact on the situation we are examining.

This matters in this discussion, because the impulse most of us (but not Joe) feel towards a unified theory is rather different from that in science. When I observe a piece of CAL in action in a classroom, I long for a theory not to make quantitative predictions but to specify all the factors I need to worry about, so that I observe the relevant things; and so that I am in a better position to propose improvements. Without that completeness, I am likely to make a change to fix one problem only to ruin some other constraint. Science theories are no help with this (though they may contribute indirectly). What we need are engineering theories, and an engineering-oriented unification.

The prospects aren't good though. Science has achieved a lot of unifications in the last few centuries (though none are complete yet); but I know of no engineering areas that have got anywhere close at all. Still, the past is not a certain guide to the future.

Recognising that we are doing engineering, not science, also means recognising that our knowledge is incomplete and going to remain that way for the foreseeable future, and that this most fundamental of all facts about the area must be taken into account. A designer might wish that psychology had delivered a predictive theory of learning in a directly usable form, but that is not the case. Science may wait for the truth to arrive (or actually, go off and study something useless but tractable like galactic structure), but design and engineering are the activities that organise something useful with the incomplete knowledge available. This is nothing special about ID. Some crucial factors in the design of the Boeing 777 were discovered in tests, not from theory or simulation: that is why Boeing spent millions on tests, because they yield design knowledge and theories are incomplete. Similarly for structural engineering (think about this next time you cross a bridge): "Engineering is the art of modelling materials we do not wholly understand into shapes we cannot precisely analyse, so as to withstand forces we cannot properly assess, in such a way that the public has no reason to suspect the extent of our ignorance." [A.R. Dykes, presidential address 1976 to the British Institution of Structural Engineers. Quoted in an Equinox TV programme Sept. 1992.]

For me, the most important part of any prolegomenon to better ID knowledge would be to recognise this state of uncertainty. It should have direct implications for the design methods we use: expect and budget not just to test to gain a quality stamp, but to test to discover important features unpredicted by theory that a design must be modified to accommodate. I think this point touches some remarks from and to Joe about the role of the learner in this: staying close to observations of the artifact in use is important in all successful engineering. Fantasies about "science" will simply hold ID back from effective design activity and methods, and indeed from learning more about instruction. In fact "science" is the story engineers tell nervous users: but not what they tell themselves if they are going to build safe and useful artifacts.

So: in my view ID is engineering not science. It is quite clear that in many ways Philip and other discussants know quite well that ID is about construction, but in mistaking the term to use, they go on to false analogies and so fail to analyse the kind of theory and the kind of unification that is required: different from that required in science.

Philip said "science as disciplined enquiry": quite. That is exactly why ID is NOT science: it is primarily construction, with enquiry subordinated to that. Science is the opposite: primarily enquiry, with construction (e.g. of clever and original experimental apparatus) subordinated to enquiry.

Lloyd said: ID is also an art. But all engineering, as the best engineers repeatedly say, is in large part art. Architects again are a crucial example to ponder. And design is taught (in the UK) in Art schools as much as in engineering departments. He is right: but being an art does NOT mean it is not engineering, and it certainly does mean it is about construction, not experimental data.

Art, craft, science, and reflection

To science or not when doing IT. Well, the issue is whether to try to create theory or not. You can use both scientific method, in the sense of experiments, and scientific theory as supports (along with craft techniques, and aesthetic judgements from art) in IT; but the question is whether to reflect on what you have done and make an effort to extract new but general theory from it. This would be good for the field (published literature), and good for the (reflective) practitioner if you believe Schon.

Of course I would love to do that, because I love to feel I understand (a joy quite independent of whether learners benefit from anything I do): this is a reference back to my personal position.

A different kind of reference is about cognitive vs. behavioural which Lloyd mentioned in passing. I agree with him that from a practical point of view, both get to be used especially by bricoleurs (Joe) i.e. by practical people who do what works rather than care about philosophical arguments (e.g. about why there can't possibly be moons round Jupiter even though you can see them in Galileo's telescope). I think why both work is exactly because reflection is a profound fact about mental life: we don't just act because we have ideas and plans, we equally notice our own behaviour and draw (cognitive) conclusions from it. So if you change my behaviour, it will probably change my ideas just as much as vice versa. And what is good for clients and learners, is good for professionals and academics too.

The analogy between education and medicine

  • Education is not like physics. It is most like medicine, and it's amazing how in these discussions that is so seldom considered as a fruitful analogy.
  • The first thing about it is that the medical profession was fully frozen into its current form by about 1860 in the UK; while the first scientifically justified treatment was, on an optimistic evaluation, Pasteur's rabies vaccine in about 1885. Medicine may now have a cover story of science, but it is not driven by nor organised by science or "theory". (In fact "evidence-based medicine" is a new idea, not yet in widespread use!!!)
  • Medicine is driven by the fear and longing of us patients. It's been like that for (at least) thousands of years. It is still like that. And it is now paying for a huge alternative medicine industry because the mainstream one is sometimes fooled by its own propaganda and fails to address its real function. Perhaps so is education. Children have to go to school because it is a legal requirement and NOT because it has been proved that they benefit, or that we teachers have anything to offer. Nor instructional designers either.
  • The real eternal function of doctors, as I first grasped by reading John Berger's "A fortunate man" (amazing book), is, like the shaman's, to provide social recognition for the illness (not to cure it). Today, that is still a main function: to provide sick notes allowing us time off work; besides the deeper one of making us feel the condition has been recognised.
  • What doctors were doing long before science-based interventions was "natural history": "first described the disease in ....".
  • So, what's the difference between ID done by an idiot and by an experienced educational designer? With experience you can recognise symptoms when a prototype performs badly, and even make a good guess about what to change to improve it. This "theory" is a checklist of things to worry about (they MAY matter this time in this design, though perhaps not), connecting similar observations over time and cases. Of course, this is heterogeneous, just as Brent remarks; exactly because design of anything is about satisfying multiple issues simultaneously.

    Numerous conjunctive conditions

    Doing anything practical, as opposed to theoretical, such as running a business or building anything (i.e. design, art, or engineering), requires every single one of a vast number of conditions to be satisfied. Pure science/theory is the exact opposite: finding only one thing that is right (true) independent of all other factors (as far as possible).

    However see also Latour, B. (1987) Science in Action (Open University Press) for a view of how science as a whole social activity connects the two. Getting any intellectual attention, never mind physical resources, requires many groups in society to be interested in a scientific activity. A scientific theory is an abstraction so pared down that it can be used by very different interest groups.

    Some of the conditions in a given task will happen or be satisfied without being specially considered or arranged. Others will be the focus of designers' work to bring them about. And of course in many cases, fixing one condition obstructs or undoes another: conditions (subgoals) often interfere with each other, which is what makes design non-trivial.

    Design space

    Design can be thought of as a set of decisions, as a trajectory through a space defined by possible decisions.

    Some of these decisions do not affect others; some do, and so constrain later ones.

    There are several possible dimensions for mapping a design space.

    Top down design is a method, formerly favoured in computer science, that proceeds from the most general functional requirements first, down to more detailed decisions. This is only possible and practicable if the means (the computer on which the design is to be implemented) impose no constraints at all on what is done.

    Styles

    One possibility here relates to an idea I've heard about why different designers (or design companies) have different "styles" that can be recognised. If you think of design as a big search space, in which multiple kinds of constraint must be satisfied (e.g. cover the material, do it within budget, do it on the target equipment, don't bore the learners, ....), then different styles may result just from which of these constraints designers take as the "top" one they address first. For instance, you (Clark, say) could start with an idea about how not to bore learners, e.g. present the material via a game. Then covering the content gets done secondarily (and a few bits may never get fitted in). In contrast, a designer who starts with the curriculum would produce a piece of material structured as a tree representing the curriculum, and attempts to be engaging would be grafted on as details on each screenful. A totally different "style". And each would probably look dead creative to designers of the other school, and to customers used to the other school.
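    (For those who like code, here is a minimal Python sketch of that idea: the same constraints, applied greedily in a different priority order, yield recognisably different designs. The constraint names and decisions are entirely invented for illustration; this is a sketch of the search-space point only, not of any real ID method.)

        # Toy, invented example: a "design" is a set of decisions, and each constraint
        # greedily fixes the decisions it cares about, given what has already been
        # decided. Which constraint goes first changes the whole trajectory through
        # the design space, i.e. the "style".

        def engage_learners(design):
            design.setdefault("structure", "game levels")
            # Engagement is intrinsic if the game is the structure, bolted on otherwise.
            design.setdefault("engagement",
                              "intrinsic to the game" if design["structure"] == "game levels"
                              else "per-screen embellishments")

        def cover_curriculum(design):
            design.setdefault("structure", "curriculum tree")
            # Full coverage is only guaranteed when the curriculum tree IS the structure.
            design.setdefault("coverage",
                              "complete" if design["structure"] == "curriculum tree"
                              else "mostly (a few bits never fitted in)")

        def stay_on_budget(design):
            design.setdefault("budget", "within the fixed budget")

        def run(constraints):
            design = {}
            for constraint in constraints:   # priority order = "style"
                constraint(design)
            return design

        print(run([engage_learners, cover_curriculum, stay_on_budget]))  # game-first style
        print(run([cover_curriculum, engage_learners, stay_on_budget]))  # curriculum-first style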

    Design "methods"

    Paper procedures

    All design methods are simply paper descriptions of procedures that humans are meant to execute. At least in the area of design, machines can't execute these, and that means that people are adding something important. In other words, such design methods are substantially incomplete as descriptions of the real method being executed.

    A good way of thinking about them is that at most they emphasise one side of the design method; it may well be that two different methods emphasise two aspects, both of which are important and in fact necessary. This shows up if you observe real practitioners. While bad designers produce poor designs no matter what "method" they use, good designers can often be spotted silently (even unconsciously) adding aspects of method 2, so advocates of method 2 can say that is why they succeed even though they say they are using method 1. (What those advocates may miss is that they have to add aspects of method 1 to their own practice if and when they produce good designs themselves.) We have to read paper design methods (which is all that software engineering, for instance, consists of) as explicating a partial truth, and they should be valued for that partial truth; but it is not relevant to criticise them for omitting important things: all design methods do. This is consistent with the quote from Gagne / Briggs someone else posted.

    (Of course as theories, all such methods are pathetically incomplete; but then they are useful, which theories in education have trouble being.)

    [An example, though not necessarily an interesting one, is the issue of cost effectiveness and keeping within budget that we discussed a while ago. Most design methods do not mention this, but take it for granted that their practitioners will do this "outside" the method as written down.]

    Explicit methods. There were numerous comments of the kind "of course instructional designers do ...". I believe them. Design methods in all fields are paper procedures executed by skilled humans. That means a huge amount is NOT explicit in the method: if it were, computers not humans would be doing the design. I assume instructional design is like that, along with all software engineering methods. But it is of considerable theoretical interest, and perhaps eventual practical interest, to identify more and more of what "of course" has to be done that is not actually mentioned in the method itself.

    The role of method

    Lloyd raised the issue of the relationship between (1) the written design method, (2) the "context" or particular application circumstances, and (3) actual designer practice. Again, this is not peculiar to IT but is largely common with software engineering and indeed all design.

    For me, there is a fundamental point to keep in mind. Design "methods" are written down in natural language (English) and executed by humans. If those written instructions in fact contained a full description of the actions to be taken, they would be executed by computers and designers would be out of a job. This is in fact an active dream of some ("automatic programming"), but it ain't happened. The fact that methods are written in English and not in C or LISP shows that essential parts of the design activity are unspecified: no-one knows what they are. On the day that design practice mirrors the written "methods", designers are redundant, useful only if they will take lower wages than machines. The only surprise is that this surprises some people: they are the naive ones who actually believed the "method" to have higher status than that of heuristic advice.

    Another indication is the scarcity of scientific experiments on methods: where you produce some designs with the method and some without and compare the results. Why are methods disseminated on the basis of exhortation rather than of data? (In software engineering as well as IT.) Most of us react to this challenge, not by feeling embarrassed at having accepted something without any evidence, but by making excuses about the difficulty of such experiments. This shows two things. Firstly, the human demand for a method is independent of its quality: most people want one, and will take the best they can find with no lower limit on, ultimately with no requirement for, quality. (Just as the demand for doctors predated their having any objectively beneficial effect by several thousand years, and still shapes the medical industry.) It is demand driven, not supply driven. Secondly, those "difficulties" are equally clues about what gets added to the paper spec. of a method in the guise of common sense, imitating past successful practice and conventions, etc. The experiments are "difficult" because so much gets added outside the method that it is not clear that the "non-method" group will do worse, nor how you stop them doing all the good things supported by the method while claiming that they were just due to good sense, not any method.

    The interesting intellectual (not practical!) issues are in trying to identify what it is that human designers are adding to the "method". One kind of thing is (1) constraints vital to the activity but not mentioned in the method, e.g. finishing the project on time and on budget. Another kind is (2) making judgements such as that a paragraph of text is clear and comprehensible to the target learner group: that kind of judgement in fact requires unlimited amounts of knowledge about the world and language of those learners. You can call the latter "context" or "situated knowledge" or "general knowledge": in all cases we have no idea how to put it in a computer, and perhaps no idea how to teach it to a trainee designer if she doesn't happen to have it already. Now you can add your own further types of thing that human designers add to "methods" in their execution. My next two would be (3) discovering and using knowledge of prior conceptions in learners in this area, using (3a) phenomenography or (3b) not an SME but an experienced teacher who has discovered which bits are hard to get across; and (4) simple knowledge of the students' current context, e.g. this year all my examples in my HCI class have to be about the WWW, but two years ago they would have been about Hypercard.

    Top down design

    In computer science, this polarity may correspond, on the one hand, to the often tacit constraints that come from currently available hardware and software and, on the other, to the customers' requirements about the functionality to be delivered. Design methods are often naively top down, saying that design should proceed from the latter, and acting as if the former had no effect on the design, when in reality it is often at the forefront of both customer's and designer's minds ("I want a multimedia system, oh yes, and it should if possible cover these learning objectives, but never mind if you have to drop one or two"). In reality delivery platforms are often specified in advance (indeed many funding sources have the delivery medium specified in their setup without regard to learner needs), and in any case are always limited to what is currently available, which is changing rapidly. Part of the implicit magic of design is deciding on a particular bridge between the two (between the possible and the wanted), but this is seldom mentioned in design methods. But Lloyd correctly identifies the importance of designers having significant immersion in each of these two worlds before trying to produce designs which must connect them. Many, many courses are hopelessly one-sided: e.g. computer science and Art schools often train only in what is possible, not in what customers want or need.

    The essential problem with "instructional design" of the Gagne school is that it is strictly top down. It takes instructional objectives and subdivides them in a top down fashion, ending up with a set of small items, for each of which a separate instructional action is taken. This is like the design methods that used to be taught in computer science, but which are now largely discredited even there. There are two separate important reasons for the failure of top down design from functional requirements.

    The first is the importance of human users. Most computer applications nowadays revolve around human users. We do not have a predictive theory of how humans will behave. Consequently when you build a piece of software for humans, you have to observe whether or not the interaction works well, and when you have seen the problems, you need to modify the design. This is an iterative, prototyping design method. Instruction is for humans, and the same principles apply: we must expect any design to need improving in the light of experience. To advocate a design method without such iteration is in fact anti-scientific: it is to refuse to allow any place for testing against reality.

    The second reason is to do with the interaction of parts of the problem. A top down design method only works in domains where each part of the problem can be solved independently, and the solution to one part has no effect on the solution to another part. In computer science this is an important ideal, because making parts (subroutines, objects, whatever) independent makes both for reusable code and for tractable testing of components. But even there it is in fact problematic. (Thus Knuth has argued for design, not top down, but in order of difficulty, so that the most problematic part is tackled first in order to discover any knock-on implications for other parts.) Other areas of design do not even have this as an ideal, but are closer to recognising that the heart of design is dealing with interacting constraints. Instruction should recognise this too. Items of knowledge are not independent of each other. Even if an instructor convinces himself they are independent, this does not make it any more likely that they are independent in the mind of the learner, as my example of friction and Newton's laws illustrates. It just means the instructor is designing for himself, not for the learner. Similarly the question for instruction on milk valves is what connections each learner is likely to make, and which of them will be helpful, which confusing.

    Not only are knowledge items not independent of each other, each instructional action or item may have multiple effects (relate to several items), and conversely Laurillard's model lists 12 activities that bear on each single knowledge item. The relationships here are many to many, and no top down design procedure can cope.
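    (For those who find code clearer than prose, here is a minimal Python sketch of that many-to-many structure. The actions and knowledge items are entirely invented for illustration, and Laurillard's twelve activities are not reproduced; the point is only that the same mapping, read from either side, fans out in both directions, which no tree-shaped top down decomposition can represent.)

        # Toy, invented example: each instructional action touches several knowledge
        # items, and each knowledge item is served by several actions, so the relation
        # is many-to-many rather than the one-to-one (tree) mapping that strict
        # top down decomposition assumes.

        actions_to_items = {
            "pendulum demo":      {"Newton's 2nd law", "friction"},
            "worked example":     {"Newton's 2nd law", "free-body diagrams"},
            "inclined-plane lab": {"friction", "free-body diagrams", "Newton's 2nd law"},
        }

        # Invert the mapping to see it from the knowledge items' side.
        items_to_actions = {}
        for action, items in actions_to_items.items():
            for item in items:
                items_to_actions.setdefault(item, set()).add(action)

        for item, actions in sorted(items_to_actions.items()):
            print(f"{item} is served by {len(actions)} actions: {sorted(actions)}")

        # Changing or dropping one action silently weakens several items at once:
        # exactly the interaction between parts that top down design assumes away.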


    When I said "top down design" I meant, not whether students could be taught in more than one order (though that is an interesting issue), but whether the designer could afford to do the design top down because nothing they did in one part of the design could affect another part. This is widely sought after in computer science, and often seems to apply in instruction; but not always. In my example, I claimed that teaching friction and teaching Newton's laws strongly interacted, so any method that set goals for these independently and then designed instruction for each separately would get into trouble.

    I am not against setting explicit goals systematically, and indeed refining them down into small pieces. But I am interested in whether existing design methods then draw the false though apparently sensible inference that the pieces can then be addressed independently. A good test is whether any piece of instruction relates to more than one objective: in reality I sometimes learn more than one thing from a single learning event. But it is hard to design systematically for this.

    Method and what really makes for good design training

    I thought Lloyd's description of some of the things that he values in what has made him what he is in IT also has implications for method and training. He mentions having time to learn techniques (software tools, etc.) and time to play with them, i.e. to explore without having a contract to produce. He also mentions some immersion in the opposite end, that of educational delivery, both in primary education and in his family life. I think this represents for IT a polarity that is general in design: design somehow has to bridge what is currently possible from a technique or technology viewpoint and what is wanted or required in the world of the end user or consumer (the learner).

    And here I will repeat a point that Clark made, so important we should each repeat it: that what's good in training is good in professional practice and vice versa: that being able to learn and play separately in these areas, and together in specific projects is important for being as well as becoming, for professional practice and for training.

    Disciplines / training for design

    Cognitive processes and creativity

    John Black asked something like "what is the cognitive process of the designer?", which suckers us into thinking there is one such process. The key point is that design requires MORE than one kind of cognitive process. That is why Gary saying "gut feeling" and others saying "deliberate problem-solving" are both right, not only because both are possible, but because probably both are essential in any one designer.

    The analogy with writers' processes makes this almost clear.

    Despite the fact that this is obvious, a lot of people don't get it. For instance, a lot of work is done on providing software support for design, but most of it is undermined by providing one tool, as if there were only one process to support. Then you get findings such as: if you want to support recording design for future re-use and for safety-critical checking, the record needs to be structured and attributed; but if you introduce an anonymous tool, suddenly it gets used more and gives better results. Multiple cognitive processes are essential.

    A simplistic model of this would be: a designer must both do hypothesis-generation (e.g. by brainstorming), AND hypothesis-testing/checking (by systematic methods).
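    (A minimal sketch of that simplistic model, in Python: the generator and the checks below are invented stand-ins for what is really largely tacit cognitive work, so treat it as an illustration of the two interacting kinds of process, not as a design method.)

        import random

        # Toy generate-and-test loop: one process proposes candidate designs
        # (standing in for "gut feeling" / brainstorming); a second, quite different
        # process checks each candidate against explicit constraints. Neither
        # process alone produces a design.

        def brainstorm():
            # Hypothesis generation: cheap, unsystematic, plentiful.
            return {"duration_min": random.choice([10, 20, 45, 90]),
                    "medium": random.choice(["game", "video", "worked examples"]),
                    "coverage": random.random()}

        checks = [                                      # hypothesis testing: explicit, systematic
            lambda d: d["duration_min"] <= 45,          # fits the timetable slot
            lambda d: d["coverage"] > 0.8,              # covers (most of) the material
            lambda d: d["medium"] != "video",           # say, no video budget this year
        ]

        def design(max_tries=1000):
            for _ in range(max_tries):
                candidate = brainstorm()
                if all(check(candidate) for check in checks):
                    return candidate
            return None   # in real design you would also revise the checks and try again

        print(design())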

    Another aspect is that any design "method" turns out to be not a computer procedure, but a list of paper instructions that a computer can't execute and that a human designer is supposed to. The paper specifies some aspects of what gets done; the human adds complementary aspects (typically "commonsense" stuff about what the paper instruction "really means", but which, amazingly, a computer can't actually do). Again, two quite different kinds of cognitive process interact here to produce a design.

    Probably independent of this is the issue of re-using standard solutions vs. being "creative". I agree with everything Henry said on this, and think "creativity" may be an epiphenomenon in the eye of the beholder; but the important issue is what Gary said: the desire to do something non-boring because non-standard. This is central to design, because design depends so heavily on old habits and solutions.

    Does specialist training produce the worst practitioners?

    A few years ago I got a lift back from a meeting with a woman who managed a department of computer programmers for a control engineering firm. She said that on the whole her best (most useful to her) programmers were not those whose first degrees were in computer science, but those who had done something else (like physics) first. The latter were very competent, but regarded programming as a practical activity to be judged by its results in context. The former really believed what they had been taught about software engineering methods, and believed in that process, which made them less useful in practice.

    This sounds like Lloyd's points about mixed backgrounds. If I take this seriously and overgeneralise spectacularly beyond the data, then this would imply a) IT should only be taught at postgraduate level: never as a first degree no matter what the demand; b) perhaps those postgraduate courses should refuse admission to those who had only done education or computer science, and only take the anthropologists, physicists etc.; or at least people who had substantial work experience at something. Too quixotic: who would turn down applicants to a course they really wanted to promote? But then, just think what a joy it would be to teach on a course where every student was presenting a different and constructive perspective, rather than acting like a blank sheet and believing whatever you happened to say. Lloyd I notice was careful to avoid drawing any conclusions for action from his observations on mixed backgrounds.

    This is because design requires everything to be satisfied; so no one specialism will be enough in reality, or best.

    Art, craft, or science

    Lloyd raises the issue of whether to regard IT (or any design) as an art, a craft, or a science. This makes an important difference to its status, to its practice, and to how training (or is it education? certainly education with practice) for it should be organised.

    In the UK, teaching in the Art schools is mainly by practical work which is then critiqued by staff and other students. Practical work then feedback, not exposition of theory. (I have not witnessed, much less experienced, this; but it is a strikingly different learning and teaching approach from the one I am used to. I have to keep reminding myself about it.) This is not science: it is apparently the effective transmission of a body of practice, but does not revolve around explicit, formal, public concepts expounded by teachers and re-expressed by learners in language.

    Is this the model that Lloyd wants for IT?

    Lloyd says he takes the Wright brothers as a role hero. My own current role hero is Louis Pasteur. It used to be Newton and Einstein, but Pasteur managed to do work that simultaneously was economically important applied science for brewers and farmers, and the deepest pure science. (My recommended reading would be G.L.Geison "The private science of Louis Pasteur". It claims to be the first post-hagiographic account (i.e. not produced by his family publicity machine), and is certainly a story with everything: Saving lives, supporting whole industries, ethical dilemmas that could get you jailed today; the struggle for funding; huge success at this by some measures: he came to control personally 10% of the total national science budget; international acclaim: what's a piffling Nobel prize compared to having a whole research institute built for you personally by international voluntary subscription; tragedy: it came too late and he never got to work in it.)

    The difference may be this: Lloyd said the Wrights did scientific experiments, but while you may see the name of Pasteur in science texts, you don't hear the name of Wright in aerodynamics courses the way you hear of Bernoulli (say). They may have done experiments, but did these lead to general theories or just to better applied devices? (Lloyd said their focal contribution was a controllable aircraft: but they used a forward canard, which was very soon dropped from practice because it is inherently unstable. Have I got that right? So they demonstrated controllability, but their "principle" was quickly abandoned, at least for the next 80 years or so, for fundamental theoretical as well as practical reasons. In other words, can't I argue that their technique was a dead end like the Dodo, the Stirling cycle engine, and the steam road automobile, and that this was so exactly because their control theory was rubbish (unstable)?)

    If that distinction holds up, which model should be applied to IT? What do we want: to use science, and art too, to support craft; or to do new science?

    On the other hand, the Wrights produced an existence proof in the form of a working demonstration that anyone could see with their own eyes (or these days, on TV): these were and are infinitely more powerful than any publication (to allude back to Lloyd's gloom at the citation index) at changing people's minds. But Pasteur did this too in effect, in his mass audience public experimental demonstrations of vaccination. Convincing people is one thing, and important; but is a separate issue from the generality of the principle demonstrated.
