Web site logical path: [www.psy.gla.ac.uk] [~steve] [this page]
7) Which brings me to the next point. The main finding in studies of instruction (both "scientific" and in my own experience: and perhaps yours too?) is that of variability. The same instruction has varied effects on different learners. That for me is THE big finding. That means if instruction is a science, it is certainly not an exact science. And I believe a major source of that variation is the different prior experience and conceptions of different learners: and that, in my view, is the key point of constructivism. (However it has to be said that it is not the only source of learner variability: time of day, nutritional state, and above all interest in learning this material are at least as important: so constructivism can only be one of the theories we need.) So it seems to me that goal oriented approaches to instruction are in fact, contrary to Dave's claim, NOT scientific insofar as they ignore major scientific findings.
I think we should call a spade a spade, and call ID engineering. I don't think it a wise move to use Herb Simon as an excuse to call it science.
The prospects aren't good though. Science has achieved a lot of unifications in the last few centuries (though none are complete yet); but I know of no engineering areas that have got anywhere close at all. Still, the past is not a certain guide to the future.
Recognising that we are doing engineering, not science, also means accepting that our knowledge is incomplete and going to remain that way for the foreseeable future, and that this most fundamental of all facts about the area must be taken into account. A designer might wish that psychology had delivered a predictive theory of learning of a form directly usable, but that is not the case. Science may wait for the truth to arrive (or actually, go off and study something useless but tractable like galactic structure), but design and engineering are the activities that organise something useful with the incomplete knowledge available. This is nothing special about ID. Some crucial factors in the design of the Boeing 777 were discovered in tests, not from theory or simulation: that is why Boeing spent millions on tests, because they yield design knowledge and theories are incomplete. Similarly for structural engineering (think about this next time you cross a bridge): "Engineering is the art of modelling materials we do not wholly understand into shapes we cannot precisely analyse, so as to withstand forces we cannot properly assess, in such a way that the public has no reason to suspect the extent of our ignorance." [A.R.Dykes presidential address 1976 to the British Institute of Structural Engineers. Quoted in an Equinox TV programme Sept. 1992.]
For me, the most important part of a prolegomenon to better ID knowledge would be to recognise this state of uncertainty. It should have direct implications for the design methods we use: expect and budget not just to test to gain a quality stamp, but to test to discover important features, unpredicted by theory, that a design must be modified to accommodate. I think this point touches some remarks from and to Joe about the role of the learner in this: staying close to observations of the artifact in use is important in all successful engineering. Fantasies about "science" will simply hold ID back from effective design activity and methods, and indeed from learning more about instruction. In fact "science" is the story engineers tell nervous users: but not what they tell themselves if they are going to build safe and useful artifacts.
So: in my view ID is engineering not science. It is quite clear that in many ways Philip and other discussants know quite well that ID is about construction, but by mistaking the term to use, they are led to false analogies and so fail to analyse the kind of theory and the kind of unification that is required: different from that required in science.
Philip said "science as disciplined enquiry": quite. That is exactly why ID is NOT science: it is primarily construction, with enquiry subordinated to that. Science is the opposite: primarily enquiry, with construction (e.g. of clever and original experimental apparatus) subordinated to enquiry.
Lloyd said: ID is also an art. But all engineering, as the best engineers repeatedly say, is in large part art. Architects again are a crucial example to ponder. And design is taught (in the UK) in Art schools as much as in engineering departments. He is right: but being an art does NOT mean it is not engineering, and it certainly does mean it is about construction, not experimental data.
Of course I would love to do that, because I love to feel I understand (a joy quite independent of whether learners benefit from anything I do): this is a reference back to my personal position.
A different kind of reference is about cognitive vs. behavioural which Lloyd mentioned in passing. I agree with him that from a practical point of view, both get to be used especially by bricoleurs (Joe) i.e. by practical people who do what works rather than care about philosophical arguments (e.g. about why there can't possibly be moons round Jupiter even though you can see them in Galileo's telescope). I think why both work is exactly because reflection is a profound fact about mental life: we don't just act because we have ideas and plans, we equally notice our own behaviour and draw (cognitive) conclusions from it. So if you change my behaviour, it will probably change my ideas just as much as vice versa. And what is good for clients and learners, is good for professionals and academics too.
However see also Latour, B. (1987) Science in Action (Open University Press) for a view of how science as a whole social activity connects the two. Getting any intellectual attention, never mind physical resources, requires many groups in society to be interested in a scientific activity. A scientific theory is an abstraction so pared down that it can be used by very different interest groups.
Some of the conditions in a given task will happen or be satisfied without being specially considered or arranged. Others will be the focus of designers' work to bring them about. And of course in many cases, fixing one condition obstructs or undoes another: conditions (subgoals) often interfere with each other, which is what makes design non-trivial.
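A toy sketch may make the interference concrete. The two conditions and all the numbers below are invented for illustration: bringing about one condition (coverage of the material) undoes another (staying within budget), so the conditions cannot simply be fixed one at a time.

```python
# Invented example: two design conditions (subgoals) that interfere.

def within_budget(design):
    return design["cost"] <= 100

def covers_material(design):
    return design["topics"] >= 5

def add_topic(design):
    # Bringing about one condition (coverage)...
    design["topics"] += 1
    # ...obstructs another: each extra topic costs money to develop.
    design["cost"] += 30
    return design

design = {"cost": 80, "topics": 4}
design = add_topic(design)
print(covers_material(design))  # True: the material is now covered...
print(within_budget(design))    # False: ...but the budget condition is undone
```

Satisfying both conditions at once requires considering them jointly (e.g. cheaper topics, or a bigger budget), which is exactly what makes design non-trivial.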
Some of these decisions do not affect others; some do, and thereby constrain later ones.
There are several possible dimensions for mapping a design space.
Top down design is a method, formerly favoured in computer science, that means going from the most general functional requirements first, down to more detailed decisions. This is only possible and practicable if the means (the computer on which it is to be implemented) impose no constraints at all on what is done.
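A minimal sketch of what top-down refinement looks like in code (all the names here are hypothetical): the most general requirement is written first, and the details are filled in beneath it, on the assumption that no lower-level decision can force a redesign above.

```python
# Hypothetical top-down refinement: general requirement first, details last.

def deliver_course():
    # Most general functional requirement, decided first.
    for objective in list_objectives():
        teach(objective)  # each sub-decision treated as independent

def list_objectives():
    # Next level of detail: subdivide the requirement.
    return ["define terms", "apply rules", "evaluate cases"]

def teach(objective):
    # Lowest level, filled in last -- on the assumption that nothing
    # discovered down here can force a redesign of deliver_course() above.
    print("lesson for:", objective)

deliver_course()
```

The method works only as long as that assumption holds; the sections below argue that for instruction (as for most software) it does not.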
A good way of thinking about them is that at most they emphasise one side of the design method; it may well be that two different methods emphasise two aspects both of which are important, and both of which are in fact necessary. This shows up if you observe real practitioners. While bad designers produce poor designs no matter what "method" they use, good designers can often be spotted adding silently (even unconsciously) aspects of method 2, so advocates of method 2 can say that is why they succeed even though they say they are using method 1. (What those advocates may miss is that they have to add aspects of method 1 to their own practice if and when they produce good designs themselves.) We have to read paper design methods (which is all that software engineering, for instance, consists of) as explicating a partial truth; they should be valued for that partial truth, but it is not relevant to criticise them for omitting important things: all design methods do. This is consistent with the quote from Gagne / Briggs someone else posted.
(Of course as theories, all such methods are pathetically incomplete; but then they are useful, which theories in education have trouble being.)
[An example, though not necessarily an interesting one, is the issue of cost effectiveness and keeping within budget that we discussed a while ago. Most design methods do not mention this, but take it for granted that their practitioners will do this "outside" the method as written down.]
c) Explicit methods. There were numerous comments of the kind "of course instructional designers do ...". I believe them. Design methods in all fields are paper procedures executed by skilled humans. That means a huge amount is NOT explicit in the method: if it were, computers not humans would be doing the design. I assume instructional design is like that, along with all software engineering methods. But it is of considerable theoretical interest, and perhaps eventual practical interest, to identify more and more of what "of course" has to be done that is not actually mentioned in the method itself.
For me, there is a fundamental point to keep in mind. Design "methods" are written down in natural language (English) and executed by humans. If those written instructions in fact contained a full description of the actions to be taken, they would be executed by computers and designers would be out of a job. This is in fact an active dream of some ("automatic programming"), but it ain't happened. The fact that methods are written in English and not in C or LISP shows that essential parts of the design activity are unspecified: no-one knows what they are. On the day that design practice mirrors the written "methods", designers are redundant, useful only if they will take lower wages than machines. The only surprise is that this surprises some people: they are the naive ones who actually believed the "method" to have higher status than that of heuristic advice.
Another indication is the scarcity of scientific experiments on methods: where you produce some designs with the method and some without and compare the results. Why are methods disseminated on the basis of exhortation rather than of data? (In software engineering as well as IT.) Most of us react to this challenge, not by feeling embarrassed at having accepted something without any evidence, but by making excuses about the difficulty of such experiments. This shows two things. Firstly, the human demand for a method is independent of its quality: most people want one, and will take the best they can find with no lower limit on, ultimately with no requirement for, quality. (Just as the demand for doctors predated their having any objectively beneficial effect by several thousand years, and still shapes the medical industry.) It is demand driven, not supply driven. Secondly, those "difficulties" are equally clues about what gets added to the paper spec. of a method in the guise of commonsense, imitating past successful practice and conventions, etc. The experiments are "difficult" because so much gets added outside the method it is not clear that the "non-method" group will do worse, nor how you stop them doing all the good things supported by the method but claiming that they were just due to good sense not any method.
The interesting intellectual (not practical!) issues are in trying to identify what it is that human designers are adding to the "method". One kind of thing is (1) constraints vital to the activity but not mentioned in the method e.g. finishing the project on time and on budget. Another kind of thing is (2) in making judgements such as that a paragraph of text is clear and comprehensible to the target learner group: that kind of judgement in fact requires unlimited amounts of knowledge about the world and language of those learners. You can call the latter "context" or "situated knowledge" or "general knowledge": in all cases we have no idea how to put it in a computer, and perhaps no idea how to teach it to a trainee designer if she doesn't happen to have it already. Now you can add your own further types of thing that human designers add to "methods" in their execution. My next two would be (3) discovering and using knowledge of prior conceptions in learners in this area, using (3a) phenomenography or (3b) not a SME but an experienced teacher who has discovered which bits are hard to get across; and (4) simple knowledge of the students' current context e.g. this year all my examples in my HCI class have to be about the WWW but two years ago, would have been Hypercard.
8) The essential problem with "instructional design" of the Gagne school is that it is strictly top down. It takes instructional objectives and subdivides them in a top down fashion, ending up with a set of small items, for each of which a separate instructional action is taken. This is like the design methods that used to be taught in computer science, but are now largely discredited even there. There are two separate important reasons for the failure of top down design from functional requirements.
The first is the importance of human users. Most computer applications nowadays revolve around human users. We do not have a predictive theory of how humans will behave. Consequently when you build a piece of software for humans, you have to observe whether or not the interaction works well, and when you have seen the problems, you need to modify the design. This is an iterative, prototyping design method. Instruction is for humans, and the same principles apply: we must expect any design to need improving in the light of experience. To advocate a design method without such iteration is in fact anti-scientific: it is to refuse to allow any place for testing against reality.
The second reason is to do with the interaction of parts of the problem. A top down design method only works in domains where each part of the problem can be solved independently, and the solution to one part has no effect on the solution to another part. In computer science this is an important ideal, because making parts (subroutines, objects, whatever) independent makes both for reusable code and for tractable testing of components. But even there it is in fact problematic. (Thus Knuth has argued for design, not top down, but in order of difficulty, so that the most problematic part is tackled first in order to discover any knock-on implications for other parts.) Other areas of design do not even have this as an ideal, but are closer to recognising that the heart of design is dealing with interacting constraints. Instruction should recognise this too. Items of knowledge are not independent of each other. Even if an instructor convinces himself they are independent, this does not make it any more likely that they are independent in the mind of the learner, as my example of friction and Newton's laws illustrates. It just means the instructor is designing for himself, not for the learner. Similarly the question for instruction on milk valves is what connections each learner is likely to make, and which of them will be helpful, which confusing.
Not only are knowledge items not independent of each other, each instructional action or item may have multiple effects (relate to several items), and conversely Laurillard's model lists 12 activities that bear on each single knowledge item. The relationships here are many to many, and no top down design procedure can cope.
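The many-to-many structure can be sketched with hypothetical data: each instructional activity bears on several knowledge items, and each item is served by several activities, so the relation is a bipartite mapping rather than the tree that top-down refinement produces.

```python
# Hypothetical data: activities and the knowledge items each bears on.
activity_to_items = {
    "worked example": {"friction", "newton_2"},
    "lab experiment": {"friction", "measurement"},
    "discussion":     {"newton_2", "measurement"},
}

# Invert the mapping: which activities bear on each knowledge item?
item_to_activities = {}
for activity, items in activity_to_items.items():
    for item in items:
        item_to_activities.setdefault(item, set()).add(activity)

print(sorted(item_to_activities["friction"]))  # ['lab experiment', 'worked example']
```

No tree can represent both mappings at once, which is the structural reason a top down subdivision of objectives cannot capture the real relationships.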
I am not against setting explicit goals systematically, and indeed refining them down into small pieces. But I am interested in whether existing design methods then draw the false though apparently sensible inference that the pieces can then be addressed independently. A good test is whether any piece of instruction relates to more than one objective: in reality I sometimes learn more than one thing from a single learning event. But it is hard to design systematically for this.
And here I will repeat a point that Clark made, so important we should each repeat it: that what's good in training is good in professional practice and vice versa; and that being able to learn and play separately in these areas, and together in specific projects, is important for being as well as becoming, both for professional practice and for training.
The analogy with writers' processes makes this almost clear.
Despite the fact this is obvious, a lot of people don't get it. For instance, a lot of work is done on providing software support for design, but most of it is undermined by providing one tool, as if there were only one process to support. Then you get findings such as: if you want to support recording design for future re-use and for safety critical checking, it needs to be structured and attributed; but if you introduce an anonymous tool, suddenly it gets used more and gives better results. Multiple cognitive processes are essential.
A simplistic model of this would be: a designer must both do hypothesis-generation (e.g. by brainstorming), AND hypothesis-testing/checking (by systematic methods).
Another aspect is that if you take any design "method", then this turns out to be not a computer procedure, but a list of paper instructions that a computer can't execute and that a human designer is supposed to. The paper specifies some aspects of what gets done; the human adds complementary aspects (typically "commonsense" stuff about what the paper instruction "really means", which amazingly a computer can't actually do). Again, two quite different kinds of cognitive process interact here to produce a design.
Probably independent of this is the issue of re-using standard solutions vs. being "creative". I agree with everything Henry said on this, and think "creativity" may be an epiphenomenon in the eye of the beholder; but the important issue is what Gary said: the desire to do something non-boring because non-standard. This is central to design, because design depends so heavily on old habits and solutions.
This sounds like Lloyd's points about mixed backgrounds. If I take this seriously and overgeneralise spectacularly beyond the data, then this would imply a) IT should only be taught at postgraduate level: never as a first degree no matter what the demand; b) perhaps those postgraduate courses should refuse admission to those who had only done education or computer science, and only take the anthropologists, physicists etc.; or at least people who had substantial work experience at something. Too quixotic: who would turn down applicants to a course they really wanted to promote? But then, just think what a joy it would be to teach on a course where every student was presenting a different and constructive perspective, rather than acting like a blank sheet and believing whatever you happened to say. Lloyd I notice was careful to avoid drawing any conclusions for action from his observations on mixed backgrounds.
This is because design requires everything to be satisfied; so no one specialism will be enough, or best, in reality.
In the UK, teaching in the Art schools is mainly by practical work which is then critiqued by staff and other students. Practical work, then feedback; not exposition of theory. (I have not witnessed, much less experienced, this; but it is a strikingly different learning and teaching approach from the one I am used to. I have to keep reminding myself about it.) This is not science: it is apparently the effective transmission of a body of practice, but does not revolve around explicit, formal, public concepts expounded by teachers and re-expressed by learners in language.
Is this the model that Lloyd wants for IT?
Lloyd says he takes the Wright brothers as a role hero. My own current role hero is Louis Pasteur. It used to be Newton and Einstein, but Pasteur managed to do work that simultaneously was economically important applied science for brewers and farmers, and the deepest pure science. (My recommended reading would be G.L.Geison "The private science of Louis Pasteur". It claims to be the first post-hagiographic account (i.e. not produced by his family publicity machine), and is certainly a story with everything: saving lives, supporting whole industries, ethical dilemmas that could get you jailed today; the struggle for funding; huge success at this by some measures: he came to control personally 10% of the total national science budget; international acclaim: what's a piffling Nobel prize compared to having a whole research institute built for you personally by international voluntary subscription; tragedy: it came too late and he never got to work in it.)
The difference may be this: Lloyd said the Wrights did scientific experiments, but while you may see the name of Pasteur in science texts, you don't hear the name of Wright in aerodynamics courses the way you hear of Bernoulli (say). They may have done experiments, but did these lead to general theories or just to better applied devices? (Lloyd said their focal contribution was a controllable aircraft: but they used a forward canard, which was very soon dropped from practice because it is inherently unstable. Have I got that right? So they demonstrated controllability, but their "principle" was quickly abandoned, at least for the next 80 years or so, for fundamental theoretical as well as practical reasons. In other words, can't I argue that their technique was a dead end like the Dodo, the Stirling cycle engine, and the steam road automobile, and that this was so exactly because their control theory was rubbish (unstable)?)
If that distinction holds up, which model should be applied to IT? What do we want: to use science, and art too, to support craft; or to do new science?
On the other hand, the Wrights produced an existence proof in the form of a working demonstration that anyone could see with their own eyes (or these days, on TV): these were and are infinitely more powerful than any publication (to allude back to Lloyd's gloom at the citation index) at changing people's minds. But Pasteur did this too in effect, in his mass audience public experimental demonstrations of vaccination. Convincing people is one thing, and important; but is a separate issue from the generality of the principle demonstrated.