in Sept. 1997 around the topic of constructivism. (Link to list of ITFORUM participants.)
I like the rejection, particularly as applied to constructivism, because the part of constructivism I find most acceptable is its idea that education and training should seek to connect new material to a learner's existing knowledge and experience. Abstract and/or philosophical arguments don't work for Dave, and above all they do not address our own personal existing experience of learning and instruction: so of course, even if we are constructivists, we shouldn't be using them here, since constructivism itself would predict that they will be ineffective.
So what in my own experience made me raise the issue? Several things:
SCIENCE
The second reason is to do with the interaction of parts of the problem. A top down design method only works in domains where each part of the problem can be solved independently, and the solution to one part has no effect on the solution to another part. In computer science this is an important ideal, because making parts (subroutines, objects, whatever) independent makes both for reusable code and for tractable testing of components. But even there it is in fact problematic. (Thus Knuth has argued for design, not top down, but in order of difficulty, so that the most problematic part is tackled first in order to discover any knock-on implications for other parts.) Other areas of design do not even have this as an ideal, but are closer to recognising that the heart of design is dealing with interacting constraints. Instruction should recognise this too. Items of knowledge are not independent of each other. Even if an instructor convinces himself they are independent, this does not make it any more likely that they are independent in the mind of the learner, as my example of friction and Newton's laws illustrates. It just means the instructor is designing for himself, not for the learner. Similarly, the question for instruction on milk valves is what connections each learner is likely to make, and which of them will be helpful and which confusing.
Not only are knowledge items not independent of each other, but each instructional action or item may have multiple effects (relate to several items), and conversely Laurillard's model lists 12 activities that bear on each single knowledge item. The relationships here are many-to-many, and no top down design procedure can cope.
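To make the many-to-many point concrete, here is a minimal sketch in Python. The activity and item names are invented for illustration (they are not taken from Laurillard's model or from any actual design): once each instructional activity bears on several knowledge items, inverting the mapping shows that items become coupled through shared activities, so there is no way to split the design into one independent sub-design per item.

    # Minimal sketch: a hypothetical many-to-many mapping from instructional
    # activities to the knowledge items each one bears on.
    from collections import defaultdict

    ACTIVITY_TOUCHES = {
        "inclined-plane demo":        {"friction", "newtons-laws"},
        "friction worksheet":         {"friction"},
        "lecture on forces":          {"newtons-laws", "friction"},
        "air-hose disconnection lab": {"air-pressure", "safety-procedure"},
    }

    def items_to_activities(activity_touches):
        """Invert the mapping: which activities bear on each knowledge item?"""
        index = defaultdict(set)
        for activity, items in activity_touches.items():
            for item in items:
                index[item].add(activity)
        return dict(index)

    def coupled_item_pairs(activity_touches):
        """Pairs of knowledge items that share an activity; any such pair
        cannot be designed for independently."""
        pairs = set()
        for items in activity_touches.values():
            ordered = sorted(items)
            for i, a in enumerate(ordered):
                for b in ordered[i + 1:]:
                    pairs.add((a, b))
        return pairs

    print(items_to_activities(ACTIVITY_TOUCHES))
    print(coupled_item_pairs(ACTIVITY_TOUCHES))
    # "friction" and "newtons-laws" come out coupled, which is exactly the
    # interaction claimed in the friction / Newton's laws example above.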
Constructivism: construct new knowledge by attachment to prior foundations.
This is always possible, no matter how detached the instruction. The learner can always attach it to prior knowledge of the words, or at least (for new jargon) the syllables. But such attachment is extremely shallow: a very narrow base. As psychology would predict, learners then show poor retention compared to linking it to a richer network of prior items.
Phil Agre: we want independent subparts to problems because this is such a cognitively advantageous thing. Hence the desire for knowledge lumps and artifacts that show this independence.
The fact is that I'd rather learn to drive from Dave Merrill than from an instructor who urged me to apply my previous experiences of walking, swimming and bike riding to discover how this mode of transport works.
It seems to me that a sensible middle road is to be found in Ference Marton's phenomenography and its development into Diana Laurillard's "conversational" model of teaching, where it is the responsibility of the teacher first to present the material to be taught, then to understand the learner's world view (which may be totally cock-eyed and interfere with learning), and then to adapt the teaching method through the use of a conversational framework. Not a new idea - Socrates did it. Laurillard indicates ways in which mediated instruction can aim towards this ideal. I'm sure Dave knows it well.
1. Brett Bixler indicated that "Others in this discussion have alluded to ITT and its probable inability to teach higher order thinking skills." I have a little trouble with what we mean by "higher order thinking skills". One interpretation seems to be that we don't know what will be learned. Another seems to be that it involves "inference". In my mechanistic way I believe that higher order skills are knowable and teachable. For me there is a hierarchy of such skills including, but not limited to, the following: concept classification (inference about an object or situation as to whether or not it belongs to a class) and prediction (inference about whether a given consequence will follow from a set of conditions). The areas of ambiguity seem to me to be "discovery", finding new relationships between conditions and consequences, and "invention", creating new artifacts which accomplish a given goal. However, once a relationship has been discovered, or even while it is at the hypothesis stage, representing it in terms of knowledge objects makes the hypothesis or proposition clear, so it is possible to determine when a discovery has been made. In a like manner, specifying the operation of an invention in terms of conditions and consequences makes its specification less ambiguous and thus easier to determine when the invention has accomplished the desired goal (a rough sketch of this style of representation follows point 2 below). I suspect that this formulation makes some of you uncomfortable.
2. Steve Draper's observation concerning "cannot disconnect" versus "should not disconnect" is very well taken. We have versions of the system that operate just as he suggests. If you disconnect the hose when the air pressure is up you suffer the consequences.
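As an illustration of the idea in point 1, borrowing the air-hose example from point 2, here is a rough Python sketch of representing a proposition as conditions plus a consequence so that it becomes mechanically checkable. The names (Rule, predicts, the condition keys) are hypothetical and are not part of Instructional Transaction Theory or of the actual milking system.

    # Hypothetical sketch: a proposition expressed as conditions plus a
    # predicted consequence, so it can be checked against concrete situations.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    Situation = Dict[str, bool]  # observable properties of a situation

    @dataclass
    class Rule:
        """If all conditions hold for a situation, the consequence is predicted."""
        conditions: List[Callable[[Situation], bool]]
        consequence: str

        def predicts(self, situation: Situation) -> bool:
            return all(condition(situation) for condition in self.conditions)

    # "If the air pressure is up and the hose is disconnected, milk is spilled."
    spill_rule = Rule(
        conditions=[
            lambda s: s.get("air_pressure_up", False),
            lambda s: s.get("hose_disconnected", False),
        ],
        consequence="milk is spilled",
    )

    print(spill_rule.predicts({"air_pressure_up": True, "hose_disconnected": True}))   # True
    print(spill_rule.predicts({"air_pressure_up": False, "hose_disconnected": True}))  # False

On this (simplified) reading, prediction is just evaluating the rule against a new situation, and a proposed discovery is stated clearly enough to be tested.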
It bothers me that the rhetoric of constructivism seems to depend on straw man arguments. Why do those advocating this position insist on telling me what I believe? Let me address one or two of these concerns.
Steve Draper argues for considering the learner. I couldn't agree more. Then he includes a straw man which is not the case: "I take this to be an argument in favor of addressing learners' prior knowledge, as I believe constructivism requires but `instructional design' does not." Steve, every ISD model with which I am familiar has a step to analyze learners, to determine what they know. Gagne hierarchy analysis is an attempt to determine at what point a learner should enter the body of knowledge and skill to be taught, and is based on "prerequisite knowledge" as its fundamental principle.
The same theme is stated in the following: "... why if science is good for instructors it isn't good for learners ...". Steve, the science of instruction takes the following format: if you want a learner to acquire a certain outcome, then you must engage certain conditions of learning. This is a statement about learners. It does not ignore learners.
Another statement: "... the same instruction has varied effects on different learners." Steve seems to feel that somehow we instructional designers are ignoring this fact. I wish I had time to pursue this argument in more depth, but for now my argument is as follows:
At a deep level, the learning mechanisms of all human beings are basically the same regardless of race, culture, or even the time in which the learner lived (at least within recorded history).
At a surface level, the specific knowledge and skill that a given learner has acquired differs greatly.
For me a science of instruction addresses the deep level. What interactions are necessary to engage the learner's basic learning mechanisms? If these interactions are not present there will be a decrement in learning. Our work over the past 30 years has been an attempt to identify some of the prescriptions that address these deep learning mechanisms. Subscribing to a constructivist or any other philosophy does not change these underlying mechanisms. If the instruction, whether discovery environments or lectures, does not include the interactions required by a particular learning mechanism, then the learner will not acquire the desired knowledge or skill.
Now, what about the surface level? It seems to me that content analysis, learner analysis, subject matter analysis, task analysis, and a number of other techniques are attempts to determine what the learner knows, where the instruction should begin, and what prerequisites are necessary. To believe that instructional design is not concerned with these issues is to demonstrate that you do not understand instructional design. Once the curriculum (the list and sequence of content items) has been determined, it is critical that the appropriate instructional strategies (transactions, to use our term) are engaged to enable the learner to acquire the knowledge or skill represented by a given content item. Representing these fundamental transactions, and the strategies that cause the student to engage them, is the intent of Instructional Transaction Theory.
I appreciate Walt Wager's comments on Gagne analysis. No one knows it better. The other straw man that demonstrates a fundamental lack of understanding of instructional design is that it is strictly top down analysis, and further, that analysis is somehow bad and causes us to miss important learning outcomes. I suggest that lack of appropriate analysis is the problem. If we don't know what we want to teach, if we leave it up to the learner to structure their own learning, if we engage in "ill structured" environments, the probability is very, very high that the necessary information is missing from the environment, making it impossible for the learner to acquire the desired knowledge or skill. Further, it is very probable that the links between the required knowledge objects are unspecified and incomplete, further hindering the instructional transactions required to engage the learner's fundamental learning mechanisms. Allowing students to structure their own learning in "ill structured" environments is not a great virtue but an abdication of our responsibility as teachers and instructors. Students do not know or understand their own learning mechanisms. Most environments are not structured to promote appropriate interactions which will efficiently and effectively engage the learner's fundamental learning mechanisms. Instruction is the science and technology of determining how to "design" effective learning environments. Instructional Transaction Theory is such an attempt.
Finally, we are only scratching the surface. We know very little. But I refuse to abandon the quest for such a science and technology.
For anyone interested, I am putting my collected notes on this part of the discussion at: http://www.psy.gla.ac.uk/~steve/constr.html
I think I have three points to put about what I was on about.

a) Several people said that instructional designers do investigate what their learners know as part of their design. It sounded, however, as if that was still within a framework that thinks of knowledge as consisting of independent, strictly cumulative items (or objects!). Often this is fine, and it takes care of skipping redundancy and ensuring the learner has pre-requisite knowledge. But it doesn't take account of identifying prior (mis)conceptions that will actively interfere with instruction, nor of identifying what it is the learner already knows that should get connected with the new material (e.g. in what circumstances they have met air hoses before).
b) When I said "top down design" I meant, not whether students could be taught in more than one order (though that is an interesting issue), but whether the designer could afford to do the design top down because nothing they did in one part of the design could affect another part. This is widely sought after in computer science, and often seems to apply in instruction; but not always. In my example, I claimed that teaching friction and teaching Newton's laws strongly interacted, so any method that set goals for these independently and then designed instruction for each separately would get into trouble.

I am not against setting explicit goals systematically, and indeed refining them down into small pieces. But I am interested in whether existing design methods then draw the false though apparently sensible inference that the pieces can then be addressed independently. A good test is whether any piece of instruction relates to more than one objective: in reality I sometimes learn more than one thing from a single learning event. But it is hard to design systematically for this.
c) Explicit methods. There were numerous comments of the kind "of course instructional designers do ...". I believe them. Design methods in all fields are paper procedures executed by skilled humans. That means a huge amount is NOT explicit in the method: if it were, computers, not humans, would be doing the design. I assume instructional design is like that, along with all software engineering methods. But it is of considerable theoretical interest, and perhaps eventual practical interest, to identify more and more of what "of course" has to be done that is not actually mentioned in the method itself.

Someone mentioned behaviorism. That, in this context, strikes a big chord for me. I once heard a talk on behavioral therapy; what was striking was that the speaker was describing an approach that was both clearly successful and in many ways intuitively plausible, but justifying it by a theory that seemed clearly rubbish. Any theory that helps is good insofar as it supports or advances successful practice; but it is interesting to try to strive for better theories too. A very promising place to look is in the gaps between explicit theory and what is actually done by good practitioners. That was what I was trying to probe.