By
Stephen W. Draper,
Department of Psychology,
Julie Cargill,
and
Quintin Cutts,
Department of Computing Science,
University of Glasgow.
This is a version of a paper given at
ASCILITE2001; and published as:
Draper,S.W., Cargill,J., & Cutts,Q. (2002)
"Electronically enhanced classroom interaction"
Australian Journal of Educational Technology vol.18 no.1 pp.13-23.
The equipment is essentially that of the TV show "Who wants to be a millionaire?": every member of the audience (i.e. each learner in a lecture theatre) has a handset similar to a TV remote control. The presenter displays a multiple choice question (MCQ); each learner transmits the digit corresponding to their chosen answer by infrared; a small PC (e.g. a laptop) accumulates the answers and displays, via the room's projection system, a bar chart representing the distribution (totals) of the responses to audience and presenter alike.
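To make the data flow concrete, here is a minimal sketch (in Python, and not the actual software used with any of the systems discussed in this paper) of the tallying step: each handset submission is treated as an anonymous (handset id, chosen digit) pair, the totals are accumulated, and the distribution is shown as a simple text bar chart. All names and the sample data below are illustrative assumptions only.

```python
# Hypothetical sketch of the tallying step: accumulate anonymous handset
# digits and print the distribution of responses as a text bar chart.
from collections import Counter

def tally(responses):
    """Count how many learners chose each option (digit)."""
    return Counter(digit for _, digit in responses)

def show_distribution(counts, options="12345"):
    """Print one bar per response option, scaled to the class size."""
    total = sum(counts.values()) or 1
    for option in options:
        n = counts.get(option, 0)
        bar = "#" * round(30 * n / total)
        print(f"option {option}: {bar} {n} ({100 * n / total:.0f}%)")

# Hypothetical example: eight handsets answering a five-option MCQ.
responses = [("h1", "3"), ("h2", "3"), ("h3", "1"), ("h4", "3"),
             ("h5", "2"), ("h6", "3"), ("h7", "1"), ("h8", "3")]
show_distribution(tally(responses))
```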
This may be called (following Michael McCabe) a "Group Response" (GR) system. Its essential feature is that, regardless of group size, both audience and presenter get to know the distribution of responses (alternatives chosen), and how their own personal response relates to that distribution, but without knowing who chose what. This means everyone contributes, and the representativeness of each response is also exactly known. On the other hand, the privacy of the choice means that, unlike in face to face groups, each individual can express the choice they incline to, rather than only a choice they feel able to explain and justify to others. These are quite often different, both in science learning and in social processes.
The main pedagogic categories of use of the equipment are:
In considering large classes in large lecture theatres, the main problem is usually analysed as being the lack of interaction and the consequent extreme passivity imposed on the audience. In terms of Laurillard's model of the learning and teaching process (Laurillard, 1993, p.103), this situation fails to support the iterative interaction between learner and teacher that is one of her underlying principles, and more specifically does not support even activity 2: the "re-expression" by the learner of what the teacher has expressed. (This can be seen as corresponding to the constructivist requirement that learners acquire knowledge by rebuilding it on their own personal, mental foundations. Redescribing it in their own terms is an activity that powerfully promotes this.) Actually, with highly skilled learners and a teacher reasonably in tune with the group, this can nevertheless take place: for instance, where the learners take notes that are not mere dictation, but substantial re-formulations of what is being talked about. (This is a reasonable theoretical analysis of the considerable benefits I have often obtained from listening to talks at conferences where I have not asked questions, but have nevertheless learned something useful.) However this degree of skilled, silent interaction is not often present in undergraduate teaching, and large numbers usually prevent learners from asking sufficient questions to repair the attunement between speaker and audience, from both a pragmatic (there isn't time for many people to ask questions) and a social (it just feels too embarrassing) viewpoint.
That, then, is the diagnosis offered here of the chief weakness of lecturing to large groups. The handsets and associated equipment offer a way of tackling that weakness by (a) allowing each learner independently to generate an answer (at least a partial instantiation of activity 2), whereas otherwise only the handful who put their hands up really do this; (b) registering that answer and so maintaining the motivation for doing it; and in so doing (c) affecting the course of what happens next. This contingency (dependence of the teacher's behaviour on what the learners do) is true interactivity: one of the underlying principles of Laurillard's model, represented there by the to and fro repetition of activities between learner and teacher. The summed responses are real feedback to the teacher that naturally leads to adjustments and reattunement if required, and in fact does this better than questions and answers from any subset of individuals. Furthermore the equipment offers an anonymity of response that addresses the shyness that additionally inhibits any interaction.
As mentioned in passing, there are some other reasons for expecting benefits with the types of pedagogic use other than initiating discussions. Formative, summative, and peer assessment could be made more convenient and quicker (and so more affordable for both learners and teachers in terms of time). Building a sense of a learning community could get off to a quicker start, especially in large groups. Demonstrating experimental effects instantly connects the abstract overview given to a personal perception and experience of it: something very helpful to learning, both for retention and comprehension and for a fuller content of learning. The biggest learning gains, however, are likely to come from the much better and quicker feedback from learners to teachers, allowing better attunement of the delivery; and from the method of teaching by questions, i.e. of discussions in class (whether in small groups, plenaries, or a combination) initiated by well designed questions and by getting each individual to start by committing to an initial position.
Is the equipment really likely to be any better than the alternatives? The simplest alternative is getting students to give a show of hands. This equipment crucially offers more privacy (it is in effect a secret ballot, and important for just the same reasons). Other rival technologies are to issue each student with a cardboard or plastic cube with a different colour on each face, to be turned to show their "vote"; or with a large sheet of paper divided into a few squares each containing a digit, which the student can hold up in front of their body while pointing to the digit they select. These methods allow only near neighbours to see a student's selection. Thus the electronic equipment offers somewhat better privacy, but the difference may only be crucial with new classes: it is quite possible that with a class grown comfortable with the electronic version, moving over to a cheaper but less private version might not destroy the interactivity. The electronic version also provides faster and more accurate counting of the results: most presenters will only estimate shows of hands to about the nearest 20%, unless they have the patience to pursue an exact count even with large groups. The accuracy may have a small but not negligible value in making all participants feel their views count, and are not just lost in crudely approximate estimates.
In scrutinising this instructional design rationale, note that it does not feature computers in a starring role (although actually one is crucial to tabulate the results): the instructional design mostly isn't in the equipment or software, but in how each teacher uses it. That is a lesson which perhaps the rest of the learning technology field should take more to heart if the aim is in fact to improve learning rather than to promote the glamour of machines. On the other hand, note too that this design does not fit with a simplistic interpretation of the slogan "learner-centered". Improved learning and the learners are the ultimate intended beneficiaries, but one of the important ways that end is achieved is by first serving the teachers better, by giving them much better, faster, and more detailed information on what the learners are thinking now, and where their problems are at each point.
The exploratory studies should yield practical knowledge such as question banks for the participating disciplines, and how much support is needed for first time use (a new lecturer and students who haven't used the equipment) and for regular use. They will also yield evaluation results on what benefits can be demonstrated. We hope to use a version of the method of Integrative Evaluation (Draper et al. 1996) to address both these aspects.
Some of the most important evaluation issues can be organised around the notion of interactivity. Some researchers tend to an almost mechanical interpretation of interactivity e.g. counting the number and branching ratio of choice paths for users in multimedia learning software (Sims, 1997; Hoyet, 2000). With this equipment, that corresponds to the number of questions put to the learners for them to respond to, regardless of their content. It also corresponds to the effects we may well see of novelty, of the perception that the teachers are taking special trouble over the teaching (the Hawthorne effect; Mayo, 1933), or simply of physiological arousal (the physical activity involved in pressing buttons i.e. mechanical interactivity) which has led to the heuristic rule of not lecturing for more than 20 minutes without a pause, having the audience move around periodically, etc. On the other hand, if we believe in the Laurillard model, then the important factor would probably be the amount of time each learner spends on activity 2 ("re-expression"): so using the handsets should be better than a non-interactive monologue, but not as good as time spent in peer discussion (open-ended verbal responses rather than selecting one of the digits on the handsets). In other words, the measure of it would be the number of mental and verbal responses a learner makes (in discussion) rather than the number of button presses on the handset. On the other hand again, if what is important about "interactivity" is actually changing what happens by visibly affecting the teacher (i.e. genuine human-human interaction with the actions of one party being contingent on those of the other), then it will be changes to what the session is used for as a result of responses to questions near the start that predict the largest learning gains. Varying approaches in classes, and taking independent measures both of learning and of enjoyment or alertness should eventually allow such questions to be decided. Measures taken over time (e.g. weeks) should allow any halo and Hawthorne effects to be independently identified, if they are present, with enthusiasm decaying as the novelty wears off, or performance being independent of the learning activity tried and only dependent on the perceived interest of the researchers.
A variety of equipment seems to have been used in the past, and more than one type is currently available. For instance a one-button system has been used (Massen et al., 1998), though that required each response option for a question to be attended to separately. Various numbers of buttons are offered in other equipment, and sometimes the ability to enter multi-digit responses and transmit them as one number. Wired, radio, and infrared implementations have been used; currently infrared proves cheapest. Already technically feasible, though not yet financially attractive, is the solution of equipping every student with a radio-linked PDA (e.g. palmtop computer). Functionally, the features that can matter to further pedagogical tactics include: entering multi-digit numbers (e.g. to identify the student), entering a sequence of digits to specify a sequence or set of response options rather than exactly one as an answer, and free text entry. When the latter becomes widely available, we can at last address a fundamental problem of discussion groups (such as research seminars) where many people want to ask a question: which is the best question to take for the group as a whole? Using only voice, we cannot know what the set of candidate questions is without having them asked. With textual group responses, everyone's questions could appear in front of the speaker and/or facilitator, and could then be grouped, sequenced, and sorted by priority. Meanwhile, as the technology (especially radio communication techniques) advances rapidly, we can focus on how we would use additional functions, and what their pedagogic rationale is.
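As an illustration of the free-text possibility just described, the following sketch (again Python, and entirely hypothetical: it does not describe any existing product) shows one way submitted questions might be triaged for the facilitator. Near-duplicate questions are grouped by a crude normalisation of their text, and the groups are ordered so the most frequently asked question is considered first; a real system would of course need a much better notion of similarity.

```python
# Hypothetical sketch of triaging free-text questions from a group response
# system: group near-duplicates and present the most-asked questions first.
from collections import defaultdict

def triage_questions(submissions):
    """Group similar questions (crude case- and punctuation-insensitive
    match) and return the groups ordered by how many people asked them."""
    groups = defaultdict(list)
    for text in submissions:
        key = "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace())
        groups[" ".join(key.split())].append(text)
    return sorted(groups.values(), key=len, reverse=True)

# Hypothetical example input from an audience.
asked = ["What is activity 2?", "what is activity 2 ?",
         "How large can the class be?", "What is Activity 2?"]
for group in triage_questions(asked):
    print(f"asked by {len(group)}: {group[0]}")
```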
Draper, S.W. (1998) "Niche-based success in CAL" Computers and Education vol.30, pp.5-8.
Draper, S.W., Brown, M.I., Henderson, F.P. & McAteer, E. (1996) "Integrative evaluation: an emerging role for classroom studies of CAL" Computers and Education vol.26 no.1-3, pp.17-32.
GRUMPS (2001, May 30) The GRUMPS research project [WWW document]. URL http://grumps.dcs.gla.ac.uk/ (visited 2001 June 1).
Hake, R.R. (1998a) "Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses" American Journal of Physics vol.66 no.1, pp.64-74.
Hake, R.R. (1998b) "Interactive-engagement methods in introductory mechanics courses" submitted to Journal of Physics Education Research.
Hoyet, H. (2000) "Graphing interactivity in technology-based training" TechTrends vol.44 no.5, pp.26-31.
Landauer, T.K. (1995) The trouble with computers: Usefulness, usability, and productivity (Cambridge, MA: MIT Press).
Laurillard, D. (1993) Rethinking university teaching: A framework for the effective use of educational technology (London: Routledge) p.103.
Marton, F. (1981) "Phenomenography -- describing conceptions of the world around us" Instructional Science vol.10, pp.177-200.
Marton, F. & Booth, S. (1997) Learning and awareness (Mahwah, NJ: Lawrence Erlbaum Associates).
Massen, C., Poulis, J., Robens, E. & Gilbert, M. (1998) "Physics lecturing with audience paced feedback" American Journal of Physics vol.66.
Mayo, E. (1933) The human problems of an industrial civilization (New York: Macmillan) ch.3.
Sims, R. (1997) "Interactivity: A forgotten art?" Computers in Human Behavior vol.13 no.2, pp.157-180.