(written by
Steve Draper,
as part of the Interactive Lectures website)
Besides the different purposes for questions (practising exam questions, collecting data for
a psychological study, launching discussion on topics without a right or
wrong answer), an independent issue is whether the session as a whole has a
fixed plan, or is designed to vary contingently (i.e. depending) on audience responses.
The obvious example of this is to use questions to discover any points where
understanding is lacking, and then to address those points. (While direct
self-assessment questions are the obvious choice for this diagnosis function,
in fact other question types can probably be used.) This is to act
contingently. By contingency I mean that the presenter does NOT have a fixed
sequence of material to present, but a flexible branching plan, where which
branches actually get presented depends on how the audience answers questions
or otherwise shows its needs. There are degrees of this.
First come simple self-assessment questions, where little in the session
itself changes depending on how the audience answers, but the implicit hope
is that learners will (contingently, i.e. depending on whether they got a
question right) later address the gaps in their knowledge which the questions
exposed, or that the teacher will address them later.
Secondly, we might present a case or problem with many questions in it; but the
sequence is fixed. A complete example of a problem being solved might be
prepared, with questions at each intermediate step, giving the audience
practice and self-assessment at each, and also showing the teacher where to
speed up and where to slow down in going over the method.
An example of this can be found in the box on p.74 of
Meltzer, D.E. & Manivannan, K. (1996) "Promoting interactivity in physics
lecture classes", The Physics Teacher, vol.34, no.2, pp.72-76.
It's a sample problem for a basic physics class at university, where a
simple problem is broken down into 10 MCQ steps.
Another way of looking at this is that of training on the parts of a skill
or piece of knowledge separately, then again on fitting them together into a
whole. Diagnostically, if a learner passes the test for the whole thing, we
can usually take it that they know it all. But if not, then learning may be
much more effective if the pieces are learned separately before being put
together. Not only is there less to learn at a time, but, more importantly,
feedback is much clearer and less ambiguous when it concerns a single thing
at a time.
When a question is answered wrongly by everyone, it may be a sign that too
much has been put together at once.
In terms of the lesson/lecture plan, though, there is a single fixed course of
events, although learners contribute answers at many steps, with the questions
being used to help all the learners converge on the right action at each step.
Thirdly, we could have a prepared case study (e.g. a case presented to
physicians), with a fixed start and end point; but where the audience votes on
what actions and tests to do next, and the presenter provides the information
the audience decided to ask for next. Thus the sequence of items depends (is
contingent) on the audience's responses to the questions; and the presenter
has to have created slides, perhaps with overlays, that allow them to jump and
branch in the way required, rather than trudging through a fixed sequence
regardless of the audience's responses.
Fourthly, a fully contingent session might be conducted, where the audience's
needs are diagnosed, and the time is spent on the topics shown to be needing
attention. The plan for such a session is no longer a straight line, but a
tree branching at each question posed.
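To make the idea of a branching plan concrete, here is a minimal sketch (in
Python, with entirely hypothetical questions and names) of how such a tree
might be represented: each node holds a question and a mapping from the
audience's most popular answer to the branch presented next.

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class PlanNode:
        """One point in the branching lesson plan: a question plus where to go next."""
        question: str
        options: List[str]                      # the MCQ alternatives shown to the audience
        next_by_answer: Dict[str, "PlanNode"] = field(default_factory=dict)
        fallback: Optional["PlanNode"] = None   # branch taken if no specific match

        def next_node(self, modal_answer: str) -> Optional["PlanNode"]:
            """Choose the next branch from the audience's most popular answer."""
            return self.next_by_answer.get(modal_answer, self.fallback)

    # Hypothetical two-step plan: the right answer (here "A") skips ahead,
    # any other modal answer diverts to a remedial segment.
    advanced = PlanNode("Harder follow-up question", ["A", "B", "C", "D"])
    remedial = PlanNode("Same point in a smaller step", ["A", "B"])
    root = PlanNode("Diagnostic question on the topic", ["A", "B", "C", "D"],
                    next_by_answer={"A": advanced}, fallback=remedial)

A fixed-sequence session, as in the earlier degrees, is then just the special
case where every node has a single successor.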
The kinds of question you can use for this include:
- List the topics and ask the audience directly which one they want
addressed.
(You can either pick the most popular on the first vote; or else operate a
single transferable vote, by deleting the less popular half of the topics
after the first vote and re-voting: see the sketch after this list.)
- Ask diagnostic "Self-assessment questions" until you find one which more
than a few get wrong.
- Ask questions good for launching discussions, particularly "brain teasers".
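The re-voting procedure mentioned in the first bullet can be sketched roughly
as follows (topic names, vote counts, and the cut-off are all invented for
illustration):

    def surviving_topics(votes, keep_fraction=0.5):
        """Delete the less popular half of the topics after the first vote,
        returning the survivors for a re-vote.
        `votes` maps topic name -> number of handset votes received."""
        ranked = sorted(votes, key=votes.get, reverse=True)
        keep = max(2, round(len(ranked) * keep_fraction))   # always leave a real choice
        return ranked[:keep]

    # Invented first-round counts; the re-vote is held only on the survivors.
    first_round = {"reliability": 12, "validity": 30, "sampling": 7, "ethics": 21}
    print(surviving_topics(first_round))   # ['validity', 'ethics']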
If you want to take diagnosis from test questions seriously, you need to come
armed with a large set, selecting each one depending on the response to the
last one. A fuller scheme for designing such a bank (sketched in code after
the list) might be:
- List the topics you want to cover.
- Multiply these by several levels of difficulty for each.
- Even within a given topic and a given level of difficulty, you can
vary the type of question: the type of link, the direction of link, the
specific case.
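One way to picture the resulting bank is as a table indexed by topic,
difficulty level, and question variant, so that the next question can be
pulled out contingently on how the last one went. A minimal sketch, with
made-up topic and variant names (only the indexing scheme matters):

    from itertools import product

    topics       = ["reliability", "validity", "sampling"]
    difficulties = [1, 2, 3]                                  # easy -> hard
    variants     = ["term->instance", "instance->term", "specific case"]

    bank = {
        (topic, level, variant): f"Q[{topic} / level {level} / {variant}]"
        for topic, level, variant in product(topics, difficulties, variants)
    }

    # e.g. if the level-1 reliability question went well, move up a level:
    print(bank[("reliability", 2, "instance->term")])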
When the audience's answers are in, the presenter must
a) state which answer (if any) was right, and b) decide what to do next (a
rough sketch of this decision logic follows the list):
- State the right answer and move quickly on
(e.g. if more than 90% got it right).
- Explain why the right answer is right etc.
- Don't at first state which is right, but tell the audience to discuss the
issue with their neighbours, and then vote again. (This is "peer-assisted
learning", as opposed to doing the discussion as a "plenary" i.e. the whole
class and presenter discussing it as one big group. This difference is the
subject of a big debate. Non-partisans will probably sometimes use one and
sometimes the other in their teaching.)
- Use the "50:50" technique. If a question produces a wide distribution of
answers, you can then rule out some of the options (which had attracted few
votes), giving the reason for this; then have a re-vote between the remaining
options. This is particularly appropriate where the question involved two or
more underlying issues, only one of which had split nearly everyone.
- If only 30% or less got it right (i.e. most of the audience seemed to
have it wrong) then discussion may not work as a remediation (because the
right ideas may not be there in the audience). You may have to tackle the
issue in smaller steps: see below. Bear in mind that with an MCQ with 4
alternative responses, 25% of the audience would get it right even if all
answered wholly randomly.
- To zero in on what they need to spend time on, increase the level of
difficulty until they start to get it wrong.
- To stay on the same level of difficulty,
vary the question type or format.
- To promote discussion, pick a question a little more difficult than most
can get right first time by themselves; ideally, a
"brain teaser".
- If almost all get a question wrong, back off and address the topic in
smaller steps.
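Those options can be condensed into a rough decision rule keyed on the
fraction of the audience answering correctly. The sketch below uses the
figures mentioned above (90%, 30%, chance level on a 4-option MCQ), but the
exact cut-offs are judgement calls, not anything prescribed:

    def next_step(fraction_correct, n_options=4):
        """Crude rule for what to do after a vote, given the fraction who got it right."""
        chance = 1.0 / n_options              # e.g. 25% right by pure guessing on 4 options
        if fraction_correct > 0.90:
            return "state the answer and move quickly on"
        if fraction_correct <= max(0.30, chance):
            return "back off: address the topic in smaller steps / part questions"
        return "peer discussion with neighbours, then re-vote"

    print(next_step(0.95))   # state the answer and move quickly on
    print(next_step(0.55))   # peer discussion with neighbours, then re-vote
    print(next_step(0.20))   # back off: address the topic in smaller steps / part questions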
While handset questions are MCQs, the real aim is (when required) to bring out
the reasons for and against each alternative answer. When it turns out that
most of the audience gets it wrong, how best to decompose the issue? My
suggestion is to generate a set of associated part questions.
One case is when a question links instances (only) to technical terms e.g.
(in psychology) "which of these would be the most reliable measure?"
If learners get this wrong, you won't know if that is because they don't
understand the issues, or this problem, or have just forgotten the special
technical meaning of "reliable". In other words, a question may require
understanding of both the problem case, and the concepts, and the special
technical vocabulary. If very few get it right, it could be unpacked by
asking about the vocabulary separately from the other issues e.g. "which of
these measures would give the greatest test-retest consistency?".
This is one aspect of the problem of technical
vocabulary.
Another case of this arose over top level problem decomposition in
introductory programming. The presenter had a set of problems {P1, P2, P3},
each requiring a program to be designed. He had a set of standard
top level structures {S1, S2, ... e.g. sequential, conditional, iteration}, and
the task the students "should" be able to do is to select the right
structure for each given problem. To justify/argue about this means to
generate a set of reasons for {F1,F2, ...} and against {A1,A2...} each
structure for each problem. I suggest having a bank of questions to select
from here. If there are 3 problems and 5 top level structures then, counting
reasons for and against separately, 2*3*5 = 30
questions. An example of one of these 30 would be a set of alternative
reasons FOR using structure 3 (iteration) on problem 2, and the question asks
the audience which (subset) of these are good reasons.
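A sketch of enumerating that bank (the two structures beyond the three named
in the text are placeholders, as is the wording of the question stem):

    from itertools import product

    problems   = ["P1", "P2", "P3"]
    structures = ["sequential", "conditional", "iteration",
                  "case analysis", "recursion"]          # last two are placeholders
    directions = ["FOR", "AGAINST"]

    # 2 directions x 3 problems x 5 structures = 30 prepared questions.
    bank = [
        (direction, problem, structure,
         f"Which of these are good reasons {direction} using {structure} on {problem}?")
        for direction, problem, structure in product(directions, problems, structures)
    ]

    print(len(bank))    # 30
    print(bank[0][3])   # Which of these are good reasons FOR using sequential on P1?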
The general notion is that if a question turns out to go too far over the
audience's head, we could use these "lower" questions to structure the
discussion that is needed about reasons for each answer. (While if everyone
gets it right, you speed on without explanation. If half get it right, you go
for (audience) discussion because the reasons are there among the audience.
But if all get it wrong, support is needed; and these further questions could
keep the interaction going instead of crashing out into didactic
monologue.)