
Compilation (for printing) of pages on EVS use

This compilation was assembled on 21 November 2024.

Last changed 15 Feb 2005 ............... Length about 800 words (7,000 bytes).
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/il.html.

Interactive Lectures

(written by Steve Draper,   as part of the Interactive Lectures website)

A summary or introductory page on interactive lectures.


Why make lectures interactive?

To improve the learning outcomes. [The positive way of putting it.]

Because there is no point in having lectures or class meetings UNLESS they are interactive. Lectures may have originated before printing, when reading a book to a class addressed what was then the bottleneck in learning and teaching: the number of available books. Nowadays, if one-way monologue transmission is what's needed, then books, emails, tapes will do that, and do it better because they are self-paced for the learner. [The negative way of putting it.]

What are interactive lectures?

Whenever it makes a difference that the learners are co-present with the teacher and each other. This might be because the learners act differently, or think differently; or because the teacher behaves differently.

In fact it is not enough to be different: it should be better than the alternatives. Learners are routinely much more interactive with the material when using books (or handouts) than they can be with lectures: they read at their own pace, re-read anything they can't understand, can see the spelling of peculiar names and terms, ask other students what a piece means, and carry on until they understand it rather than until a fixed time has passed. All of these ordinary interactive and active learning actions are impossible or strongly discouraged in lectures.

So for a lecture to be interactive in a worthwhile sense, what occurs must depend on the actions of the participants (not merely on a fixed agenda), and benefit learning in ways not achieved by, say, reading a comparable textbook.

Alternative techniques

One method is the one minute paper: have students write out the answer to a question for just one minute, and collect the answers for response by the teacher next time.

Another method is to use a voting system: put up a multiple choice question, have all the audience give an anonymous answer, and immediately display the aggregated results.

Another method is "Just in time teaching", where students are required both to read the material and to submit questions on it in advance, thus allowing the contact time to be spent on what they cannot learn for themselves.

In fact there are many methods.

Pedagogical rationale / benefits

In brief, there are three distinct classes of benefit that may be obtained by interactive techniques:

The general benefits, and specific pedagogic issues, are very similar regardless of the technique used. I have written about them in a number of different places including:


The key underlying issues, roughly glossed by the broad term "interactivity", probably are:


Last changed 31 Jan 2005 ............... Length about 500 words (5,000 bytes).
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/handsetintro.html.

Using EVS for interactive lectures

(written by Steve Draper,   as part of the Interactive Lectures website)

This is a brief introduction to the technique of using EVS (electronic voting systems) for interaction in lectures. (A complementary technique is the one minute paper which uses open-ended audience input. An introduction to interactive lectures and why attempt them is here.)

The technique is much as in the "Ask the audience" lifeline in the TV show "Who wants to be a millionaire?". A multiple choice question (MCQ) is displayed with up to 10 alternative response options, the handsets (using infrared like domestic TV remote controls) distributed to each audience member as they arrive allow everyone to contribute their opinion anonymously, and after the specified time (e.g. 60 seconds) elapses the aggregated results are displayed as a barchart. Thus everybody sees the consensus or spread of opinion, knows how their own relates to that, and contributes while remaining anonymous. It is thus like a show of hands, but with privacy for individuals, more accurate and automatic counting, and more convenient for multiple-choice rather than yes/no questions.

It can be used for any purpose that MCQs can serve, including:


At Glasgow University we currently use the PRS equipment: small handheld transmitters for each audience member, some receivers connected to a laptop up front, itself connected to a data projector and running the PRS software. This equipment is portable, and there is enough for our largest lecture theatres (300 seats). Given advance organisation, setting up and packing up can be quick. We can accommodate those who normally use OHPs, powerpoint, ad hoc oral questions, or a mixture.

More practical details are offered here, and more details of how to design and use the questions are available through the main page, e.g. here.

Fig.1 Infrared handset transmitter

Fig.2 A receiver

Fig.3 The projected feedback during collection, showing handset ID numbers

Fig.4 Display of aggregated responses


Last changed 24 Feb 2005 ............... Length about 4,000 words (29,000 bytes).
(Document started on 15 Feb 2005.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/td.html. You may copy it.

Transforming lectures to improve learning

By Steve Draper,   Department of Psychology,   University of Glasgow.


Introduction

Some of the most successful uses of EVS (Electronic Voting Systems) have been associated with a major transformation of how "lectures" have been used within a HE (Higher Education) course. Here we adopt the approach of asking how in general we might make teaching in HE more effective, while keeping an open mind about whether and how ICT (Information and Communication Technology) could play a role in this. The aim then is to improve learning outcomes (in quantity and quality) while investing only about the same, or even fewer, teaching resources. More specifically, can we do this by transforming how lectures are used?

Replacing exposition

The explicit function of lectures is exposition: communicating new concepts and facts to learners. In fact lectures usually perform some additional functions, as their defenders are quick to point out and as we shall discuss below, but nevertheless in general most of the time is spent on exposition and conversely most exposition (in courses based on lectures) is performed by lectures. Clearly this could be done in other ways, such as requiring learners to read a textbook. On the face of it, this must be not only possible, but better. Remember, the best a speaker, whether face to face or on video, can possibly do in the light of individual differences between learners is to speak too fast for half the audience and too slowly for the other half. Reading is self-paced, and is therefore the right speed for the whole audience. Furthermore reading is in an important sense more interactive than listening: the reader can pause when they like, re-read whatever and whenever they like; pause to think and take notes at their own pace, before going on to try to understand what is said next -- which is likely to assume the audience has already understood what went before. So using another medium for the function of exposition should be better. Can this be made to work in actual undergraduate courses?

Yes. Here are several methods of replacing exposition and using the face to face large group "lecture" periods for something else.

It seems clear that lectures are not needed for exposition: the Open University (OU) has made this work for decades on a very big scale. Another recurring theme is the use of questions designed not for accurate scores (summative assessment), but to allow students to self-diagnose their understanding, and even more, to get them thinking. A further theme is to channel that thinking into discussion (whether with peers or teachers). This requires "interactivity" from staff: that is, being ready to produce discussion not to some plan, but at short notice in response to students' previous responses.

Should we believe the reports of success with these methods, and should we expect them to generalise to many subjects and contexts? Again the answer is yes, which I'll arrive at by considering various types of theoretical analysis in turn.

The basic 3 reasons for any learning improvements

Many claims of novel learning success can be understood in terms of three very simple factors.

  1. The time spent by the learner actually learning: often called "time on task" by Americans. The effect of MacManaway's approach is to double the amount of time each learner spent (he studied how long they took reading his lecture scripts): first they read the scripts, then they attended the classes anyway. In fact they spent a little more than twice as long in total. Similarly JITT takes the same teacher time, but twice the student time.

  2. Processing the material in different ways. It probably isn't only total time that matters, but (re)processing the concepts in more than one way e.g. not only listening and understanding, but then re-expressing the ideas in an essay. That is why so many courses require students not just to listen or read, but to write essays, solve written problems etc. However these methods are usually strongly constrained by the amount of staff time available to mark them. Here MacManaway got students to discuss the issues with each other, as do the IE and JITT schemes. Discussion requires producing reasons and parrying the conflicting opinions and reasons produced by others. Thinking about reasons and what evidence supports what conclusions is a different kind of mental processing from simply selecting or calculating the right answer or conclusion.

  3. Metacognition in the basic sense of monitoring one's degree of knowledge and recognising when you don't know or understand something. We are prone to feeling we understand something when we don't, and it isn't always easy to tell. The best established results on "metacognition" (Hunt, 1982; Resnick, 1989) show that monitoring one's own understanding effectively and substantially improves learning. Discussion with peers tests one's understanding and often leads to changing one's mind. The quizzes in the OU, JITT and the IE methods also perform this function, because eventually the teacher announces the right answer, and each student then knows whether they had got it right.
    Brain teaser questions also do this, partly because they frequently draw wrong answers and so force the learner to reassess their grasp of a concept, but for good learners the degree of uncertainty they create, even without the correct solution being announced, is alone enough to show them their grasp isn't as good as it should be.

The Laurillard model

The Laurillard (1993) model asserts that for satisfactory teaching and learning, 12 distinct activities must be covered somehow. Exposition is the first; and in considering its wider place, we are concerned with the first 4 activities: not only exposition by the teacher, but re-expression by the learner, and sufficient iteration between the two to achieve convergence of the learner's understanding with the teacher's conception.

Re-expression by learners (Laurillard activity 2) is achieved in peer discussion in the MacManaway and Interactive Engagement schemes, and by the quizzes in the OU and JITT schemes. Feedback on correctness (Laurillard activity 3) is provided by peer responses in the IE schemes and by the quiz in the JITT and IE schemes. Remediation more specifically targeted at student problems by the teacher (a fuller instantiation of Laurillard activity 3) is provided in the JITT scheme (because class time is given to questions sent in in advance), and often in the IE schemes in response to the voting results.

Thus in terms of the Laurillard model, instead of only covering activity 1 as a strictly expository lecture does, these schemes offer some substantial provision of activities 2,3 and 4 in quantities and frequency approaching that allocated to activity 1, while using only large group occasions and without extra staff time.

The management layer

I argue elsewhere that the Laurillard model needs to be augmented by a layer parallel to the one of strictly learning activities: one that describes how the decisions are made about what activities are performed. At least in HE, learning is not automatic but on the contrary, highly intentional and is managed by a whole series of decisions and agreements about what will be done. Students are continually deciding how much and what work to do, and learning outcomes depend on this more than on anything else. In many cases lectures are important in this role, and a major reason for students attending lectures is often to find out what the curriculum really is, and what they are required to do, and what they estimate they really need to do. One reason that simply telling students to read the textbook and come back for the exam often doesn't work well is that, while it covers the function of exposition, it neglects this learning management aspect. Lectures are very widely used to cover it, with many class announcements being made in lectures, and the majority of student questions often being about administrative issues such as deadlines.

The schemes discussed here (apart from the OU) do not neglect this aspect, so again we can expect them to succeed on these grounds. They do not abolish classes, so management and administrative functions can be covered there as before. In fact the quizzes and to some extent the peer discussion offer better information than standard lectures, a textbook, or a lecture script about how a student is doing both in relation to the teacher's expectations and to the rest of the class. They also do this not just absolutely (do you understand X which you need to know before the exam) but in terms of the timeline (you should have understood this by today).

In addition to this, these schemes also give much superior feedback to the teacher about how the whole course is going for this particular class of students. This equally is part of the management layer. However standard lectures are never very good for this. While a new, nervous, or uncaring lecturer may pick up nothing about a class's understanding, even a highly skilled one has difficulty since at best the only information is a few facial expressions and the answers given by the one self-selected student who responds to each question from the lecturer. In contrast most of the above methods get feedback from every student, and formative feedback for the teacher is crucial to good teaching and learning. What I have found in interviewing adopters of EVS is that while many introduced it in order to increase student engagement, the heaviest users now most value the way it keeps them in much better touch with each particular class than they ever had without it.

This formative feedback to teachers is important for debugging an exposition they have authored, but is also important for adapting the course for each class, dwelling on the points that this particular set of students finds difficult.

Other functions of lectures

Arguments attacking the use of lectures have been made before (Laurillard, 1993). Those seeking to defend them generally stress the functions other than simple exposition that lectures may perform. One of these is learning management, as discussed in the previous section. Some others are:

Conclusion

We began by considering some schemes for replacing the main function of lectures -- exposition -- and then used various pieces of theory to discuss whether the proposed schemes would be likely to be successful at replacing all the functions of a lecture. Overall, while providing exposition in other media alone might be worse than lectures because of neglecting other functions, the proposed schemes should be better because they address all the identified functions and address some important ones better than standard lectures do.

Thus we can replace some or all exposition in lectures. Furthermore, we can re-purpose these large group meetings to cover other learning activities significantly better than usual. We can feel some confidence in this from a careful analysis of the functions covered by traditional lectures, and of those thought important in general, showing how each is covered in the proposed new teaching schemes. This in turn leads to two further issues to address.

Firstly: which functions can in fact be effectively covered in large group teaching with the economies of scale that allows, and which others must be covered in other ways? Besides exposition, and the way the schemes above address Laurillard's activities 1 to 4, other functions that can be addressed in large groups in lecture theatres include:

Secondly, some aspects of a course can use large group teaching (see above), but all the rest must be done in smaller groups. How small, and how to organise them? One of the most interesting functions to notice is that many of the schemes above use peer discussion, coordinated by the teacher but otherwise not supervised or facilitated by staff. For this the effective size is no more than 5 learners, and 2 or 4 may often be best. Both our experience and published research on group dynamics and conversation structures support this. Instead of clinging to group sizes dictated either by current resources or by what staff are used to (which often leads to "tutorial" group sizes of 6, 10, or 20), we should consider what is effective. When the learning benefit is in the student generating an utterance, then 2 is the best size, since then at any given moment half the students are generating utterances. Where spontaneous and flowing group interaction is required, then 5 is the maximum number. For creating and coordinating a community, it can be as large as you like provided an appropriate method is used e.g. using EVS to show everyone the degree of agreement and diversity on a question, or having the lecturer summarise written responses submitted earlier.

However forming groups simply by dividing the number of students by the number of staff is a foolish administrative response, not a pedagogic one. What is the point of groups of 10 or 20? Not much. If the model is for a series of short one to one interactions (which may be relevant for pastoral and counselling functions), then consider how to organise this. Putting a group of students in the same room is obviously inappropriate for this, and ICT makes this less and less necessary. If the model is for more personalised topics e.g. all the students with trouble over subtopic X go to one group, then we need NOT to assign permanent groups, but should organise ad hoc ones based on that subtopic. In general, what the schemes above suggest for the future is to consider a course as involving groups of all sizes, not necessarily permanent, not necessarily supervised; and organised in a variety of ways, including possibly pyramids and unsupervised groups. This is after all only an extension of the eternal expectation that learners will do some work alone: the ultimate small unsupervised group.

In the end, we should consider:

References

Draper, S.W. (1997) Adding (negotiated) learning management to models of teaching and learning http://www.psy.gla.ac.uk/~steve/TLP.management.html (visited 24 Feb 2005)

Dufresne, R.J., Gerace, W.J., Leonard, W.J., Mestre, J.P., & Wenk, L. (1996) Classtalk: A Classroom Communication System for Active Learning Journal of Computing in Higher Education vol.7 pp.3-47 http://umperg.physics.umass.edu/projects/ASKIT/classtalkPaper

Hake, R. R. (1998). Interactive-engagement versus traditional methods: A six-thousand student survey of mechanics data for introductory physics courses. American Journal of Physics, 66, 64-74.

Hake, R.R. (1991) "My Conversion To The Arons-Advocated Method Of Science Education" Teaching Education vol.3 no.2 pp.109-111

Hunt, D. (1982) "Effects of human self-assessment responding on learning" Journal of Applied Psychology vol.67 pp.75-82.

Laurillard, D. (1993), Rethinking university teaching (London: Routledge)

MacManaway,M.A. (1968) "Using lecture scripts" Universities Quarterly vol.22 no.June pp.327-336

MacManaway,M.A. (1970) "Teaching methods in HE -- innovation and research" Universities Quarterly vol.24 no.3 pp.321-329

Mazur, E. (1997). Peer Instruction: A User’s Manual. Upper Saddle River, NJ:Prentice-Hall.

Meltzer,D.E. & Manivannan,K. (1996) "Promoting interactivity in physics lecture classes" The physics teacher vol.34 no.2 p.72-76 especially p.74

Novak,G.M., Gavrin,A.D., Christian,W. & Patterson,E.T. (1999) Just-in-time teaching: Blending Active Learning and Web Technology (Upper Saddle River, NJ: Prentice-Hall)

Novak,G.M., Gavrin,A.D., Christian,W. & Patterson,E.T. (1999) http://www.jitt.org/ Just in Time Teaching (visited 20 Feb 2005)

Resnick,L.B. (1989) "Introduction" ch.1 pp.1-24 in L.B.Resnick (Ed.) Knowing, learning and instruction: Essays in honor of Robert Glaser (Hillsdale, NJ: Lawrence Erlbaum Associates).


Last changed 15 Oct 2009 ............... Length about 1700 words (13,000 bytes).
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/local.html.

Using EVS at Glasgow University c.2005

(written by Steve Draper,   as part of the Interactive Lectures website)

This page is about the use of EVS (electronic voting systems) in lectures at Glasgow University. It was written a few years ago, and assumes the use of the old IR equipment; though most of the rest of the advice is still reasonable. More up to date advice about use of the current equipment here.


Brief introduction

If you haven't already read a passage explaining what these EVS are about, a brief general account is here.

To date, student response, and lecturers' perceptions of that, have been almost entirely favourable in an expanding range of trials here at the University of Glasgow (to say nothing of those elsewhere) already involving students in levels 1,2,3 and 4, and diverse subjects (psychology, medicine, philosophy, computer science, ...), and in sequences from one-off to every lecture for a term.

The equipment is mobile, and so can be used anywhere with a few minutes setup. It additionally requires a PC (laptops are also mobile, and we can supply one if necessary), and a data projector (the machine for projecting a computer's displayed output on to a big screen).

In principle, the equipment is available for anyone at the university to use, and there is enough for the two largest lecture theatres to be using it simultaneously. In practice, the human and equipment resources are not unlimited, and advance arrangements are necessary. We can accommodate any size audience, but there is a slight chance of too many bookings coinciding for the equipment, and a considerable chance of us not having enough experienced student assistants available at the right time: that is the currently scarcest resource.

Why would you want to use EVS in your lectures?

Want to see them in action?

Find out who is using them, and go and see them in use.

If it's one of mine you needn't ask, just turn up; and probably other users feel the same. We are none of us expert, yet we all seem to be getting good effects and needn't feel defensive about it. It usually isn't practicable to get 200 students to provide an audience for a realistic demonstration: so seeing a real use is the best option.

What's involved at the moment of use?

What's involved at the lecture?

Ideally (!):

One way of introducing a new audience to the EVS is described here.

What preparation is required by the lecturer?

Equipment?

There are several alternative modes you could use this in.

Human resources

It is MUCH less stressful for a lecturer, no matter how practised at this, if there are assistants to fetch and set up the equipment, leaving the lecturer to supervise the occasion. We have a small amount of resource for providing these assistants.

What has experience shown can go wrong?

Generally both the basic PRS equipment, and the PRS software itself have proved very reliable, both here and elsewhere. Other things however can go wrong.

Unnecessary technical details

Most lecturers never need to know about further technical details. But if you want to know about them, about the log files PRS creates, etc.etc. then read on here.



Last changed 25 Jan 2003 ............... Length about 300 words (3,000 bytes).
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/question.html.

Presenting a question

(written by Steve Draper,   as part of the Interactive Lectures website)

What is involved in presenting each question?

How to present a question

  • Display the question (but don't start the PRS handset software)
  • Explain it as necessary
  • "Are you ready to answer it? Anything wrong with this question?" and encourage any questions, discussion of the question.
  • Only then, press <start> on the computer system.
  • Audience answers: wait until the total of votes reaches the full audience total.
  • Display answers (as a bar graph).
  • Always try to make at least one oral comment about the distribution of answers shown on the graph. Partly for "closure"/acknowledgement; partly to slow you up and let everyone see the results.
  • State which answer (if any) was right, and decide what to do next.
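As a purely illustrative sketch (this is what the PRS software already does for you, not something a presenter needs to write), the collection and display steps amount to tallying anonymous votes and showing them as a bar chart. The option labels and votes below are invented for the example (Python):

    from collections import Counter

    def show_votes(votes, options):
        """Tally anonymous votes and print one text bar per response option."""
        counts = Counter(votes)
        for opt in options:
            n = counts.get(opt, 0)
            print(f"{opt}: {'#' * n} ({n})")

    # Invented example: 12 audience members choosing among options A-D.
    show_votes(list("AACBDABAACCA"), options="ABCD")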

    What the presenter does in essence

    The presenter's function is, where and when possible, to:

    What each learner does in essence

    For each question, each learner has to:



    Last changed 6 June 2004 ............... Length about 300 words (2500 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/length.html.

    Length and number of questions

    (written by Steve Draper,   as part of the Interactive Lectures website)

    How many questions? How long do they take?
    A rule of thumb for a 50 minute lecture is to use only 3 EVS questions.

    In a "tutorial" session organised entirely around questions, you could at most use about 12 if there were no discussion: 60 secs to express a question, 90 secs to collect votes, 90 secs to comment briefly on the responses gives 4 minutes per question if there is no discussion or detailed explanation, and so 12 questions in a lecture.

    Allowing 5 mins (still very short) for discussion by audience and presenter of issues that are not well understood would mean only 5 such questions in a session.
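    Restating the arithmetic behind these two estimates as a worked sum:

        (60 + 90 + 90)\ \text{s} = 4\ \text{min per question} \;\Rightarrow\; 50/4 \approx 12\ \text{questions}
        (4 + 5)\ \text{min per question} \;\Rightarrow\; 50/9 \approx 5\ \text{questions}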

    It is also possible, especially with a large collection of questions ready, to "use up" some by just asking someone to shout out the answer to warm up the audience, and then vote on a few to make sure the whole audience is keeping up with the noisy few. It would only take 20 seconds rather than 4 minutes for each such informal use of a question. Never let the EVS become too central or important: it is only one aid among others.

    Thus for various reasons you may want to prepare a large number of questions from which you select only a few, depending on how the session unfolds.


    Last changed 13 April 2022 ............... Length about 1500 words (12,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/qdesign.html.

    Question formats

    (written by Steve Draper,   as part of the Interactive Lectures website)

    There is a whole art to designing MCQs (multiple choice questions). Much of the literature on this is for assessment. In this context however we don't much care (as that literature does) about fairness, or discriminatory power, but instead will concentrate on what will maximise learning.

    Here I just discuss possible formats for a question, without varying the purpose or difficulty. I was in part inspired by Michele Dickson of Strathclyde University. The useful tactic implied by her practice is to vary the way questions are asked about each topic.

    A common type of MCQ concerns one relationship e.g. (using school chemistry as an example domain) "What is the chemical symbol for gold: Ag, Al, Au, Ar ?"

    Reversing the relationship

    You can equally, and additionally, ask about the same relationship in reverse: "Which metal is represented by the symbol 'Au'? Gold, silver, platinum, copper?"

    Multiple types of relationship

    When you have several relationships, the alternative question types multiply. Consider these 3 linked pieces of information: a photo of a gold nugget or ring; the word (name) "Gold"; and the symbol "Au". These 3 pieces of information each have a relationship with the other 2, giving 3 types of relationship; and each has 2 directions, giving 6 question types in all:

    Applied to statistics this might be:

    The idea is to require students to access knowledge of a topic from several different starting points. Here I exercised three kinds of link, and each kind in both directions. Exercising these different types and directions of link is not only important in itself (because understanding requires understanding all of these) but keeps the type of mental demand on the students fresh, even if you are in fact sticking on one topic.

    Types of relationship to exercise / test

    In the abstract there are three different classes of relationship to test:

    The first is that of linking ideas or concepts to particular examples or instances of them e.g. is a whale a fish or a mammal? Another form of this is linking (engineering or maths) problems with the principle or rule that is likely to be used to solve them. However both concepts and instances are represented in more than one way, and practice at these alternative representations and their equivalences is usually an essential aspect of learning a subject. Thus concepts usually have both a technical name, and a definition or description, and testing this relationship is important. Similarly instances usually have more than one standard method of description and, although these are specific to each subject, learners need to master them all, and questions testing these equivalences are important. In teaching French language, the spelling, the pronunciation, and the meaning of a word all need to be learned. In statistics, an example data set should be represented by a graph, a table of values, and a description such as "bell shaped curve with long tails". In chemistry, the name "copper sulfate" should be linked to "CuSO4" and a photograph of blue crystals, and questions should test these links. (See Johnstone, A.H. (1991) "Why is science difficult to learn? Things are seldom what they seem" Journal of Computer Assisted Learning vol.7 no.2 pp.75-83 for an argument related to this based in teaching Chemistry. See also Roy Tasker's group: http://visualizingchemistry.com/research.)

    These relationships are all bidirectional, so questions can (and should) be asked in both directions e.g. both "which of these is a mammal" and "to which of these categories do dolphins belong?". Thus a subject with three standard representations for instances plus concept names and concept definitions will have five representations, and so 20 types of question (pick one of five for the question, and one of the remaining four for the response categories). Additional variations come from allowing more than one item as an answer, or asking the question in the negative e.g. "which of these is not a mammal?: mouse, platypus, porpoise?".
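    As a small illustrative sketch (Python, with representation names invented for the example), the count of 20 arises from choosing an ordered pair of distinct representations: one for the question stem, one for the response options.

        from itertools import permutations

        # Five hypothetical representations of the same topic.
        reps = ["concept name", "definition", "formula/symbol",
                "graph of an instance", "verbal description of an instance"]

        # Each ordered pair (stem representation, response representation) is one question type.
        question_types = list(permutations(reps, 2))
        print(len(question_types))   # 5 * 4 = 20
        for stem, answer in question_types[:3]:
            print(f"ask in terms of '{stem}', answer among '{answer}' options")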

    The problem of technical vocabulary is a general one, and suggests that the concept name-definition link should be treated especially carefully. If you ask questions that are problems (real-world cases) and ask which concept applies, but use only the technical names of the concepts, then students must understand both the concept and the vocabulary perfectly; and if they get it wrong you don't know which aspect they got wrong. Asking concept-case questions using not technical vocabulary but paraphrased descriptions of the concepts can separate these, with separate questions to test the name-definition link (i.e. concept vocabulary).

    Further Response Options

    The handsets do not directly allow the audience to specify more than one answer per question. However you can offer at least some combinations yourself e.g.
    "Is a Black Widow:
    1. A spider
    2. An insect
    3. An arachnid
    4. (1) and (2)
    5. (2) and (3)
    6. (1) and (3)
    7. (1) and (2) and (3)
    8. None of the above"

    It may or may not be a good idea to include null responses as an option. Against offering them is the idea that you want to force students to commit to an answer rather than do nothing, and also the observation that, when provided, usually few take the null option, given the anonymity of entering a guess. Furthermore, a respondent could simply not press any button; although that, for the presenter, is ambiguous between a decision rejecting all the alternatives, the equipment giving trouble to some of the audience, or the audience getting bored or disengaged. However if you do include them as standard, it may give you better, quicker feedback about problems. In fact there are at least three distinct null options that are usually applicable:

    Assertion-reason questions

    I particularly commend asking MCQs that, instead of asking which fact is true, ask which reason for a given fact is the right one.

    An extension of this is the assertion-reason question format.

    Covertly related questions: Using 3 questions to make a strong test of understanding one concept

    Mark Russell suggests using 3 (say) alternative questions all testing the same key concept. With MCQs with 4 response options, 25% of students will get a question right by accident if they answer at random: not a strong test. He suggests having 3 alternative questions testing exactly the same concept, and only students who get all 3 of these correct should be regarded as having learned the concept. The questions are tacitly linked (by being about the same concept), but not listed adjacently and not using similar structure. He found that students who did not have a sound understanding of the concept did not even recognise that the 3 questions were linked: the disguise does not need to be elaborate (contrary to the perceptions of experts/staff, who naturally see the 3 questions as "about the same thing" exactly because they grasp the concept).
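    The arithmetic behind this: a student guessing at random on three 4-option questions gets all three right with probability

        0.25^3 \approx 0.016

    i.e. under 2%, compared with 25% for a single question, so requiring all three correct is a far stronger test.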

    Russell, Mark (2008) "Using an electronic voting system to enhance learning and teaching" Engineering Education vol.3 no.2 pp.58-65 doi:10.11120/ened.2008.03020058

    Some references on MCQ design

  • CAAC (Computer Assisted Assessment Centre) website advice on MCQ design

  • Johnstone, A. H. (1991) "Why is science difficult to learn? Things are seldom what they seem" Journal of Computer Assisted Learning vol.7 no.2 pp.75-83 doi:10.1111/j.1365-2729.1991.tb00230.x
  • See also Roy Tasker's group: http://visualizingchemistry.com/research

  • McBeath, R. J. (ed.) (1992) Instructing and Evaluating Higher Education: A Guidebook for Planning Learning Outcomes (New Jersey: ETP)

  • Russell, Mark (2008) "Using an electronic voting system to enhance learning and teaching" Engineering Education vol.3 no.2 pp.58-65 doi:10.11120/ened.2008.03020058


    Last changed 13 April 2022 ............... Length about 4,000 words (29,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/qpurpose.html.

    Pedagogical formats for using questions and voting

    (written by Steve Draper,   as part of the Interactive Lectures website)

    EVS questions may be used for many pedagogic purposes. These can be classified in an abstract way, discussed at length elsewhere and summarised here:

    1. Assessment
      • Confidence (or certainty) based marking (CBM) for summative assessment. While the rest of the purposes addressed on this page are about using handsets in a large classroom, CBM is for summative assessment and solo study online. Gardner-Medwin developed this well and a lot of his work is still on the web. Bear in mind three things:
        1. He taught medical students: very bright, and very motivated to maximise marks. But they also have two competing drives: to sound completely certain to patients, while being very aware that it is dangerous to bet a patient's life on a decision, so how certain you are really matters professionally. (Programmers mostly don't care about their users.)
        2. He made them practise on this format for tests before doing tests that counted, so they could get used to it.
        3. The bit of CBM which is not quite obvious is the exact marking scheme. It is here, among other places: https://tmedwin.net/cbm/

        • Issroff K. & Gardner-Medwin A.R. (1998) "Evaluation of confidence assessment within optional coursework" In : Oliver, M. (Ed.) Innovation in the Evaluation of Learning Technology, University of North London: London, pp 169-179
        • Gardner-Medwin, A. R. (2006). "Confidence-based marking: towards deeper learning and better exams" In C. Bryan & K. Clegg (Eds), Innovative assessment in higher education. London: Routledge
        • His web site: https://tmedwin.net/cbm/
        • His papers
        • My website on question design

          In theory, I might bet that using CBM would work as well for (deep) learning INSTEAD of Mazur's PI. I believe both work the same way in learners: forcing them to think about whether they are sure of their answer, and then self-correcting by thinking up reasons for and against it. See:
          Draper,S.W. (2009a) "Catalytic assessment: understanding how MCQs and EVS can foster deep learning" British Journal of Educational Technology vol.40 no.2 pp.285-293 doi: 10.1111/j.1467-8535.2008.00920.x

      • Diagnostic SAQs i.e. "self-assessment questions" (formative assessment). These give individual formative feedback to students, but also both teacher and learners can see what areas need more attention. The design of sets of these is discussed further on a separate page, including working through an extended example (e.g. of how to solve a problem) with a question at each step. SAQs are a good first step in introducing voting systems to otherwise unmodified lectures.

    2. Initiate a discussion. Discussed further below.
    3. Formative feedback to the teacher i.e. "course feedback".
      1. In fact you will get it anyway without planning to. For instance SAQs will also tell you how well the class understands things.
      2. To organise a session explicitly around this, look at contingent teaching;
      3. To think more directly about how questioning students can help teachers and promote learning directly, look at this book on "active assessment": Naylor,S., Keogh,B., & Goldsworthy,A. (2004) Active assessment: Thinking, learning, and assessment in science (London: David Fulton Publishers)
      4. The above are about feedback to the teacher of learners' grasp of content. You can also ask about other issues concerning the students' views of the course as in course feedback questionnaires (which could be administered by EVS).
      5. Combining that with the one minute paper technique would give you some simple open-ended feedback to combine with the "numbers" from the EVS voting.
      6. A more sophisticated (but time consuming) version of this would combine collecting issues from the students, and then asking EVS survey questions about each such issue. This is a form of having students design questions, which is described further below.
    4. Summative assessment (even if only as practice) e.g. practice exam questions.
    5. Peer assessment could be done on the spot, saving the teacher administrative time and giving the learner much more rapid, though public, feedback.
    6. Community mutual awareness building. At the start of any group e.g. a research symposium or the first meeting of a new class, the equipment gives a convenient way to create some mutual awareness of the group as a whole by displaying personal questions and having the distribution of responses displayed.
    7. Experiments using human responses: for topics that concern human responses, a very considerable range of experiments can be directly demonstrated using the audience as participants. The great advantage of this is that every audience member both experiences what it is to be a "subject" in the experiment, and sees how variable (or not) the range of responses is (and how their own compares to the average). In a textbook or conventional lecture, neither can be done experientially and personally, only described. Subjects in which this can apply include:
      • Politics (demonstrate / trial voting systems)
      • Psychology (any questionnaire can be administered then shared)
      • Physiology (take one's pulse and see the class average; auditory illusions)
      • Vision science (display visual illusions; how many "see" it?)
      • Maths/statistics/physics: Illustrate Benford's law by collecting data on the first digit of almost anything (train ticket serial number, house address, ...); a small sketch of this comparison appears after this list
    8. Having students design questions: this is relatively little used, but has all the promise of a powerfully mathemagenic tactic. Just as peer discussion moves learners from just picking an answer (perhaps by guessing) to arguing about reasons for answers, so designing MCQs gets them thinking much more deeply about the subject matter.
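    For the Benford's law demonstration mentioned in the list above, here is a minimal sketch (Python; the collected digits are invented for illustration) comparing the class's first-digit tallies with the proportions Benford's law predicts, log10(1 + 1/d):

        import math
        from collections import Counter

        # Invented example: first digits collected from the audience (ticket numbers, house numbers, ...).
        collected = [1, 1, 2, 1, 3, 1, 2, 5, 1, 9, 2, 1, 4, 1, 3, 2, 7, 1, 2, 6]

        counts = Counter(collected)
        total = len(collected)
        for d in range(1, 10):
            predicted = math.log10(1 + 1 / d)      # Benford's law
            observed = counts.get(d, 0) / total
            print(f"digit {d}: predicted {predicted:.2f}, observed {observed:.2f}")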

    However pedagogic uses are probably labelled rather differently by practising lecturers, under phrases like "adding a quiz", "revision lectures", "tutorial sessions", "establishing pre-requisites at the start", "launching a class discussion". This kind of category is more apparent in the following sections and groupings of ways to use EVS.

    SAQs and creating feedback for both learner and teacher

    Asking test questions, or "self-assessment questions" (SAQs, so called because only the student knows what answer they gave individually), is useful in more than one way.

    A first cautious use of EVS

    The simplest way to introduce some EVS use into otherwise conventional lectures is to add some SAQs at the end so students can check if they have understood the material. This is simplest for the presenter: just add two or three simple questions near the end without otherwise changing the lecture plan. Students who get them wrong now know what they need to work on. If the average performance is worse than the lecturer likes, she or he can address this at the start of the next lecture. Even doing this in a simple, uninspired way has in fact consistently been viewed positively by students in our developing experience, as they welcome being able to check their understanding.

    Extending this use: Emotional connotations of questions

    If you put up an exam question, its importance and relevance is clear to everyone and leads to serious treatment. However, it may reduce discussion even while increasing attention, since to get it wrong is to "fail" in the terms of the course. Asking brain teasers is a way of exercising the same knowledge, but without the threatening overtones, and so may be more effective for purposes such as encouraging discussion.

    Putting up arguments or descriptions for criticism may be motivating as well as useful (e.g. describe a proposed experiment and ask what is faulty about it). It allows students to practise criticism, which is useful; and criticism is easier than the constructive proposals which, in effect, are all that most "problem solving" questions ask for, so questions asking for critiques may be a better starting point.

    Thus in extending beyond a few SAQs, presenters may like to vary their question types with a view to encouraging a better atmosphere and more light hearted interaction.

    Contingent teaching: Extending the role of questions in a session

    Test questions can soon lead to trying a more contingent approach, where a session plan is no longer for a fixed lecture sequence of material, but is prepared to vary depending upon audience response. This may mean preparing a large set of questions, those actually used depending upon the audience: this is discussed in "designing a set of questions for a contingent session".

    This approach could be used, for instance, in:


    Designing for discussion

    Another important purpose for questions is to promote discussion, especially peer discussion. A general format might be: pose a question and take an initial vote (this gets each person to commit privately to a definite initial position, and shows everyone what the spread of opinion on it is). Then, without expressing an opinion or revealing what the right answer (if any) is, tell the audience to discuss it. Finally, you might take a new vote, and see if opinions have shifted.

    The general benefit is that peer discussion requires not just deciding on an answer or position (which voting requires) but also generating reasons for and against the alternatives, and also perhaps dealing with reasons and objections and opinions voiced by others. That is, although the MCQ posed only directly asks for an answer, discussion implicitly requires reasons and reasoning, and this is the real pedagogical aim. Furthermore, if the discussion is done in small groups of, say, four, then at any moment one student in four, rather than only one in the whole room, is engaged in such generation activity.

    There are two classes of question for this: those that really do have a right answer, and those that really don't. (Or, to use Willie Dunn's phrase, those that concern objects of mastery and those that are a focus for speculation.) In the former case, the question may be a "brain teaser" i.e. optimised to provoke uncertainty and dispute (see below). In the latter case, the issue to be discussed simply has to be posed as if it had a fixed answer, even though it is generally agreed it does not: for instance as in the classic debate format ("This house believes that women are dangerous."). Do not assume that a given discipline necessarily only uses one or the other kind of question. GPs (doctors), for instance, according to Willie Dunn in a personal note, "came to distinguish between topics which were a focus for speculation and those which were an object of mastery. In the latter the GPs were interested in what the expert had to say because he was the master, but with the other topics there was no scientifically-determined correct answer and GPs were interested in what their peers had to say as much as the opinion of the expert, and such systems [i.e. like PRS] allowed us to do this."

    Slight differences in format for discussion sessions have been studied: Nicol, D. J. & Boyle, J. T. (2003) "Peer Instruction versus Class-wide Discussion in large classes: a comparison of two interaction methods in the wired classroom" Studies in Higher Education. In practice, most presenters might use a mixture and other variations. The main variables are in the number of (re)votes, and the choice or mixture of individual thought, small group peer discussion, and plenary or whole-class discussion. While small group discussion may maximise student cognitive activity and so learning, plenary discussion gives better (perhaps vital) feedback to the teacher by revealing reasons entertained by various learners, and so may maximise teacher adaptation to the audience. The two leading alternatives are summarised in this table (adapted from Nicol & Boyle, 2003).

    Discussion recipes

    "Peer Instruction": the Mazur sequence
    1. Concept question posed.
    2. Individual thinking: students given time to think individually (1-2 minutes).
    3. [voting] Students provide individual responses.
    4. Students receive feedback -- poll of responses presented as histogram display.
    5. Small group discussion: students instructed to convince their neighbours that they have the right answer.
    6. Retesting of same concept: [voting] students provide individual responses (revised answer).
    7. Students receive feedback -- poll of responses presented as histogram display.
    8. Lecturer summarises and explains "correct" response.

    "Class-wide Discussion": the Dufresne (PERG) sequence
    1. Concept question posed.
    2. Small group discussion: small groups discuss the concept question (3-5 mins).
    3. [voting] Students provide individual or group responses.
    4. Students receive feedback -- poll of responses presented as histogram display.
    5. Class-wide discussion: students explain their answers and listen to the explanations of others (facilitated by tutor).
    6. Lecturer summarises and explains "correct" response.
    Questions to discuss, not resolve

    Examples of questions to launch discussion in topics that don't have clear right and wrong answers are familiar from debates and exam questions. The point, remember, is to use a question as an occasion first to remind the group there really are differences of view on it, but mainly to exercise giving and evaluating reasons for and against. The MCQ, like a debate, is simply a conventional provocation for this.

    "Brain teasers"

    Using questions with right and wrong answers to launch discussion is, in practice, less a matter of showing a different kind of question to the audience and more a different emphasis in the presenter's purpose. Both look like (and are) tests of knowledge; in both cases if (but only if) the audience is fairly split in their responses then it is a good idea to ask them to discuss the question with their neighbours and then re-vote, rather than telling them the right answer; in both cases the session will become more contingent: what happens will depend partly on how the discussion goes, not just on the presenter's prepared plan; in both cases the presenter may need to bring a larger set of questions than can be used, and proceed until one turns out to produce the right level of divisiveness in initial responses.

    The difference is only that in the SAQ case the presenter may be focussing on finding weak spots and achieving remediation up to a basic standard, whether the discussion is done by the presenter or the class as a whole; while in the discussion case the focus may be on the way that peer discussion is engaging and brings benefits in better understanding and more solid retention, regardless of whether understanding was already adequate.

    Nevertheless, optimising a question for diagnosing what the learners know (self-assessment questions) and optimising it for fooling a large proportion and initiating discussion are not quite the same thing. There are benefits from initiating discussion independently of whether this is the most urgent topic for the class: for example, it promotes the practice of peer interaction; and generating arguments for an answer probably improves the learner's grasp even if they had selected the right answer, is more related to deep learning, and promotes their learning of reasons as well as of answers.

    Some questions seem interesting but hard to get right if you haven't seen that particular question before. Designing a really good brain teaser is not just about a good question, but about creating distractors, i.e. wrong but very tempting answers. In fact, they are really paradoxes: where there seem to be excellent reasons for each contradictory alternative. Such questions are ideal for starting discussions, but perhaps less than optimal for simply being a fair diagnosis of knowledge. In fact ideally, the alternative answers should be created to match common learner misconceptions for the topic. One approach is to use the method of phenomenography to collect these misconceptions, and then to express the findings as alternative responses to an MCQ.

    Great brain teasers are very hard to design, but may be collected or borrowed, or generated by research.

    Here's an example that enraged me in primary school, but which you can probably "see through".

    "If a bottle of beer and a glass cost one pound fifty, and the beer costs a pound more than the glass, how much does the glass cost?"
    The trap seems to lie in matching the beer to one pound, the glass to fifty pence, and being satisfied that a "more" relation holds.
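    For completeness, the worked answer (writing g for the price of the glass in pounds):

        g + (g + 1.00) = 1.50 \;\Rightarrow\; 2g = 0.50 \;\Rightarrow\; g = 0.25

    so the glass costs 25 pence and the beer one pound twenty-five, not the tempting one pound / fifty pence split.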

    Here is one from Papert's Mindstorms p.131 ch.5.

    "A monkey and a rock are attached to opposite ends of a rope that is hung over a pulley. The monkey and the rock are of equal weight and balance one another. The monkey begins to climb the rope. What happens to the rock?"
    His analysis of why this is hard (but not complex) is that students don't have the category of "laws-of-motion problem" in the way they have "conservation of energy problem". I.e. we have mostly learned Newton without having really learned the pre-requisite concept of what IS a law of motion. Another view is that it requires you to think of Newton's 3rd law (reaction), and most people can repeat the law without having exercised it much.

    Another example on the topic of Newtonian mechanics can be paraphrased as follows.

    Remember the old logo or advert for Levi's jeans that showed a pair of jeans being pulled apart by two teams of mules pulling in opposite directions. If one of the mule teams was sent away, and their leg of the jeans tied to a big tree instead, would the force (tension) in the jeans be: half, the same, or twice what it was with two mule teams?
    The trouble here is this: how can two mule teams produce no more force than one team, when one team clearly produces more than no teams? On the other hand, one team pulling one leg (while the other is tied to the tree) clearly produces force, so a second mule team isn't necessary.

    Another one (taken from the book "The Tipping Point") can be expressed:

    Take a large piece of paper, fold it over, then do that again and again a total of 50 times. How tall do you think the final stack is going to be?
    Somehow even those who have been taught better tend to think it will be about 50 times the thickness of a piece of paper, whereas really it is doubled 50 times, i.e. it will be 2 to the 50th power thicknesses, which is a huge number, and probably comes out at about the distance from here to the sun.
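    Checking the arithmetic, assuming a sheet roughly 0.1 mm thick (the thickness is an assumption for illustration):

        2^{50} \times 0.1\ \text{mm} \approx 1.1 \times 10^{14}\ \text{mm} \approx 1.1 \times 10^{8}\ \text{km}

    which is roughly three quarters of the Earth-Sun distance of about 1.5 × 10^8 km.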

    Brain teasers seem to relate the teaching to students' prior conceptions, since tempting answers are most often those suggested by earlier but incorrect or incomplete ways of thinking.

    Whereas with most questions it is enough to give (eventually) the right answer and explain why it is right, with a good brain teaser it may be important in addition to explain why exactly each tempting wrong answer is wrong. This extra requirement on the feedback a presenter should produce is discussed further here.

    Finally, here is an example of a failed brain teaser. "Isn't it amazing that our legs are exactly the right length to reach the ground?" (This is analogous to some specious arguments that have appeared in cosmology / evolution.) At the meta-level, the brain teaser or puzzle here is to analyse why that is tempting to anyone; something to do with starting the analysis from your seat of consciousness in your head (several feet above the ground) and then noticing what a good fit your legs make between this egocentric viewpoint and the ground.

    May need a link here on to the page seq.html about designing sequences with/of questions. And on from there to lecture.html.

    Extending discussion beyond the lecture theatre

    An idea which Quintin is committed to trying out (again, better) from Sept. 2004 is extending discussion, using the web, beyond the classroom. The pedagogical and technical idea is to create software to make it easy for a presenter to ship a question (for instance the last one used in a lecture, but it could be all of them), perhaps complete with initial voting pattern, to the web where the class may continue the discussion with both text discussion and voting. Just before the next lecture, the presenter may equally freeze the discussion there and export it (the question, new voting pattern, perhaps discussion text) back into powerpoint for presentation in the first part of their next lecture.

    If this can be made to work pedagogically, socially, and technically, then it would be a unique exploitation of e-learning combined with the advantages of face-to-face campus teaching; and it would be expected to enhance learning, because so much of learning is simply proportional to the time the learner spends thinking: any minutes spent on real discussion outside class are a step in the right direction.

    Direct tests of reasons

    One of the main reasons that discussion leads to learning, is that it gets learners to produce reasons for a belief or prediction (or answer to a question), and requires judgements about which reasons to accept and which to reject. This can also be done directly by questions about reasons.

    Simply give the prediction in the question, and ask which of the offered reasons are the right or best one(s); or which of the offered bits of evidence actually support or disconfirm the prediction.

    Collecting experimental data

    A voting system can obviously be used to collect survey data from an audience. Besides being useful for evaluating the equipment itself, or the course in which it is used (course feedback), this is particularly valuable when the data is itself the subject of the course, as it may be in psychology, physiology, parts of medical teaching, etc.

    For instance, in teaching the part of perception dealing with visual illusions, the presenter could put up an illusion together with a question about how it is seen; the audience then sees what proportion of them "saw" the illusory percept, and can compare what they are told, their own personal perceptual experience, and the spread of responses across the class.

    In a practical module in psychology supported by lectures, Paddy O'Donnell and I have had the class design and pilot questionnaire items (questions) in small groups on a topic such as the introduction and use of mobile phones, for which the class is itself a suitable population. Each group then submitted their items to us, and we picked a set, drawing on many people's contributions, to form a larger questionnaire. We then used a session to administer that questionnaire to the class, with them responding using the voting equipment. By the end of that session we had responses from a class of about 100 to a sizeable questionnaire. We could then make that data set available to the class almost immediately, and have them analyse the data and write a report.

    A final year research project has also been run, using this as the data collection mechanism: it allowed a large number of subjects to be "run" simultaneously, which is the advantage for the researcher.

    In a class on the public communication of science, Steve Brindley has surveyed the class on some aspects of the demonstrations and materials he used, since they are themselves a relevant target for such communication, and the audience's preferences for different modes (e.g. active vs. passive presentations) bear directly on the subject of the course: what methods of presenting science are effective, and how do people vary in their preferences. He would then begin the next lecture by re-presenting and commenting on the data collected the time before.


    Last changed 6 Aug 2003 ............... Length about 1,600 words (10,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/contingent.html.

    Degrees of contingency

    (written by Steve Draper,   as part of the Interactive Lectures website)

    Besides the different purposes for questions (practising exam questions, collecting data for a psychological study, launching discussion on topics without a right or wrong answer), an independent issue is whether the session as a whole has a fixed plan, or is designed to vary contingent (depending) on audience responses. The obvious example of this is to use questions to discover any points where understanding is lacking, and then to address those points. (While direct self-assessment questions are the obvious choice for this diagnosis function, in fact other question types can probably be used.) This is to act contingently. By contingency I mean having the presenter NOT have a fixed sequence of stuff to present, but a flexible branching plan, where which branches actually get presented depends on how the audience answers questions or otherwise shows their needs. There are degrees of this.

    Contents (click to jump to a section)

    Implicit contingency

    First are simple self-assessment questions, where little in the session itself changes depending on how the audience answers, but the implicit hope is that learners will later (contingently, i.e. depending on whether they got each question right) address the gaps in their knowledge which the questions exposed, or that the teacher will address them later.

    Whole/part training

    Secondly, we might present a case or problem with many questions in it; but the sequence is fixed. A complete example of a problem being solved might be prepared, with questions at each intermediate step, giving the audience practice and self-assessment at each, and also showing the teacher where to speed up and where to slow down in going over the method.

    An example of this can be found in the box on p.74 of Meltzer, D.E. & Manivannan, K. (1996) "Promoting interactivity in physics lecture classes", The Physics Teacher, vol.34, no.2, pp.72-76. It's a sample problem for a basic physics class at university, where a simple problem is broken down into 10 MCQ steps.

    Another way of looking at this is that of training on the parts of a skill or piece of knowledge separately, then again on fitting them together into a whole. Diagnostically, if a learner passes the test for the whole thing, we can usually take it they know it all. But if not, then learning may be much more effective if the pieces are learned separately before being put together. Not only is there less to learn at a time, but more importantly feedback is much clearer, less ambiguous if it is feedback on a single thing at a time. When a question is answered wrongly by everyone, it may be a sign that too much has been put together at once.

    In terms of the lesson/lecture plan, though, there is a single fixed course of events, although learners contribute answers at many steps, with the questions being used to help all the learners converge on the right action at each step.

    Contingent path through a case study

    Thirdly, we could have a prepared case study (e.g. a case presented to physicians), with a fixed start and end point, but where the audience votes on what actions and tests to do next, and the presenter provides the information the audience decided to ask for. Thus the sequence of items depends on (is contingent on) the audience's responses to the questions; and the presenter has to have created slides, perhaps with overlays, that allow them to jump and branch in the way required, rather than trudging through a fixed sequence regardless of the audience's responses.

    Diagnosing audience need

    Fourthly, a fully contingent session might be conducted, where the audience's needs are diagnosed, and the time is spent on the topics shown to be needing attention. The plan for such a session is no longer a straight line, but a tree branching at each question posed. The kinds of question you can use for this include:

    Designing a bank of diagnostic questions

    If you want to take diagnosis from test questions seriously, you need to come armed with a large set, selecting each one depending on the response to the last one. A fuller scheme for designing such a bank might be:
    1. List the topics you want to cover.
    2. Multiply these by several levels of difficulty for each.
    3. Even within a given topic, and a given level of difficulty, you can vary the type of question: the type of link, the direction of link, the specific case. (A sketch of one way to index such a bank follows below.)
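
    A minimal sketch of what such an indexed bank might look like (the data structure and the difficulty-stepping rule are illustrative assumptions, not part of the scheme above):

        from collections import defaultdict

        # (topic, difficulty) -> list of (question_type, question_text)
        bank = defaultdict(list)

        def add_question(topic, difficulty, qtype, text):
            bank[(topic, difficulty)].append((qtype, text))

        def next_question(topic, difficulty, last_was_correct):
            # Assumed rule: step up a level after a correct answer, down after a wrong one.
            difficulty += 1 if last_was_correct else -1
            candidates = bank[(topic, difficulty)]
            return candidates[0] if candidates else None

        add_question("correlation", 1, "term-to-instance", "Which plot shows r close to 0?")
        add_question("correlation", 2, "instance-to-term", "What does r = -0.9 indicate?")
        print(next_question("correlation", 1, last_was_correct=True))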

    Responding to the answer distribution

    When the audience's answers are in, the presenter must a) state which answer (if any) was right, and b) decide what to do next:

    Selecting the next question

    Decomposing a topic the audience was lost with

    While handset questions are MCQs, the real aim is (when required) to bring out the reasons for and against each alternative answer. When it turns out that most of the audience gets it wrong, how best to decompose the issue? My suggestion is to generate a set of associated part questions.

    One case is when a question links instances (only) to technical terms e.g. (in psychology) "which of these would be the most reliable measure?" If learners get this wrong, you won't know if that is because they don't understand the issues, or this problem, or have just forgotten the special technical meaning of "reliable". In other words, a question may require understanding of both the problem case, and the concepts, and the special technical vocabulary. If very few get it right, it could be unpacked by asking about the vocabulary separately from the other issues e.g. "which of these measures would give the greatest test-retest consistency?". This is one aspect of the problem of technical vocabulary.

    Another case of this was about top-level problem decomposition in introductory programming. The presenter had a set of problems {P1, P2, P3}, each of which required a program to be designed. He had a set of standard top-level structures {S1, S2, ... e.g. sequential, conditional, iteration}, and the task the students "should" be able to do is to select the right structure for each given problem. To justify or argue about this means generating a set of reasons for {F1, F2, ...} and against {A1, A2, ...} each structure for each problem. I suggest having a bank of questions to select from here: if there are 3 problems and 5 top-level structures, that gives 2*3*5 = 30 questions (for and against each combination), as sketched below. An example of one of these 30 would be a set of alternative reasons FOR using structure 3 (iteration) on problem 2, where the question asks the audience which (subset) of these are good reasons.
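
    A minimal sketch of generating those 30 question stems (the structure names beyond the three given above are placeholders):

        problems = ["P1", "P2", "P3"]
        structures = ["S1 (sequential)", "S2 (conditional)", "S3 (iteration)", "S4", "S5"]
        stances = ["FOR", "AGAINST"]

        stems = [f"Which of the offered reasons are good reasons {stance} "
                 f"using {s} on problem {p}?"
                 for stance in stances for p in problems for s in structures]
        print(len(stems))   # 2 * 3 * 5 = 30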

    The general notion is that if a question turns out to go too far over the audience's head, we can use these "lower" questions to structure the discussion that is needed about the reasons for each answer. (If everyone gets it right, you speed on without explanation. If about half get it right, you go for audience discussion, because the reasons are there among the audience. But if nearly all get it wrong, support is needed, and these further questions can keep the interaction going instead of crashing out into didactic monologue.)
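
    The same policy can be put as a one-line decision rule (the thresholds are illustrative, not prescribed above):

        def next_step(fraction_correct):
            # Assumed thresholds; the point is the three-way branch, not the exact numbers.
            if fraction_correct > 0.8:
                return "speed on without explanation"
            if fraction_correct > 0.4:
                return "audience discussion: the reasons are there among the audience"
            return "decompose the issue with the lower-level part questions"

        print(next_step(0.35))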


    Last changed 27 May 2003 ............... Length about 900 words (6000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/feedback.html.

    Feedback to students

    (written by Steve Draper,   as part of the Interactive Lectures website)

    While the presenter may be focussing on finding the most important topics for discussion and on whether the audience seems "engaged", part of what each learner is doing is seeking feedback. Feedback not only in the sense of "how am I doing?", though that is vital for regulating the direction and amount of effort any rational learner puts in, but also in the sense of diagnosing and fixing errors in their performance and understanding. So "feedback" includes, in general, information about the subject matter, not just about indicators of the learner's performance.

    This can be thought about as levels of detail, discussed at length in another paper, but summarised here. A key point is that, while our image of ideal feedback may be individually judged and personalised information, in fact it can be mass produced for a large class to a surprising extent, so handset sessions may be able to deliver more in this way than expected.

    Levels of feedback (in order of increasing informativeness)

    1. A mark or grade. Handsets do (only) this if, with advanced software, they deliver only an overall mark for a set of questions.
    2. The right answer: a description or specification of the desired outcome. Handset questions do this if the presenter indicates which option was the right answer.
    3. Diagnosis of which part of the learner action (input) was wrong. When a question really involves several issues, or combinations of options, the learner may be able to see that they got one issue right but another wrong.
    4. Explanation of what makes the right answer correct: of why it is the right answer. I.e. the principles and relationships that matter. The presenter can routinely give an explanation (to the whole audience) of the right answer, particularly if enough got it wrong to make that seem worthwhile.
    5. Explanation of what's wrong about the learner's answer. Since handset questions have fixed alternatives, and furthermore may have been designed to "trap" anyone with less than solid knowledge, in fact this otherwise most personal of types of feedback can be given by a presenter to a large set of students at once, since at most one explanation for each wrong option would need to be offered.

    The last (5) is a separate item because the previous one (4) concerned only correct principles, but this one (5) concerns misconceptions, and in general negative reasons why apparent connections of this activity with other principles are mistaken. Thus (4) is self-contained, and context-free; while (5) is open-ended and depends on the learner's prior knowledge. This is only needed when the learner has not just made a slip or mistake but is in the grip of a rooted misconception -- but is crucial when that is the case. Well designed "brain teasers" are of this kind: eliciting wrong answers that may be held with conviction. Thus with mass questions that are forced choice, i.e. MCQ, one can identify in advance what the wrong answers are going to be and have canned explanations ready.

    Here are two rough tries, applied to actual handset questions posed to an introductory statistics class, at describing the kind of extra explanation that might be desirable here. Their distinguishing feature is that they explain why the wrong options are attractive, but also why they are wrong despite that.

    Example1. A question on sample vs. population medians.

    The null-hypothesis for a Wilcoxon test could be:
    1. The population mean is 35
    2. The sample mean is 35
    3. The sample median is 35
    4. The population median is 35
    5. I don't know
    Why is it that this vocabulary difference is seductively misleading to half the class? Perhaps because both are artificial views of the same real people: the technical terms don't refer to any real property (like age, sex, or height), just to a stance taken by the analyst; and everyone who is in the sample is also in the population. It's like arguing about whether to call someone a woman or a female, when the measure in question is the average blood type of women or of females. Furthermore, because of this, most investigators don't have a fixed idea about either the sample or the population. They would like their conclusions to apply to the population of all possible people, alive and unborn; they know it is likely that they only apply to a limited population; but they will only discuss this in the last paragraph of their report, long after getting the data and doing the stats. Similarly, they are continually reviewing whom to use as a sample. So not only are these unreal properties that exist only in the mind of the analyst, but in most cases they are continually shifting there too. (None of this is about casting doubt on the utility of the concepts, just about why they may stay fuzzy in learners' minds for longer than you might expect.)
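
    One concrete (and entirely illustrative) way of seeing the distinction, assuming Python with numpy and scipy to hand: the sample median is simply a number computed from the data, whereas the Wilcoxon test addresses a hypothesis about the population median (option 4 above):

        import numpy as np
        from scipy import stats

        ages = np.array([31, 36, 42, 29, 38, 41, 40, 33, 37, 34])  # made-up sample
        print("sample median:", np.median(ages))                   # a fact about these data
        stat, p = stats.wilcoxon(ages - 35)      # one-sample signed-rank test of
        print("p-value:", p)                     # H0: the population median is 35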

    Example2. Regression Analysis: Reading versus Motivation

    Predictor     Coef     SE Coef    T       P
    Constant      2.074    1.980      1.05    0.309
    Motivation    0.6588   0.3616     1.82    0.085
    The regression equation is Reading = 2.07 + 0.659 Motivation
    S = 2.782     R-Sq = 15.6%     R-Sq(adj) = 10.9%

    Which of the following statements are correct?
    a. There seems to be a negative relationship between Motivation and Reading ability.
    b. Motivation is a significant predictor of reading ability.
    c. About 11% of the variability in the Reading score is explained by the Motivation score.

    1. a
    2. ab
    3. c
    4. bc
    5. I don't know
    There was something cunning in the question on whether the relationship was significant or not, with a p value of 0.085. Firstly, it isn't instantly easy to convert 0.085 into 8.5%, or about 1 in 12; 0.085 looks like a negligible number at first glance. Secondly, the explanation didn't mention the wholly arbitrary and conventional nature of picking 0.05 as the threshold of "significance".
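
    As a rough reading of the three statements against the printed output (my reading, since the intended answer is not restated here):

        coef = 0.6588      # slope for Motivation: positive, so (a) "negative relationship" fails
        p = 0.085          # > 0.05, so (b) fails at the conventional (and arbitrary) 0.05 level
        r_sq_adj = 0.109   # about 11%, which is what (c) claims
        print("a:", coef < 0, " b:", p < 0.05, " c:", abs(r_sq_adj - 0.11) < 0.01)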

    For more examples, see some of the examples of brain teasers, which in essence are questions especially designed to need this extra explanation.


    Last changed 21 Feb 2003 ............... Length about 700 words (5,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/manage.html.

    Designing and managing a teaching session

    (written by Steve Draper,   as part of the Interactive Lectures website)

    Any session or lecture can be thought of as having 3 aspects, all of which ideally will be well managed. If you are designing a new kind of session (e.g. with handsets) you may want to think about these aspects explicitly. They are:

    Feedback to the presenter

    In running a session, the presenter has to make various judgements on the fly, because they must make decisions on:


    Last changed 21 Dec 2007 ............... Length about 200 words (3,000 bytes).
    (Document started on 6 Jan 2005.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/qbanks.html. You may copy it.

    Question banks available on the web

    By Steve Draper,   Department of Psychology,   University of Glasgow.

    This page is to collect a few pointers to sets of questions that might be used with EVS that are available on the web. Further suggestions and pointers are welcome.

    For first year physics at the University of Sydney: their webpage, and a local copy to print off as one document.

    The Galileo project has some examples if you register online with them.

    The SDI (Socratic Dialog Inducing) lab has some examples.

  • Physics: Joe Redish's list of Mazur type questions i.e. "ConcepTests"

  • Chemistry ConcepTests

  • Calculus questions

  • JITT: just in time teaching: example "warmup questions"

  • Canadian In-Class Question Database (CINQ-DB) for astronomy, mathematics, physics, psychology, and science.

    ?Roy Tasker