Last changed 9 Apr 1998. Length about 1,000 words (10,000 bytes).
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/mant/beards.html. You may copy it.


MANTCHI evaluation plans

A Technical Memo
by
Stephen W. Draper
Department of Psychology
University of Glasgow
Glasgow G12 8QQ U.K.
email: steve@psy.gla.ac.uk
WWW URL: http://www.psy.gla.ac.uk/~steve

Preface

This memo is written in response to the paragraph titled "Evaluation" in a letter from David Beards, dated 27 Feb. 1998, to Julian Newman, concerning Julian's presentation about MANTCHI to the UMI steering group. It was agreed within MANTCHI that I should write this response, and a letter from Julian to David Beards notified him that I would be doing so.

This memo is being faxed to David Beards on 9 April, sent by post at the same time, and circulated to MANTCHI members by WWW.

Our evaluation plans and views

By "evaluation" we mean observations and measurements of the educational value of the learning and teaching activities we have undertaken as part of the MANTCHI project.

We are asked to give our evaluation plans in detail. Attached (to the paper copies only) is a list of the 19 evaluation studies completed or in progress to date. They are all studies of tutorial activities being delivered to university classes.

Our approach to evaluation follows the Integrative Evaluation method we developed on the TILT project, published in [1]. A provisional report on how we are adapting that for the current project is in [2]. Central to the approach is the realisation that the important outcome is the learning accomplished by students, and that this is not caused by "material" but by the combined effect of the overall delivery of all the relevant resources available to the students. (Data on the relative importance of different resources is gathered in most of our studies by our resource questionnaire instrument [3].) It is misconceived to think that a meaningful evaluation of a piece of software as an isolated cause of learning can be done, as learning in all normal situations depends on multiple factors [4]. Furthermore the idea that instruction, as opposed to the learners' actions, causes learning (a view sometimes called "instructivism" or "objectivism") is now so widely discredited that it is difficult even to publish arguments against it as referees usually regard it as a straw man.

We are therefore puzzled by your "... concern ... to ensure that both the quality of the materials and their use when integrated into courses are evaluated separately". We are aware that some people write reviews of learning material in the manner of book reviews, but we regard these as about as reliable as a book review is at predicting a book's value: rather like "evaluating" a car by looking at it in a showroom, or by having a racing driver try it out, instead of observing trials with typical users in normal conditions. Such "evaluations" do not observe learning taking place, and can at best only make indirect guesses about whether it occurs and what the problems might turn out to be in practice.

Our views apply to the evaluation of computer-based learning of all kinds, but they apply particularly strongly here because of some particular features of MANTCHI which perhaps you have overlooked. Firstly, MANTCHI is concerned with "tutorial" material (defined broadly as anything other than primary exposition such as textbooks and lectures). Such material is particularly closely dependent on the course context in which it is used. Secondly, a feature of most of our "ATOMs" (units of learning and teaching activity) is the role of a remote expert interacting in some way over the MANs. Clearly the issue is not the "material" alone, but whether it is useful to have such an expert and such interaction.

Outcomes and products of the project

A deeper question is what the main outcomes and products of MANTCHI will be, and how they will be delivered to a wider community. A key feature is our development of the notion of the "ATOM" (a unit of tutorial material exchanged between sites), and the development of reciprocal collaborative teaching, where all sites both author some units and receive (deliver) units from elsewhere. This not only means imposing quality control (none of us will jeopardise our courses by delivering material we regard as inadequate), but gives us insight into what it feels like to deliver others' material.

With hindsight, the most important feature of the project may be that it has been based on true reciprocal collaborative teaching. All of the four sites have authored material, and all four have received (delivered) material authored by other sites. Although originally planned simply as a fair way of dividing up the work, it has kept all project members crucially aware not just of the problems of authoring, but of what it is like to be delivering to one's own students (in real, for-credit courses) material that others have authored: a true users' perspective. This may be a unique feature. Certainly many TLTP projects have been characterised by teams of authors who develop material and then go around wondering why so few other people want to use their wonderful material, even though they themselves would never use anyone else's. Although we can only claim credit for evolving a good practice rather than for principled project planning, MANTCHI has in effect built users of HE teaching material further into the design team by having each authoring site deliver material "not created here". The nearest to this situation that we are aware of is the EUROMET project, where although only some of their sites are authoring units, all are expected to use units authored by other sites.

In MANTCHI, we call each unit of material thus exchanged an "ATOM" (Autonomous Teaching Object in Mantchi). Their typical size is one week's work on a module (say, approximately eight hours' work for a student). This was chosen because it is a typical unit for planning courses. Small units of exchange are very important. (EUROMET originally aimed for even smaller units: 15 minutes of learner time.) Firstly, even apart from the powerful forces of idiosyncratic preference in a deliverer, different situations in different HEIs and courses require different materials; small units increase the chances of being able to use some of what is on offer from elsewhere. Secondly, it is much easier in practice to introduce a few ATOMs into an existing course than to redesign the whole course. Thirdly, most deliverers will want to try out only one or two ATOMs, to see whether the approach is going to work out, before committing to greater use.

Another benefit of reciprocal arrangements is that they reflect the basic structural fact of HE that experts are distributed across the country (or indeed the world), each with their own specialism, yet all are required to deliver relatively general courses. Exchanging material matches this distribution of expertise, while avoiding the difficult accounting issues that would be raised if the effort became seriously asymmetrical. The internet is making such exchange ever easier without the time penalties of travelling, although teaching puts potentially serious demands on bandwidth (increasing current traffic, which is largely researcher to researcher, roughly in proportion to the student:staff ratio).
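
[To illustrate that last point with an assumed figure, not a project measurement: if a department's student:staff ratio were around 15:1, then extending routine network use from staff-to-staff research contact to teaching contact with every student might multiply traffic roughly fifteen-fold.]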

(The last few paragraphs are an extract from [5].)

Products

The main product is probably these ideas and experiences of implementing them, and the vision they offer of how collaborative teaching may be organised in future. How will this be made available to others? Primarily in written reports and papers describing our main conclusions and the practices we have developed.

In addition, the ATOM materials are being made available on the web, although they will probably always need adaptation for each delivery (e.g. names, dates, and URLs will always be different for each course), just as the handouts I use in my own courses need re-editing each year. However, an integral part of most ATOMs was the author acting as a remote expert for the learners. This is not a "material" that can be duplicated and given away: it can only be exchanged for similar offers made by others.

The evaluation reports will contain many useful observations of what worked well and what needs to be improved. Often what needs improving concerns the delivery: what extra support is needed, the necessity of making work compulsory to get it done at all, and so on.

In fact the main weight of convincing is probably going to come from what are in effect case studies: the knowledge that these ATOMs have been not only authored but delivered successfully in real courses for credit, and that that success was evaluated in some depth by classroom studies. While this does not prove their value scientifically, nor guarantee that they would work in a different situation, it is still vastly more evidence than is available for most pieces of CAL or textbooks, and is evidence about learning, not about technical features or the opinion of those who have not tried to learn from them.

References

[1] Draper, S.W., Brown, M.I., Henderson, F.P. & McAteer, E. (1996) "Integrative evaluation: an emerging role for classroom studies of CAL" Computers and Education vol.26, no.1-3, pp.17-32.

[2] Draper, S.W. (1997) "Developing evaluation for MANTCHI"
URL: http://www.psy.gla.ac.uk/~steve/meval.html

[3] Brown, M.I., Doughty, G.F., Draper, S.W., Henderson, F.P. & McAteer, E. (1996) "Measuring Learning Resource Use" Computers and Education vol.27, pp.103-113.

[4] Draper, S.W. (1997) "Prospects for summative evaluation of CAL in higher education" Association for Learning Technology Journal (ALT-J) vol.5, no.1, pp.33-39.

[5] Draper, S.W. (1998) "Reciprocal collaborative teaching in MANTCHI"
URL: http://www.psy.gla.ac.uk/~steve/tltpnl.html
