by
Stephen W. Draper
Department of Psychology
University of Glasgow
Glasgow G12 8QQ U.K.
email: steve@psy.gla.ac.uk
WWW URL: http://www.psy.gla.ac.uk/~steve
For more information on the MANTCHI (Metropolitan Area Network Teaching of Computer-Human Interaction) project: see http://mantchi.use-of-mans.ac.uk/
The MANTCHI project was carried out by Glasgow Caledonian University, Heriot-Watt University, Napier University, and the University of Glasgow, and funded by the Scottish Higher Education Funding Council (SHEFC) through the Use of MANs Initiative (UMI phase 2).
This report is mainly assembled from drafts of three other papers as they were on 24 August 1998, which comprise parts B, C, and D, and which may be published elsewhere. Those papers will be revised and improved in the near future, but there are no plans to update this report.
The evaluation work comprised about 20 studies. Obviously full reporting of so many studies, even though of modest size, is not possible within a single paper. On the other hand, such a paper can (and this paper does) select the main conclusions to report, and draw on evidence across multiple studies to support them. It can also discuss evaluation methods and problems on a broader basis than a single study could.
The paper is divided into two parts, the first dealing with overall issues and the second with particular findings. The first part introduces the project's distinctive features and the particular demands these placed on evaluation. It then discusses our approach to evaluation, based on the method of Integrative Evaluation and so emphasising observations of learners engaged in learning on courses for university qualifications. The second part is organised around the major findings, briefly reporting the evidence for each. These findings are grouped into perspectives: learning effectiveness and quality, features of the teaching, and issues of the management of learning and teaching.
Teaching and learning normally includes not only primary exposition (e.g. lectures) and re-expression by the learners (e.g. writing an essay), but some iterative interaction between teacher and learners (e.g. question and answer sessions in tutorials, or feedback on written work). Mayes (1995) classifies applications of learning technology into primary, secondary, and tertiary respectively by reference to those categories of activity. Technology such as email and video conferencing supports such tertiary applications. MANTCHI focussed on tertiary applications, and the additional research question of whether such interactions can usefully be captured, "canned", and later re-used. We called such canned material "TRAILs".
A key emerging feature of the project was its organisation around true reciprocal collaborative teaching. All of the four sites have authored material, and all four have received (delivered) material authored at other sites. Although originally planned simply as a fair way of dividing up the work, it has kept all project members crucially aware not just of the problems of authoring, but of what it is like to be delivering to one's own students (in real, for-credit courses) material that others have authored: a true users' perspective. This may be a unique feature. MANTCHI has in effect built users of higher education teaching material further into the design team by having each authoring site deliver material "not created here".
The project devoted substantial resources to evaluation of these teaching and learning activities. This paper offers a general report on that evaluation activity.
This is bad news for simple summative evaluation that aims to compare alternative teaching to decide which is best, as Draper (1997b) argues. Evaluation is nevertheless still possible and worthwhile, but turns out to be mainly useful to teachers in advising them on how to adjust the overall teaching to improve it: effectively this is formative evaluation of the overall teaching and delivery (not just of the learning technology or other intervention), called "integrative" because many of those adjustments are to do with making the different elements fit together better.
Consistent with that formative role, we have found that many of the most important findings in our studies have been surprises detected by open-ended measures, and not answers to questions we anticipated and so had designed comparable measures for. By "open-ended" we mean that the subject can respond by bringing up issues we did not explicitly ask about, so that we cannot tell how many in the sample care about that issue, since they are not all directly asked about it and required to respond. For example, we might ask "What was the worst problem in using this ATOM?" and one or two might say "getting the printer to work outside lab. hours". The opposite category of measure is "comparable", where all subjects are asked the same question and required to answer in terms of the same fixed response categories, which can then be directly compared across the whole sample. We use about half our evaluation effort on open-ended measures such as classroom observation and open-ended questions in interviews and in questionnaires, with the rest spent on comparable measures such as fixed-choice questions applied to the whole sample.
In previous applications of Integrative Evaluation, an important type of comparable measure has been either confidence logs or multiple choice quiz questions closely related to learning objectives, in order to measure learning gains. In MANTCHI, although these were extensively used, they are of less central importance, as the material of interest is not the primary exposition but the student exercises and the interactions and feedback associated with them. Furthermore, when we asked teachers why they used the exercises they did, their rationales seldom mentioned learning objectives but seemed to relate to aims for a "deeper" quality of learning. We tended to make greater use of resource questionnaires (Brown et al., 1996) to ask students about the utility and usability of each available learning resource (including the new materials), both absolutely and relative to the other available resources. In this way, we adapted our method to this particular project to some extent, although as we discuss later, perhaps not to a sufficiently great extent.
A typical study would begin by eliciting information from the teacher (the local deliverer in charge of that course) about the nature of the course and students, and any particular aims and interests that teacher had that the evaluation might look at. It would include some classroom observation, and interviewing a sample of students at some point. The central measures were one or more questionnaires to the student sample, most importantly after the intervention at the point where it was thought the students could best judge (looking back) the utility of the resources. Thus for an exercise whose product was lecture notes, that point was the exam since lecture notes are probably most used for revision. For a more typical exercise where the students submitted solutions which were then marked, we sometimes chose the time when they submitted their work (best for asking what had been helpful in doing the exercise) and sometimes the time when they got back comments on their work (best for judging how helpful the feedback was). An example questionnaire from a single study is given in an appendix.
This gives a large set of small sections. One way of grouping them would be by stakeholder perspective: what did the learners think? what did the teachers think? what would educational managers (e.g. heads of department) think? The grouping adopted here is slightly different, into learning issues (e.g. what can we say about learning effectiveness and quality), teaching issues (e.g. is it worthwhile having remote experts?), and management issues (e.g. tips for organising the delivery of ATOMs).
There is some correlation between this grouping and the methodological one of comparable versus open-ended measures. Because we knew in advance we were interested in the issues of learning effectiveness we designed comparable measures for this, whereas most of the management issues emerged from open-ended measures (and complaints). However the correspondence is only approximate: for most issues there is some evidence from both kinds of measure.
Instead, we may ask questions about the value of the novel learning resources offered to students in this project, both generally and in comparison to others available for the same topic: were they valued or not, important or of little impact? These questions were mainly addressed through using forms of the resource questionnaire (Brown et al., 1996) which asks students to rate the utility of the learning resources available to them.
The main evidence came from a short questionnaire which, since lecture notes find their main use when revision for exams is being done, was administered directly after the exam. Of 59 students, 98% responded; and of these 84% said they had referred to the communal lecture notes, 76% said they found them useful, and most important of all, 69% said they found them worth the effort of creating their share of them. They also, as a group, rated these web notes as the third most useful resource (after past exam questions and solutions, and the course handouts). This shows that, while not the most important resource for students, nor universally approved by them, this exercise had a beneficial cost-benefit tradeoff in the view of more than two thirds of the learners.
This is clearly a mixed story. The unfavourable responses are clearly associated in the data as a whole with a high number of complaints about delivery (rather than content) issues, which are discussed below, and also with the ATOM not being directly assessed or compulsory. An interesting point here, though, is that the method was most favourably received (at least for the UAN material) where the author was also the local deliverer, so that everything was constant except the formatting as an ATOM. This is direct comparative evidence on the ATOM format itself.
Students were asked "Was the 'workload' of the ATOM right for you?" on a 5 point scale. (This question appears in the appendix.) It is interesting that at a university where the ATOM was compulsory, only 32% of the students (who were in year 3 of 4 undergraduate years) rated it above (harder) than the neutral point, whereas at the university where it was not directly assessed (and the learners were a mixture of year 4 and M.Sc. students) 62% rated it as harder work than seemed right.
Open-ended measures suggest a mixed story. For instance at one university a class contained both year 4 undergraduates and M.Sc. students. The latter perceived much more benefit in having a remote expert than the former. Elsewhere, some students suggested that a benefit of remote experts was not that they had more authority (as national or international experts in the topic), but that because they were not in charge of assigning marks, the students felt freer to argue with them and challenge their judgements. This, if generally felt, would certainly be an advantage in many teachers' eyes, as promoting student discussion is often felt to be difficult. It is also interesting in that it is largely the opposite of what the teachers seemed to feel. For them, the remote expert gave them confidence to deliver material of which they did not feel they had a deep grasp, and to handle novel objections and proposed solutions that students came up with.
When the statechart ATOM was delivered at one university, there were 50 in the class, of whom 24 completed the questionnaire while 15 did the exercise, but only 9 did both. Of these, 6 used the TRAIL, and all 6 of them found it at least "useful". When the same ATOM was delivered at a second university, there were 11 in the class, of whom 9 returned the questionnaire while 6 did the exercise as well as the questionnaire. Of these, 2 used the TRAIL and both found it useful. Thus although we may say that 100% of those who used a TRAIL rated it a useful resource, the numbers using it (whether from choice or simply from happening to notice it) were too low to give much certainty about this positive result by themselves. Open-ended comments were a second source of evidence supporting the positive interpretation, although on an even more slender numerical base: "[They] Gave an indication of what was expected, though we felt the quality of the submissions was generally poor, we had no knowledge of the acceptable standard required.", and "Bad examples more useful than good. Can see how (and why) NOT to do things. This is much better than being told how to do something 'this way' just 'because'."
Still other kinds of evidence also suggest that this is an important resource to develop further. Firstly, theoretical considerations on the importance of feedback for learning support it. Secondly, in the CSCLN ATOM, the most valued resource overall was past exam questions and outline answers: a similar resource to TRAILs. Thirdly, and perhaps most important, in courses without such resources the open-ended responses frequently ask for more feedback on work, model answers, and so on, strongly suggesting a widespread felt need for resources of this kind.
These findings did not mainly emerge from comparable measures designed to test learning outcomes, but usually from open-ended measures that yield (among other things) complaints by students: mainly open-ended questions in questionnaires administered to whole classes, interviews with a subset of nearly every class we studied, and the direct classroom observations we did in a majority of our studies. In the next part of this report (part C) we present these findings, together with suggestions for responses. Full lists of them, usually with the student comments transcribed in full, were fed back to the course deliverers for use in improving delivery next time.
Nevertheless we were alert for any negative summative evidence, such as a sudden slump in exam results, which, if found, would certainly signal a problem requiring attention. No significant change in exam results emerged. Our own measures of learning outcomes consisted mainly of students' self-estimates (confidence) about topics. But although these measures indicated satisfactory learning outcomes, they could not be compared across years to get ATOM vs. non-ATOM measures, as a main consequence of the project was to introduce new curriculum content.
We failed to devise measures, as perhaps we should have, to test whether the aims expressed in the TPRs had been achieved. This might have been a more appropriate summative measure in this or any project about tutorial material than the usual measures of learning objective achievement such as quiz and exam results, which focus on facts and specific skills, not on depth of understanding, strength of belief, or appreciation of a problem.
Even given our evaluation designs, we had some significant problems in executing them. With hindsight, this is most obvious in our missing the opportunities for good comparisons for one ATOM delivered in several universities and for several ATOMs delivered in a single course at a single university. We missed making the most of these opportunities by failing to ensure that most or all the students took the ATOMs (increasing numbers opted out in some courses), and even where students did the ATOM work, we sometimes failed to get them to complete the evaluation instruments.
Allowing students the option of whether to do the ATOM, either explicitly or implicitly by awarding no direct assessment credit for the exercises, seems liberal but undermines the potential for learning from the innovation. It is probably a mistake to view this as giving students a free choice, and to infer from the low uptake that the material was inferior to other material. In fact, the assignment of credit by teachers seems to be one of the most powerful signals (as perceived by students) about what is important: no credit implies that the teacher doesn't value it. On the other hand, the impulse not to force students to take new material reflects the wisdom of experience in introducing innovations. Arbitrary funding constraints set this project at less than two years. A more sensible duration would be three years, with the first spent in preparations and studying the "status quo ante", the second in introducing the innovations in a rather uncontrolled way in order to catch and remedy the main problems while minimising the risk to those students' interests, and the third in a more controlled and uniform delivery. Our actions in our second and final project year turned out to be a compromise between the plans appropriate to a second year and to a third year (both desirable).
The failure to get a good response rate to data collection in many cases is again an issue related to the unwillingness of the teachers to impose pressure on their students. In this project, as in others, we found teachers to be staunch defenders of their students against the rapacious desire of the evaluators to extract huge amounts of data regardless of the danger of alienating students by exhausting them. Furthermore, possibly less creditably, teachers are reluctant to sacrifice "their" contact time to having students (for example) fill in a questionnaire, even though this is essentially an activity in which the teacher learns from the students about the state of their shared teaching and learning activity. Yet the use of such scheduled time for evaluation did turn out to be important. Our attempts to use lab. times were not very effective as only a minority of students on these courses would be present at any one time. Our attempts to use a WWW questionnaire met with a very low response rate, apparently mainly because the students did not have "processing web pages" as a regular activity, especially not one that involved the definite work of filling in a questionnaire. Email questionnaires, though somewhat better since students often did have replying to their email as a regular activity, still brought a fairly low response rate. As sales people know, personal contact and an immediate deadline (however artificial) both greatly enhance the response rate. Hence, while the reasonable interests of students must be considered, time at scheduled class meetings does seem a requirement of effective data gathering. An interesting case was the gathering of crucial data for the CSCLN ATOM at an exam. Even the evaluators were worried that this would cause protest, but in fact almost complete compliance was obtained without any objection, though the voluntary nature of the exercise was stressed. But on close consideration, it can be seen that the request was in proportion (5 minutes of questionnaire filling tacked on to 2 hours of exam), it was at a time when stress was relieved (after, not before, the exam) and when the subjects had no other urgent engagement to go to, and it had the personal element (the invigilator was the teacher who wanted the data and was making the appeal face to face).
With hindsight, then, we can see what went wrong and what we could have done better. Different project members exhibited different strengths, and it is probably no coincidence that the one who got the best evaluation data did the least in delivering ATOMs authored by others and setting up potentially interesting comparisons, while those who did the most at the latter were least effective in organising the extraction of a good data set. The project was effective in collaborating in producing material and in exchanging it for delivery in other institutions, but that collaboration should have been pushed through to a greater extent in planning the evaluations. Without effective data gathering, much of the other work loses its value. In effect, the evaluators had a plan and designed satisfactory instruments in every case, but sometimes failed to secure enough active cooperation to reap the benefits.
Integrative evaluation is, in contrast to controlled experiments, organised to a great extent around the teachers delivering the material, because it aims to study delivery in real teaching situations. The shortcomings in data gathering in this project show that, despite that orientation, it is necessary to secure some concessions from teachers for the benefit of the evaluation goals: scheduled class time for evaluation, and ensuring that students do use the material (by giving credit for it) so that the innovations can be learned from. While evaluation can and should be planned with the teachers and around their constraints, we have to insist, to a greater extent than we did in this project, on formulating in each case a definite evaluation plan capable of leading to definite findings, pointing out that otherwise much of the effort of the project is wasted. In fact what is not hard -- and should be done -- is to enlist student sympathy by presenting the aims of the whole intervention, how their data will affect the quality of future teaching, and the whole plan complete with the evaluation actions they will be asked to cooperate with.
Failing to get better comparative data on learning outcomes is probably of little importance. All the indications we have suggest that the quality of learning from ATOMs is at least as high as from other similar material (i.e. learning quality per topic was at least maintained on average), but that there was no increase in quality so large as to make that the important feature of the project. Instead, the real gains lie elsewhere: in the introduction of topics that the local deliverers of a course judge desirable but would otherwise not teach. In other words, the gains are in curriculum content, not in how a fixed curriculum is presented. Evaluation based on student responses is important to check that quality is at least maintained, but is unlikely to yield evidence about the value of changing the curriculum. The evaluation we did, while it could be improved, seems adequate for the summative purpose of quality checking, as well as for the formative purpose of guiding improvements in the overall delivery of the teaching.
Our studies suggest, however, that the main gains are in improved curriculum content and in staff development (expanding the range of topics a teacher is confident of delivering), but that different evaluation methods must be developed and applied to study these properly.
So with hindsight we should have spent less effort on learner evaluation and more on teacher evaluation: on whether they felt their work was better or worse, and more or less difficult. Cost measures would be a crucial part of this, as they will determine whether the project is carried forward: i.e. would the teachers use ATOM material without the extra motivation of participating in a funded project?
This report presents collected lessons, findings, and recommendations learned during the MANTCHI (1998) project as a result of 20 or so evaluation studies, concerning tutorials and how to deliver web-based tutorial support (ATOMs) to students on HCI courses at four universities. (An overview of their findings is given in another report: Draper & Brown, 1998.)
This report is structured into the following sections:
C1&2. The lessons (tutorials and delivery of ATOMs): if you want to know
what we recommend, just read these.
C3. The basis: a short discussion of the kinds of evidence underlying this
report, which you should read if you wonder just how much faith to put in
them.
C4. The theoretical view: a short discussion of the kind of lessons these
are.
2. Practical experience
Students reported the importance of practical experience of actual
interfaces, exercises, examples etc. and considered that they required more of
these on their courses along with more practical experience of "new
technology". Some students studying several formalisms suggested applying the
different formalisms to the same interactive device.
3. Feedback
Students valued feedback and considered small tutorial groups were ideal
for this. Even without the expected feedback, many still valued the practical
experience of exercises.
4. Collaboration
Students were on the whole enthusiastic about collaborating with
students in other universities. Those who had been involved in the first
MANTCHI collaborations identified some of the benefits (seeing /hearing other
students' experiences) and disadvantages (having to be available at the same
time as the other students) of collaborating.
5. Information about the students involved
Students at different levels/courses may have different "requirements"
and may require different kinds of tutorial support.
6. Video conferences
Students who had been involved in a video conference considered that
these should only be held for a specific, well-defined purpose. Technical
problems can interfere with a conference, especially if the lecturers have not
experienced video conferencing before.
2. Web-based vs. Paper Resources
Web-based instructions and resources may also need to be given to
students on paper. During student use of some ATOMs, lecturers handed out
paper-based instructions and resources. In some cases this was because the
students were unable to access the web-based resources; in other cases it was
because the lecturer wished to give the students additional instructions which
superseded those on the ATOM web page.
Students usually download and print the web-based resources, which is less efficient than having these resources centrally copied onto paper and handed out. Students reported that though it can be useful to access information etc. electronically, this is not always possible, and in any case they like having a hard copy that they can make notes on. A hard copy also covers the problem of the network not functioning when the students need it. It is also likely that the students will not have continuous access to computers while completing their assignments.
3. Information about the students involved
Knowledge of the students' previous experience is useful to lecturers
involved in collaborative teaching before lecturing/conducting a remote
tutorial. Local teachers need to brief remote teachers on this.
4. Assessment of ATOM tasks
If the ATOM tasks are not directly assessed, students may not complete
the tasks. Where courses contain more than one ATOM and the tasks are not
directly assessed, students may be less likely to complete the tasks for the
later ATOMs. If the ATOM tasks are not directly assessed, students are more
likely to report that the workload is too heavy than if it is directly
assessed.
5. Students' Expectations
Expectations should be clear. ATOMs may involve remote experts,
web-based instructions, and learning resources as well as some "in house"
lectures, handouts and other resources. Students require information about what
is available to them and what is expected of them in the way of self-tuition
(resource-based learning) etc.
6. Content of ATOM
It should be clear if local instructions about the assignments
(completion, submission etc.) differ from those on the ATOM home page. The ATOM
(or the course web page) should contain clear information on: which resources
will be delivered locally (in house); what to use (e.g. a real physical radio
alarm in an exercise on formal descriptions); access passwords; the date
solutions should be submitted; exactly how solutions should be submitted; and the
approximate date on which web-based feedback will become available.
7. Time
Instructions about the ATOM resources and assignments have to be sent to
students in plenty of time. Students do not all check their e-mail every day.
Students admitted that even if they are given information in plenty of time
they may not act on it. However where web-based resources (or any resources)
have to be used before an assignment is to be attempted, students have to be
given clear instructions in plenty of time for them to be able to plan and use
the resources. They have to have the information to allow them to manage their
time effectively.
8. Remote Expert and Local Deliverer
It should be clear to students whether the "in-house" teachers are
"experts" or "facilitators". Each ATOM has a domain expert. The lecturer
delivering the ATOM to his/her students need not be an expert in the subject.
It is useful if the students are made aware that the lecturer may be
"facilitating" rather than "teaching" and also that the work will involve
"resource-based" learning utilising the ATOM web-based resources and a domain
expert.
9. Feedback from Domain Expert
Students should be alerted when the web-based feedback on their
solutions is available. They should also be alerted when feedback on the
solutions from other universities is available. That is, posting them on the web
without e-mailing an announcement is unsatisfactory.
10. ATOMs involving Group Work
Group work involves extra organisation and time which has to be taken into
account. Students recognised the benefits of group work, but found that it
took more time than working in pairs or alone. This appeared to matter more
where the task was not directly assessed. If possible, group work should be
mainly within the regular timetabled sessions of the course, to avoid clashes
between courses. Similarly, video conferences should also be within regular timetabled
sessions. (The general problem is that of organising group meetings and
irregular class meetings, which suddenly require new times to be found in the
face of, for many students, conflicting classes and paid employment.)
11. New Types of Resources
Students may need to be encouraged to access and use new types of
resources.
Students varied in their use of, and reaction to, the resources available on
an ATOM. Many students did not use the TRAILs and other solutions and feedback.
Until they become familiar with such resources they may need to be encouraged
to use them. However we do have some evidence of the resources, including
solutions and feedback (tertiary resources), being re-used by some students
while completing their projects/essays.
12. HyperNews Discussion Forum
In future it may be necessary to manage the discussion in some way as
the discussion forum was hardly used. During the use of the first two ATOMs
this was really just used as a notice board for submitting solutions and
getting feedback from the "Remote Expert". One student who did not attend the
ATOM lab/tutorial sessions reported using the solutions and feedback on
Statecharts and ERMIA to learn/understand these formalisms. In later ATOMs,
solutions were submitted on web pages.
13. Collaboration between Students from different Universities
Rivalry between students at different Universities can result from ATOM
use. Although this can be a good thing, we have to be careful to prevent the
collaborations from discouraging some students from actively participating.
Collaboration is mainly perceived as a benefit by students but on one of the
ATOMs involving students at two Universities, comments from students at both
universities indicated some rivalry and annoyance at comparisons used in the
feedback.
14. The Integration of ATOMs into Courses
ATOMs are discrete units. The point has been raised that ATOMs could
fragment a course, reducing the possibility of relating that topic to other
parts of the course. This could be a problem especially if several are used.
It is something that should be kept in mind. Integration of the ATOMs may be
improved by asking students to write a report involving the topics studied on
the ATOMs used, as this appeared to be successful, with some students referring
back to the solutions and feedback.
These findings did not mainly emerge from comparable measures designed to test learning outcomes, but usually from open-ended measures that yield (among other things) complaints by students: mainly open-ended questions in questionnaires administered to whole classes, interviews with a subset of nearly every class we studied, and the direct classroom observations we did in a majority of our studies. Full lists of the lessons, usually with the student comments transcribed in full, were fed back to the course deliverers for use in improving delivery next time. For one example item, we give details of the evidence on which the finding and recommendations were based. Should it be important to clarify an issue first identified by open-ended measures, then a more systematic measure can be applied. For instance, when the difficulties of group-work, and claims about high work load appeared, we then designed some systematic measures of these to investigate them further. Similarly, should one of the lessons in this report be particularly important to you, then you should include some specific measures of it in your own evaluations.
In one study, all were asked if they had any problems while accessing the web-based resources. 25% reported some problem, examples being "Password problems plus early setbacks with software.", "On learning space -- crashes". In a second study, all students were asked "Did you experience any difficulty gaining access to any resources / activities during the use of the ... ATOM?" 3 (13.6%) reported problems: "Remote web page", "server was down from where I had to access on-line.", "Lab was too busy during lab sessions". They were also asked about resources for which there was insufficient time, which yielded comments including these: "Remote web page was too remote, took a very long time to view", but another student said "None! Most are web-based and therefore can be accessed at any time, when most convenient". In a third study, students were asked "What else would have helped at the two tutorials this week?" which elicited an 83% response rate including this long reply: "Computer equipment that worked! A lot of time was wasted in tutorials trying to fight with the equipment being used. It is not a necessity to teach through the use of computers when teaching to a computer course. In fact the opposite is true because computer students above all recognise the problems that can occur by over complicating a problem by using advanced computing e.g. the newsgroup on a web site (where a simple newsgroup added on to [the] news server would have achieved the same inter-communication and been far more reliable/faster than web browsing) and using the scanner (where simply drawing the chart on the computer would have been much faster and produced much clearer results for everyone to view). This is not a criticism of the ATOMs or the teaching method but more of the implementation which although seeming perfectly reasonable proved only to hinder our progress in learning about this topic!" In a fourth study, students were asked "Did you print the ATOM information and scenario from the Web?"; 10 (45.5%) said yes, 12 (54.5%) said no. They were asked if they used the paper or web form: 7 (31.8%) said paper, 10 (45.5%) said web, 4 (18.2%) said both. They were asked to explain why; among the numerous comments were "I like to save paper", "I took the work home", "some documents don't print well", "Web-based was easier to refer to related documents because of links". In a fifth study, printed versions were provided but students were asked if they had already printed out the web documents: 25% said yes. When asked which form they used, 45.8% used paper form, 20.8% the web form, 12.5% used both, and 20.8% didn't answer. In a sixth study, when asked how the ATOM compared to traditionally delivered units, one student said "Personally, I do not like using the net as a learning aid, I spend enough time working on a PC as it is without having to rely on the World Wide Wait to scroll through text on screen. Call me old fashioned, but I do prefer reading from books/journals/papers - a bit more portable and quicker to access - I wish I'd recorded how much time I waste during a week logging on, waiting for Win95 to start, waiting for Netscape etc. etc. etc. If I have an hour free in between lectures it is just impractical to get any work done on a PC." In a seventh study, when asked to comment on "How useful do you consider the ... ATOM Web-based resources were to you in learning & understanding ...?", two explanatory comments (for low usefulness ratings) were "Items in pdf format prohibited many people viewing the docs", and "Paper-based notes are easier to manage and access. Paper notes don't crash!"
In some descriptions of the educational process these issues are called delivery or implementation (cf. Reigeluth, 1983). From our perspective of seeing learning as the outcome of a whole set of activities (not the one-way delivery of material), we categorise these issues as the management of the learning and teaching process: they are about co-ordinating and organising those activities, rather than designing their content. This view is presented as an extension to the Laurillard model in Draper (1997a), and is seen, at bottom, as a process of negotiation (tacit or explicit) between teachers and learners.
Many of the lessons in this report may seem obvious to readers, not so much from hindsight as because they are familiar points in the education literature. They are often rather less familiar to higher education teachers, who seldom read that literature and who have very many such practical details to deal with in delivering any course (another reason for calling them "management" issues). This suggests that many gains in learning and teaching quality might be made, not by technical and pedagogical innovation, but by attention to best practice at this management level, backed by integrative evaluation to detect and feed back those points that emerge strongly as issues in each particular case.
As with studying learning benefits, the issues are likely to be complex and the literature is much less developed (but see, for example, Doughty, 1979; Doughty, 1996a, 1996b, and Reeves, 1990). Identifying the kinds of benefit (and disbenefit), some of them unanticipated, is at least as important as taking measurements of those kinds that were expected. This paper reports an attempt at this, based on interviewing 10 teachers involved in an innovative project on remote collaborative tutorial teaching.
A key emerging feature of the project was its organisation around true reciprocal collaborative teaching. All of the four sites have authored material, and all four have delivered material authored at other sites. Although originally planned simply as a fair way of dividing up the work, it has kept all project members crucially aware not just of the problems of authoring, but of what it is like to be delivering to one's own students (in real, for-credit courses) material that others have authored: a true users' perspective. This may be a unique feature. MANTCHI has in effect built users of teaching material further into the design team by having each authoring site deliver material "not created here". It is also a system of collaborative teaching based on barter. This goes a long way to avoiding organisational issues of paying for services. However the details may not always be straightforward, and the future will depend upon whether costs and benefits balance favourably for everyone: the core reason for the present study.
Another thing those evaluation studies could not show was whether we could expect teachers to use the ATOM materials beyond the end of the project and its special motivations for the participants. The educational benefits seem to be sufficient to warrant this, but not enough to provide an overwhelming reason by themselves regardless of costs and other issues. This would depend upon whether teachers found them of overall benefit. Another kind of study was needed to investigate these issues.
From this point of view of time costs, there are three types of ATOM. In the first group described below, a remote expert was actively involved in the delivery and this was a cost not incurred in other ATOMs. In the second group, students interacted between institutions, incurring extra coordination costs, and implying that deliverers at both institutions were involved simultaneously. In the third group, there was no dependency during a delivery on people at a remote site (although there were dependencies on remote resources such as web sites). This grouping is only for the purpose of bringing out types of cost; in a classification in terms of pedagogical method, for instance, the CSCLN and remote presentation ATOMs would be grouped together as being based on teachback.
The CBL (computer based learning) evaluation ATOM concerned teaching students how to perform an educational evaluation of a piece of CBL. The students had to design and execute such an evaluation, and write a report that was assessed by the local deliverers. The interaction with the remote expert was by two video conferences, plus some discussion over the internet (email and a web-based discussion tool). The UAN, ERMIA, and Statechart ATOMs each concern a different notation for specifying the design of a user interface. These ATOMs each revolved around an exercise where students, working in groups, had to specify an interface in that notation. The solutions were submitted over the internet, marked by the remote expert, and returned over the internet.
The CSCW ATOM is a group exercise, which involves students working in teams assembled across two institutions to evaluate an asynchronous collaboration tool (BSCW is suggested). They first work together using the tool to produce a short report related to their courses, and then produce evaluation reports on the effectiveness of the tool. The formative evaluation ATOM takes advantage of the fact that students at one university are required to produce a multi-media project as part of their course. In this ATOM, students from a second university are each assigned to one such project student, and perform an evaluation of that project, interacting with its author remotely through email, NetMeeting, and if possible video conferencing. The ATOM on remote student presentations is a form of seminar based course, where students take turns to make a presentation to other students on their reading, and are assessed on this. In this ATOM, these presentations are both to the rest of their class and, via video conference, to another class at another university.
The CSCLN (Computer Supported Cooperative Lecture Notes) ATOM required students to create lecture notes on the web that accumulate into a shared resource for the whole class, with one team assigned to each lecture. There was no role for a remote expert. The website evaluation ATOM involves the study and evaluation of three web sites on the basis of HCI and content. Students complete the exercise on their own, over the course of a week, and submit and discuss their evaluations via Hypernews. In the website design ATOM students work in groups to produce a web site; there is no remote collaboration. In this ATOM, as in fact in all the ATOMs in this group, the exercise could be reorganised to involve groups split across sites, as in the previous group.
N.B. in the case of the ERMIA ATOM, the interviewee gave a single time (24 hours) for finding resources and authoring combined.
A second problem is that of accuracy in the sense of comparability (not systematic underestimation) of the times given. Many respondents noted that it is very hard to estimate "time" in this context. The time it takes to do something, such as physically type in an ATOM description, may not have much relationship with elapsed time -- from, say, the original outline of the ATOM to the final, usable, version. People also mentioned "thinking" time and "research" time, e.g. "Do I count an hour in the bath spent thinking about it? An hour at home? An hour in the office?", and "I regard that as a free by-product of research thinking!". Nevertheless, every person interviewed felt able to give rough estimates. These vary very widely, e.g. from 0.5 hour to 24 hours for writing an ATOM description, so it may be that the implicit definition of "time" was indeed different between respondents.
The research aspect of MANTCHI will have increased the time costs because of the need to monitor various details, both at the time and in retrospect. This "costing" exercise has added yet a further hour for each of the dozen or so lecturers involved.
Authoring in general often has a significant element of iterative design to it, meaning that first draft authoring is often quite cheap, but we need to allow for the cost of revisions after a delivery or two, as these are really part of the process. Thus the authoring column in the table represents an estimate for a teacher considering joining in an ATOM exchange, but should not be compared directly with authoring times in other media (such as textbook writing) where some revisions would be part of the author's work. Conversely, the "revision" column combines (confounds) revision work that increases the quality with revision work simply to adapt the written material for a new occasion (e.g. replacing times and places in handouts, modifying URL links). N.B. the times for the two ATOMs not yet delivered are estimates of the latter time for adaptation. Future work should distinguish these two types of revision.
In summary, in general authoring might have four related categories: creativity or design, collecting resources to be used, actual writing, revision of the content in the light of the material having been used.
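As a way of keeping these categories distinct in any future costing exercise, they could be recorded separately per ATOM. The following is a minimal illustrative sketch only: the field names and figures are ours, not data from the project, and it also separates the two kinds of revision discussed above.

```python
# Purely illustrative sketch of a per-ATOM authoring cost record, separating the
# four authoring categories listed above and splitting "revision" into the two
# kinds the text argues should be distinguished. All names and figures are invented.
from dataclasses import dataclass

@dataclass
class AtomAuthoringCost:
    design_hours: float               # creativity / design of the exercise
    resource_collection_hours: float  # finding and collecting resources to be used
    writing_hours: float              # actually writing the ATOM description and handouts
    quality_revision_hours: float     # revisions that improve content after a delivery
    adaptation_hours: float           # revisions merely adapting material to a new occasion

    def total(self) -> float:
        return (self.design_hours + self.resource_collection_hours +
                self.writing_hours + self.quality_revision_hours +
                self.adaptation_hours)

# Example record with invented figures:
print(AtomAuthoringCost(4, 3, 8, 2, 0.5).total())
```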
Time for marking is proportional to the number of solutions marked. The table gives the time per solution, which must be multiplied by the number of solutions in any attempt to predict times for future deliveries. This is one of the biggest issues in agreeing an exchange involving a remote expert: if class sizes are very different, marking loads will be different. There are several possible ways forward: different group sizes might mean the same number of solutions to mark even with different class sizes; a remote expert might just mark a sample with exemplary feedback, leaving the rest of the marking and commenting to be done by the students and/or local deliverer.
A related point to note is the use of groupwork in most of the ATOMs, since each group only produces one solution to be marked: thus the fewer the groups, the less time needed for marking. However larger groups certainly make it harder for students to agree meeting times, and may be less effective in promoting individual learning. Thus there is probably a strong tradeoff between learning quality and teacher time costs here, which is even more obvious in cases where the student work never gets marked at all or the feedback is of low quality.
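As a rough illustration of the arithmetic just described (using invented figures, not the project's measured times), the marking load for one delivery can be sketched as follows: the number of solutions is the class size divided by the group size, and total marking time is that count multiplied by the time per solution.

```python
# Minimal sketch of the marking-load arithmetic described above.
# The hours-per-solution figure and the class/group sizes are illustrative only.

def marking_hours(class_size: int, group_size: int, hours_per_solution: float) -> float:
    """Estimate total marking time for one delivery of an ATOM exercise."""
    n_solutions = -(-class_size // group_size)  # ceiling division: each group submits one solution
    return n_solutions * hours_per_solution

# The same exercise delivered to a large class and to a small one:
for class_size, group_size in [(50, 4), (11, 3)]:
    print(class_size, "students in groups of", group_size, "->",
          marking_hours(class_size, group_size, 0.5), "hours of marking")
```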
One issue that emerged from the interviews was that time is not a currency with a fixed value. Time spent or saved at a period when other pressures on time are high is more valuable than at times of low pressure. Thus being able to serve as a remote expert at a low pressure time in return for getting the services of a remote expert at a high pressure time could be a very "profitable" trade ("The ATOM fell at a time when I was very busy, so having X do the marking was very useful"); while the converse could be disadvantageous even if the durations involved seem to balance. ATOMs move time costs around the calendar, which may be good or bad for the participants.
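The point can be made concrete with a small sketch (the pressure weights are invented for illustration): if each hour is weighted by how busy the period is, a trade of equal raw hours can still show a clear net gain for both parties.

```python
# Illustrative sketch: hours weighted by how pressured the period is.
# A weight of 1.0 stands for a normal week; the weights below are invented.

def weighted_cost(hours: float, pressure: float) -> float:
    """Subjective cost of spending the given hours under the given time pressure."""
    return hours * pressure

hours_traded = 6.0
cost_of_marking_for_partner = weighted_cost(hours_traded, pressure=0.8)  # done in a quiet week
value_of_marking_received = weighted_cost(hours_traded, pressure=2.0)    # relieved in a very busy week
print(cost_of_marking_for_partner, value_of_marking_received)  # 4.8 vs 12.0: same hours, different value
```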
Another issue is also important: how difficult an activity feels to the teacher (perhaps measurable in terms of their confidence about performing the activity). The fundamental advantage to the trade behind ATOMs is that a remote expert usually feels it is easy to field student questions on the topic (or comment on unusual solutions), while the local deliverer would feel anxious about it. The time spent in each case might be the same, but the subjective effort, as well as the probable educational quality, would be significantly different.
A kind of cost not visible in this study is the groundwork of understanding what an ATOM is. For project members, this was done at a series of project meetings and perhaps in thinking about the ATOMs they authored, and is probably missing from all the time estimates. If new teachers were to use the ATOM materials, this learning of the background ideas might be a cost. On the other hand, this is comparable to the costs of any purchaser in gaining the information from which to decide whether to buy: a real cost, but often not considered. With ATOMs, a teacher would have to learn about ATOMs before making the decision whether to "buy in" at all, rather than while they were using them.
Finally, there were clearly costs in using some of the technology, e.g. setting up video conferences and getting CSCW tools to work. As noted below, much of this can be written off as a cost of giving students personal experience of the technology, as is appropriate in the subject area of HCI. This would not apply if the ATOM approach were transferred to another subject area while retaining those communication methods. However it is also true that such costs will probably reduce rapidly as both the equipment and staff familiarity with it improve.
The role of a remote expert will depend upon where on the Perry spectrum a course is pitched. It is however well suited to an approach where students are expected to be able to learn by reading and by attempting to do exercises, but will benefit from (expert) tutors' responses to their questions and examples not directly covered in the primary material. From the teachers' viewpoint, the interviews indicated that the most important function was to deal with those unexpected questions and to comment on student solutions to exercises (which also requires the ability to judge new variations).
It might be nice for local deliverers if they had a remote expert to give lectures, do all the marking, and so on; but the most valuable actions are probably to give some discussion and to answer questions after students have done the reading or heard a lecture, and to give comments on the good and bad features of student solutions (even if they do not decide numerical marks).
Having said that, any future study should still continue to look for, and expect to find, new categories of time and other costs and benefits. Future studies should repeat and extend the interview approach of this study in order to do this. Furthermore comparative studies of other teaching situations would be illuminating, as little is known of how higher education teaching work breaks down into component activities.
Finally, the point of cost-benefit studies is to support, justify, and explain decision-making: in this case, whether it is rational to join an ATOM scheme for collaborative teaching. The actual measured costs and benefits are just a means to that end. It would therefore be valuable to do some direct studies of such decision-making, for instance by interview or even think-aloud protocols of teachers making such decisions. That might well bring up more essential categories that need to be studied.
From the authoring viewpoint of creating exercises and the associated handouts (assuming authors use their own material), the author receives materials they do not have to write in exchange for improving something they have or would have already written. The advantage shows up in clear time savings (at least 3 for 1 for a teacher who authors one ATOM and adopts two), in lower subjective effort, and in higher quality topics (i.e. a gain in curriculum quality).
A further potential gain comes from the fact that each exercise will be re-used more often, because it is used at several institutions, than it normally would be. This reduces the authoring cost per delivery, and will often lead to higher quality as feedback leads both to revisions and to the use of stored past solutions and feedback (a feature of MANTCHI not dealt with in this paper).
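A back-of-envelope sketch of these two claims is given below, using invented hours rather than the measured figures in the table: the "at least 3 for 1" gain for a teacher who authors one ATOM and adopts two, and the fall in authoring cost per delivery as an ATOM is reused at several sites.

```python
# Back-of-envelope sketch of the authoring trade; all hours are invented placeholders.

authoring_hours = 20.0   # hypothetical cost of authoring one ATOM

# Author one ATOM and adopt two authored elsewhere: three topics' worth of
# material for one topic's worth of authoring effort.
topics_obtained = 3
topics_authored = 1
print("material gained per unit of authoring effort:", topics_obtained / topics_authored)

# Reuse across several institutions spreads the authoring cost over more deliveries.
for deliveries in (1, 3, 6):
    print(deliveries, "deliveries ->", authoring_hours / deliveries, "authoring hours per delivery")
```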
From the viewpoint of the local deliverer as a course organiser, adopting an ATOM is less work than creating one's own. It is less stressful because its quality is supported by an expert author, and by having been trialled elsewhere, and because its delivery may be supported by a remote expert. Above all, it gives a higher curriculum quality. Within the project, teachers dropped the topics they least valued on their own courses and requested or selected ATOMs from others that they felt most increased the value of the set of topics on their course.
From the viewpoint of the work of local delivery itself, there are three cases. In ATOMs without any use of a remote site, the work is the same. If a remote expert is used, then local deliverers donate some tutor time on a subject they are highly confident about in return for the same time received on a topic they have low confidence about. In contact time, this may not be a saving as some local deliverers will attend as facilitators for the occasion. However such "contact time" does not require the preparation it normally would. For marking, there is a negotiated balance, so no time should be lost. In ATOMs with remote student interaction, there is an extra cost of coordinating the courses at two institutions, which has to be balanced against the pedagogical gains of this form of peer interaction for students, and any relevant gains due to practice with the technology involved.
Thus in return for savings in authoring time, and a gain in curriculum quality (better set of topics covered) and also the quality of individual materials (more often tested and improved, accumulation of past student work as examples), there is either no penalty in net delivery time, or some increase in time spent facilitating (as opposed to more active interaction), with the added staff development reward of the deliverer becoming more confident of their expertise in delivering this material in future without remote support.
Our studies suggest, however, that the main gains are in improved curriculum content (replacing less valued topics by ones judged to make for better course content), and in staff development (expanding the range of topics a teacher is confident of delivering). In addition, the cost-benefit analysis (part D above) suggests that there are clear net gains in teaching collaboration organised on our model.
The results of the MANTCHI project, supported by our evaluation studies, might be listed as:
Date: 9/3/98    Gender: M / F    Age: .....    Matric. Number: ............
Degree: M.Sc. / B.Sc. / B.Eng. C&E / B.Eng. ISE
The following questionnaire is concerned mainly with your use of the UAN
& Cognitive Walkthrough ATOM and other learning resources during your
course. All answers are confidential and will not be attributed to individual
students. Your matric. number is only required in order to link this
questionnaire to any future questionnaires. Thank you for your participation
in this project.
Dr. Margaret I. Brown, Dept of Psychology, Adam Smith Building, University of Glasgow, Glasgow, G12 8QQ. mag@psy.gla.ac.uk MANTCHI: http://greenock-bank.clyde.net.uk/mantchi/
Q.1. Which Formalisms have you already studied on this or
previous Formal Methods Courses or in your own time? (Only tick UAN and
Statecharts if you studied them before 1998)
UAN
ERMIA
Statecharts
BNF
State Transition Diagrams
Petri Nets
Event CSP
Object Z
Other
(Please name)
Q.2. How easy do you find it to understand the underlying concepts behind Formalisms and their use and application? Please indicate on the scale of 0--4 below.
Q.3. Please indicate by ticking the relevant box how confident you feel that you are able to complete the following objectives.
a. Comment on any effects the Group work (including discussions with
other Groups) had on your reported confidence levels for the
objectives.
b. Comment on the use of Group work as opposed to individual work in
your University courses. (e.g. benefits and disadvantages: indicate group size
referred to).
Q.4a. Learning Resources/Activities. In the table below,
mark each resource/activity you used while learning about UAN &
Cognitive Walkthrough. If used, please mark how useful you consider each
resource or activity was to you in learning and understanding UAN &
Cognitive Walkthrough and also its use and application. Please give reasons for
your answers. (Remote Expert = Phil Gray: GU)
[Table: Resource | Usefulness of Activity / Resource (rating columns)]
Q.4b. If you considered any of the resources "not very" or "not at all
useful" what did you do, or what will you do to compensate?
Q.4c. Will you look at the submissions from Glasgow University students
when they become available as an additional ATOM Resource?
YES
NO
How useful do you consider they will be as an additional learning resource?
Q.5. Time available to use resources or perform activities for the
UAN & Cognitive Walkthrough ATOM. Please list the resources that you
consider you had:
a) not enough time to use b) too much time allocated for their use
Refer to the resources listed in Q.4. You can state the resource number
instead of the name of the resource.
Insufficient time for use:
Allocated too much time:
Q.6. Did you experience any difficulty gaining access to any resources /
activities during the use of the UAN & Cognitive Walkthrough ATOM?
YES NO
Please list the resources/activities and explain the problems. (Refer to the
resources listed in Q.4. and also to the labs, computers, software, technical
help etc. You can state the resource number instead of the name of the
resource.)
Q.7. Did you have any problems when doing the following:
Q.8 a. Professor Kilgour provided paper-based copies of the ATOM
Resources (except Toronto information)
Had you already printed the ATOM information and scenario from the Web?
YES
NO
b. Did you use the information/reference papers in paper-based and/or
Web-based form?
Paper-based form
Web-based form
Explain:
c. How and where do you download and print information from the Web?
Q.9.
a. Was the "level" of the material in the ATOM right for
you?
Indicate on the scale of 0 -- 4 below.
Q.10 a. How much did you benefit by taking part in an exercise with
input from a remote expert
(The UAN & Cognitive Walkthrough ATOM)?
Indicate on the scale of 0 -- 4 below.
b. Please list the benefits and any disadvantages of the
collaboration.
c. Were any benefits or disadvantages unexpected? YES NO
Q.11. What do you consider contributed most to your understanding of the
underlying concept of UAN & Cognitive Walkthrough?
Q.12. a. Do you consider that you still require extra
information/training to help you learn/understand/apply UAN & Cognitive
Walkthrough?
YES
NO
b. Explain the type of extra information/training you consider you require.
c. What else would have helped you to complete the ATOM tasks?
Q.13. You have started studying ERMIA by accessing and using the MANTCHI ERMIA ATOM. Have you any comments on your use of this ATOM so far?
Q.14. Please give any other comments on your use of MANTCHI ATOMs and the use of other Web-based ATOMs and on-line discussion forums on University Courses in the future.
Q.15. Have you any other comments?
Doughty, G. (1996b) "Making investment decisions for technology in teaching" (University of Glasgow TLTSN Centre) [WWW document] URL http://www.elec.gla.ac.uk/TLTSN/invest.html
Doughty, G., Arnold, S., Barr, N., Brown, M.I., Creanor, L., Donnelly, P.J., Draper, S.W., Duffy, C., Durndell, H., Harrison, M., Henderson, F.P., Jessop, A., McAteer, E., Milner, M., Neil, D.M., Pflicke, T., Pollock, M., Primrose, C., Richard, S., Sclater, N., Shaw, R., Tickner, S., Turner, I., van der Zwan, R. & Watt, H.D. (1995) Using learning technologies: interim conclusions from the TILT project, TILT project report no.3, Robert Clark Centre, University of Glasgow. ISBN 085261 473 X
Doughty, P.L. (1979) "Cost-effectiveness analysis tradeoffs and pitfalls for planning and evaluating instructional programs" J. instructional development vol.2 no.4 pp.17, 23-25
Draper, S.W. (1997a, 18 April) "Adding (negotiated) management to models of learning and teaching" Itforum (email list: invited paper) [also WWW document]. URL: http://www.psy.gla.ac.uk/~steve/TLP.management.html
Draper, S.W. (1997b) "Prospects for summative evaluation of CAL in higher education" ALT-J (Association of learning technology journal) vol.5, no.1 pp.33-39
Draper, S.W. (1998) CSCLN ATOM [WWW document]. URL http://www.psy.gla.ac.uk/~steve/HCI/cscln/overview.html
Draper, S.W., & Brown, M.I. (1998) "Evaluating remote collaborative
tutorial teaching in MANTCHI"
[WWW document] URL http://www.psy.gla.ac.uk/~steve/mant/mantchiEval.html
Draper, S.W., Brown, M.I., Henderson, F.P. & McAteer, E. (1996) "Integrative evaluation: an emerging role for classroom studies of CAL" Computers and Education vol.26 no.1-3, pp.17-32
Laurillard, D. (1993) Rethinking university teaching: A framework for the effective use of educational technology (Routledge: London).
Mayes, J.T. (1995) "Learning Technology and Groundhog Day" in W.Strang, V.B.Simpson & D.Slater (eds.) Hypermedia at Work: Practice and Theory in Higher Education (University of Kent Press: Canterbury)
MANTCHI (1998) MANTCHI project pages [WWW document] URL http://mantchi.use-of-mans.ac.uk/
Perry, W.G. (1968/70) Forms of intellectual and ethical development in the college years (New York: Holt, Rinehart and Winston)
Reeves,T.C. (1990). "Redirecting evaluation of interactive video: The case for complexity" Studies in Educational Evaluation, vol.16, 115-131.
Reigeluth,C.M. (1983) "Instructional design: What is it and why is it?" ch.1 pp.3-36 in C.M.Reigeluth (ed.) Instructional-design theories and models: An overview of their current status (Erlbaum: Hillsdale, NJ)
TILT (1996) TILT project pages [WWW document] URL http://www.elec.gla.ac.uk/TILT/TILT.html (visited 1998, Aug, 4)