Since evaluation is a procedure -- something you do -- doing it is probably the best way of learning it.
And doing it is the best way of linking it to a student's personal experience: another good principle. This also suggests the exercise might be most powerful if students evaluate a course they are themselves taking, although it may be confusing to play both subject and evaluator in one situation.
In the first delivery, the local deliverers felt that the main value of the remote expert came in the second video conference, commenting on students' initial findings. However, if the first video conference were better timed, so that students had actually read the assigned reading and drafted proposed designs for their evaluations, then discussing those designs would also be a good use of the remote expert.
Firstly, we had to decide on the content of the exercise for this delivery, i.e. which CBL delivery was to be evaluated. My preference was to have the students evaluate another set of learners taking some CBL material. This was deemed too hard to organise (apart from anything else, it would require the timetabling constraints of two classes to be compatible). It would have been more realistic, in that the subjects and clients would have been separate from the evaluators.
Instead, it was decided to evaluate some other item in the same course. This means the client teachers are the teachers of this course, and the students are both subjects and evaluators. This is good in some ways: the students experience being both subject and evaluator, and they do not have to persuade their subjects to participate or address confidentiality issues.
Secondly, we had to schedule and book the video conferencing to fit with class times, the availability of the remote expert, and the availability of the video conferencing network and studio.
(In principle we could, and should, also have planned when the remote expert would be responsive by email ...)
The exercise was presented to the students (and teachers) via web pages.
We used both email and conferencing tools for discussion. However, we failed to establish a firm structure for how these should be used as part of this ATOM. There was a notion that, as part of the whole course, there would be a discussion topic each week led by the local teachers; but this was not enforced by assessment or otherwise, and the remote expert's role in it was not laid down rigidly.
We organised two video conferences, booked in advance, between the remote expert and the class.
Neither evaluation was great. The group evaluating the CTL module designed some good instruments (a resource questionnaire and a quiz) but did not use them; they ended up doing a course-review-style questionnaire plus a user interface evaluation of some software used in the course, and produced no recommendations.
The other group did more in the way of talking to the lecturer involved: they ran an email questionnaire of the form "Do you feel more confident about topic X after the session?", and a "focus group" interview (which reads more like a set of individual questionnaires) on the use of a video in the teaching session.
Both reports are fairly coherent and sensible, apart from the feeling that they somewhat "miss the point" of integrative evaluation and lean instead towards summative evaluation.
In other ATOMs there was an issue of students not being notified when feedback was in fact posted on the web. The converse is also an issue: a remote expert may not know when discussion is appearing in a web tool, particularly because, for the class and the local deliverers, that discussion sits in a context that includes face-to-face conversations and reminders.
The argument for smallness is that this course, like many others, is mainly organised around weeks, with one work topic per week. That is why ATOMs are normally meant to be of a size that fits one week, allowing easy design of courses; if they grow, they make course organisation more difficult. The argument for bigness is really implied (though I did not recognise this originally) by the rationale given above (in the Rationale section): the best way to come to believe in the virtue of evaluation is to do it yourself and experience the surprises. That entails an exercise that is unlikely to fit into a single week's work. The rationale came originally from a similar exercise on HCI evaluation; there, evaluation is so important that it justifies taking a large part of students' time. It is not so clear that the same effort is justified in a CBL course: that depends on the values of the deliverers (the course organisers).