Two weeks ago Ian Hart posted a message about "accountability": gathering evidence for the benefits of an IT investment in CALL. There hasn't been much discussion of it, apart from a comment from Ian saying the discussion has been "depressing". I have had too much to say, and too little ready structure, to reply quickly.
What I have to say is really about accountability for CAL in higher education generally; you will have to judge for yourselves how much it applies to CALL and the situation Ian refers to.
First, here at Glasgow we have been thinking about how to tackle this issue. Gordon Doughty has developed a talk about this: the slides are here, although I don't know how useful they are without Gordon talking to them.
One of Gordon's basic moves is to say that our measures are usually very crude, so we should (at least for now) consider simplified cases:
1) Cases where costs stayed about the same, but learning quality went up
2) Cases where learning quality was maintained, but costs went down
3) Cases with both gains: costs down, quality up
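To make Gordon's cases concrete, here is a minimal sketch in Python (my own illustration; the tolerance band and the example numbers are made up, and nothing here comes from Gordon's slides) of how you might classify an intervention once you had even crude before/after measures of cost and quality:

    def classify_outcome(cost_change, quality_change, tolerance=0.05):
        """Classify a CAL intervention into the simplified cases.

        cost_change and quality_change are fractional changes, e.g.
        -0.25 means costs fell by 25%. 'tolerance' is the band within
        which we treat a measure as "about the same", since crude
        measures cannot distinguish small changes anyway.
        """
        cost_same = abs(cost_change) <= tolerance
        quality_same = abs(quality_change) <= tolerance

        if cost_same and quality_change > tolerance:
            return "Case 1: costs about the same, quality up"
        if quality_same and cost_change < -tolerance:
            return "Case 2: quality maintained, costs down"
        if cost_change < -tolerance and quality_change > tolerance:
            return "Case 3: costs down AND quality up"
        return "None of the simplified cases: needs a fuller analysis"

    # Hypothetical example: costs fell 20%, quality essentially unchanged.
    print(classify_outcome(-0.20, 0.02))   # -> Case 2

The point of the tolerance band is exactly Gordon's point: with crude measures, "about the same" is the best resolution we can honestly claim.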
This gives me a clue. Here's what I think, divided into three points of increasing "depth", labelled C, B, and A.
C) As I think someone commented, wondering after the event whether something can be shown to be worthwhile is too late to put good measures in place: a basic management error. In other words, if they wanted evidence they should have said so at the beginning, not at the end: an error that has been made widely, and at a national level, here in the UK. There is a lot in this view, as I am increasingly convinced that measuring the costs is an intricate business. For instance, at least in HEIs (higher education institutions) here in the UK, no-one keeps any real record of the time they put into teaching, so no-one knows what it really costs. We could find out, but only by developing detailed recording systems. But even then, the question is complex. For instance, is writing this email research, teaching, or a hobby? I certainly didn't decide that in advance. I am considering writing a research proposal in this area: so perhaps this message will get recycled as "research". ITforum is becoming a useful web resource that I have started to point students at: so perhaps this message will end up being "teaching". Or perhaps neither: then we'll know it must have been a hobby.
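To see why even a detailed recording system would not settle this, here is a minimal sketch (in Python; the log entries and categories are my own hypothetical illustration) of a time log in which a single activity legitimately carries several candidate categories, so the "cost of teaching" depends on an allocation rule chosen after the fact:

    from dataclasses import dataclass, field

    @dataclass
    class Activity:
        """One logged chunk of staff time (hypothetical structure)."""
        description: str
        hours: float
        # An activity can plausibly belong to several categories at once;
        # which one it "counts as" is decided later, not at recording time.
        candidate_categories: list = field(default_factory=list)

    log = [
        Activity("Writing email to ITforum on accountability", 2.0,
                 ["research", "teaching", "hobby"]),
        Activity("Lecture preparation from own notes", 3.0,
                 ["teaching"]),
    ]

    # The "cost of teaching" varies with the allocation rule chosen:
    strict = sum(a.hours for a in log if a.candidate_categories == ["teaching"])
    generous = sum(a.hours for a in log if "teaching" in a.candidate_categories)
    print(strict, generous)   # 3.0 vs 5.0 hours: same log, different costs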
But this ambiguity goes even deeper, I now think. To cost CAL, you might think you have to cost authoring and delivery separately. But is it so clear? A teacher, especially in HE, is really in the position of acting as an interpreter of the literature for learners. Delivering a course from a textbook still usually involves significant local mediation; writing one's own lectures, unrelated to any text, is another form of interpretation. And one interesting justification for research is that researchers act as human interfaces (or interpreters), for their organisation (private or public), of the whole world literature on a topic. (I discuss this in another document.) So there may really be no particular place to draw the line, and costing teaching in general, and CAL in particular, is going to be difficult.
B) On the basis of my recent experience I think there is a still deeper problem: whether gains in cost or quality from using CAL occur often depends very strongly on whether those doing it are aiming for those gains, not on the merits of the technology or even on other features of the situation. For instance, much CAL has been introduced by enthusiasts: their real motivation was to see what it would be like, and to get experience. Often that is what they achieve, but nothing else, i.e. learning quality is maintained, costs go up (extra hardware and huge authoring costs), but it is a "success" and they get the experience they wanted. From Ian's description, that may really be what went on in his colleague's case: for all the "action research" etc., they may have wanted to introduce the technology and organised it so as to maintain quality, but not to do anything else; and that is what they achieved.
On the other hand, it is possible to go for increasing quality. I argue this in my "niche" paper. Basically, if you want to increase quality, then you should begin by analysing what the weakest point in the current overall provision is, decide how to remedy this bottleneck, then do so POSSIBLY using IT. I have seen a few cases like this, with learning gains that stand out despite crude measurements. One was in CALL, and the main evidence was that all the course examiners, including crucially the external examiner (a reviewer from another institution), who were in a position to see longitudinal changes across the change in teaching method, said the improvement was large and real, in their judgement as well as in the marks. Note how different that is from a strategic decision to install computers, or a hobbyist decision to get into CAL. That is what it is like when the implementors are really aiming for a learning quality benefit regardless of anything else.
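As a toy illustration of that "niche" analysis (the components and ratings below are entirely hypothetical, not taken from the paper): rate each component of the current provision, however crudely, and target the weakest one first; only then ask whether IT is the right remedy.

    # Hypothetical quality ratings (0-10) for components of a course's
    # overall provision; the only point is to find the bottleneck.
    provision = {
        "lectures": 7,
        "tutorials": 6,
        "feedback on written work": 3,   # the weak point
        "lab sessions": 8,
    }

    bottleneck = min(provision, key=provision.get)
    print(f"Remedy '{bottleneck}' first, POSSIBLY with IT, possibly not.")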
On the third hand, it is quite possible to go for reducing costs, e.g. by replacing lectures by CAL. This has been done, and it always succeeds: it reduces staff teaching hours. Of course the quality often collapses too, but with care that can be retrieved while maintaining worthwhile cost savings. Gordon himself has done this. (In his case, replacing a maths course by software, in the first year they saved 100% of contact hours, but with a failure rate that was too high, as it had been before. The next year they re-invested staff hours targeted at the (new) weak spot, and ended up still with a 75% cost saving, but now with higher quality than before the CAL, as measured by final exam performance.)
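To make the arithmetic explicit, here it is as a sketch (the baseline of 100 contact hours is my own hypothetical figure; only the 100% and 75% savings come from Gordon's case):

    # Suppose the old course took 100 staff contact hours (hypothetical).
    baseline_hours = 100

    year1_hours = 0    # software replaced all contact: 100% saving, quality too low
    year2_hours = 25   # staff hours re-invested, targeted at the new weak spot

    saving_year2 = (baseline_hours - year2_hours) / baseline_hours
    print(f"Year 2 saving: {saving_year2:.0%}")   # -> 75%, with quality now higher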
But in many cases I have observed, staff do NOT save time and money. They introduce the software, then hang around making sure it works. This also makes sure no resources are saved, though new ones are consumed; and in fact sometimes there is evidence that their presence actually reduces quality (students ask staff instead of working things out in peer discussion). So in my view, teachers must consciously want to save costs, and have a plan to do that, in order to achieve savings; and similarly, they must consciously want to improve quality (not just maintain it up to some acceptable standard), and have a plan to do that, for it to improve significantly. But very many uses of IT in education are not done with that motive: instead the teachers are "trying it out" while otherwise maintaining the status quo in both quality and the expense of their time.
So if anyone, management or staff, wants gains, then they must have projects designed from the start to make gains; not projects to "use IT", introduce "innovation", etc. etc. Otherwise most people's local management actions will maintain the status quo in both learning standards and expense: e.g. they will hang around the machines to make sure everything works OK. In other words, they will spend their "normal" amount of time and effort, and they will make adjustments to recapture the usual quality level rather than working for a major increase.
[So in Ian's colleague's case, I think it quite possible that no major gains were made. On the other hand, if they were, then it should be easy, given the will, to test this. If vocabulary tests are thought (sensibly enough) to be too shallow, then get an outsider to give oral conversation tests to the two groups of students and compare the marks. Ian could do it, for instance; he could grade each student on several dimensions, and he could be "blind" to which group each student was from.]
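Here is a minimal sketch of such a comparison in Python (the student IDs, dimensions, and marks are all invented for illustration): the grader scores anonymised students on several dimensions, and group membership is applied to the marks only afterwards.

    from statistics import mean

    # Hypothetical blind marks: the grader saw only anonymised student IDs.
    # Each record: student_id -> {dimension: mark out of 10}
    blind_marks = {
        "s01": {"fluency": 7, "vocabulary": 6, "comprehension": 8},
        "s02": {"fluency": 5, "vocabulary": 5, "comprehension": 6},
        "s03": {"fluency": 8, "vocabulary": 7, "comprehension": 8},
        "s04": {"fluency": 6, "vocabulary": 6, "comprehension": 7},
    }

    # Group membership is revealed only after all grading is complete.
    groups = {"CALL": ["s01", "s03"], "traditional": ["s02", "s04"]}

    for dim in ["fluency", "vocabulary", "comprehension"]:
        for name, members in groups.items():
            avg = mean(blind_marks[s][dim] for s in members)
            print(f"{name:12s} {dim}: {avg:.1f}")

With real numbers of students one would of course also want some significance test, but even this crude tabulation is better evidence than none.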
A) Still "deeper" however is whether this management demand for "results" is coherent. If it is asking for evidence of benefits, that will be interpreted as whether the computer network was worth the money, then it is incoherent. It can only show whether the hardware plus software plus teacher actions brought benefits. If there were no benefits, there will still be no evidence about which of these 3 factors failed. And it may well be that what was missing was the right management: a conscious plan to gain benefits of either costs or learning quality or both. It seems to me that it is extremely likely that both management now and staff then had a stupid though implicit model that you just install the computers and benefits will somehow appear automatically. It is as if you though vaccinations might be good, so you bought case loads of doses and left them lying around, then tried to measure health improvements a year later. You see no improvements, so you conclude it was a waste of money. Instead, you should consider a) whether you had a health problem for which vaccination was a sensible solution (what evidence of the problem did you have?) b) whether the vaccines were used sensibly. Simply taking outcome measurements doesn't answer either of these questions.
Steve Draper