The issue addressed here is the role of open-ended observation (OEO) in evaluation (as opposed to fixed quizzes and questionnaires), which we believe to be vital and which is always an important part of our studies. We have, however, been asked how we can know when we have done enough, or whether we have noticed the most important issues in each given case. We will discuss and attempt to justify our apparently unstructured use of OEO by giving example cases of four kinds: ones where our preconceptions were confirmed by OEO, ones where they were disproved, ones where OEO directed how we then analysed data from structured measures, and ones where OEO alerted us to completely new issues affecting the whole study. Such cases convince us of the importance of OEO, but in the end we may not be able to know whether we have done enough.
Similarly with evaluation: it is important to cover both functions. Methods such as exam-type tests and questionnaires with fixed response categories will never warn you that something you did not anticipate is in fact important in the situation you are studying. Hence it is vital always to include some open-ended questions, and preferably personal observation by the evaluator. However, open-ended questions and observations are not a substitute for fixed questions: only by putting the same question or task to each learner, and requiring the answers to be expressed using the same categories (or marked using the same coding or marking scheme), can you obtain comparative data that allow you to discover and report findings such as what proportion of learners were affected by an issue.
Any evaluation study, then, should have both open-ended measures for detecting surprises, and fixed measures for generating comparative data that can answer specific questions. Without fixed measures you may not be able to say anything definite about the courseware: you have only an unstructured set of observations and opinions from individuals, which may or may not be shared by the other learners. Without open-ended measures you have no chance of detecting problems, or anything else you did not think of in advance, and it is from the unexpected that the most important improvements stem.
Open-ended observation is a descendant of the approach of "illuminative evaluation", a term introduced by Parlett & Hamilton (1972) to denote an observational approach inspired by ethnographic rather than experimental traditions and methods. (See also Parlett & Dearden (1977).) We have used various kinds of open-ended observation in our studies, such as personal observation, some video recording, interviews and questionnaires that include open-ended questions, and focus groups. However, we have typically spent much less effort than an ethnographer, who would spend hours, days, and months "hanging out" in the relevant situation as well as interviewing "informants".
How then can we judge that what we do is enough? How can we know whether we have identified all the issues, or at least the important ones? This paper addresses these questions, mainly by looking at a variety of specific instances in which we have learned something from open-ended observation.
It is true that what you perceive is quite strongly affected by what you want to see: if you are looking out for someone, you are much more likely to see them. But we also know that we can be surprised, that we can notice the unexpected (a tiger walking down the street). It is this familiar possibility that makes open-ended observation worthwhile. [But can we say anything more specific about it?]
This applies both to open-ended observation and to comparative measures. That is, we can be surprised by comparative measures such as the outcome of an experiment, but we can also be surprised by quite other things. However, this is seldom discussed in accounts of experimental methodology: experiments are designed to yield comparative measures, even though they are often occasioned by unexpected observations; and furthermore they are often successfully contrived only by close attention to informal observation, which tells you how to structure things and which variables to control.
Draper, S.W., Brown, M.I., Henderson, F.P. & McAteer, E. (1996) "Integrative evaluation: an emerging role for classroom studies of CAL" Computers and Education vol. 26, nos. 1-3, pp. 17-32; also in Kibby, M.R. & Hartley, J.R. (eds.) Computer assisted learning: selected contributions from the CAL 95 symposium (Oxford: Pergamon) pp. 17-32.
Parlett, M.R. & Hamilton, D. (1972/77/87) "Evaluation as illumination: a
new approach to the study of innovatory programmes".
(1972) Workshop at Cambridge, and unpublished report: Occasional Paper 9,
Centre for Research in the Educational Sciences, University of Edinburgh.
(1977) In Hamilton, D., Jenkins, D., King, C., MacDonald, B. & Parlett, M. (eds.)
Beyond the numbers game: a reader in educational evaluation
(Basingstoke: Macmillan) ch. 1.1, pp. 6-22.
(1987) In Murphy, R. & Torrance, H. (eds.) Evaluating education: issues and
methods (Milton Keynes: Open University Press) ch. 1.4, pp. 57-73.
Parlett, M. & Dearden, G. (1977) Introduction to illuminative evaluation: studies in higher education (Pacific Soundings Press).