Last changed: 24 Nov 1999. Length: about 3,000 words (20,000 bytes).
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/log.html.
You may copy it.
By
Stephen W. Draper
Department of Psychology
University of Glasgow
Glasgow G12 8QQ U.K.
email: steve@psy.gla.ac.uk
WWW URL:
http://www.psy.gla.ac.uk/~steve/
My notes, prompted by the Revelation workshop on 17 Nov 1999 and by subsequent
conversations with Richard Thomas. The notes now include many points from him.
The topic here is ICT data logging for educational aims (I'll refer to
this as ICTDL). ICT means information and communications technology
(i.e. computers and networks). Data logging means automatically collecting
data on use or messages; storing large quantities of it; perhaps aggregating
it over many times and computers, i.e. distributed collection over networks;
and finally processing and reporting on it. The context is education, so the
data will be on educational usages. And the question is: would this be of any
educational use, i.e. could it somehow improve learning outcomes?
Only four specific examples of ICTDL for educational benefit were mentioned at
the workshop in my hearing:
- In the CSE (common student environment: the public clusters providing basic
ICT services for undergraduates): aggregating usage logs to show, and feed
back quickly (within a few days) to students, the times when each cluster is
busy or likely to have free places. (A sketch of this kind of aggregation
appears after this list.)
- Richard Thomas, in the Sydney study (Kay & Thomas, 1995), aggregated logs of
patterns of command usage (in an introductory course on an editor). This led to
teachers modifying the course the next year in line with which functions were
actually used.
- Ideas from conversations a few years ago with Martin Gilbert about lecture
theatres with multi-button data entry devices at each seat (handheld or
wired in):
- Do MCQ tests: teacher puts up question(s) on an OHP, students answer via
the buttons. MCQ = multiple choice questions (i.e. those with answers being
restricted to choosing one of a few discrete alternatives).
The aim might be:
- to tell the teacher how well the class (i.e. his teaching) is doing up to
then;
- or it might be to give each learner practice work with feedback;
- or it might be to tell each learner how well they have understood the
material so far.
This last role is that of SAQs: self assessment questions. These are
standard required practice in Open University materials and textbooks. They
are self-marked by the learner, and allow them to judge whether they need to
work more on this topic or not.
One particular application of this is the "pre-lecture": the first lecture of
a block is given over to an SAQ test on the assumed pre-requisites for the
block; partly to "warm up" the concepts in every student's mind, but partly
so that students self-diagnose any remedial learning they need. The rest of
that session then offers tutor and peer support for that remediation. Doing
this at the start of a block is important in determining how much the student
can in fact get out of the subsequent teaching.
(See appendix, and Sirhan, 1999.)
- Get feedback on the lecture on the spot. Teacher puts up question(s) on an
OHP, students answer via the buttons. But this time, the questions are not
tests of subject matter, but items like "Am I going too fast, too slow, just
right?". Instead of one course feedback questionnaire per term, the teacher
could get one or more per lecture.
Both of these might be done with or without asking (and remembering) each
student's department or previous courses, so that strengths and weaknesses
in either learning or attitudes to the teaching could be associated with
educational (disciplinary) history.
- In a staffed computing lab class, or any other subject taught through
computers with built-in exercises, instant monitoring of such test points
allows the staff to seek out the students furthest behind for immediate
attention. This makes much more effective use of tutors, and any errors in the
data or in inferences from it would be immediately corrected in the resulting
face to face exchange. Equally, identifying the fastest students could be
important, either to offer them more challenges, or else to recruit them as
peer tutors for the other students. Both are effective in giving them (the
fastest students) more benefit from the course.
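To make the first example concrete, here is a minimal sketch (in Python) of
the kind of aggregation involved: counting logins per cluster and hour of day
from a log, then reporting the quietest hours for publishing back to students.
The log format, cluster name, and opening hours are all invented for
illustration; a real CSE log would differ.

    from collections import defaultdict
    from datetime import datetime

    def busy_hours(log_lines):
        """log_lines: 'YYYY-MM-DDTHH:MM cluster' strings (a made-up format)."""
        counts = defaultdict(int)       # (cluster, hour of day) -> logins seen
        for line in log_lines:
            stamp, cluster = line.split()
            hour = datetime.strptime(stamp, "%Y-%m-%dT%H:%M").hour
            counts[(cluster, hour)] += 1
        return counts

    def quietest(counts, cluster, opening_hours=range(9, 21)):
        """Opening hours sorted from least to most busy."""
        return sorted(opening_hours, key=lambda h: counts.get((cluster, h), 0))

    log = ["1999-11-22T10:15 cluster-A", "1999-11-22T10:40 cluster-A",
           "1999-11-22T19:05 cluster-A"]
    print(quietest(busy_hours(log), "cluster-A")[:3])   # three quietest hours

The same counting-into-time-buckets pattern would serve the resource
allocation uses discussed further below (library desks, server load).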
ICTDL is a grossly technology-driven initiative, and the track record of such
initiatives is poor (Draper, 1998). There is also a long history (80 years,
say) of technology claiming but failing to revolutionise education (Cuban, 1986).
On the other hand, we could try to identify or imagine, and then exploit,
genuine novel educational advantages of the technology:
- Privacy: if only aggregate scores within a classroom are
used/stored/displayed, then students can give anonymous on the spot feedback to
the lecturer, while all can see if this is widely agreed or just a few unusual
students. While all the comments at the workshop were about protecting the
data against privacy problems, in fact the technology offers a major
improvement (not decrease) in privacy compared to shows of hands or face to
face questions and comments.
- Instant aggregation:
students could self-mark their tests, but also see how their mark compares to
the aggregate class mark distribution. The aggregation, i.e. the relative
mark, is the (only) value-added bit. (A sketch of this appears after this
list.) It also applies to feedback questionnaires: the data are aggregated
instantly, and multiple choice responses are supported better than shows of
hands support even binary ones.
- Participation reminders.
Logging can detect and notify participation, like taking a register or
noticing student numbers and/or individuals in class. The advantage is in
extending this function to distance education; and, in campus education, in
automating it and so making it easy to do more of it: e.g. supplying
attendance numbers to QA processes, seeing how many students have actually
accessed the web site for this week's assignment, etc.
- Collaborative filtering?
Think about "collaborative filtering" (as at amazon.com): extracting
advice for users from stored logs of behaviour patterns.
Linton (1999) has a paper on recording people's usage of Word commands (their
actively used command sets), and transmitting that to other users as implicit
suggestions (a toy version appears after this list). For students, show them
"enrolment options", i.e. what combinations of modules other students sign up
to (particularly valuable at the University of Glasgow for level 1 options).
- Possible applications of data analysis:
- Collecting data on realistic study times for each activity on the course.
(E.g. if assignment announcements are done electronically or are otherwise
known; and if completion / hand-in times are recorded.)
- Error analysis: analyse last year's results to identify points that need
better teaching or other attention this year.
- Similarly, identifying poor performance on particular test items/topics,
and detecting correlations, e.g. students who do badly on X also have trouble
with Y (a sketch appears after this list); and relating that again to past
course enrolments.
- Year to year comparisons: things can change for ill as well as good in a
repeated course.
- Feedback on meta-cognitive performance could be generated for
students, giving rates of learning and how they compare with other students
and users. This might give grounds for encouragement, and also help to set
realistic learning goals. It might also be possible to perform simple plan
recognition to see if sensible combinations of commands and wildcards have
been deployed, thereby hopefully prompting more 'strategic' use of the
application in Bhavnani and John's sense of the term.
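To make the instant aggregation idea concrete, here is a minimal sketch (in
Python), assuming handsets that each deliver one choice per student; the
answer format and the crude text histogram are invented. The point is that
each student's answer stays private while everyone sees the class distribution.

    from collections import Counter

    def class_distribution(answers, choices="abcde"):
        """answers: one letter per student, as collected from the handsets."""
        counts = Counter(answers)
        return {c: counts.get(c, 0) / len(answers) for c in choices}

    answers = ["a", "c", "c", "b", "c", "a", "c"]    # an invented class of 7
    for choice, share in class_distribution(answers).items():
        # A crude histogram, as might be projected for the whole class.
        print(f"{choice}: {'#' * round(share * 20):20s} {share:.0%}")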
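A toy version of the Linton-style suggestion idea might look like the
following; the command names and counts are invented, and Linton's actual
system (observing Word command usage over months) is certainly more subtle.

    from collections import Counter

    def suggestions(mine, everyone_else, top=3):
        """Commands the rest of the group relies on but this user never touches."""
        unused = {cmd: n for cmd, n in everyone_else.items()
                  if mine.get(cmd, 0) == 0}
        return sorted(unused, key=unused.get, reverse=True)[:top]

    mine = Counter({"save": 40, "cut": 12, "paste": 12})
    everyone_else = Counter({"save": 400, "paste": 95, "cut": 90,
                             "replace": 70, "styles": 55, "undo": 30})
    print(suggestions(mine, everyone_else))   # ['replace', 'styles', 'undo']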
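And for the correlation analysis under "possible applications of data
analysis": given per-student scores on two test items, a plain Pearson
correlation is enough to flag that trouble on X goes with trouble on Y. The
scores and the 0.7 threshold are invented for the example.

    from statistics import correlation   # Python 3.10+; Pearson by default

    x_scores = [3, 5, 2, 4, 1, 5, 2]     # each student's mark on item X
    y_scores = [2, 5, 1, 4, 2, 4, 1]     # the same students' marks on item Y

    r = correlation(x_scores, y_scores)
    if r > 0.7:
        print(f"r = {r:.2f}: students who do badly on X tend to do badly on Y")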
I think there are 3 broad types of application for such ICTDL:
- Instant use within classes (time scale of a minute). Don't store it or
transmit it; use it for anonymous (privacy-preserving) feedback, and for
adding aggregation to SAQs in class. [Key issues:
- the data entry hardware, e.g. handheld devices.
- Projection display of results, so all get them.
- The design of SAQs (cf. the "pre-lecture" technique in course design), and of feedback questions.]
This is using ICT to enhance what can be done with a face to face meeting.
An extension of it for distance learning, to give a better sense of
engagement: let them see how they are doing in relation to the community
(class of distance learners). Of course, this would also be useful for demos
of survey techniques (e.g. in HCI and psychology lectures).
- Usage monitoring (timescale ~ 3 days)
- Resource allocation.
As in the CSE example above, publish the times of high and low usage in
order to smooth out resource use. E.g. library checkout desks could be
monitored like this, and peak times published. An extension of this is to
monitor what software is used, and use this to prioritise maintenance and
training (fix what matters to the most people).
- Reminders, and notification of usage.
Tell learners (and teachers) about peak times (crowding in clusters, server
overload); check for who isn't logging in and chase them up by phone;
automatic reminders about imminent distance learning deadlines. Really this
is a much broader application type: e.g. setting up a distributed document
such as the course handbook produced by a university department so that the
dozens of people who must contribute to it are automatically reminded to check
and update their elements of it (cf.
Draper, 1997).
- Models of expertise over long time frames, as in Richard Thomas's work
(Thomas, 1998). This applies to computing as a subject matter, e.g. the IT
skills course for all students. But it could also be extended, in all courses
using ICT, to catch incipient dropouts on the basis of their behaviour (or
lack of it) compared to statistical models of themselves and others (a sketch
follows this list). Some of the examples above of data analysis refer to this
long timescale.
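As a sketch of the dropout-catching idea just mentioned: compare each
student's weekly logged activity (logins, submissions, page hits; whatever is
recorded) with the class distribution, and flag anyone far below it. The
activity measure, the invented student ids, and the 1.5-standard-deviation
threshold are all assumptions; a real system would also, as suggested above,
model each student against their own past behaviour.

    from statistics import mean, stdev

    def flag_at_risk(weekly_activity, threshold_sds=1.5):
        """weekly_activity: student id -> count of logged actions this week."""
        counts = list(weekly_activity.values())
        mu, sigma = mean(counts), stdev(counts)
        # Flag anyone well below the class norm; the threshold is arbitrary.
        return [sid for sid, n in weekly_activity.items()
                if n < mu - threshold_sds * sigma]

    week = {"s01": 14, "s02": 11, "s03": 0, "s04": 12, "s05": 13, "s06": 10}
    print(flag_at_risk(week))   # ['s03']: someone to chase up by phone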
But usage monitoring on longer timescales is also important, though it may be
difficult to automate, in order to notice what new software is being used.
For instance, some universities have had big battles when central support
becomes out of touch with the software being used in reality on students'
private machines at home. Clearly coordinating this usage is important, and
it is NOT something that can be decided by the university, which on the
contrary must fit in with what the students use if any real use of ICT in
home learning is to happen. An issue of this kind now starting to erode basic
communication is that students do not in general seem to be able to forward
email from university accounts to the hotmail accounts some "really" use,
with the result that communications from teachers are not read regularly or
at all. Integration of private and university ICT is needed, and ICTDL could
guide the strategic management of this.
The workshop spent most of its directed effort on the question of problems of
communication between technologists and educationalists. This presupposes
that they need to communicate. If they do, then, as someone said in the end,
you probably just need to find or create people with training in both.
However this seems to presuppose that ICT for education is like designing a
bank's workflow system, where everyone's job will revolve around the
technology. But most ICT use in education uses off the shelf stuff, so the
adaptation is all done by the educationalist.
The one really important requirement for technologists to grasp is "pace":
the speed with which the artifact can be modified (by the teacher, without
special equipment or training). That is why OHPs displaced 35mm slides, and
why face to face lectures/classes persist: they are all part of supporting
instant adaptation of teaching to specific students. The idea that needs are
fixed at the requirements stage, several years before students get to the
software, seems enormously stupid, and doomed to educational failure. Or
rather, such software can only be like textbooks: fixed lumps around which
the educational adaptations are made by others.
Having said all that, we could try to spot where, or whether, there are ANY
distinct educational advantages in ICTDL, as discussed above. Packages for
these might be useful, if they were as easy to learn to use as an OHP is.
Then it would just take about a decade for the university to install the
equipment in every teaching room.
CSE = Common Student Environment: the public clusters providing basic
ICT services for undergraduates at the University of Glasgow.
ICT = Information and Communications Technology
(i.e. computers and networks).
ICTDL = ICT Data Logging (for educational aims).
MCQ = Multiple Choice Questions (i.e. those with answers being restricted
to choosing one of a few discrete alternatives).
Revelation: an ICT project.
SAQs = Self Assessment Questions: self-marked by the learner, allowing them
to judge whether they need to work more on a topic or not.
On 3 Nov 1999 I attended a talk by Ghassan Sirhan on interventions in a
general introductory chemistry course at this university. (I think this work
is in a dissertation currently being examined.) The analysis divided students
by their prior chemistry qualifications, which were of widely varying
standards (including none at all). Without the interventions, substantial
group differences in attainment were shown i.e. prior qualifications predicted
attainment in the course. With the interventions, this difference was
eliminated. This is of course striking: most interventions you would expect
to improve all students, thus moving all group means up and preserving some
difference unless the improvements were so great as to hit "ceiling" with near
perfect performance. The interventions were inspired by Ausubel, and
essentially supported students in checking whether they had the prerequisite
concepts and tools, and then supported them in remedying the gaps they had
identified in themselves. One technique was "pre-lectures": using the first
lecture in each block to do a self-assessment test, with tutors and peer
support present for immediate followup on request. The other was a large set
of sheets of so-called "organisers" on key problem types, each with an example
problem, the concepts needed to address it, a solution strategy, and the
solution.
This approach (I think) attacks what is otherwise a widespread weakness in
teaching here (and perhaps elsewhere): just assuming students have
prerequisite knowledge ready and active in their minds for a block of
teaching. Instead it actually checks, activates, and remediates that
knowledge as needed. As Ausubel said, the biggest determinant of learning is
what the learner already knows (or doesn't).
Ausubel, D.P., Novak, J.D. & Hanesian, H. (1968 / 78) Educational
psychology: a cognitive view
(Holt, Rinehart, & Winston: New York) epigraph:
"If I had to reduce all of educational psychology to just one principle, I
would say this: the most important single factor influencing learning is what
the learner already knows. Ascertain this and teach him accordingly."
Cuban, Larry (1986)
Teachers and machines : the classroom use of technology since 1920
(New York; London : Teachers College Press)
Draper, S.W. (1997) The problem of Departmental Web pages
[WWW document]. URL:
http://www.psy.gla.ac.uk/~steve/webdesign/web.html
Draper, S.W. (1998) "Niche-based success in CAL" Computers and
Education vol.30, pp.5-8 [also WWW document]. URL:
http://www.psy.gla.ac.uk/~steve/niche.html
Kay, J. & Thomas, R.C. (1995) "Studying long term system use" Communications
of the ACM (special issue on end-user training and learning) vol.38 no.7
pp.61-69.
Linton, F., Joy, D. & Schaefer, H.P. (1999)
"Building user and expert models by long-term observation of application usage"
UM99 User Modeling: Proceedings of the seventh international conference
(Springer-Verlag: Wien, Austria) pp.129-138.
Sirhan, Ghassan (1999) Dissertation in preparation(?): see appendix. Contact
him at 9708212s@student.gla.ac.uk, Centre for Science Education.
Thomas, R.C. (1998)
Long Term Human-Computer Interaction: An Exploratory Perspective
(Springer-Verlag).