Last modified 3 March 1998
This page is part of the Human Computer Interaction module on the MSc(IT) at Glasgow University.
Experiments maximise the precision with which the performance of a design and its user "friendliness" can be measured. They do this by measuring factors such as time and errors: how long subjects took to accomplish certain tasks and how many mistakes they made.
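To make the idea concrete, here is a minimal sketch in Python (not from the lecture) that times a subject on an invented typing task and counts character-level errors against the expected answer; the task content and the error-counting rule are assumptions made purely for illustration.

import time

# Hypothetical task content - not taken from any real experiment.
expected = "profit = income - costs"

start = time.perf_counter()
answer = input("Type the formula shown on the task sheet: ")
elapsed = time.perf_counter() - start

# Count mismatched characters plus any missing or extra characters.
errors = (sum(1 for a, b in zip(answer, expected) if a != b)
          + abs(len(answer) - len(expected)))
print(f"time: {elapsed:.1f} s, errors: {errors}")

In a real experiment the same kind of record (time taken, number of errors) would be collected for each subject and each task.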
The example of an experiment given in the lecture was the evaluation of two rival spreadsheet designs: Excel and Wings. It also involved two types of user - those who had never used the package before and those who had.
The outcome was that there was no significant difference between the spreadsheets, but there was a difference between first-time users and experienced users. More than half of the effort involved was therefore attributable to common learning rather than to either particular package.
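The study itself is only summarised above, but the distinction between "a difference" and "a significant difference" can be illustrated with a short Python sketch. The numbers below are invented for the example (they are not the study's data), and independent t-tests stand in for whatever analysis was actually used.

from scipy.stats import ttest_ind

# Hypothetical task-completion times in minutes (invented data).
excel_times = [12.1, 10.4, 11.8, 13.0, 9.9, 12.5]
wings_times = [11.7, 10.9, 12.2, 12.8, 10.3, 11.9]

novice_times      = [13.0, 12.5, 12.8, 12.1, 12.2, 11.9]
experienced_times = [10.4,  9.9, 10.9, 10.3, 11.8, 11.7]

# Compare the two packages: a high p-value means no significant difference.
print("Excel vs Wings:", ttest_ind(excel_times, wings_times))

# Compare the two user groups: a low p-value (conventionally below 0.05)
# indicates a significant difference.
print("Novice vs experienced:", ttest_ind(novice_times, experienced_times))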
Before undertaking an experiment: firstly, the issue to be studied has to be identified and clearly defined; secondly, the subjects have to be chosen.
The latter is crucial to the outcome of any experiment. The sample group should be chosen to match the intended user group as closely as possible; a sample group is used so that average behaviour can be estimated. (For further information see Human Computer Interaction, A. Dix et al., page 375, "Subjects".)
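As a purely illustrative sketch (again with invented numbers), estimating average behaviour from a sample usually means reporting the sample mean together with a confidence interval for the population mean:

import statistics
from scipy.stats import sem, t

# Hypothetical task-completion times (minutes) for a sample of subjects.
times = [11.2, 9.8, 12.5, 10.4, 11.9, 10.7, 12.1, 9.5]

mean = statistics.mean(times)
# 95% confidence interval for the population mean, using the t distribution.
low, high = t.interval(0.95, len(times) - 1, loc=mean, scale=sem(times))
print(f"estimated mean time: {mean:.1f} min (95% CI {low:.1f}-{high:.1f})")

The closer the sample matches the intended user group, the more meaningful such an estimate is.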
Experiments are piloted first of all. This stage can show whether the time and effort needed by the designer have been estimated accurately, and whether the task is going to prove too long or too short. Planning is crucial to the success of an experiment.
Throughout the experiment the participant is on the spot, and the work is carried out under controlled conditions. The positive effect of this is that the measurements obtained are more comparable. The negative effect is that some information is lost, as the participant may not behave naturally.
The choice of method depends on various factors such as:
Methods of evaluating an experiment are:
However, we have to consider the properties of human memory when using this latter method:
(See Lecture handout 4 page 7 for further clarification).
They are not appropriate for most evaluation problems because nothing in the method highlights the important facts (see page 24 in HCI handout 4).
They only allow comparisons between the same variables and are not usually appropriate for comparing different variables.
They only show if there is a difference between the systems tested and not whether the difference is significant.
They are expensive in resources such as time (see page 24 in HCI handout 4 for further information).
Its usefulness depends on the outcomes you are looking for.
The above drawbacks must be considered against the desired outcomes.
Measurements can be open-ended or comparable. Whether they are quantitative or qualitative does not matter, but one kind cannot be compared against the other.
An e-mail was sent to the class asking for some general comments on the lecture. The response was very, very poor.
So feel free to send your comments to the people below.
Acks to all contributors