In considering what makes a good exam answer likely to earn a high mark, you should of course first consult the general level 4 handbook. In addition, there is a new-ish section on what generally makes a first class answer in the Critical Review document that is relevant. For this particular course and topic, I will also be particularly interested in demonstrations that show you know how to apply the various theories to specific concrete cases and examples; cases that are your own rather than given in the lectures will attract extra credit. Discussing how a case seems to pose a problem for a theory because it might be analysed in more than one way, or because the theory seems unable to address some important aspect of the case, will also allow you to demonstrate critical thought in this context. Finally, not all questions are equally easy to answer. Specific questions are easy to answer, and probably easy to get a 2.2 on. They may however prove pretty hard to get a first class mark on, because the more specific they are, the less they offer openings for original or critical argument. Conversely, picking a vague or difficult question is likely to impress an examiner from the start; if you then show you can reduce such a question to a sensible one, explain your approach to it (and possibly critique the question itself), and show you can select what is relevant out of what you know (rather than depending on the question to tell you), then you deserve credit for the understanding and intellectual work you will have demonstrated.
There are cases where people fail to learn something despite many exposures AND using (but then forgetting) the information in a task, e.g. which of two similar keys to use, or which of the marks on the "ruler" governs the paragraph indent in a word processor. This suggests that when the costs of rediscovery are low, learning may not always be worthwhile, which shows this is not an obviously silly question. With minimal manuals, there remains a tension between optimising for learning (so users don't have to use the manual again next time) and optimising for doing, i.e. for getting the immediate task done as fast as possible. So there is a tradeoff between short and longer term gains, with learning only desirable in the longer term. In education, learning seems to be the main point. And yet even here there is an issue. If we look at experts such as librarians, or in fact any professional, they may know more facts than a non-expert, but they certainly never know them all. Instead their expertise could be argued to lie more in getting information quickly than in already having learned it. Another issue is that of deep learning: no topic is ever known completely; there are always more things to know. So even if learning is desirable, it cannot be necessary, since it is always incomplete.
What are the common themes running through this course, and what, if any, are the differences that prevent it being a wholly unitary topic? Discuss what might be changed, added, or dropped to enhance its coherence, and whether there would be any disadvantages to this.
This is an invitation to reflect on the course as a whole. The common theme is applying a set of alternative empirical instruments, selecting the best ones for the specific topic, but where possible using several different methods to achieve "triangulation". The difference is in the greater apparent utility of theory in education as opposed to the HCI areas. The aim of asking what might be changed is twofold. Firstly, to elicit critical argument about course content; marks would be awarded for the quality of the arguments offered in support. Secondly, if students chose to interpret the question at a lower level, to discuss the actual teaching methods used (e.g. should there have been more or less discussion? more or fewer specific exercises?) using the theories of teaching and learning presented in the course.
Although the empirical methods presented were stated as applying to any applied problem, the examples of using them were all from HCI: one change for greater unity would be presenting detailed cases of applying them to education. Conversely, the theories applying to education were discussed in greater detail than those in HCI: another change towards unity would be to show in more detail the application of the theories of HCI that were presented.
Discuss whether material should be tailored for individuals, or specific subgroups.
In one area of HCI (minimal manuals), the arguments suggest that different written material may be needed for different user groups. However in education, an argument was presented that we should not interpret learning styles as implying that all textbooks etc. should be written in several different versions for different classes of learner. These two arguments are in at least apparent contradiction. One resolution may be in terms of the timescales and educational aims associated with each: rapid task success within seconds for software manuals, versus the desirability of learners acquiring the "styles" and learning skills required for a given area (over the course of a degree programme) whether or not they already have them.
Discuss the similarities and differences in the ways that general applied psychology methods are suitable for HCI and for education evaluation.
Invitation to reproduce the lecture(s) on overall method: two phases, not one; the first an open-ended search for the issues, the second using comparable (quantitative) measures on the issues indicated. Note the importance of open-ended measures. BUT the difference is in the role of theory: little need for it in HCI, but crucial in education to establish the factors probably in play.
To what extent, and in what ways, does the method of trial and error surface in HCI, in Laurillard's model, and in educational ideas of reflection and metacognition?
HCI: the design cycle, especially of thinkaloud protocols and modification of the UI design. This is trial and error, with little obvious theory contributing to the interpretation part of the cycle: the raw power of trial and error alone improves the user interface.
Laurillard-model: the principle of iteration and convergence hence Laurillard activities 3&4 as well as 1&2 etc. In a way, it gets at the heart of "understanding" too: we think you understand if you can reproduce/use the "same" thing in different cases; and conversely that is a big part of how you learn, by trying to use (re-express) what you know and getting corrections.
Reflection: on a professional's actions, on one's learning. Same as the HCI cycle, but applied to an individual human improving their own practice. Can apply that too to a learner improving their learning practices.
Metacognition: the only well established effect is self-testing for understanding and relearning as necessary. So we could read this literature as again reaffirming the cycle: the crucial bit is to have a cycle, to attend to and act on feedback. Appealing, but not of proven worth (or at least not necessary, only better), is consciously understanding the mechanism so as to make diagnosis more accurate.
Reflection and reflective diaries: not really the same, although reflective diaries can be mainly about incidents too. Reflection is about interpretation by the writer (learner); Incident Diaries are about behaviour or occurrences, and the interpretation is done by the researcher.
Another, more conventional kind of question would just pick an obvious topic from the learning objectives, e.g. "Discuss the properties of focus groups". This would expect a description of what they are, with a little example of an application: the purpose, the agenda used, etc.; and a discussion of the properties as given in the handout. Examples drawn from focus groups you have run yourself would attract extra credit, especially if this makes you critical of problems not well discussed in the lectures, or able to illustrate their good points from your own experience.
Chief virtues: on the spot data gathering (does not depend on subjects' memories), and combining data on behaviour and intentions. Limitations include cost in investigator and subject time, and interference of the thinkaloud task with the task being studied. Incident diaries save the investigator cost (but have compliance problems). Variations include videotaping subjects without thinkaloud, and asking the subject to add commentary when later reviewing the tape. Illustrations with examples will be important for a high mark.
Describe the documentation technique of minimal manuals, how it contrasts to other writing tasks, and its relationship to the design of user interfaces to software.
An invitation to reproduce the theory and practice of minimal manuals with special mention of how different it is to normal writing (assume the reader has the software in front of them and do not describe what they can see or experience, only what is invisible), and of how this writing is designed and modified identically (by prototyping and iterative design) to how user interfaces are developed. In fact, manuals are really paper screens, to be designed and used as part of the software.
Using the example of how people find their way around campus, discuss how bits of theory and different kinds of empirical study can be brought to bear on a single applied problem.
There is a literature on the stages people's representation of a place such as a campus goes through, and on the skills involved in finding one's way. Many types of study are relevant, and are much better when used in combination. Lab studies can probe what people remember: place names, routes, and the ability to sketch maps. New designs of aids, such as signs and maps, can be tested in some ways in the lab, e.g. timing how long it takes a subject to find and point to a target on the map. Field studies following subjects around can investigate how well they actually perform on complete tasks. They can also observe where, and in what way, they experience difficulties. Interviews can probe people's accounts of their worst experiences, and so focus an investigator's attention on current bottlenecks in performance on a particular campus. All these methods are subject to criticism, but in combination become much stronger as evidence.
Questionnaires, diaries, checklists are all paper instruments with pre-designed questions. Semi-structured interviews, focus groups, and thinkalouds (though usually this doesn't involve prepared questions) all involve interviewing people. The set of properties (retrospective? costs? ....) used in the handout gives a way of comparing them.
Studies of wayfinding on this campus led to some recommendations for improving the campus map. Discuss how well founded these recommendations were, both in themselves and with respect to whether they addressed the most important issues. For each criticism you offer, propose how it could be addressed by further simple studies.
The recommendations included redesigning the map index in several respects (including all building names and all department names, sorting alphabetically, changing the map key system e.g. "A3"), changing from 3D to 2D style, and including pictures of key landmarks. These were based on empirical studies, but small ones; and improved studies of the basic findings are easy to suggest, as are studies of the effectiveness of the proposed changes.
Discuss usability and user centered design for both computer software and more generally. Why are they important, and which of the reasons offered are the most important? What, in your opinion, is the best formulation of the aim or aspiration, out of the variety put forward in the literature and computer industry?
This invites an answer from the first 3 lectures, covering the prototyping cycle but also with a discussion on the different senses of "user centered": as on behalf of the user, asking the user, inviting users to participate in the design.
I.e. the on the spot instruments: thinkalouds, diaries, experiments. Drop marks for including wrong instruments. It's important because people remember such a small proportion of the little actions and thoughts that make a difference in interaction. Extra marks for examples not given in the lectures and handouts, because that demonstrates thinking about the material.
Discuss the view that minimalist manuals are just the application of user centered design to documentation rather than to software. Does this lead to a method of writing that contrasts to normal recommendations for writing?
(only) one view of minimal manuals is that you just apply the prototyping cycle to a manual and keep changing the design until it best supports what users want the manual for. The principles were arrived at on the basis of essentially that kind of research, and it also needs to be applied in developing any particular manual (though hopefully with fewer iterations, if the principles are indeed a useful extraction of general lessons).
Consider a practical problem you have not studied before, such as developing novel improvements to home security systems. How might you begin to address this empirically, and what does your education as a psychologist contribute to this?
An invitation to apply a general applied psychology approach to a novel problem: multiple small studies, preferably of different kinds ("triangulation"), with the early studies leading to the questions and instruments used in later studies. Psychology training principally helps by supplying ideas on the instruments to use and how to use them.
What in your view is the most fundamental principle in HCI? Does it apply only to computer technology, or to other areas as well?
What are the relevant ways for comparing alternative evaluation techniques when making a selection for a study? Illustrate with examples including focus groups.
Comments on answer approaches:
Really a simplistically direct question on the UIPM framework. Poor answers wouldn't even bother to restructure their knowledge to fit the question. What I'm looking for is evidence of understanding. Direct paraphrase shows understanding of my sentences, but possibly not of meaning. Restructuring of knowledge, applying it to a question or problem, is better evidence. Another kind of evidence is novel examples students have thought of, especially if from their personal experience. A third kind of evidence possible in this question is to show you have grasped the point of the framework, by discussing or creating an example of a specific evaluation task and how you would use the framework to select instruments.
An attempt to get a discussion, mentioned but not developed in the lectures, about the status of this work. The published material gives 6 guideline principles for writing minimal manuals. But these are really induced/inferred from persistent application of the HCI recipe of repeated redesign while reacting to user problems; doing this from scratch should still work, only slower without the guidelines to give better first-draft designs; and iteration is still very important with any manual design. One could see a user-centred attitude as the prior motivator either for applying iterative design and test OR for many or all of the 6 guidelines, which in various ways can be seen as derivable from the UC approach: from a) wanting manuals to be usable themselves in their real context of use (beside the screen); b) wanting to achieve usability of the software, using the manuals to fix problems in the UI itself.
You are going to redesign the user interface to a statistics package. Given access to users of the present version of it, what instrument(s) would you select, and why, for drawing up your first list of problems and suggestions for redesign? Which instrument(s) would you then choose for studying the adequacy of the first version of the redesign?
One approach would be to use the diagram of the cycle, with 2 possible starting points: either focus groups with potential users asked to imagine what they would like, or better (the one relevant here) surveying existing users about the problems in the existing UI. Actually observing them too would be good. (Give a mark for mentioning the cycle and how to start it.)
Questionnaire/fgroup/SSI on main problems reported on current version: would get exhaustive list. But also: thinkalouds on test tasks with the UI. Former would get most salient problems and data on how widespread. Latter would get at things they wouldn't mention.
For cycle 1 after first prototype: thinkalouds. Answers should comment on how in a stats package, there's a huge number of features (so feature checklist much more important here); and separately on the issue of UI bugs (fgroup important for this).
Aim to change either the documentation or the data projector's user interface itself, on the basis of thinkaloud tests.
What is the role of feature checklists in HCI? Give an example of a problem where they would be useful.
To measure relative frequency of function/feature/command usage. Any big UID, to discover which bits are critical (for optimisation, for debugging), which less so, which should be omitted to avoid clutter. N.B. Google does this: so a large percentage of their users must use a new feature in a trial, for it to be included in the main UI; somewhat fewer users for the feature to make it to a specialist UI e.g. "academic Google".
Optimisation for high frequency commands might include: execution time, better UI, inclusion at top of menu, top of menu hierarchy, special shortcuts, ...
You have decided to give an elderly relative her first mobile phone, and to write a minimal manual for it. Describe the process of developing such a manual in terms of an iterative cycle. To what extent could the whole idea of minimal manuals be derived from the general HCI idea of iterative development?
Not completely: the issues of using text for doing, vs. using the text itself don't quite come from that. Otherwise an invitation to outline the process, and illustrate it with an example.
[Outline answer] In a 40 hour course with 80 students, that would mean 30 minutes talk by each student in the course: which sounds modest but would be a big increase over current practice. This would correspond to activity 2 in Laurillard's theory, currently largely unsupported in level 4 courses. If the seminars required students to write papers for them, that would be a further increase in activity 2. Assuming the discussions did have a significant to and fro quality, then the iteration represented by activities 3 and 4 would be supported, as they are not by lectures. Assuming that exposition can be done as well by books and papers as by a lecturer, there would be no downside. If the discussion focussed, as is quite likely but not guaranteed, on the relationship of the concepts covered to students' own experiences and examples, this would address to some extent other parts of Laurillard's model. Whether the lectures they replace would do that is less likely, but might have been the case.
Deep learning is likely to be better served by seminars to the extent that discussion does focus on issues the students did not understand straight away, or on examples as opposed to abstract descriptions, or on other senses of "deep". Furthermore, seminars should bolster attention to and skill at argumentation (i.e. "critical thinking"). Similarly, acquaintance with alternative views and the relative strengths and weaknesses of each: a point that relates more to Perry's theory.
Perry's theory concerns getting away from a naive belief in a single right answer or view and considering rival merits of alternative views. Discussion is likely to promote this, although only a strong debate format would emphasise this.
It would be sensible to comment firstly that the actual benefits would depend rather strongly on how the discussion was conducted (e.g. shy students, how to avoid a few doing all the talking, or tutors degenerating into lecturing); but equally, the benefits of the existing lecture system do too e.g. a number of lecturers already hold substantial discussions in "lectures". Another sensible comment in no way implied by the question or the lectures on this topic would be that a strong determinant of how much learning occurs is the number of hours each learner puts in independent of content: so a shift to a seminar system could be analysed from the simple, cynical, but important point of view of whether it would elicit more or less actual work (hours of mental processing) from students. Another is that feedback TO teachers on a) students' knowledge and ideas, b) on the effectiveness of the teaching is also important and also affected (probably for the better) by the proposed change.
Answer outline: This was an invitation to write about the TLP management paper/issue. Officially: Ts only do the management. Unofficially, mass failures or refusals or just poor performance affect things both at once and in reconsidering course design for next year.... I was hoping for a discussion of management issues, covering: curriculum, admin i.e. time and place for M-acts, selection and specific content of M-acts. Also feedback to Ls and feedback to Ts.
Tourists in Cairns, Queensland may visit a "Mangrove boardwalk" (a short walk with information notices through a mangrove swamp), UK television watchers may see occasional "Nature" documentaries on mangroves, and students taking courses on ecosystems may study their importance. Discuss how these different settings for learning, and the different motivation structures learners may have, are observed to affect the quality and quantity of learning that occurs.
I loved this question, but it didn't offer an easy in for students; but I suppose it did test how well they were able to think about the meaning of the theories.
Expecting a discussion of BOTH deep&shallow AND the effect of having the goal of learning (in HE but not in informal learning). Many discussed Perry, though only one made it sound a bit relevant (ecology course at univ. should present balanced views of the issues). Many discussed Laurillard, and of course that does have some relevance. At least one discussed T vs. L, and the fact that only in the course could L interact with T.
I definitely wanted a discussion of public vs. personal experience: linking the sounds and smells of the swamp to its features; and how museums etc., and usually TV, major on that link, i.e. give you the experience, and write about the experiences conceptually. Also to address deep vs. shallow in relation to these personal experiences, i.e. how the actual swamp or a striking TV programme makes you wonder about things. And a discussion of the main motivation issue: a) most TV and zoo visitors do not have a prior or extrinsic interest, so quantity of effort is low and so is the learning; b) but this will be variable: a small minority will have that prior motivation, and will then learn more; c) intrinsic: a few will be so captured by what is presented that they will learn a lot. So probably the variation (individual differences) will be big with TV and zoo.
 Qu: Is critical thinking a cognitive skill independent of subject matter, or are there relationships that should be considered separately for each specific topic?
Select a topic, preferably small, that you have considered teaching, or which you have been taught. Describe the methods and activities used to teach it, and discuss these with respect to Laurillard's model. Are all Laurillard's activities covered, and if not is this a weakness in the teaching described or is the apparent omission justifiable? Does the model help you identify the actual weaknesses in the teaching, or would some other theory be needed for this?
Actually a repeat of the exercise on the L-model. Look both for demonstrating knowledge of the L-model on a specific example, and for a critique of its adequacy in covering all that is necessary.
It has been observed that over their four years many students go through considerable changes in attitude towards the department and the role of academic staff. To what extent can this be reasonably explained in terms of theories of changes in attitude towards learning and the nature of knowledge? Should we (and do we) equally see changes in attitude to the student's role?
The questionnaire of Perry position gives implications for 4 aspects: the nature of knowledge; the roles of teacher and of learner; exam type. The actual personal experience of most of these students is to move from thinking academics (teachers) are like school teachers, whose whole job is to support learners, to realising they do research too. But there are rival explanations for this, e.g. feeling at home, and how this is supported or not by staff treatment of students.
You are working for a large national charity, and have been told to organise introductory training for new volunteers using "elearning" i.e. computer network methods, because volunteers join in a steady trickle and not all at once like undergraduates. What specific learning activities and materials would you propose, and could you cover all the abstract types that ideally you should? Would you insist on providing some of the activities by other means, and if so why?
It is an invitation to think on the spot about applying the Laurillard model to a small course design. I should probably have specified the material/topics to be covered. Exposition can be done by online material, e.g. pdf documents, etc. One could try to support T&L interaction by email, if someone somewhere in the organisation can be got to act as (e-)tutor. Volunteers will get some practical experience as they do their job. One may still want to insist on a face-to-face training course somewhere.
Answers should really address the issue of how to use e-learning for each of the L-acts e.g. marking essays.
Could thinkaloud protocols be applied in educational evaluation? Give examples and reasons for and against this suggestion.
Could sit over someone trying to use a resource, at least a written one; or trying to do a revision task, or a practical one. In fact this would probably be quite powerful at a) revealing missing information; b) revealing learner thought processes to contrast with what the teacher wants. There would be a big interpretation gap in linking observations to diagnosis w.r.t. the L-model or other theory. In general there may be much bigger gaps in the "user"'s knowledge in education: gaps in method and in goal, as well as bits of information. But still: a good idea. The main need would be tasks to set learners while thinking aloud.
Invitation to write on the Dr Fox effect. The essential answer is: expressiveness always makes a difference to the learners' reported enjoyment; with extrinsic motivation it doesn't affect learning; without it, it makes a big difference to learning.
Possible interesting extra point: what about not lecturing but learning by exploration approaches to teaching and learning?
What about retention length (how long the material is remembered)?
What is in "expressiveness"? e.g. enthusiasm, eye contact, ...
Analyse whether and how each of the activities in the Laurillard model are addressed in the following case like this, and if necessary how they could be done better in an ideal future TV series designed to optimise learning outcomes. Delia Smith's cooking programmes on TV, together with the accompanying books, have changed many people's cooking in practice: clearly they are examples of effective teaching. They have a number of features that not all cooking programmes have: there have been a number of series over the years, each influenced by what went well or less well in previous series; readers' letters are explicitly attended to; they have sections on ingredients and on equipment, not only on recipes themselves.
Books and TV are L-act (Laurillard activity) 1. Letters from readers are L-acts 2 and 4, and lead to later series: 3. On the other hand, there is no real 2,4 within one TV series, or within one learner's learning, i.e. the feedback to the teacher works OK, but feedback to the L is really only in the personal, not the public, domain. Perhaps email/internet could offer a feedback service (particularly with webcam photos) if it were financially feasible to pay for her time doing it or for e-tutors; or one could rely on peers doing it via a common webboard. Trying it out is L-act 6 (given the recipes, which are L-act 5). 7,8 are repeated attempts at cooking in the face of feedback from the "Teacher's world", i.e. cooking success.
L modifies practice in light of theory: re-reads her stuff, and tries the recipe again, e.g. her general bit on a utensil, on eggs as ingredients, etc.
L modifies theory in light of practice: understand some procedure or description better, given experience of cooking some recipe.
Teacher modifies practice/task set in light of L's experiences: changes recipes.
Teacher modifies theory in light of experience with L's task: change book next time, modify recipes over time.
An invitation to describe the Clark/Kozma debate, to select a personal position, and to argue it.
In the Laurillard model, nothing is said about where specific activities and their content come from. How would you model this issue? How does your model accommodate various cases, such training for the car driving test, adapting teaching to individual learners' styles, allowing students to study the options they are interested in, etc.?
An invitation to describe the "management layer" extension to the Laurillard model; and to illustrate it with how it would apply to various specific examples.
How do differences in the importance they give to critical thinking manifest themselves in student's attitudes to the roles of teacher, learner, and examinations? How would you combine this issue with Laurillard's model, or alternatively is it incompatible?
An invitation to discuss Perry's view, and its effect on attitudes to the different roles. Perry's issues are buried within the "upper" level of public conceptual knowledge (the first four activities). For low Perry position, convergence would be to a single agreed truth; for high Perry position, convergence would be to a set of known alternative views, together with knowledge of how evidence differentially supports these alternatives.
Constructivism refers to ideas about learning and education, but could you make an argument that human computer interaction exhibits the same fundamental human characteristics?
Not altogether. Best shot: we are influenced by what we know; and by trying to do stuff (to act) which then causes learning as a side-effect; and by our tendency not to learn unless we need to. The interaction of personal experience and abstract concepts is in common.
This is not a topic I lectured on in itself, but it looms over many of the topics (and the background reading): e.g. deep and shallow learners are distinguished by the goal they typically mention if asked (trying to understand, vs. trying to learn). Discussion of intrinsic vs. extrinsic motivation would also be relevant. It would most basically be reasonable to discuss the fundamental point that, in HE, learning largely depends upon the learner's deliberate intention to learn, and therefore on their motivation and their ideas about how to achieve their goal. And so on the relationship between goals (what the learner is trying to do), their strategies or methods (e.g. rewriting their notes, discussing with peers), and perhaps implementation issues (how easy it is for them to find or create opportunities for each method). And also of any cases where learner motivation may NOT be important, e.g. peer discussion, as in Howe's experiments or Mazur's method for using EVS, stimulates learning without the students deliberately aiming for it.
"Discuss various ways of using electronic voting systems (EVS) in education, and whether and how each use might be justified in terms of theories of the learning and teaching process."
There's a list of pedagogical uses in one of the published papers, and you might add ones you invent or have come across. The most easily related to educational theories discussed on the course are:
There are plenty of other reasonable connections to suggest too, e.g. collecting (psychology) data using EVS links a student's personal experience, as measured by the question, to the rest of the class's and to the theory behind the questionnaire item.
"In many degree programmes, the final year project is an important feature in terms of the amount of credit given for it, the effort put in by both learners and teachers, and the learning benefit both believe comes from it. Discuss how Laurillard's model does or does not apply to it as a teaching and learning process. Are there important aspects of such projects that do not seem to be captured in the model?"
You can probably find a match to most of Laurillard's activities within a maxi project. Exposition (activity 1): reading the relevant literature rather than attending lectures, or past reading. Activities 2, 3, 4: probably discussion with the supervisor. Lots of activities 9-12: moving to and fro between theory and practice as applied to the project. Getting the actual experiment to work: activities 5-8. However Laurillard's model doesn't really describe some important features, e.g. a) the project brings together many skills probably taught and learned separately (experiment design, statistics, handling subjects, understanding the theory of an area): i.e. connecting skills learned as parts into one integrated whole; b) "authenticity": the way a project often connects what is learned within a degree to a student's outside motivation, e.g. the job they hope to do, or applications of their subject that they personally care about.
"In what ways is peer interaction different from typical teacher-learner interactions? Could learning with no peer group contact at all, i.e. no class or community of learners (e.g. distance learning without email, or having a personal tutor just for one learner), be as effective as the traditional learning in a group?"
Teacher-learner interaction is not a substitute for but a complement to peer interaction. One probably could get as good outcomes without peers, yet peer interaction is an important type of learning activity (regardless of the Laurillard model as published). Firstly, in practice peers are more likely to elicit genuine dialogue than a teacher. Typically, peers also stimulate more thinking, because a learner genuinely doesn't know whether to believe them and so has to think and reason more: peers will simultaneously be listened to more carefully, yet provoke more critical thinking. Peers can be more successful at producing explanations at the right level for a learner. Peers furthermore provide a wholly natural occasion for a learner to produce real explanations, as opposed to artificial exercises like essays, where something has to be explained to a reader (the teacher) who already knows it all. On the other hand, in learning alone or with a single teacher, the whole time is spent on what that learner needs: not on dealing with other learners' problems, or on waiting for your "turn".
"In telling a story, whether fact or fiction, or even in writing a persuasive argument, the writer implicitly requires the reader to "suspend disbelief" and take a passive role. In what ways is writing documentation for practical tasks different, and what are the consequences for developing such manuals?"
Because (with minimal manuals) the reader is active: a) they won't read it in order; b) they are continually checking with the real device and trying to bring in their own facts; c) they aren't interested in the writer's order, task, or conclusion, but in their own. That is, even in (non-minimal-manual) writing that presents an argument, the reader has to orient to the writer's endpoint even while critically questioning the supporting arguments.
"Briefly state what thinkaloud protocols are, giving an example of their use. Then name four other instruments, and for each of these give one important property they share with thinkalouds, and one contrasting property."
This was the weakest exam question because a) there is no link between HCI and education; b) a large part of it requires only reproduction, although the question format is not quite standard.
Possible answers to the second part could be:
Diaries. Similar: on the spot. Different: cheap for the researcher; subject's judgement.
Interviews. Similar: expensive for the researcher; researcher's judgement. Different: on the spot (interviews are retrospective).
Focus groups. Similar: uncontrolled; a prompt that isn't the researcher. Different: retrospective.
Checklists. Similar: ? Different: subject's judgement; retrospective; cheapness.
Questionnaires. Similar: ? Different: cheap for the researcher; subject's judgement.
Experiments. Similar: expensive for subjects (and for the researcher). Different: naturalness; open-endedness.
"First, briefly describe the use of feature checklists in HCI. Secondly, suggest one or more ways in which they might usefully be applied in improving the design of a minimal manual."
Checklists are usually applied in HCI to gather information about which commands or features are used, how frequently, and how useful they are to users of the device or software. They could be applied in two or three ways to the design of a minimal manual for a piece of software. Firstly, by surveying users of the software, they could help select which commands are most often used and/or seen as most useful; this would be done in advance of the first draft of the manual. Secondly, after a stable version of the manual was in use, they could be used about the manual itself: which entries were in fact most used and/or most useful. Thirdly, the same could be done on a previous version of the manual, to see which of its entries was most referred to.
"Describe the iterative design cycle and its central role in HCI. Is such iterative improvement important in learning and teaching too?"
The first part of this invites a straightforward description of user testing, observing symptoms, making interpretations, suggesting remedies, and changing the design. The second part addresses how the notion of reflection in education could be seen as essentially the same cycle, but applied to a learner's partial understanding of a topic rather than to designing a piece of software. It similarly corresponds to the super-principle of iteration and convergence underlying the Laurillard model, to reflection by learners as mentioned, to reflection by teachers, and to teachers modifying their expressions of content in the light of learners' expressions. It might also possibly be mapped to deep learning, if we see deep learning as continually checking any new idea against existing ones, which may then be modified.
"Simplistic ideas of learning and teaching tacitly assume that learners just do what they are told, both as to what they should learn, and what they should do to learn it. Discuss more sophisticated views of how these decisions are made, with examples from current practice. Is it desirable that learners regulate or manage themselves to a greater extent, and what might that involve?"
This question addresses the suggestion of a "management layer" parallel to the Laurillard model, but applied to decisions about learning rather than to grasping the content itself. The main feature of this idea is that in reality such decisions usually depend on both teacher and learner, even if it is more usually the teacher who proposes something. For instance, university students often study for four hours for every contact hour, and how they use that time depends on them alone. Various other ideas may also be related to it: enquiry based learning (where students more explicitly decide how to go about learning a topic); lifelong learning (where presumably learners also decide their own learning objectives); and the notion of a dimension of "proactiveness", which describes how for every teacher-driven version of an activity (e.g. requiring students to do group work in a tutorial) there is a student-driven version (self-organised study groups).
"Outline Perry's original conceptual framework and subsequent feminist developments. To what extent do you feel the latter are important for a) understanding learning in general b) understanding typical HE students?"
An invitation to describe Perry's developmental scheme, then modifications suggested by Belenky et al.
Some of the additions by Belenky et al., particularly that of the "silenced" position, are unlikely to apply to HE students (it was identified in work with abused women). However their sample of women addressed a major gap in Perry, and led to the notion of connected vs. separate knowing which certainly does apply in HE.
"Can technology cause enhanced learning? If so how; if not, then what does? Discuss these issues, if possible with examples."
An invitation to outline the Clark vs. Kozma argument, and then either continue with both positions, or pick the one you believe and justify it. If not: it's the teaching method / learning design that causes learning. If so: technology probably works mainly by making good practice affordable, e.g. classroom voting gadgets get everyone to answer even in giant classes.
"In the Laurillard model, the four 'reflective' activities are often the hardest to see being overtly supported in course designs. Why might that be? Do other theories of reflection add anything important?"
Reflective activities tend to be less visible because they don't involve both teacher and learner, i.e. they don't occur in contact hours. But you could of course require reflective writing to document them. Indeed EVS (voting gadgets) provide learner-to-teacher feedback that certainly makes the teacher reflect. However both discussions and handing in assessed work tend to cause reflection, so a course designer intent on promoting reflection could probably do so easily; the apparent invisibility (if you agree it tends to be invisible) may just mean that reflection is not a conscious priority for most teachers.
Questions 6 and 2 are rather close; but iteration and reflection aren't identical: iteration is often between teacher and learner, whereas reflection is by definition solo.