The Mood Gallery Team took part in the evaluation of the "Virtual Voyager" project. As part of the exercise, we were asked to produce a critique of the development team's evaluation plan.
In their online document (at the time of writing), the Virtual Voyager Team proposed three main evaluation strategies: observation, prior/new knowledge, and interest.
Because the prototype was at a relatively early stage, the observation strategy as described was difficult to implement and required significant intervention from the evaluators. With a more fully functioning prototype, this strategy would clearly have been more effective.
The prior/new knowledge strategy was very clearly defined: evaluators were asked to record their prior knowledge of the Giant's Causeway. Again, this strategy would have benefited from a fuller prototype, and perhaps from a broader and more varied set of questions. The problem with the questions as set was that anyone who had already visited the Giant's Causeway and its visitor centre could answer them, making it difficult to assess the effect of the software itself.
The interest strategy would also have been more effective on a fuller prototype. Only one "voyage" was available in the prototype evaluated, so it was not possible to begin another. Moreover, because of the nature of the navigation, the voyage was not really linear, so it would be difficult to reach an "end": in theory, a user could wander around indefinitely. Overall, this strategy was not very clearly specified.
A further questionnaire, which does not appear in the online documentation, was given at the end of the evaluation. This questionnaire was clear.
Were the Virtual Voyager Evaluation Criteria Clearly Linked to the Brief?
The evaluation criteria were all clearly linked to the brief.
The evaluation criteria as designed would suit a more fully functioning prototype. As mentioned above, the range of questions used to determine added educational value would need to be broader, and more thought should be given to the interest strategy, adopting a less linear approach.