The Mood Gallery: Critique of Virtual Voyager Evaluation
January 1999
Introduction
 The Mood Gallery Team were involved in the evaluation of the "Virtual Voyager" project. As part of the exercise, we were asked to produce a critique of the development team's evaluation plan.

In their on-line document (at the time of writing), the Virtual Voyager Team proposed the following three main evaluation strategies:

  1. Observation
    The user will be issued with a task to complete: 
    Locate a particular point of information about the flora and fauna at the Causeway
    The user will be observed to see how easily he/she can navigate through the product and complete the task described above. This will be used to test the efficiency and effectiveness of the product. If the user can complete the task quickly and effectively, and has no problem locating the point and its information, then the product is simple and easy to use.

  2. Prior/New Knowledge 
    The user will be asked five questions about the Giant's Causeway, before and after they use the prototype. The answers to these questions will give an assessment of the user's prior knowledge of the location, and a comparison will be made with what they have learned after the voyage. If they can answer the following questions afterwards, this indicates that there is an element of passive learning within the experience.

    (a) How was the Giant's Causeway formed?
    (b) When was the Giant's Causeway formed?
    (c) What type of rock is the Giant's Causeway made of?
    (d) Where is the Giant's Causeway?
    (e) What legends are attached to the Giant's Causeway?

  3. Interest
    At any stage during the prototype the user can quit or return to the menu page. If the user selects 'quit' before the end, we will assume that the prototype has failed to maintain their interest. If, at the end of the voyage, the user selects 'return' to begin a new voyage or to revisit the present voyage, the product will be considered successful.

Was the Virtual Voyager Evaluation Clearly Defined?

Because of the relatively early stage of the prototype, the observation strategy as described was difficult to implement and required significant intervention. Given a more fully functioning prototype, it is easy to see that this strategy would have been more effective.

The prior/new knowledge strategy was very clearly defined. The evaluators were asked to record their prior knowledge of the Giant's Causeway. Again, this strategy would have benefited from a fuller prototype and perhaps from more, and more varied, questions. The problem with these particular questions was that anyone who had happened to visit the Giant's Causeway and its visitor centre would have been able to answer them, making it difficult to assess the effects of the software.

The interest strategy would also have been more effective with a fuller prototype. Only one "voyage" was available in the prototype evaluated, so it was not possible to begin another. Because of the nature of the navigation, the voyage was not really linear, so it would be difficult to come to an "end"; theoretically, the user could wander around indefinitely. This strategy was not very clearly defined.

A further questionnaire, which does not appear in the on-line documentation, was given at the end of the evaluation. This questionnaire was clear.

Were the Virtual Voyager Evaluation Criteria Clearly Linked to the Brief?

The evaluation criteria were all clearly linked to the brief.

Suggestions

The evaluation criteria as designed would suit a more fully functioning prototype. As mentioned previously, the range of questions asked to determine added educational value would need to be broader. More thought should also be given to the interest strategy: since the voyage is not really linear, a measure of interest that does not depend on reaching an "end" would be more appropriate.


Virtual Reality Mood Gallery
Project Notebook
Paul
Sarah
Shane
Ted
Last Modified January 12th, 1999