Evaluation methodologies for archaeological museums and sites

Laia Pujol-Tost (GR)
The discussion about the different possibilities of ICT devices for open-air museums (see Heritage Presentation and New Media in Cultural Heritage settings) is pointless, or at least headed for partial failure, if we do not know what visitors expect from a visit and whether they are satisfied with the current experience. This is what evaluations aim to find out.

This presentation was given at the Zeitgeist meeting in Bellaterra & Sant Llorenç, Catalonia, during a class on new media and reaching/teaching the public.

There are three kinds of evaluation, depending on the moment at which they are carried out:

Front-end: at the beginning of a project, you gather information about potential visitors (characteristics and interests). This is also called a “visitor study”. Typical tools are questionnaires at specific nearby locations (the town, other cultural facilities), focus groups with the community, or phone interviews for non-visitors.

Formative: while you are developing the project, you may want to test whether your proposal will succeed. This involves selecting a target group and using questionnaires, focus groups, or observation of tasks or of the use of prototypes.

Summative: when an exhibition is open, you verify whether the goals (with regard to visits, learning, etc.) are achieved. It can also be considered “remedial” if, having detected negative issues, remedial solutions are undertaken, knowing that the extent to which changes can be made is limited.


There are also different kinds of data:

Quantitative: numerical, general and standardized; allows statistical treatment (percentages and correlations).
Qualitative: verbal or observational description; does not allow statistical treatment but is more fine-grained and provides details.
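
As a toy illustration of the statistical treatment that quantitative data allows, here is a minimal Python sketch; all names and figures are invented for the example:

    from collections import Counter
    from statistics import correlation  # Pearson's r; available in Python 3.10+

    # Invented multiple-choice answers ("How did you hear about the museum?")
    answers = ["friends", "web", "friends", "school", "web", "friends"]
    for option, n in Counter(answers).most_common():
        print(f"{option}: {100 * n / len(answers):.0f}%")

    # Invented paired variables: visitor age and minutes spent in the museum
    ages = [25, 34, 41, 19, 56, 62]
    minutes = [35, 48, 50, 30, 70, 65]
    print(f"Pearson r = {correlation(ages, minutes):.2f}")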


Evaluation methods

There are also different methods, depending on the kind of information sought:
1. Observation:

Data is gathered by:
• Discreetly following visitors during their visit.
• Video recording (the observer does not intrude in people’s visit and analyses can be performed at any time, but it raises ethical issues).
• Consented tracking (the evaluator follows the visitor during the visit while the visitor “thinks aloud”).
Standard indicators have been defined (though any other can be used); a sketch of how to compute them follows below:
• Passing: accessibility of an exhibit in the general visit path.
• Attracting power: percentage of passing visitors that stop.
• Holding power: time spent at the exhibit or activity by the visitors who stop.
• Tracking: (a more qualitative measure) general path and behaviour.

How do you design this evaluation strategy? With a standardized sheet including the museum’s floor plans, where you draw visitors’ paths, write their timings and describe their behaviour through a set of typified actions.
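
Here is a minimal sketch of how the first three indicators could be computed; the tracking-record format (visitor id, whether they stopped, seconds spent) is an assumption made for the example, not a standard:

    # Hypothetical tracking records for one exhibit:
    # (visitor_id, stopped at the exhibit?, seconds spent if stopped)
    records = [
        ("v1", True, 45), ("v2", False, 0), ("v3", True, 120),
        ("v4", True, 30), ("v5", False, 0), ("v6", True, 75),
    ]

    passing = len(records)                # visitors whose path passed the exhibit
    stops = [r for r in records if r[1]]  # visitors who actually stopped

    attracting_power = 100 * len(stops) / passing          # % of passers-by who stop
    holding_power = sum(r[2] for r in stops) / len(stops)  # mean time spent by those who stop

    print(f"Attracting power: {attracting_power:.0f}%")
    print(f"Holding power: {holding_power:.0f} s")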
2. Questionnaire:

If self-administered, it is less resource-consuming, but it produces more errors and biases and takes more time to reach a suitable sample.
Different kinds of questions:
• Open: just pose the question and leave a blank space. Open questions are aimed at discovering new information; they do not allow statistical treatment and extrapolation, but the answers can be categorized.
• Closed: dichotomy (yes/no); multiple choice (quantifies the appearance of pre-defined categories); Likert scale (degree of satisfaction or agreement, from 1-5 or 1-7, always with both number and word). Closed questions are general and standardized and allow statistical treatment.
Questions related to:
• “Demographic”: aimed at describing audiences and obtaining a visitor or non-visitor profile (gender, age, origin, education, group composition, interest in archaeology, frequency of museum visiting, motivation to come, experience with computers, opinions related to what you are planning to do – e.g. the introduction of ICT).
• Satisfaction: which activities were carried out during the visit, and what was liked/disliked with regard to exhibits and space.
• Effectiveness of mediators: comparison between exhibits with regard to engagement,
accommodation of group needs, learning, relevance to daily life, etc.
• Learning outcomes: these are very diverse, since they might measure the degree of understanding, memorization or meaningfulness with regard to factual, methodological or attitudinal knowledge. They may use specific test questions (to evaluate immediate memorization or understanding) or ask for the first thing that comes to visitors’ minds (to evaluate impact). In formal experimental evaluations, researchers use the pre-/post-test method with a control group (sketched below). To see middle- or long-term results, contact the visitors again later and ask the same questions.
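
As a minimal sketch of the pre-/post-test logic with a control group, assuming invented scores on a 0-10 test; a real study would use much larger samples and a proper significance test:

    from statistics import mean

    # Invented test scores (0-10) before and after the visit
    treatment_pre  = [3, 4, 5, 2, 4]   # group that used the new exhibit/application
    treatment_post = [7, 6, 8, 5, 7]
    control_pre    = [4, 3, 5, 4, 2]   # group that visited without it
    control_post   = [5, 4, 6, 4, 3]

    def mean_gain(pre, post):
        """Average per-visitor improvement between pre- and post-test."""
        return mean(b - a for a, b in zip(pre, post))

    print(f"Treatment gain: {mean_gain(treatment_pre, treatment_post):+.1f}")
    print(f"Control gain:   {mean_gain(control_pre, control_post):+.1f}")
    # A clearly larger gain in the treatment group suggests the exhibit helped learning.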

3. Interview:

More qualitative, it is a semi-structured conversation. Starting from specific questions aimed at obtaining specific information, you can adapt them to the interviewee to go deeper into specific details.
The interviewer(s) control the sample (avoiding errors and biases), but it is more resource-consuming.
Write the answers on a questionnaire sheet, or use voice recording to make the conversation more fluent (though this is probably less comfortable for the interviewee).
The subjects are similar to those of questionnaires, but the questions are more open.

4. Focus group:

The most qualitative tool, aimed at going deeper into specific aspects through semi-structured group conversations or through exercises (free listing, drawings, association schemes, etc.).
5. Cybermetry:

Added with the introduction of ICT.
It follows users’ paths and actions in onsite or online applications (log analysis): number of visits, duration, days of the week and rush hours, domains/countries of the visitors’ hosts, most viewed pages, entry and exit pages, browsers used, keywords used, errors.
Analysis is automatic thanks to specific software that does all the statistics (Wikipedia keeps a list of web log analyzers organized into open-source, proprietary, mixed and hosted tools).
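
To make the idea concrete, here is a minimal sketch of what such analyzers do under the hood, assuming invented log lines in the common log format; it counts page views and distinct hosts:

    import re
    from collections import Counter

    # Invented server log lines (common log format); pages are hypothetical
    log_lines = [
        '66.249.1.1 - - [10/Mar/2009:10:15:31 +0100] "GET /timeline HTTP/1.1" 200 5120',
        '66.249.1.1 - - [10/Mar/2009:10:17:02 +0100] "GET /archaeolabo HTTP/1.1" 200 2048',
        '81.23.4.7 - - [10/Mar/2009:11:01:45 +0100] "GET /timeline HTTP/1.1" 200 5120',
    ]

    pattern = re.compile(
        r'(?P<host>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "GET (?P<page>\S+) [^"]*" (?P<status>\d{3})'
    )

    pages = Counter()
    hosts = set()
    for line in log_lines:
        m = pattern.match(line)
        if m:
            pages[m["page"]] += 1   # tally views per page
            hosts.add(m["host"])    # track distinct visiting hosts

    print("Most viewed pages:", pages.most_common())
    print("Distinct hosts:", len(hosts))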


An Example: The Ename Provincial Archaeological Museum

http://www.enamecenter.org/en
An archaeological interpretation centre opened in 1998: a museum and a site with AR. It has three different exhibits: “Timeline”, “The feast of a thousand years” and “Archaeolabo”.

Research goals:

The usefulness of three different kinds of interactivity.
Comparison of the social and spatial use of the exhibits.

Research methodology:

Comparison of observation and interviews.
Observation: a video-surveillance system and an adjusted sheet (individual/social behaviour, learning, usability).
Interviews with visitors: demographic data, heritage and computer experience; preferences about presentation methods; comparative specific questions about the exhibits.
Interviews with guides and teachers (about the visit and the usefulness of ICT).

Results:

Spatial relationship between VR and the rest of the exhibits: the VR was perceived as a final complement, but its attracting power depended on group size and gender.
People do not use the instructions; training happens randomly, so the application is underused.
PC paradigm: interaction with the interface was individual (the man or the children), while the rest of the group (the woman) watched the big screen.
Useful for guided tours (one user, with the images as a basic illustration).
Comparison of interactivities: “VR is suitable to learn about the past” (images need to be complemented with verbal discourse).

Lessons learned:

The video-surveillance system was the most suitable because:
• The observer was not seen, so visits were not disturbed (though this raises ethical & privacy issues).
• Many visitors could be tracked at a time.
• Recording timings was easier.
• Unfortunately, the images could not be saved, so no detailed analysis was possible.
Importance of contrasting the information from both methods (e.g. with regard to usability, the answers to the questionnaires could not be trusted on their own).
Standardization of observations allows statistical treatment, but big samples are needed to go beyond the pure recording of events and see trends.
Importance of human mediators: they need to be taken into account (observation, interviews).


Conclusion and tips

We are not interested in purely quantitative success but in providing good informal-learning (“edutainment”) experiences. Learning is not the acquisition or memorization of knowledge; it is more related to a general awareness/comprehension linked to relevance (previous interests and knowledge) and to emotional/sensorial impact.

The time spent is not only a function of the activity proposed (which must try to accommodate a diversity of audiences) but also of the visitor’s competence (previous knowledge and skills) and, nowadays, of what has been called e-literacy (the capacity to use ICT as a learning tool).

Importance of “triangulation”: comparing/contrasting the information from both methods in order to verify hypotheses and explain results (checking whether the observation and questionnaire sheets correlate); a sketch of this cross-checking follows.
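
A minimal sketch of such cross-checking, pairing hypothetical per-visitor records from observation and from the questionnaire; the variables and values are invented, echoing the Ename usability lesson above:

    # Hypothetical per-visitor data from the two methods
    observed_success = {  # from video: did the visitor manage to use the application?
        "v1": False, "v2": True, "v3": True, "v4": False,
    }
    said_easy = {         # from the questionnaire: did they say it was easy to use?
        "v1": True, "v2": True, "v3": True, "v4": True,
    }

    # Visitors who reported ease of use but visibly struggled: their usability
    # answers should not be taken at face value.
    mismatches = [v for v in observed_success if said_easy[v] and not observed_success[v]]
    agreement = sum(said_easy[v] == observed_success[v]
                    for v in observed_success) / len(observed_success)

    print(f"The two methods agree for {agreement:.0%} of visitors")
    print("Answers to treat with caution:", mismatches)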

Working with questionnaires and interviews:

Always try to find a compromise between how open your questions are and the treatment needed later to prepare your database for analysis.
Introduce yourself, explain that it will not take long, and briefly describe the goals and how useful their contribution will be.
Never start with demographics (some visitors feel uncomfortable and will refuse to answer).
Ask answerable questions: do not ask visitors to think like archaeologists or museologists (ask what they liked or disliked; what they would have liked to find that was not there; what they would like to do or find in such a CH setting).


Laia Pujol-Tost, PhD, is a Project Officer at the Acropolis Museum, GR.

Sources: 

Borun, M. and Korn, R. (1999). Introduction to museum evaluation. Washington DC, American Association of Museums.

Diamond, J. (1999). Practical evaluation guide. Tools for museums and other informal educational settings. Walnut Creek, Altamira Press.

Fink, A. (1995). The survey handbook. Thousand Oaks, SAGE Publications.
