As part of the MA in Art Gallery and Museum Studies course, I worked with the delightful group at Hulme Community Garden Centre to help ensure it evaluates its ample offerings in a way that includes all the communities it serves. Founded in 1999, HCGC envisions a greener Manchester, a healthier community, and a regenerated Hulme. The staff’s passions run deep. As they are fond of saying, they’re here to protect “anything that grows, wriggles, crawls, or flies.”

HCGC uses an evaluation tool, essentially a list of questions with a 1-10 ranking scale, to gather input from volunteers and people who attend its supported sessions, events, and other activities. The data collected from these surveys is meant to measure the social value HCGC provides to the community: if people who visit HCGC eat more healthily, compost more, or form greener habits, there is a measurable financial impact on Manchester and the UK. For example, healthier people visit the doctor less often, reducing NHS costs, and responsible consumption reduces the burden on rubbish collection. The questions, however, were not written clearly enough to get good results when posed to the Marrow Barrows, the group who attend supported gardening sessions at HCGC designed for people with disabilities or learning difficulties. This was a problem: the surveys were leaving out a major portion of the HCGC community.

Members of the Marrow Barrows (Image Courtesy Hulme Community Garden Centre)

After spending a lovely morning planting runner beans and getting to know the Marrow Barrows, I decided to test a few approaches. Working with the HCGC volunteer coordinator and seeking guidance from consultant Sally Fort, I drafted a new set of questions. Once I had my first draft, I headed outside on a rare sunny day in Manchester to test how the Marrow Barrows responded. My idea involved a “rig” of two sticks, one bearing positive “yes” iconography and the other negative “no” iconography, with a string tied between them and a sliding indicator hung from the string. The rig could be stuck in the dirt where the person being questioned was working, and they could respond to a question by sliding the indicator along the string.

Testing this approach brought many problems to light. Learning a new way to communicate frustrated many of the people I consulted; several of them took one look and essentially said, “no way.” Further, because the evaluation takes place outdoors, the string was too vulnerable to wind and blew around everywhere. Ultimately, the rig was a failure.

However, testing questions with the Marrow Barrows alongside their carers led me to a critical realisation: only by asking follow-up questions and getting the person to tell me details or a story could I really glean an answer. Using iconography to help explain the questions was a helpful intervention, but it only went so far. This, along with Sally’s input, led me to a triangulation approach combining input from HCGC staff, carers, and, through narrative prompts that elicit meaningful responses, the Marrow Barrows themselves. This process is still in the testing phase, but once a useful approach is identified, HCGC will use the data to strengthen its database and demonstrate its social impact.

This experience taught me that all self-evaluating institutions must be vigilant about their practices. This applies especially to museums: as we have learned over the course of the MA, museums and related funding bodies make high-level decisions based on evaluation and audience feedback, so the importance of designing evaluation systems that faithfully record all voices cannot be overstated. I already have a keen interest in using audience data to inform institutional decision-making, and working at HCGC helped me understand that evaluation must be critical and intersectional, or it doesn’t work at all.