Changing the conversation about Evaluation

Towards a service with a moral function?

Womb, DCP Exhibiting Season, 2023

As discussions about the role of evaluation shift towards asking what contribution it can make to addressing social (in)justice, we have been asking ourselves where our evaluation practice sits in this debate.

The world seems to be hurtling from crisis to crisis. COVID-19, the cost of living crisis, wars in Europe and further afield, and the climate emergency are all shining a light on structural inequalities and creating pressure for significant and fast change. Coinciding with these events, a debate has emerged about the role of evaluation which challenges ‘traditional’ notions of the evaluator as an ‘objective’ outsider measuring change. Questions are now being asked about how evaluators can meaningfully assess complex and innovative change initiatives responding to these and other global pressures, and about the role evaluators may play in perpetuating systemic injustices as they do so, for example through (unconscious) methodological choices or the way data are interpreted and disseminated. New types of evaluation are being developed to help us re-think the value and purpose of the discipline: Patton’s principles-focused evaluation, for instance, works with complexity by making an intervention’s guiding principles the object of evaluation, while equitable evaluation sees evaluation as a tool to advance equity.

At the Institute our core mission is to focus on human relations, and it is from this tradition that we helped build the practice of evaluation in the UK and Europe. We thrive on evaluating the kind of emergent and innovative interventions many might think ‘unevaluable’, but at the centre of all our evaluation work are the people doing new things to improve others’ lives. So here’s what we have learnt about doing socially relevant evaluation:

  1. We need to be comfortable accepting that there are large parts of the human systems we’re evaluating that we won’t know or understand, at least at the beginning. So we need to choose a way of working that helps us build relationships with system actors and, through those relationships, gain a deeper understanding of the people and activities we’re evaluating, so that we can make fair and meaningful judgements about an intervention’s worth, value and impact. Co-creation approaches are deeply valuable here, but they can be uncomfortable for evaluators and commissioners alike because they challenge conventional notions about who ‘owns’ knowledge and expertise, and hence people’s roles in the social system being evaluated. Yet, implemented successfully, they allow evaluators to become highly relevant companions in complex change initiatives, supporting system learning, improvement and change.
  2. For this reason, a whole-systems approach is essential for us. The key to truly understanding change is to include in any evaluation work the different parts of the social system within which an intervention is being implemented. As evaluators we need to give people a voice in different spaces, for example by using participatory or creative methods. Power dynamics between individuals and institutions are inevitable and need to inform how methods are implemented to ensure data quality.
  3. Technical evaluation evidence is necessary to help us work out what an intervention has achieved and why. We love a well-designed experimental evaluation! However, even the best randomised controlled trial (RCT) might tell us ‘what is’; it does not tell us what to do next. More generally, COVID-19 has shone a light on the challenges of ‘evidence-informed’ decision-making, highlighting (if it ever needed highlighting) that the notion of a ‘rational’ or straightforward application of evidence in the process of making choices is deeply flawed, or at least naïve. In much the same way, using evaluation evidence involves trade-offs, often between competing values, and judgement calls. These are inherently normative debates, where facts and values interact. Seen in this light, what does it mean for us as evaluators to provide knowledge and information that is relevant and useful, and for how we see our role? As part of our evaluation practice, we’re creating and facilitating spaces where people can jointly make sense of, and draw meaning from, evaluation data through dialogue and reflection, and work out practical next steps.
  4. This leads us to the last point: fundamentally, evaluation is about learning. It allows us to do what we often don’t manage to achieve in our day-to-day work: create opportunities for stepping back, reflecting, learning and sense-making, and, through these, for greater self-awareness and continuous improvement.

Taken up in this way, evaluation activities become interventions in themselves, and evaluators need to be aware of the potential impact of their activities. As well as sophisticated technical skills, this requires knowledge of people, and of group behaviour, of the kind that the Tavistock Institute has been exploring for nearly eight decades. It’s not yet part of the mainstream, but the debates we’re currently participating in suggest this is where we might be heading.

This thoughtpiece is part of an active debate in the Institute about the meaning of our evaluation practice in today’s society. It will be iterated as our discussion progresses. If you would like to contribute, please get in touch with Giorgia Iacopini.

Giorgia Iacopini and Kerstin Junge

Principal Researchers, Evaluators and Consultants
