A green paper for the role of evaluation in the 21st Century?


Posted 3 March 2008

What would a green paper for the role of evaluation in the 21st century look like?

This was the topic of one of many sessions at a Tavistock Institute event. Among those attending were members of the international research and evaluation communities, and the event provided a timely opportunity for self-reflection in anticipation of the challenges ahead.

Evaluation has formed a major focus of work at the Institute for the last 20 of its 60 years, with the diverse traditions of the Institute, such as action research, systems theory and psychodynamics, helping to develop a distinct, multidisciplinary and collaborative approach. However, recent shifts in both the market and public policy contexts in which evaluation activities take place (both nationally and internationally) have meant that the Institute is having to question its approach and anticipate what society's future expectations of evaluation might be.

Participants in the workshop considering the potential for a ‘green paper’ for evaluation had a number of inputs to consider throughout the day: Peter Johnston, Head of Evaluation and Monitoring at the Information Society and Media DG at the European Commission, shared his perspective on key challenges for evaluation at the Commission and offered glimpses of the wider public policy context.

Elliot Stern, editor of ‘Evaluation’, later reflected on the history and contribution of the journal and charted its development in parallel with that of the wider evaluation community, moving from a specialised and niche practice to the crowded marketplace in which today’s evaluators operate.

The green paper discussion was introduced by Tavistock Institute Associate Dr Joe Cullen with the support of researcher and consultant Kerstin Junge. Participants were challenged to question the structure and accepted authority of current evaluation approaches and invited to suggest an alternative offer. Introducing the topic, Dr Cullen questioned the extent to which traditional models of evaluation, which have often relied on an illusion of consensus about the needs, values and purposes of the programmes or organisations under scrutiny, were ‘fit for purpose’ in the rapidly evolving and increasingly diverse societies of the 21st century.

The level of ideological, motivational and cultural diversity among the different stakeholders in policies and programmes may be masked by such traditional evaluation approaches which may also fail to capture the extent to which programmes themselves and their objectives shift from their original conception over a relatively short period of time. If evaluators aspire to work in truly formative ways, then programmes and their associated evaluations will become ‘moving targets’ which evaluators must find ways to take account of and respond to.

Perhaps predictably, the workshop provoked more questions and challenges than answers. Outlined below are just a few of the issues which workshop participants began to explore on the day.

One key challenge for evaluators arises from the demands of a changing public policy context, in particular the increasing alignment of evaluation with short-term spending review cycles. Another is the growing demand within the public sector for performance management, accountability and quantification, which makes learning-oriented evaluation strategies increasingly hard to undertake.

These factors may lead to a growing gulf between clients’ and evaluators’ perspectives. Time pressures on clients can mean they are unable to wait for the outcomes of an evaluation before implementing changes or devising policy. Evaluations may fail to focus on the most relevant policy priorities, or priorities may change over the course of an evaluation. Evaluation may also be used, one participant suggested, as an avoidance of or ‘defence’ against action, complicit in the failure to catalyse change (so often attributed by evaluators to forces outside their control). These views were juxtaposed against the familiar frustrations of evaluators working to short time frames, with reduced opportunities for innovation, and witnessing clients failing to use the learning from evaluations or, worse still, not taking the time to read them.

With both evaluation clients and evaluators present in the workshop, the discussion served as a reminder of the need to recognise and reconcile these different values, and to find ways of adapting and developing working practices that address the needs of public sector management while facilitating the use and uptake of learning. Returning to the original task of the workshop, the question seemed to be not ‘what is the role of evaluation in the 21st century?’ but rather ‘how can evaluation’s potential role be more fully realised in the 21st century?’ Evaluators and clients need to address key challenges: how can we continue to embed learning within processes designed to serve the needs of accountability? How can we maintain rigour and impartiality amidst the politicisation of evaluation? And how (or whether) can evaluation be used to sustain a reflective dynamic within the programmes or organisations it works with?

One response voiced during the workshop was an aspiration for closer collaboration and dialogue between clients, theoreticians and evaluators, perhaps helping clients to understand the broader opportunities offered by innovative or longer-term evaluation and ensuring that qualitative evaluators recognise the value of ‘hard’ data to their clients. Another suggestion was to incorporate processes into evaluation strategies which, rather than aiming to inspire action, necessitate it by their very design. Examples of such processes included the use of action learning cycles and less linear models of evaluation. However, the challenge remains of how to encourage clients to buy into these types of methods.

A number of points were also raised about how to ensure that the products of evaluation are made accessible to a range of stakeholders. As the funding for evaluation is finite, thought needs to be given, well before the end of the evaluator’s involvement, to what will be left with clients and participants in the programme or project after the evaluation is over and the evaluator goes home. Too often, despite innovations in technology, the dominant product of evaluation continues to take the form of a standard report, which offers little alternative to passive engagement and may even become a block on learning, at odds with calls to catalyse reflective practice.

It was noted that the products of evaluation rarely match the creativity and sophistication of the complex processes from which they stem. Many recognised a need to develop products which increase the accessibility and utilisation of evaluation findings while addressing the challenge of representing multiple and shifting realities and values. In trying to balance the needs of accountability with those of learning, evaluators and clients both need to consider whether the end result of an evaluation might be a product, or products, that straddle the two more effectively than a written report. It was suggested that answers may lie in exploring the opportunities of new and ever more interactive technologies and social networking, and in developing other (creative?) reporting formats.

It was clear that, in a relatively short session, it was hard to arrive at a clear conception of what a green paper on evaluation for the 21st century would look like. However, some of the issues that such a paper could usefully address were identified. These include a clearer statement of the need for, and exploration of, processes that ensure effective communication between clients, practitioners and theoreticians, in relation both to the design and to the delivery of evaluation strategies. They also include the need to set out clear expectations about, and to explore new approaches towards, the utilisation and potential of evaluation, not only for accountability but also for capturing learning, both for present and for future interventions. There was clearly an appetite among the participants for further exploration of new approaches to reporting and disseminating evaluation results that make the most of the communication channels becoming available in the 21st century.
