Earlier this year we described our work with our partner Ipsos MORI for the British Council, supporting them as they meet the growing demand for evidence of the effectiveness and value of their work.
Since then, we have travelled to three geographic regions (South Asia; the Middle East and North Africa; and the UK, where we worked with colleagues from wider Europe), delivering six training courses to 138 participants from over 15 countries and, we hope, lots of good ‘stuff’. There is more to come. We have heard about varied, complex projects and programmes, from small crafts initiatives to large-scale, multi-country and often innovative programmes, many of which address multiple issues in conflict-affected areas. British Council programmes may have several funders (FCO, DfID and local governments), who sometimes want to know about the contribution of their specific funds to change, adding layers to unpick when examining effectiveness and value. Talking about evidence and evaluation is rarely simple and, in the British Council context, raises interesting questions.
As evaluation specialists, we know that describing our outputs is not enough to tell the full story; we must ask the ‘so what’ question of our own work too. To understand the relevance and impact of the training, we asked participants (daily as well as at the end of each course), Regional Evaluation Advisors and facilitators to provide systematic feedback, which helps us refine the content and ensure it meets the needs of people delivering programmes, who are often also commissioning and managing evaluations.
So, what do participants say they have gained so far? They described lightbulb moments: understanding the importance of having a Theory of Change in place and of embedding evaluation in programme design (which had often been seen as an externally imposed demand); of programme intensity, scale and scope; and of identifying, where possible, causal linkages within the programme. Understanding the difference between logic models and Theory of Change, and especially the importance of articulating assumptions and the ‘theory behind the Theory of Change’, was critical to how they would now review or design programmes. Many valued the ‘safe space’ of a positive, constructive atmosphere and a collaborative mix of participants, particularly where there are few other opportunities to come together. There was also considerable appetite for more in-depth training on specific subjects.
Across the board, the three critical takeaways were: learning about Theory of Change; carrying out evaluability assessments (Is there anything to evaluate? Is this the right time?); and constructing Terms of Reference in order to commission, and hopefully receive, high-quality evaluations. The intention was never to create evaluators, but to offer an opportunity to develop programme designs which may well then lead to useful and useable evaluations. So far, so good.
The training concludes in early autumn, and soon after we will be asking the next of our ‘so what’ questions: what did participants do post-training to cascade core evaluation concepts, embed evaluation thinking in programmes and generate evidence to help them and the British Council understand their impact?
For more information please contact Georgie Parry-Crooke, Principal Researcher/Consultant, Tavistock Institute of Human Relations.