In February 2006, the Tavistock Institute was commissioned by the then Office of Science and Technology (now the Department for Business, Innovation and Skills, BIS), in association with the Economic and Social Research Council (ESRC), to design, pilot and then further refine an evaluation framework for use in evaluating Science and Society initiatives.
Context

Science in society initiatives encompass a wide variety of programmes and projects that work to integrate science and society, from large-scale awareness-raising campaigns (such as ‘Science Week’) to targeted interventions focused on specific accessibility issues, such as gender balance and the low representation of black and minority ethnic communities in science professions. They reflect a number of policy agendas and actions – such as the ‘Science and Innovation Investment Framework – Next Steps 2004-2014’ and the ‘Science, Technology, Engineering and Mathematics (STEM) Report’ – which in turn highlight a number of issues. These include concerns about the UK lagging behind in global research and technological development (RTD) investment; a decline in the numbers of students studying Science, Technology, Engineering and Maths (STEM) subjects; a perception that the UK needs to improve its science, technology and engineering skills base; and the recognition that science can play a major role in contributing to improved quality of life and in promoting social inclusion.
Objectives

Against this background, the aim of this project was to produce a framework for evaluating ‘science in society’ initiatives. Developing such a framework was timely because:
- Although a large number of programmes and initiatives are aimed at raising public awareness of science and supporting greater participation in STEM education and careers, there was little inter-connection and integration across these different actions.
- There has been significant investment in Science and Society programmes and initiatives.
- The evidence base on the outcomes and impacts of this investment, and on ‘what works’, is poorly developed.
- There is no systematic ‘evaluation culture’ associated with Science and Society programmes and initiatives. Evaluation practices vary in quantity and quality, and evaluation is not adequately grounded across the board in robust, rigorous concepts, models and methods.
Methodology

The project was undertaken in three stages:
- Production of an evaluation framework for evaluating science in society initiatives;
- Piloting the framework in a real-world evaluation of a science in society initiative; and
- Revising the framework in the light of the pilot results.