How often do we ask ourselves: is what I’m doing truly working? It’s a simple question, and one that makes intuitive sense to ask if we ever intend to learn from our mistakes and improve the impact of our work. Yet it is something we tend to ignore time and again. At both an individual and an institutional level, many of us are resistant to the idea of evaluating the policies and practices that shape the effectiveness of our public services and dictate their value for money. Indeed, these days it is quite rare to hear or see the term ‘evidence-based policymaking’.
For youth services at least, Project Oracle is attempting to change this. The initiative was set up by the Mayor of London’s office to help smaller organisations working with young people evaluate their programmes against ‘rigorous and internationally recognised’ standards. Once organisations have signed up, they receive free advice and support on how to assess their work, and are guided through different ‘levels’ of evaluation that become progressively more sophisticated. The project has the added benefit of creating a sound structure for collecting and disseminating cross-comparable data that everybody in the sector will find meaningful – at least in the capital.
While attending a recent seminar at NESTA to learn more about the project, I heard a number of interesting points raised about the obstacles to undertaking evaluation schemes and the subsequent difficulty of making use of the data once it is collected. Many attendees, for instance, felt there are cultural differences between people working in the voluntary sector and those in the academic and policy worlds: academics may insist on gathering quantitative data, while service practitioners find anecdotal evidence far more useful. Another key issue raised was that although many funders are willing to pay for an evaluation of an organisation’s operations, only a handful actually commit to providing the resources to implement the recommendations that emerge from the research.
While all of this is no doubt interesting and useful, it felt as though the conversation side-stepped one of the biggest impediments to the initiation, quality and utility of evaluation schemes: the simple fact that many of us have difficulty accepting defeat and apportioning fault. Whether for a frontline practitioner or a senior manager, taking part in an evaluation process may open up a Pandora’s box of knock-on effects, which at best may lead to a radical restructuring of the organisation and at worst to the termination of projects and, ultimately, job losses. Vested interests aside, there is also the challenge for service users and colleagues who may find themselves in the uncomfortable position of saying, albeit honestly, that someone’s efforts and practices are ineffective. It is one thing to acknowledge failings in our own work; to highlight the shortcomings in someone else’s takes some courage.
This is doubly important because there has rarely been a more pressing time to identify failure in our work and be open to new approaches. Whereas public service innovation was previously characterised by the sharing and adoption of universally recognised ‘best practice’ at home and abroad, the next stage is arguably going to be an era of localised, radical experimentation. In other words, organisations providing public services are likely to be encouraged to become their own 'innovation labs', testing different methods and practices until they land on what works best for them. In practice, this could mean a school experimenting with different ways of teaching maths, or a GP consortium trying out innovative new health treatments with its patients.
Wherever this new wave of experimentation and rapid evaluation takes place, it will demand that service users, practitioners and senior managers have a mindset that is comfortable with ambiguity and unafraid of failure.
It could be said that in the future there will be two sides to the coin of public service transformation. The first is that success depends on learning what works and adopting those approaches; the second is that we learn what doesn’t, and ensure those approaches gracefully bow out. To date, it seems we have focused too much on the former at the expense of the latter.