This was the question raised last night at the RSA, where Encyclopaedia Britannica were sponsoring a talk. Their Managing Director, Ian Grant, emphasised the difference between ‘search’ (a list of results) and ‘research’ (a broader and deeper process of inquiry).
The debate featured a spat between me and Professor Stephen Heppell, one of the world’s leading commentators on learning and technology. Stephen is profoundly critical of the way Government is running our schools. Indeed, he asserted that every example he had seen of innovation and experimentation in schools had been a success, if for no other reason than that it couldn’t possibly be worse than what it replaced.
I have a great deal of sympathy for Stephen’s analysis and even more for his progressive vision for schooling. We fell out when I asked how a more devolved, innovative, child-centred approach could ensure success in ‘average’ schools.
My argument was that while visionaries like Stephen tell inspirational stories about what is possible when great schools and teachers aim higher, policy makers will always be obsessed with how systems ensure steady improvement at the average and with action to tackle underperformance.
I am all too willing to recognise that the current structure of school and student appraisal could be improved, but that doesn’t mean we don’t need a system. The goal is systems that are light touch, that encourage local adaptation (as with the RSA’s Opening Minds curriculum framework) and that provide useful information to practitioners and students as well as to regulators.
The themes of this discussion were picked up again this morning in the debate about the Government’s plan to give doctors an annual competence test.
I remember from my own time in Whitehall shocking statistics showing massive variations in costs, treatment rates, and performance of hospitals, departments and primary care practices. So I support the initiative but – returning to yesterday’s theme – it is important to avoid the obvious dangers in a policy like this.
The first is over-regulation leading to a loss of autonomy and blind conformism – this is what some critics mean when they warn of the danger of ‘defensive medicine’.
This is certainly what seems to have happened to much teaching practice over the last decade.
The second is a system which is easy to manipulate, leading to doctors being rated not on their actual performance but on their ability to do well in the competence test. This is what some local authorities would say has happened over time with the star rating system for council services.
This second problem should be seen as an endemic weakness reflecting the powerful impact of Goodhart’s Law, which states that the relationship between two performance variables will start to disappear as soon as one is used as a proxy for the other.
The oft-cited example here is that the relationship between academic excellence and publishing articles in refereed journals became weaker as soon as the Research Assessment Exercise used the number of articles as the basis for scoring academic performance.
Not only did many journals of questionable quality spring up, but academics tended to focus on low-value, specialist, incremental research, just about good enough for a journal, rather than bigger, bolder, more accessible work that took longer to pay off and carried a higher chance of failure.
So, however good the new system for doctors, it will need to be continually reformed over time as these various policy tendencies take effect.