While scanning the recently published POSTnotes in the course of writing this rather negative post, I came across one called "Lessons from History" [pdf], which opens with:
> In the past decade, the government has repeatedly emphasised the importance of taking an “evidence based” approach to policy-making ... However, despite increasing use of evidence from the natural and social sciences, evidence from humanities disciplines such as history is not widely used. This POSTnote considers how history could help to inform decisions on key scientific and technological policy issues.

That's great to hear from a department primarily concerned with new developments in science and technology. They note (among other things) that history provides new (or old) perspectives on current problems and key examples of past successes and failures. Also noted is that although historians are often keen to engage with policy makers, there is currently little use of their research and little opportunity for them to get involved in policy development (with some exceptions).
Another paragraph notes that too great an emphasis on evidence-based policy-making can bring problems of its own (evidence can be patchy, knowledge changes over time, experts interpret evidence in different ways, and so on).
This has some similarities with a paper I'm reading at the moment by Jake Chapman (not the artist), published by Demos, on Systems Thinking and its application to policy making (be warned: the wiki article looks pretty opaque, but the Demos report is very readable).
The basic idea is that policy making is too mechanistic ("machinery of government", "policy levers" etc. — although I think that's a bit rich from someone who advocates a "Systems Thinking" approach) and reductionist. One example of this tendency in public services is talk of "delivering" healthcare, when in fact all public services require the "customer" to be an actively engaged citizen (which may include changing our behaviour).
Chapman also criticises the evidence-based policy approach for its lack of contextual understanding:
- Evidence from one particular study is used to justify decisions in completely different circumstances
- Its use assumes linear cause and effect relationships
- The quantitative evidence on which decisions are based only measures outputs, concealing other variables
One of the basic points of this approach (I'm only on page 35 of 94) is that reductionist thinking breaks a problem down into chunks simple enough to be analysed; once each chunk is understood, the entire system is reconstructed, so an understanding of the whole is built from an understanding of each component.
Systems Thinking says that in some cases this causes problems, because it's the interconnections between the components that are critical to understand. The Systems Thinking alternative is to abstract "up a level", which still simplifies your understanding of the problem (the detail is lost) but retains the important interconnections.
Early days, but if Systems Thinking proves helpful to the Design & Behaviour project, I'll cover this again on the blog.
It would be great to hear of any experience any of you might have in putting this sort of approach to work.