The Silicon State

Adrian Brown FRSA

  • Fellowship
  • Leadership
  • Technology

Advances in cognitive computing are challenging the traditional logic of devolution, marketisation and privatisation

Has Margaret Thatcher's favourite philosopher finally met his match? Friedrich Hayek famously argued that centralised planning was doomed to fail because a bureaucracy could never gather and interpret sufficient information to make sensible decisions. If attempted, the outcome would always be waste and resource misallocation on a grand scale. Instead, wherever possible the invisible hand of the market should be preferred to the dead hand of the state. 

Hayek won a Nobel Prize for his efforts, but more importantly his thinking went on to influence generations of economists, politicians and policymakers right up to the present day. In the fifty years since he first developed his ideas, the policies of devolution, marketisation and privatisation have been embraced by governments around the world, as well as promoted by international bodies such as the International Monetary Fund and the World Bank.

But Hayek is wrong. 

Interestingly, he wasn't always wrong, but something dramatic has happened in the last few years that should give his proponents pause for thought. The rise of big data and advanced analytics means that Hayek's fundamental contention, that the volume of information is simply overwhelming for any central body, is increasingly open to challenge, and the implications for policymakers will be profound.

How centralisation can be a winning strategy

A revolution powered by big data and cognitive computing has been heralded in the private sector for several years. Companies like Google and Facebook are famously founded on business models that require the massive collection and interpretation of data at a scale simply unimaginable to Hayek, but more traditional industries are being disrupted too.

For example, real estate companies that oversee large national portfolios of properties in the US have moved away from a decentralised approach, in which local managers set rents, towards centralised datasets and price-setting software. The centralised approach consistently outperforms the devolved solution. Similar examples can be found in industries ranging from logistics (where the routes of UPS drivers are determined by software rather than by the drivers themselves) to retail (where supermarket staffing rotas and stocking schedules are set by algorithms that predict customer demand).
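
To see why pooling data centrally can win, note that a model fitted on the whole national portfolio has far more evidence behind each decision than any single local manager. Below is a minimal sketch of the idea in Python; the features, the linear demand model and every number are assumptions made for illustration, not details of any real firm's pricing software.

```python
import numpy as np

# Illustrative only: one rent model fitted on the pooled national portfolio,
# in contrast to each local manager seeing just their own handful of units.
rng = np.random.default_rng(0)

# Pooled observations: [square_metres, local_vacancy_rate] -> achieved rent.
n = 5_000
X = np.column_stack([rng.uniform(40, 120, n), rng.uniform(0.0, 0.2, n)])
rent = X @ np.array([12.0, -3_000.0]) + 500 + rng.normal(0, 150, n)

# Centralised price setting: a single least-squares fit over all properties.
A = np.column_stack([X, np.ones(n)])
coefs, *_ = np.linalg.lstsq(A, rent, rcond=None)

def suggest_rent(square_metres: float, vacancy_rate: float) -> float:
    """Centralised rent suggestion for any unit, wherever it is located."""
    return float(np.array([square_metres, vacancy_rate, 1.0]) @ coefs)

print(round(suggest_rent(75, 0.05)))  # e.g. a 75 square-metre flat, 5% vacancy
```

The point of the sketch is the sample size: each local manager learns from dozens of lettings, while the central model learns from thousands.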

Situations that lend themselves particularly well to such an approach are likely to involve repeated decisions, based on diverse information, to achieve measurable outcomes. When this process is handled by a so-called learning algorithm, with the capacity to draw inferences between action and outcome, the result is turbo-charged learning at a level that rapidly outstrips human capabilities. See for example DeepMind's mastery of a classic Atari video game – from absolute beginner to world champion in a matter of hours. 
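
For readers curious what an algorithm that "draws inferences between action and outcome" looks like, here is a minimal sketch of tabular Q-learning, one of the simplest such methods. The toy environment and all parameter values are assumptions for illustration; DeepMind's Atari agent paired the same idea with deep neural networks.

```python
import random

# Tabular Q-learning over a handful of states and actions: each decision
# produces an outcome, and the outcome updates the value of that decision.

STATES = range(5)
ACTIONS = range(2)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # action-value estimates

def step(state, action):
    """Hypothetical environment: returns (next_state, reward)."""
    reward = 1.0 if action == state % 2 else 0.0  # arbitrary toy rule
    return random.choice(list(STATES)), reward

state = 0
for _ in range(10_000):  # repeated decisions, each one a learning signal
    # Explore occasionally; otherwise exploit the current value estimates.
    if random.random() < EPSILON:
        action = random.choice(list(ACTIONS))
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    next_state, reward = step(state, action)
    # Update the estimate for (state, action) towards the observed outcome.
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
    state = next_state
```

Because every update ties an action to its observed outcome, and the loop can repeat millions of times, the estimates improve at a pace no human decision-maker could match.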

The public sector is awash with problems that are particularly amenable to cognitive computing given the complexity of public services and the social problems they attempt to address. For example, advanced traffic management systems are already used in countries like Japan to help reduce congestion by automatically controlling traffic lights and speed restrictions. Such applications are relatively uncontroversial but as technology advances it will be increasingly possible to apply similar approaches to interventions in health, education, welfare and justice. This isn't a question of 'if' but rather 'when'. 
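
To make the traffic example concrete, the sketch below allocates green time in proportion to the queue reported on each approach to a junction. Real deployments, including Japan's, are far more sophisticated; the function, names and numbers here are assumptions for illustration only.

```python
# Toy adaptive traffic signal: allocate green time in proportion to the
# queue observed on each approach.

MIN_GREEN, CYCLE = 10, 60  # seconds: floor per approach, total cycle length

def green_times(queues: dict[str, int]) -> dict[str, float]:
    """Split one signal cycle across approaches by relative queue length."""
    spare = CYCLE - MIN_GREEN * len(queues)
    total = sum(queues.values()) or 1  # avoid dividing by zero on empty roads
    return {road: MIN_GREEN + spare * q / total for road, q in queues.items()}

# Sensors report 18 queued vehicles north-south and 6 east-west.
print(green_times({"north_south": 18, "east_west": 6}))
# -> north-south receives three times the spare green time of east-west
```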

The silicon state

The implications of all this for policymakers are many and varied. I'll focus on just three. 

Firstly, the assumption that markets and devolution are preferable to greater central control can be challenged afresh on practical and empirical grounds rather than relying on philosophical or political arguments. For example, it is no longer clear that multiple local commissioners are better placed to make decisions about public services than a centralised agency. Similarly, how certain are we that multiple outsourced providers are better placed to aggregate the knowledge required to improve public services than a single in-house provider?

Secondly, the emphasis on data and outcomes in public services gains a new resonance. Data is the fuel on which advanced analytics depends, and higher-quality, richer datasets lead to deeper insights and better decisions. We are no longer limited by the data-absorption capacity of humans, or indeed by the need for data to be presented in a structured manner.

Policymakers should therefore redouble their efforts to ensure that as much information as possible about the inputs, activities and outcomes of public services is collected and shared in a way that unleashes innovation and service improvement. 

Thirdly, the moral and privacy issues raised are profound but, given the march of technology, unavoidable, so we must start grappling with them now. Would it be acceptable for my unemployment benefits and sanctions to be determined by an algorithm predicting my propensity to work? Will I always be able to tick a box to opt out and retain complete control of my personal data, even if it means less effective, more costly public services?

In his Nobel acceptance speech Hayek observed that if man is to avoid doing more harm than good he must learn that "he cannot acquire the full knowledge which would make mastery of events possible". The contrary is now increasingly true, and policymakers must quickly learn how to harness the power that such knowledge can offer us all.

 

Adrian Brown is Executive Director of the Centre for Public Impact

3 Comments

  • A very interesting piece.  I certainly agree with the second and third points.

    Much less sure about the first.  In addition to the interesting points Olly Nguyen has raised:

    (1) don't your commercial examples rest on a (broadly) virtuous cycle in which harnessing knowledge about customers to serve them better leads to more satisfied customers and higher prices? In contrast, bureaucracies, even if benign in intent, lack the clarity of the profit-maximising motive: the goals they are trying to achieve are much more complex and (even with much better data) often difficult to measure objectively. Moreover, it is often argued, and my personal experience certainly bears this out, that bureaucracies and politicians don't necessarily pursue the public interest (leaving aside the ambiguities of how it can be defined), but their own or the interests of groups in society who have sharper elbows. 
    (2) Government decision-making should typically involve both finding the most efficient and effective mechanism for achieving a determined goal, and making decisions about priorities and values. To take your traffic management example, I am sure technology helps ensure that policy goals about how our streets work get translated more efficiently than previously into how signals work and people move around our streets. However, can data really take over policy decisions on such matters as the relative priority we give in our streets to private vehicles, public transport, people walking and people cycling? Though data may enable more informed debate than in the past, such matters are the proper sphere of citizen advocacy and politics. Even more so for issues such as whether we reduce the DWP budget by cutting tax credits or benefits for better-off pensioners!
    (3) In a report published by GovernUp earlier this year (http://tinyurl.com/pw6c4tt) I argued that the case for localism rests on both efficiency and restoring citizen connection with Government and policy. Even if the efficiency argument is weakened by big data, I doubt the citizen connection one is. The opinion research indicates people want decisions taken by people closer to them geographically. They may now choose to manage more of those interactions via social media than face-to-face contact but, coming back to the traffic example, there is much more of a chance of me engaging in a debate with the council's transport executive member and other local residents via social media than being able to do so with Patrick McLoughlin!

  • Really fascinating...I wonder if it will be a reality and with what effects? Challenges x3: 

    1) Are govts equipped to handle big data effectively? Reliance on data that is not well understood has echoes of CDS/CDOs etc. in the financial crisis.

    2) On the softer side: can interventions in areas like healthcare/education/justice really be addressed in terms of measurable outcomes? How would we preserve interventions that offer no ostensible improvement in quant outcomes but deliver genuinely felt qual results? Dementia sufferers, for example.

    3) Assume this was implemented: logically, does the data gathered change over time from being heterogeneous and independent (in other words, realistic) to data generated by a controlled economy? Seems like there could be some circularity in the system. Would this even be a problem?

    Nevertheless, it would be great to hand local decision-makers much greater information: a central repository coordinating, but analysis and decisions at the local level?

  • In what way will humanity be responsible for its own future?

    What will be the psychological and social effects?
