Engaging citizens in the ethical use of AI for automated decision-making
The RSA is convening a citizens’ jury to deliberate on the ethical use of AI, and in particular, its use to help make decisions. In our first report, we make the case for engaging citizens in the ethics of AI and share a snapshot of public attitudes towards AI and automated decision-making.
Our new report, launched today, argues that citizen voice must be embedded in ethical AI. In practice, this means initiating a public dialogue, so that when it comes to contentious uses of AI, the public's views and, crucially, their values can help steer governance in the best interests of society.
The RSA's Forum for Ethical AI is applying a process of citizen deliberation to explore the rise of automated decision systems. These systems have been characterised as 'low-hanging fruit' for government, and we anticipate more efforts to embed them in future.
Automated decision systems are computer systems that either inform or make a decision about a course of action concerning an individual or business. Automated decision systems do not always use AI, but they increasingly draw on the technology, as machine learning algorithms can substantially improve the accuracy of predictions. These systems have been used in the private sector for years (for example, to inform decisions about granting loans and managing the recruitment and retention of staff), and now many public bodies in the UK are exploring and experimenting with their use to make decisions regarding planning and managing new infrastructure; reducing tax fraud; rating the performance of schools and hospitals; deploying policing resources; and minimising the risk of reoffending.
Based on the results of a survey we carried out in partnership with YouGov, as well as our own research into the growing use of automated decision systems, we propose three key issues that are particularly appropriate for public deliberation. We anticipate the deliberation will raise a number of ethical questions including, but not limited to, ownership of data and intellectual property, privacy, agency, accountability and fairness.
The ethical issues surfaced by AI may ultimately lead to new laws or policies, but they are also why we should expect organisations and institutions (in both the private and public sectors) to fundamentally change the way they operate, engage with citizens, and are held accountable to them.