The global challenges of governing AI reveal our existing institutions as unequal to the problems of our day. It is imperative that these new technologies are shaped and deployed in alignment with human values and aspirations, determined not just by a narrow elite or the small pool of people creating the technology, but in an inclusive, democratic, and deliberative way.
Our core belief is that a wide diversity of everyday people — selected by sortition (lottery) to be broadly representative of all walks of life, and deliberating in conditions that enable them to grapple with complexity and find supermajority common ground — can and should shape the ongoing development and deployment of AI technologies and their impact on society.
Citizens’ Assemblies, which embody these principles of sortition, deliberation, and participation, are proven methods for expressing an informed, coherent public will on complex issues, transcending the zero-sum adversarial dynamics of typical politics. As of November 2021, the OECD had counted almost 600 citizens’ assemblies convened for public decision making around the world, addressing issues from drug policy reform to biodiversity loss, urban planning, climate change, infrastructure investment, abortion, and more.
New technologies that rely on AI are developing at an exponential pace, raising political and social questions about what kind of society we want to live in, and who gets a say in shaping these futures. This pace brings enormous potential for harm as well as societal benefit, with systemic impacts on many aspects of our everyday lives — from education to work, the economy, media and the information ecosystem, policing and justice, healthcare, and more — as well as on the international security environment in an increasingly multipolar context.
While there is increasing consensus that AI is having, and will continue to have, substantial societal impact, there is a lack of consensus about what exactly those harms and benefits might be, who will be affected by them, and which matter most (ethics, values alignment, jobs, existential threats, disinformation, etc.).
How could and should AI be regulated, and how should its societal implications be navigated? Many questions remain open: which values and principles regulation should be based upon; who should decide those criteria, and how; what type of entity should act, at what scale, and for what reasons. Various propositions have been offered: use existing regulators, empower or create new national ones, learn from previous regulatory systems, or establish new international bodies or agencies.
What is clear is that these are not just technical questions. They are deeply political and social in nature, and thus need to be subject to democratic deliberation. We are interested in who decides how AI impacts society, and how those decisions are taken. Everybody is affected by these changes, yet we cannot all be involved in every decision all the time, and the pace of change is fast. Sortition allows decision-making responsibility, and privilege, to rotate among citizens. Creating deliberative space is an intentional way to slow things down just enough to reach decisions that can attain legitimacy and take safety into account.
We believe that tech company owners and workers, experts, or politicians alone cannot be responsible for answering these inherently political and societal dilemmas. The emphasis needs to be on both the representativeness and the deliberative quality of the engagements, ensuring that the process gives people agency and dignity, leverages collective intelligence, and can result in common ground on a clear political mandate.
We do not have an active project on these questions at the moment, but if you are interested in exploring potential collaboration or support for such an initiative, please reach out.