The Humanising Machine Intelligence (HMI) grand challenge project is contributing to the design and adoption of sociotechnical systems necessary for democratically legitimate AI.
Knitting together insights from computer science, law, philosophy, political science, and sociology, HMI is shaping the debate around government regulation of AI systems; helping industry practitioners develop AI systems that comply with, and exceed, those regulatory standards; and building an international research community that supports those two goals.
HMI research centres on five themes: automated governance, personalisation, algorithmic ethics, human-AI interaction, and the philosophy of data science and AI.
As data and AI are increasingly used, by states and digital platforms alike, to exercise power over us, what does it mean for that power to be used justly? How can we design sociotechnical systems that enable legitimate AI, where power is exercised only by those with standing to do so and is subject to standards of due process and accountability?
The most sophisticated AI systems in the world ensure that your every moment online is tailored to you: personalised media, news, ads, prices. What are the consequences for democratic societies? Can we achieve serendipitous recommendations without manipulating users or unduly invading their privacy? Can we ensure that social media surfaces content that informs, edifies, and educates, rather than undermining public discourse?
AI systems can increasingly make significant changes in the world without human intervention. We need to design these systems to take our values into account. But which values? And how can we translate them into algorithmic form? What are the fundamental complexity constraints on algorithmic ethics? Can we design robotic systems that emulate compassion?
We fall into predictable errors when we interact with AI, and over time those interactions change us. What cognitive and other biases should designers of AI systems account for? How does the use of automated systems lead us to make faulty attributions of responsibility? And how do we avoid the risks of outsourcing morally significant decisions to AI systems?
Philosophy of Data Science and AI
We are helping to inaugurate a new field: the philosophy of data science and AI. Its central idea is to apply the methods of the philosophy of the sciences to data science and AI, pursued by philosophers who are deeply immersed in current AI research. Like work in other areas of the philosophy of the sciences, work in this subfield has the potential both to illuminate data science and AI for computer scientists and to make first-order philosophical advances.