What Do We Talk about When We Talk about AI Ethics?
This month I’ve attended a few meetings around the topic of AI ethics, AI for good, and AI impact on human society and human rights. Great speakers, lots of food for thought. All in all, what strikes me the most is the way AI ethics seems to serve as an overall container for many diverse opinions and topics.
As I see it, depending on the speaker and the context, AI ethics can mean one of the following things:
- Policies concerning regulation of AI R&D activities and AI deployment in societal settings
- Ethical and philosophical thinking on the potential reach of AI and its risks for humankind
- Ethical, or moral, reasoning by machines
Issues related to each of these topics are quite different, as is their impact. Placing them all in the same basket muddles the discussion and hinders constructive solutions to any of them. It can also heighten the general public's fear of AI and, with it, the risk that unfounded, dystopian views proliferate.
The most urgent of these topics is probably the first one. Here we see a recent rise of national and trans-national initiatives at the governmental level (e.g. in the UK, France, Canada, the European Parliament, and the US), but also bottom-up initiatives from the R&D community (e.g. the IEEE initiative on Ethically Aligned Design, the Partnership on AI, the One Hundred Year Study on AI, OpenAI, and the Foundation for Responsible Robotics, to mention just a few).
Regulation can either threaten AI's progress or further it. A culture of openness and cooperation between technologists, policy-makers and ethicists is necessary to ensure that regulations create incentives that benefit both technological development and society. For example, policies to ensure transparency will lead to novel machine learning techniques whose aim is not algorithm optimization but algorithm explainability. It is heartening to see that the need for dialogue between the different parties is increasingly acknowledged by all.
The media, on the other hand, has given disproportionate attention to the second topic. Dystopian views of a future dominated by our robotic overlords seem to sell well and are backed by a disproportionate number of tech millionaires. However, as Luciano Floridi remarks, even if such a future is logically possible, it is utterly unlikely. Though the topic has fascinated people for ages, focusing on hypothetical future risks is basically a distraction from the very real risks already affecting us: privacy and security, consequences for human labour, and algorithmic bias, to cite just a few.
As for the last topic, moral reasoning by machines, even though it is the main focus of my work, it is the most elusive of all. I've designed systems that can deliberate between upholding societal norms or rules and following their own goals, and I have done extensive work on the computational representation and verification of such deliberation. However, true moral reasoning requires not only making 'moral' decisions but also reasoning about the morality of those decisions, and that demands a huge step forward. Indeed, a computational theory of morality will require a formalisation of the concept of moral responsibility itself, together with clear criteria describing what constitutes a moral agent, what qualifies as being responsible for something, and what it means to have the free will to act. Pursuing this endeavor surely contributes to a much more profound understanding of these notions, and is therefore justifiable. However, given that AI systems are artefacts, or tools, engineered by people, these systems will necessarily embody our moral principles, which invalidates the free-will premise. In that case, the most important contribution of research on moral reasoning by machines lies in the effort to make explicit the values and ethical systems behind our designs of these systems.
In fact, AI-based systems are increasingly making decisions on our behalf and determining what we get to access, with hardly any regulation, in opaque ways, and with no way to understand the values and requirements behind those decisions. A real danger is that we are gradually seeing democracy being replaced by technology. Social networking technology increases and maintains in-group cohesion, leading to information bubbles and the diffusion of extremist ideas within communities. Collective responsibility, in which each person takes their part in the activities of a group or society, is being replaced by distributed responsibility, in which each person is less likely to take responsibility for their actions. In this setting, the notions of what is legally allowed, socially accepted and morally acceptable are no longer aligned. This misalignment is fundamentally the ground for 'alternative facts' and explains why they are accepted by different groups. While it is of the utmost importance to acknowledge these (unintended) effects of intelligent technology, it is necessary to question whether AI researchers are the most likely arbiters of this question.
I sincerely believe that we have the moral duty to research AI. And we have the moral duty to do it responsibly, ensuring that results benefit all, do not exclude groups or activities, and contribute to a sustainable future.
Note: the title of this article paraphrases the title of Murakami's memoir “What I Talk About When I Talk About Running”.