Responsible Autonomy in Artificial Intelligence
As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal, and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms these systems employ.
In a new paper, DDfV Executive Director Virginia describes leading ethics theories and proposes alternative ways to ensure ethical behavior by artificial systems. Because ethics depend on the socio-cultural context and often remain implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders and to make these values explicit, leading to better understanding of, and trust in, artificial autonomous systems.
The paper was presented in the AI and Autonomy track of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017) and is included in the conference proceedings.