How the research field of Ethics & AI approaches this problem, and the limits of these approaches
Introduction
The ‘Ethics and AI’ community has shifted significantly from designing and publishing ethical AI principles to introducing technical tools and procedural methods that translate principles into practice. This shift was triggered by, among others, the work of Jobin et al. (2019) and a subsequent paper by Ryan et al. (2020). Both consolidated the ethical principles published across a large corpus of documents (84 in the case of Jobin et al., 2019, and 91 in the case of Ryan et al., 2020) and revealed a global convergence towards a limited number of principles, including transparency, justice and fairness, and non-maleficence. This loose consensus is an important starting point, since it condensed a highly complex problem into a small number of elements that essential stakeholders share and are thus committed to. It has also improved the comprehensibility of the ‘Ethics and AI’ research field and opened the discussion to the broader public, as it addresses their concerns and identifies priority issues on which to focus. However, principles alone do not guide practitioners (Whittlestone et al., 2019). This is mainly because principles are, first, abstract and high level by design, so as to cover all industries and possible use cases; second, frequently in conflict, since AI systems have to optimise not one but multiple objectives simultaneously to be useful in practice; and, last, unclear in application because of their complex and context-specific nature. Therefore, while a consensus on principles has been established, which makes the challenge of designing an ethical AI system manageable, further methods are required for operationalisation.
First Approach: Technical Toolkits
There are two major theoretical approaches to operationalising principles. The first is an engineering-focused problem-solving exercise that seeks a technical implementation by quantifying the ethical principles. Consequently, various technical tools have been developed to provide solutions for the different ethical principles (Ayling et al., 2022). These are often organised in toolkits, such as IBM's AI Fairness 360 toolkit, which provides tools to detect, understand, and mitigate algorithmic biases (Bellamy et al., 2019). This removes implementation barriers, since toolkits are widely accessible and kept reliable through continuous updates (Lee et al., 2021). However, the removal of implementation barriers can also promote a misguided confidence in the resulting overall ethical alignment, since, first, technical methods are narrow in scope and thus only useful in specific situations; second, correct implementation requires a deep understanding of the underlying assumptions, which is challenging even for experts in the field; and, last, technical tools provide no support in situations where one principle conflicts with another and trade-offs are necessary (Lee et al., 2021). Consequently, the translation of principles into practice is arguably not achievable through narrow technical solutions alone, but has to be complemented by procedural methodologies.
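To make the quantification step concrete, the following is a minimal sketch (not AI Fairness 360's actual API) of the kind of group-fairness metric such toolkits compute: the statistical parity difference, i.e. the gap in positive-outcome rates between two groups. The data and group labels are hypothetical.

```python
# Sketch of one fairness metric that toolkits quantify: the
# statistical parity difference between two demographic groups.
# A value of 0 indicates equal positive-outcome rates.

def statistical_parity_difference(outcomes, groups, unprivileged, privileged):
    """outcomes: 0/1 decisions; groups: group label per individual."""
    def positive_rate(group):
        selected = [y for y, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return positive_rate(unprivileged) - positive_rate(privileged)

# Hypothetical loan decisions: 1 = approved.
outcomes = [1, 0, 1, 1, 0, 0, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" is approved at 0.75, group "b" at 0.5.
print(statistical_parity_difference(outcomes, groups, "b", "a"))  # -0.25
```

This also illustrates the narrow scope noted above: the metric flags one specific statistical disparity, and says nothing about whether parity is the appropriate fairness notion for a given context.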
Second Approach: Procedural Methodologies
The second emerging approach, procedural methodologies, aims to translate principles into practice by leveraging existing methods that were not explicitly designed for the ‘Ethics and AI’ field and adapting them to consider ethical AI principles. On the one hand, proactive approaches adapt software development methods to address, and thus implement, ethical considerations during the development process (Vakkuri et al., 2021), for example by translating ethical principles into engineering requirements. These approaches prevent harm before it occurs and increase the ethical awareness of developers (Vakkuri et al., 2021). On the other hand, reactive approaches have emerged that adapt existing governance methods, such as auditing and certification, to address ethical considerations after system development. These approaches serve as accountability measures and can increase transparency through their documentation requirements (Raji et al., 2020), while also reducing information asymmetry (Cihon et al., 2021). However, even though the landscape of procedural methodologies, both proactive and reactive, is growing, these methods have not been adopted by developer teams (Siqueira de Cerqueira et al., 2022), owing to a disconnect between what developer teams find useful and the currently available procedural solutions.
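The proactive route of translating a principle into an engineering requirement can be sketched as follows. This is a hypothetical illustration, not a method from the cited works: the fairness principle is rendered as a concrete, testable acceptance criterion, with the threshold value an assumed project-specific choice.

```python
# Hypothetical translation of a high-level fairness principle into a
# testable engineering requirement: the absolute gap in approval rates
# between two groups must stay below a fixed, project-chosen threshold.

FAIRNESS_THRESHOLD = 0.1  # assumed tolerance; set per project context


def meets_fairness_requirement(rate_group_a, rate_group_b,
                               threshold=FAIRNESS_THRESHOLD):
    """Requirement check: approval-rate gap strictly below threshold."""
    return abs(rate_group_a - rate_group_b) < threshold


print(meets_fairness_requirement(0.72, 0.68))  # gap 0.04 -> True
print(meets_fairness_requirement(0.72, 0.50))  # gap 0.22 -> False
```

A check like this could run in a continuous-integration pipeline, which is one way such requirements make ethical considerations actionable during development rather than after it.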