The problem with artificial intelligence and how it affects everyone
Introduction
One of the most central elements of Western societies is the right of every individual to develop in a free and equal manner (United Nations, 1948, 1986). This right applies without restriction, and people can expect fair treatment, whether in university admission or criminal sentencing. In the past, decision-making authority rested solely with humans, and processes were put in place to ensure the realisation of this fundamental right. However, humans are not free of bias, and some of their decisions have been found to be unjust (Lepri et al., 2018). Citing the evidence-based nature and economic benefits of computer systems, organisations have increasingly delegated decision-making processes to algorithms, either entirely or in part, depending on the specific area (De Fine Licht et al., 2020). This is especially true for artificial intelligence (AI) systems, a subset of computer systems with the ability to “learn from input or past data” (Gupta et al., 2021, p. 1319). This ability allows them to handle complex inputs and thus makes them applicable to a wide range of areas.
Real-World Problem
However, the fact that an AI system is “‘evidence-based’ by no means ensures that it will lead to accurate, reliable, or fair decisions” (Barocas, Hardt, & Narayanan, 2019, p. 12). Recent practical developments have shown that applying AI algorithms in the real world, with all its exceptions and edge cases, can lead to significant social challenges, such as discrimination, power asymmetry, and opacity (Lepri et al., 2018). The reasons for these challenges are certainly multi-dimensional. However, poorly designed algorithms and biased data sets (Zetzsche et al., 2021), which are difficult to detect before deployment and affect billions of people worldwide, undoubtedly contribute to them (Raji et al., 2020). The alleged discrimination against women by the Apple credit card in late 2019 is only one example (BBC, 2019).
Link to Current Research Gap
These real-world problems can be traced back to a current research gap in the field of ‘Ethics and AI’. In the past, important stakeholders in this field, such as scientists, companies, and governments, focused mainly on conceptualising and publishing ethical principles. Ethical principles in the context of AI are codifications of current issues and values, such as transparency or privacy, intended to guide the development of AI applications by calling on decision makers to implement them. By 2020, 91 papers stating the principles of individual organisations had been published (Ryan et al., 2020). Subsequent analyses have revealed an emerging consensus on the most frequently stated principles, ranging from five (Jobin et al., 2019) to eleven (Ryan et al., 2020). This consensus is an important starting point, since it indicates that stakeholders from various disciplines have agreed on a limited number of central elements, which makes the complex problem of mitigating ethical issues manageable. However, principles are abstract by design and thus offer AI practitioners no support on how to implement them or how to handle situations in which principles conflict. This has resulted in a significant research gap, namely the translation between principles and practice (Kazim et al., 2021; Whittlestone et al., 2019), and a subsequent call for technical tools and procedural methodologies to close this gap (Morley, Floridi, et al., 2021).