Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities and Challenges
2018 The Author(s). Published by the Royal Society. All rights reserved.

1. Introduction
Artificial intelligence (AI) increasingly permeates every aspect of our society, from the critical, like healthcare and humanitarian aid, to the mundane, like dating. AI, including embodied AI in robotics and
techniques like machine learning, can enhance economic and social welfare and the exercise of human rights; each of the sectors mentioned stands to benefit from these new technologies. At the same time, AI raises serious governance challenges. Three sets of issues frame the debate:
(1) Ethical governance: focusing on the most pertinent ethical issues raised by AI, covering
issues such as fairness, transparency and privacy (and how to respond when the use of
AI can lead to large-scale discrimination), the allocation of services and goods (the use
of AI by industry, government and companies), and economic displacement (the ethical
response to the disappearance of jobs due to AI-based automation).
(2) Explainability and interpretability: these two concepts are seen as possible mechanisms
to increase algorithmic fairness, transparency and accountability. For example, the idea
of a ‘right to explanation’ of algorithmic decisions is debated in Europe. This right
would entitle individuals to obtain an explanation when an algorithm makes a decision about them (e.g. the refusal of a loan application). However, this right is not yet guaranteed. Further, it remains an open question what would constitute the ‘ideal algorithmic explanation’ and how such explanations can be embedded in AI systems.
(3) Ethical auditing: for inscrutable and highly complex algorithmic systems, accountability
mechanisms cannot solely rely on interpretability. Auditing mechanisms are proposed as
possible solutions that examine the inputs and outputs of algorithms for bias and harms,
rather than unpacking how the system functions.
A growing body of literature covers questions of AI and ethical frameworks [1,6–10], laws
[3,11–14] to govern the impact of AI and robotics [15], technical approaches like algorithmic
impact assessments [16–18], and building trustworthiness through system validation [19]. These
three guiding forces in AI governance (law, ethics and technology) can be complementary [1].
However, the debate on when which approach (or combination of approaches) is most relevant is
unresolved, as Nemitz and Pagallo expertly highlight in this issue [13,17].
Across the globe, industry representatives, governments, academics and civil society are shaping the AI governance agenda.1 This raises important questions. First, who sets the agenda for AI governance? Second, what cultural logic is instantiated by that agenda and, third, who benefits from it? Answering these questions is important because it highlights the risks of letting industry drive the agenda and reveals blind spots in the current debate.

1 AI at Google: Our Principles. https://www.blog.google/technology/ai/ai-principles/.
3. Concluding remarks
The argument presented in this article should not be read as a dismissal of the work done by industry or the relevance of current ethical, technical and regulatory AI governance frameworks. Rather, much can be learnt from this ongoing work, but only if we carefully assess its
aims, impact and process. It is crucial to remain critical of the underlying aims of AI governance
solutions as well as the (unforeseen) collateral cultural impacts, especially in terms of legitimizing
private-sector led norm development around ethics, standards and regulation. Likewise, we
must remain cognizant of the concerns that are not, or only partially, covered by phrases like fairness, accountability and transparency. In focusing on these issues, what is not discussed? Are we
assuming that issues around AI and equity, social justice or human rights are automatically captured by these popular terms? Or are these concerns out of scope for the organizations pushing the
agenda? Asking these hard questions matters because these concepts are increasingly making
their way into regulatory initiatives [43] across the globe.
The authors in this special issue expertly engage with these various hard questions. From the
articles, it becomes clear that the authors are unsatisfied with the current state of AI governance.
Nemitz, for instance, argues in favour of fostering a new culture of technology and business development grounded in the rule of law, human rights and democratic principles [17]. Pagallo
highlights the importance of pragmatism and testing new forms of accountability and liability
through methods of legal experimentation [13]. Veale et al. explore how machine learning models
could be considered personal data under European data protection law and argue that ‘enabling
users to deploy local personalization tools might balance power relations in relation to large firms
hoarding personal data’ [3, p. 5]. Winfield and Jirotka argue that creating strong ethical principles is only the first step and that more should be done to ensure implementation and accountability [9], because the real test for good governance of AI systems comes when the rubber hits the road, or rather, the robot.
Harambam et al. explore the notion of ‘voice’, both as a way of allowing individuals to exert more control over the algorithms used in the news industry and as a means of mitigating the pitfalls of attempts at achieving algorithmic transparency [5]. The editors have argued here, and in other pieces [26], that
it is important to ensure that there is equitable stakeholder representation when regulating AI.
Furthermore, there is a need for more non-US-led initiatives like the Europe-based AI4People4 and the Council of Europe’s Expert Committee on AI and Human Rights.5 Even though it
4 See http://www.eismd.eu/ai4people/. Disclosure: one of the editors of this special issue is part of the AI4People initiative.
5 See https://www.coe.int/en/web/freedom-expression/msi-aut. Disclosure: one of the editors of this special issue is part of the Council of Europe expert committee.
is important to have more Europe-led initiatives, we must also incorporate concerns from the
Global South. Marda’s article about India highlights why these voices are especially relevant [27].
Similarly, it is essential to go beyond the fairness, accountability and transparency rhetoric to
References
1. Floridi L. 2018 Soft ethics, the governance of the digital and the General Data Protection
Regulation. Phil. Trans. R. Soc. A 376, 20180081. (doi:10.1098/rsta.2018.0081)
2. Barocas S, Selbst AD. 2016 Big data’s disparate impact. Cal. L. Rev. 104, 671.
3. Veale M, Binns R, Edwards L. 2018 Algorithms that remember: model inversion attacks and
data protection law. Phil. Trans. R. Soc. A 376, 20180083. (doi:10.1098/rsta.2018.0083)
4. Eubanks V. 2018 Automating inequality: how high-tech tools profile, police, and punish the poor.
New York, NY: St. Martin’s Press.
5. Harambam J, Helberger N, van Hoboken J. 2018 Democratizing algorithmic news
recommenders: how to materialize voice in a technologically saturated media ecosystem. Phil.
Trans. R. Soc. A 376, 20180088. (doi:10.1098/rsta.2018.0088)
6. Floridi L, Taddeo M. 2016 What is data ethics? Phil. Trans. R. Soc. A 374, 20160360.
(doi:10.1098/rsta.2016.0360)
7. Ananny M. 2016 Toward an ethics of algorithms: convening, observation, probability, and
timeliness. Sci. Technol. Hum. Values 41, 93–117. (doi:10.1177/0162243915606523)
8. Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L. 2016 The ethics of algorithms: mapping
the debate. Big Data Soc. 3, 2053951716679679. (doi:10.1177/2053951716679679)
9. Winfield AFT, Jirotka M. 2018 Ethical governance is essential to building trust in robotics and
artificial intelligence systems. Phil. Trans. R. Soc. A 376, 20180085. (doi:10.1098/rsta.2018.0085)
10. Taddeo M, Floridi L. 2018 How can AI be a force for good: an ethical framework will
help harness the potential of AI while keeping humans in control. Science 361, 751–752.
(doi:10.1126/science.aat5991)