
Ethics and Information Technology (2018) 20:1–3

https://doi.org/10.1007/s10676-018-9450-z

EDITORIAL

Ethics in artificial intelligence: introduction to the special issue


Virginia Dignum¹

Published online: 13 February 2018


© Springer Science+Business Media B.V., part of Springer Nature 2018

Recent developments in Artificial Intelligence (AI) have generated steep interest from the media and the general public. As AI systems (e.g. robots, chatbots, avatars and other intelligent agents) move from being perceived as tools to being perceived as autonomous agents and team-mates, an important focus of research and development is understanding the ethical impact of these systems. What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of its actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed setup? Should such autonomous innovation in commercial systems even be allowed, and how should use and development be regulated? These and many other related questions are currently the focus of much attention. The way society and our systems deal with these questions will largely determine our level of trust in AI and, ultimately, its impact on society and its very existence.

Contrary to the frightening images of a dystopic future in media and popular fiction, where AI systems dominate the world and are mostly concerned with warfare, AI is already changing our daily lives, mostly in ways that improve human health, safety, and productivity (Stone et al. 2016). This is the case in domains such as transportation, service robots, health care, education, public safety and security, and entertainment. Nevertheless, to ensure that those dystopic futures do not become reality, these systems must be introduced in ways that build trust and understanding, and respect human and civil rights. The need for ethical considerations in the development of intelligent interactive systems has become one of the most influential areas of research in the last few years, and has led to several initiatives from researchers as well as practitioners, including the IEEE initiative on Ethics of Autonomous Systems¹, the Foundation for Responsible Robotics² and the Partnership on AI³, amongst several others.

As the capabilities for autonomous decision making grow, perhaps the most important issue to consider is the need to rethink responsibility (Dignum 2017). Whatever their level of autonomy and social awareness and their ability to learn, AI systems are artefacts, constructed by people to fulfil some goals. Theories, methods and algorithms are needed to integrate societal, legal and moral values into technological developments in AI at all stages of development (analysis, design, construction, deployment and evaluation). These frameworks must deal with the autonomic reasoning of the machine about issues that we consider to have ethical impact; but, most importantly, we need frameworks to guide design choices, to regulate the reach of AI systems, to ensure proper data stewardship, and to help individuals determine their own involvement.

Values depend on the socio-cultural context (Turiel 2002) and are often only implicit in deliberation processes, which means that methodologies are needed to elicit the values held by all stakeholders; making these values explicit can lead to better understanding of, and trust in, artificial autonomous systems. That is, AI reasoning should be able to take into account societal values and moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. Responsible Artificial Intelligence is about human responsibility for the development of intelligent systems along fundamental human principles and values, to ensure human flourishing and wellbeing in a sustainable world. In fact, Responsible AI is more than the ticking of some ethical 'boxes' in a report, or the development of some add-on features or switch-off buttons in AI systems. Rather, responsibility is fundamental
buttons in AI systems. Rather, responsibility is fundamental
* Virginia Dignum
  [email protected]

1 Delft Design for Values Institute, Delft University of Technology, Jaffalaan 5, 2628BX Delft, The Netherlands

¹ http://ethicsinaction.ieee.org
² http://responsiblerobotics.org/
³ http://www.partnershiponai.org/


to autonomy and should be one of the core stances underlying AI research.

The above considerations show that ethics and AI are related at several levels:

– Ethics by Design: the technical/algorithmic integration of ethical reasoning capabilities as part of the behaviour of artificial autonomous systems;
– Ethics in Design: the regulatory and engineering methods that support the analysis and evaluation of the ethical implications of AI systems as these integrate or replace traditional social structures;
– Ethics for Design: the codes of conduct, standards and certification processes that ensure the integrity of developers and users as they research, design, construct, employ and manage artificial intelligent systems.

The papers in this special issue present different views on the relation between ethics and AI. The first two papers, those by Rahwan and by Bryson, can be classified mostly in the area of Ethics in Design, and in part in the area of Ethics for Design, whereas the last three papers, by Vamplew et al., Bonnemains et al., and Arnold and Scheutz, propose different approaches to Ethics by Design.

The paper by Iyad Rahwan, "Society-in-the-Loop: Programming the Algorithmic Social Contract", focuses on regulatory and governance mechanisms for autonomous machines. The vision of the paper is that the algorithms governing our lives must be provably transparent, fair, and accountable along the values shared by stakeholders. The paper describes a conceptual framework to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines, thereby setting out the society-in-the-loop (SITL) approach for identifying and negotiating the values of the various stakeholders affected by AI systems, as a basis for monitoring compliance of the system with the social contract.

In "Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics", Joanna Bryson contends that the place of AI in society is a matter of normative, rather than descriptive, ethics. In the view set out in this paper, the question of whether AI or robots can, or should, be afforded moral agency or patiency is not one amenable either to discovery or to simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Taking the functionalist assumption that ethics is the set of behaviours that maintains a society, the paper explores the bases of sociality and autonomy to explain moral intuitions with respect to AI systems. This effort leads to the conclusion that while constructing AI as either moral agent or patient is possible, neither is desirable, given the unlikelihood of constructing a suitably coherent ethics of AI moral subjectivity. The paper presents solid arguments for Bryson's position that "We are therefore obliged not to build AI we are obliged to".

The second set of papers focuses on the issue of Ethics by Design. That is, assuming that designers are given a clear, consistent and shared set of ethical principles, these three papers propose different aspects of its implementation in AI systems, such that the system is able either to make ethically-based decisions itself, or to alert users and/or monitors to potential deviations of behaviour from such ethical principles.

Peter Vamplew et al. focus on the need to ensure that the behaviour of AI systems is beneficial to humanity. In their paper "Human-Aligned Artificial Intelligence is a Multiobjective Problem", they discuss the requirement for ethical, legal and safety-based frameworks to consider multiple potentially conflicting factors. They demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multiobjective decision-making. They then propose a Multiobjective Maximum Expected Utility paradigm, based on the combination of vector utilities and non-linear action selection, that can overcome many of the issues which limit MEU's effectiveness in implementing values-aligned artificial intelligence. They further examine existing approaches to multiobjective artificial intelligence, and identify how these can contribute to the development of human-aligned intelligent agents.

In "Embedded Ethics: Some Technical and Ethical Challenges", Vincent Bonnemains, Claire Saurel and Catherine Tessier focus on a formal approach to what can be considered as artificial ethical reasoning by an observer. The approach includes formal tools to describe a situation and models of ethical principles that are designed to automatically compute a judgement, and to explain why a given decision is, or is not, ethically acceptable. Based on a thought experiment involving the drone dilemma, the paper illustrates the use of this approach to model three ethical frameworks—utilitarian ethics, deontological ethics and the Doctrine of Double Effect—and evaluates their responses to this ethical dilemma.

Finally, the paper "The Big Red Button Is Too Late: An Alternative Model for the Ethical Evaluation of AI Systems", by Thomas Arnold and Matthias Scheutz, presents existing proposals for an emergency button in AI systems, and discusses the viability of emergency stop mechanisms that enable human operators to interrupt or divert a system while preventing the system from learning that such an intervention is a threat to its own existence. Given that such approaches concentrate on minimizing effects after the system has already gone astray, the paper proposes an alternative based on making ongoing self-evaluation and testing an integral part of a system's operation, to prevent chaos and risk


before they start and to diagnose how the system is in error. The paper further argues for a scenario-generation mechanism that enables testing of a system's decisions in a simulated world rather than the real world, which they conclude to be far more effective, responsive, and vigilant toward a system's learning and action in the world than an emergency button which one might not get to push in time.

Together, these papers represent the current state of the art in Ethics in Artificial Intelligence, and contribute to a better understanding of the many challenges of this topic.

References

Dignum, V. (2017). Responsible autonomy. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI'2017), pp. 4698–4704.

Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., Hirschberg, J., Kalyanakrishnan, S., Kamar, E., Kraus, S., Leyton-Brown, K., Parkes, D., Press, W., Saxenian, A., Shah, J., Tambe, M., & Teller, A. (2016). Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel.

Turiel, E. (2002). The culture of morality: Social development, context, and conflict. Cambridge: Cambridge University Press.
