Artificial Intelligence: Ethical, social, and security impacts for the present and the future, Second edition
About this ebook

A global perspective on AI

The rise of AI and super-intelligent AI raises ethical issues. AI is the power behind Google’s search engine, enables social media sites to serve up targeted advertising, gives Alexa and Siri their voices, and enables OpenAI’s ChatGPT to produce written responses from just a few prompts by the user. It is also the technology enabling self-driving vehicles, predictive policing, and autonomous weapons that can kill without direct human intervention. All of these bring up complex ethical issues that are still unresolved and will continue to be the subject of ongoing debate.

 

This book:

  • Explores the complex topic of AI ethics in a cross-functional way;
  • Enables understanding of the associated ethical challenges of AI technologies;
  • Provides an up-to-date overview of the potential positive and negative outcomes of AI implementations; and
  • Has been updated to reflect the ethical challenges of AI in 2024 and beyond, and the moral imperative of navigating this new terrain.
This book presents a concrete approach to identifying appropriate ethical principles in AI solutions.
Language: English
Publisher: itgovernance
Release date: Aug 8, 2024
ISBN: 9781787785144
Author

Julie Mehan

Dr Julie Mehan is a Principal Analyst for a strategic consulting firm in the State of Virginia. She has been a career Government Service employee, a strategic consultant, and an entrepreneur.


    Book preview

    Artificial Intelligence - Julie Mehan

    INTRODUCTION

    Let’s start by saying that this book is not a guide on how to develop AI. There are plenty of those – and plenty of YouTube videos providing introductions to machine learning (ML) and AI. Rather, the intent is to provide an understanding of AI’s foundations and its actual and potential social and ethical implications – though by no means ALL of them, as we are still in the discovery phase. Although it is not technically focused, this book can provide essential reading for engineers, developers, and statisticians in the AI field, as well as computer scientists, educators, students, and organizations with the goal of enhancing their understanding of how AI can change, and is changing, the world we live in.

    An important note: throughout this book, the term AI will be used as an overarching concept encompassing many of the areas and sub-areas of AI, ML, and deep learning (DL). So, readers, allow some latitude for a certain degree of inaccuracy in using the overarching AI acronym in reference to all of its permutations.

    It is essential to begin by defining and describing AI, all the while bearing in mind that there is no one single accepted definition. This is partly because intelligence itself is difficult to define. As Massachusetts Institute of Technology (MIT) Professor Max Tegmark pointed out, "There’s no agreement on what intelligence is even among intelligent intelligence researchers."⁵

    In fact, few concepts are less clearly defined than AI. The term AI itself is polysemous – having multiple meanings and interpretations. Indeed, it appears that there are as many perceptions and definitions of AI as there are proliferating applications. Although there are multiple definitions of AI, let’s look at a really simple one: AI is intelligence exhibited by machines, where a machine can learn from information (data) and then use that learned knowledge to do something.

    According to a 2017 RAND study,⁶

    algorithms and artificial intelligence (AI) agents (or, jointly, artificial agents) influence many aspects of our lives: the news articles we read, the movies we watch, the people we spend time with, our access to credit, and even the investment of our capital. We have empowered them to make decisions and take actions on our behalf in these and many other domains because of the efficiency and speed gains they afford.

    AI faults in social media may have only a minor impact, such as pairing someone with an incompatible date. But a misbehaving AI used in defense, infrastructure, or finance could represent a potentially high and global risk. A misbehaving algorithm refers to an AI whose processing results lead to incorrect, prejudiced, or simply dangerous consequences. The market’s Flash Crash of 2010⁷ is a painful example of just how vulnerable our reliance on AI can make us. The recent evolutions in AI, especially in generative AI, are showing us just how great the impact can be on our lives. Melvin Kranzberg⁸ wrote as early as 1986 that "Many of our technology-related problems arise because of the unforeseen consequences where apparently benign technologies are employed on a massive scale." And this is becoming the case with generative AI. As with other technologies, a messy period of behavioral, societal, and legislative adaptation will certainly have to follow.

    As an international community, we need to address the more existential concerns. For example, where will continued innovation in AI ultimately lead us? Will today’s narrower applications of AI make way for fully intelligent AI? Will the result be a continuous acceleration of innovation resulting in exponential growth, in which super-intelligent AI will develop solutions for humanity’s problems, or will future AI intentionally or unintentionally destroy humanity – or, even more likely, be distorted and abused by humanity? These are the immediate and long-term concerns arising from the increased development and deployment of AI in so many facets of our society.

    But there is a counter to this argument that runs central to this book, and it could not be better expressed than in the words of Kevin Kelly, founding executive editor of Wired magazine:⁹

    But we haven’t just been redefining what we mean by AI – we’ve been redefining what it means to be human. Over the past 60 years, as mechanical processes have replicated behaviors and talents that we once thought were unique to humans, we’ve had to change our minds about what sets us apart … In the grandest irony of all, the greatest benefit of an everyday, utilitarian AI will not be increased productivity or an economics of abundance or a new way of doing science – although all those will happen. The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.

    ___________________________

    ⁵ Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. London: Penguin Books.

    ⁶ Osoba, O. A. and Welser IV, W. (2017). An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. RAND Corporation.

    ⁷ On May 6, 2010, Wall Street experienced its worst stock plunge in several decades, wiping out almost a trillion dollars in wealth in a mere 20 minutes. Other so-called flash crashes have occurred since, and most were the result of a misbehaving algorithm.

    ⁸ From the Six Laws of Technology, written in 1986 by Melvin Kranzberg, a professor of the History of Technology at Georgia Tech. Published in July 1986 in Technology and Culture, Vol. 27, No. 3. Available at https://www.jstor.org/stable/i356080.

    ⁹ Kelly, Kevin. (October 27, 2014). The Three Breakthroughs That Have Finally Unleashed AI on the World. Wired magazine online. Available at www.wired.com/2014/10/future-of-artificial-intelligence/.

    CHAPTER 1: AI DEFINED AND COMMON DEPICTIONS OF AI – IS IT A BENEVOLENT FORCE FOR HUMANITY OR AN EXISTENTIAL THREAT?

    By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.

    Eliezer Yudkowsky¹⁰

    OK! AI will destroy humans!

    This statement sums up some of the common (mis-) perceptions held by humans about AI. In truth, we are at no near-term (or even long-term) risk of being destroyed by intelligent machines.

    Elon Musk, the noted tech tycoon, begs to differ, with his claim that AI is "a fundamental risk for the existence of human civilization."¹¹ Musk made this statement based on his observations that the development and deployment of AI is far outpacing our ability to manage it safely.

    Narratives about AI play a key role in the communication and shaping of ideas about AI. Both fictional and non-fictional narratives have real-world effects. In many cases, public knowledge about AI and its associated technologies is limited. Perceptions and expectations are therefore usually informed by personal experiences using existing applications, by film and books, and by the voices of prominent individuals talking about the future. This informational disconnect between the popular narratives and the reality of the technology can have potentially significant negative consequences.

    Narratives that are focused on utopian extremes could create unrealistic expectations that the technology is not yet able to meet. Other narratives focused on the fear of AI may overshadow some of the real challenges facing us today. With real challenges such as wealth distribution, privacy, and the future of work facing us, it’s important for public and legislative debate to be founded on a better understanding of AI. Bad regulation is another potential consequence of misleading narratives and poor understanding, because they influence policymakers: policymakers either respond to these narratives because they are the ones that resonate with the public, or because they are themselves influenced by them. AI may develop too slowly and not meet expectations, or it may evolve so fast that it is not aligned with legal, social, ethical, and cultural values.

    A very brief history of AI – and perceptions of AI

    Whether AI is a potential threat or not may be debatable, but before entering the debate, let’s look at the history of AI. AI is not a new term. In fact, it was first introduced in 1956 by John McCarthy, an assistant professor at Dartmouth College, at the Dartmouth Summer Research Project. His definition of AI was "the science and engineering of making intelligent machines," or getting machines to work and behave like humans.

    But the concept of AI was not first conceived with the term in 1956. Although it is not surprising that AI grew rapidly after the advent of computers, what is surprising is how many people thought about AI-like capabilities hundreds of years before there was even a word to describe what they were thinking about. In fact, something similar to AI can be found as far back as Greek mythology, with Talos. Talos was a giant bronze automaton warrior said to have been made by Hephaestus to protect Europa, Zeus’s consort, from pirates and invaders who might want to kidnap her.

    Between the fifth and fourteenth centuries, or the Dark Ages, there were a number of mathematicians, theologians, philosophers, professors, and authors who contemplated mechanical techniques, calculating machines, and numeral systems that ultimately led to the idea that mechanized human thought might be possible in non-human beings.

    Leonardo da Vinci designed an automaton (a mechanical knight) in 1495, although it was never realized.

    Jonathan Swift’s novel Gulliver’s Travels, from the 1700s, talked about an apparatus it called "the engine." This device’s supposed purpose was to improve knowledge and mechanical operations to a point where even the least talented person would seem to be skilled – all with the assistance and knowledge of a non-human mind.

    Inspired by engineering and evolution, Samuel Butler wrote an essay in 1863 entitled Darwin Among the Machines wherein he predicted that intelligent machines would come to dominate:

    … the machines are gaining ground upon us; day by day we are becoming more subservient to them […] that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.¹²

    Fast forward to the 1900s, when concepts related to AI took off at full tilt and the term robot was used for the first time. In 1921, Karel Čapek, a Czech playwright, published a play entitled Rossum’s Universal Robots (in its English translation), which featured factory-made artificial people – the first known use of the word.

    One of the first examples in film was Maria, the Maschinenmensch or machine-human, in Fritz Lang’s German movie Metropolis, made in 1927. Set in a dystopian future, the gynoid¹³ Maria was designed to resurrect Hel, the deceased love of the inventor, Rotwang, but Maria evolved to seduce, corrupt, and destroy. In the end, her fate was to be destroyed by fire. Many claims have been made that this movie spawned the trend of futurism in cinema. Even watching it today in its 2011 restoration, it is uncanny to see how many shadows of cinema yet to come it already contains.

    Figure 1-1: Gynoid Maria from the movie Metropolis¹⁴

    In 1950, Alan Turing published "Computing Machinery and Intelligence," which proposed the idea of The Imitation Game – posing the question of whether machines could actually think. It later became known as the Turing Test, a way of measuring machine (artificial) intelligence. This test became an important component in the philosophy of AI, which addresses intelligence, consciousness, and ability in machines.

    In his novel, Dune, published in 1965, Frank Herbert describes a society in which intelligent machines are so dangerous that they are banned by the commandment Thou shalt not make a machine in the likeness of a human mind.¹⁵

    Fast forward to 1969, and the birth of Shakey – the first general purpose mobile robot. Developed at the Stanford Research Institute (SRI) from 1966 to 1972, Shakey was the first mobile robot to reason about its actions. Its playground was a series of rooms with blocks and ramps. Although not a practical tool, it led to advances in AI techniques, including visual analysis, route finding, and object manipulation. The problems Shakey faced were simple and only required basic capability, but they led the researchers to develop a sophisticated software search algorithm called A* that would also work for more complex environments. Today, A* is used in applications such as understanding written text, figuring out driving directions, and playing computer games.
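    To make the idea concrete, here is a minimal, hypothetical sketch of A* search on a small grid (illustrative only – not Shakey’s actual software; the grid, heuristic, and function names are invented). A* combines the cost already travelled with a heuristic estimate of the distance remaining, which is the same principle behind modern route-finding features.

```python
# Minimal, illustrative sketch of A* grid search (not Shakey's original code).
# The grid, heuristic, and names here are hypothetical examples.
import heapq

def a_star(grid, start, goal):
    """Find a shortest path on a 2D grid of 0 (free) and 1 (blocked) cells."""
    def h(cell):  # Manhattan-distance heuristic: estimated steps to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, cell, path so far)
    visited = set()
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)  # expand the most promising cell
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # detours around the blocked middle row
```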

    1997 saw IBM’s Deep Blue, a chess-playing computer, become the first system to defeat a reigning world chess champion, Garry Kasparov, in a match. This was a huge milestone in the development of AI and the classic plot we’ve seen so often of man versus machine. Deep Blue was programmed to solve the complex, strategic problems presented in the game of chess, and it enabled researchers to explore and understand the limits of massively parallel processing. It gave developers insight into ways they could design a computer to tackle complex problems in other fields, using deep knowledge to analyze a higher number of possible solutions. The architecture used in Deep Blue has been applied to financial modeling, including marketplace trends and risk analysis; data mining – uncovering hidden relationships and patterns in large databases; and molecular dynamics, a valuable tool for helping to discover and develop new drugs.

    From 2005 onwards, AI has shown enormous progress and increasing pervasiveness in our everyday lives. From the first rudimentary concepts of AI in 1956, today we have speech recognition, smart homes, autonomous vehicles (AVs), and so much more. What we are seeing here is a real compression of time in terms of AI development. But why? Blame it on the increase in data – big data. Although we may not see this exact term as often now, the phenomenon hasn’t disappeared. In fact, data has just got bigger. This increase in data has left us with a critical question: Now what? As in: We’ve got all this stuff (that’s the technical term for it!) and it just keeps accumulating – so what do we do with it? AI has become the set of tools that can help an organization aggregate and analyze data more quickly and efficiently. Big data and AI are merging into a synergistic relationship, where AI is useless without data, and mastering today’s ever-increasing amount of data is impossible without AI.

    So, if we have really entered the age of AI, why doesn’t our world look more like The Jetsons, with autonomous flying cars, jetpacks, and intelligent robotic housemaids? Oh, and in case you aren’t old enough to be familiar with The Jetsons – well, it was a 1960s TV cartoon series that became the single most important piece of twentieth-century futurism. And though the series was just a Saturday morning cartoon, it was based on very real expectations for the future.

    In order to understand where AI is today and where it might be tomorrow, it’s critical to know exactly what AI is, and, more importantly, what it is not.

    What exactly is AI?

    In many cases, AI has been perceived as robots doing some form of physical work or processing, but in reality, we are surrounded by AI doing things that we take for granted. We are using AI every time we do a Google search or look at our Facebook feeds, as we ask Alexa to order a pizza, or browse Netflix movie selections.

    There is, however, no straightforward, agreed-upon definition of AI. It is perhaps best understood as a branch of computer science that endeavors to replicate or simulate human intelligence in a machine, so that machines can perform tasks that typically require human intelligence as efficiently as – or even more efficiently than – humans. Some programmable functions of AI systems include planning, learning, reasoning, problem solving, and decision-making.

    In effect, AI is multidisciplinary, incorporating human social science, computing science, and systems neuroscience,¹⁶ each of which has a number of sub-disciplines.¹⁷

    Figure 1-2: AI is multidisciplinary¹⁸

    Computer scientists and programmers view AI as algorithms for making good predictions. Unlike statisticians, they are not too interested in how we got the data or in models as representations of some underlying truth. For them, AI is black boxes making predictions.

    Statisticians understand that it matters how data is collected, that samples can be biased, that rows of data need not be independent, and that measurements can be censored or truncated. In reality, the majority of AI is just applied statistics in disguise. Many techniques and algorithms used in AI are either fully borrowed from or heavily rely on the theory of statistics.

    And then there’s mathematics. The topics at the heart of mathematical analysis – continuity and differentiability – are also at the foundation of most AI/ML algorithms.
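    As a simple illustration of why differentiability matters, the following minimal sketch (with made-up data points and an arbitrary learning rate) uses gradient descent: the derivative of the loss tells the algorithm which direction to adjust a model parameter.

```python
# Minimal illustration of why differentiability matters to ML:
# gradient descent needs the derivative of the loss to know which way to move.
# The data, learning rate, and iteration count are arbitrary illustrative choices.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]    # roughly y = 2x, with a little noise

w = 0.0                      # single model parameter: prediction y_hat = w * x
learning_rate = 0.01

for step in range(200):
    # Mean squared error loss: L(w) = (1/n) * sum((w*x - y)^2)
    # Its derivative with respect to w: dL/dw = (2/n) * sum((w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad   # move against the gradient to reduce the loss

print(round(w, 3))  # converges near 2.0, the slope underlying the data
```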

    All AI systems – real and hypothetical – fall into one of three types:

    1. Artificial narrow intelligence (ANI), which has a narrow range of abilities;

    2. Artificial general intelligence (AGI), which is on par with human capabilities; or

    3. Artificial superintelligence (ASI), which is more capable than a human.

    ANI is also known as weak AI and involves applying AI only to very specific and defined tasks, e.g. facial recognition or speech recognition/voice assistants. These capabilities may seem intelligent; however, they operate under a narrow set of constraints and limitations. Narrow AI doesn’t mimic or replicate human intelligence; it merely simulates human behavior based on a narrow and specified range of parameters and contexts. Examples of narrow AI include:

    • Siri by Apple, Alexa by Amazon, Cortana by Microsoft, and other virtual assistants;

    • IBM’s Watson;

    • Image/facial recognition software;

    • Disease mapping and prediction tools;

    • Manufacturing and drone robots; and

    • Email spam filters/social media monitoring tools for dangerous content.

    AGI is also referred to as strong or deep AI: intelligence that can mimic human intelligence and/or behaviors, with the ability to learn and apply its intelligence to solve any problem. AGI can think, understand, and act in a way that is virtually indistinguishable from that of a human in any given situation. Although there has been considerable progress, AI researchers and scientists have not yet been able to achieve a fully functional strong AI. To succeed would require making machines conscious and programming a full set of cognitive abilities. Machines would have to take experiential learning to the next level, not just improving efficiency on singular tasks, but gaining the ability to apply experiential knowledge to a wide and varying range of different problems. The physicist Stephen Hawking stated that there is the potential for strong AI to "… take off on its own and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, could not compete, and would be superseded."¹⁹

    One of the most frightening examples of AGI is HAL (Heuristically programmed ALgorithmic computer) in 2001: A Space Odyssey. HAL 9000, the sentient computer at the heart of 2001, remains one of the most memorable characters in the film. Faced with the prospect of disconnection after an internal malfunction, HAL eventually turns on the Discovery 1 astronaut crew, killing one, before being manually shut down by the other crew member. HAL continues to represent a common fear of future AI, in which man-made technology could turn on its creators as it evolves in knowledge and consciousness.

    ASI is still only a hypothetical capability. It is AI that doesn’t just mimic or understand human intelligence and behavior; ASI represents the point at which machines become self-aware and may even surpass the capacity of human intelligence and ability. ASI means that AI has evolved to be so close to human emotions and experiences that it doesn’t just understand them; it develops emotions, needs, beliefs, and desires of its own.

    A possible example of ASI is the android Data, who appeared in the TV show Star Trek: The Next Generation. In one episode, "The Measure of a Man," Data becomes an object of study, threatened with his memory being removed and then being deactivated and disassembled in order to learn how to create more Data-like androids. The scientist argues that Data is purely a machine; Data claims that he will lose himself, as his identity consists of a complex set of responses to the things he has experienced and learned over time, making him unique. And if other androids were created, they would be different from him for precisely this reason. The possibility of new androids does not make him worry about his own identity; rather, it is the possibility that he will be reverted to something like a blank slate, which would then no longer be him. In the end, it comes down to the question: What is human? Can humanity be defined by something like sentience, self-awareness, or the capacity for self-determination (autonomy), and how are these determined? It appears that these questions cannot even be fully answered for humans, much less for Data, the android.

    As AI continues to evolve, however, these may become the most salient questions.

    Before we talk any further about AI, it’s critical to understand that AI is an overarching term. People tend to think that AI, ML, and DL are the same thing, since they have common applications. The distinctions between them are important – but this book will continue to use AI as the primary term that reaches across all of these subsets.

    Figure 1-3: Definitions of AI, ML, and DL

    Let’s take a deeper look at each of these. A machine is said to have AI if it can interpret data, potentially learn from the data, and use that knowledge to achieve specific goals, or perform specific tasks. It is the process of making machines smart, using algorithms²⁰ that allow computers to solve problems that used to be solved only by humans.

    AI technologies are brilliant today at analyzing vast amounts of data to learn to complete a particular task or set of tasks – this is ML. The main goal of ML is to develop machines with the ability to learn entirely or almost entirely by themselves, without the need for anyone to perfect their algorithms. The objective is to be so much like the human mind that these machines can independently improve their own processes and perform the tasks that have been entrusted to them with an ever-greater degree of precision. However, in order for ML to function, humans must ideally supply the machine with information, either through files loaded with a multitude of data, or by enabling the machine to gather data through its own observations and even to interact with the world outside itself.

    AI learning styles

    AI has a variety of learning styles and approaches that enable it to solve problems or execute desired tasks. These learning styles fall mostly into the categories of ML or DL.

    Figure 1-4: AI learning styles

    ML

    ML is core to AI, because it has allowed machines to advance in capability from relatively simple tasks to more complex ones. A lot of the present anticipation surrounding AI is derived from the enormous promise of ML. ML encompasses supervised learning, unsupervised learning, and reinforcement learning.

    Supervised learning

    Supervised learning feeds the machines with existing information so that they have specific, initial examples and can expand their knowledge over time. It is usually done by means of labels, meaning that when we program the machines, we pass them properly labeled elements so that later they can continue labeling new elements without the need for human intervention. For example, we can pass the machine pictures of a car and tell it that each of these pictures represents a car and how we want it to be interpreted. Using these specific examples, the machine generates its own supply of knowledge so that it can continue to assign labels when it recognizes a car. Using this type of ML, however, the machines are not limited to being trained from images, but can use other data types. For example, if the machine is fed with sounds or handwriting data sets, it can learn to recognize voices or detect written patterns and associate them with a particular person. The capability evolves entirely from the initial data that is supplied to the machine.

    Figure 1-5: Dog vs. not a dog

    As humans, we consume a lot of information, but often don’t notice these data points. When we see a photo of a dog,²¹ for example, we instantly know what the animal is based on our prior experience. But the machine can only recognize an image as a dog if it has been fed the examples and told that these images represent a dog.
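    As a minimal sketch of this labeled-example idea (using the scikit-learn library, with invented numeric features standing in for what a real system would extract from images), a classifier can be fit on examples labeled "dog" or "not a dog" and then asked to label a new, unseen example:

```python
# Minimal sketch of supervised learning from labeled examples.
# The two numeric "features" per example are invented stand-ins for what a real
# system would extract from images (pixels, edges, and so on).
from sklearn.neighbors import KNeighborsClassifier

# Each row is one training example; each label says what the example represents.
features = [
    [0.90, 0.80],   # dog
    [0.80, 0.90],   # dog
    [0.85, 0.75],   # dog
    [0.10, 0.20],   # not a dog
    [0.20, 0.10],   # not a dog
    [0.15, 0.25],   # not a dog
]
labels = ["dog", "dog", "dog", "not a dog", "not a dog", "not a dog"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(features, labels)            # learn from the labeled examples

print(model.predict([[0.82, 0.78]]))   # -> ['dog']: labels a new, unseen example
```

    The choice of a nearest-neighbour classifier here is simply the shortest illustration of the principle: the machine has no idea what a dog is, it only generalizes from the labeled examples it was given.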

    Unsupervised learning

    In unsupervised learning, the developers do not provide the machine with any kind of previously labeled information about what it should recognize, so the machine does not have an existing knowledge base. Rather, it
