Operating AI: Bridging the Gap Between Technology and Business
Ebook · 461 pages · 5 hours

About this ebook

A holistic and real-world approach to operationalizing artificial intelligence in your company

In Operating AI, Ulrika Jägare, Director of Technology and Architecture at Ericsson AB, delivers an eye-opening new discussion of how to introduce your organization to artificial intelligence by balancing data engineering, model development, and AI operations. You'll learn the importance of embracing an AI operational mindset to successfully operate AI and lead AI initiatives through the entire lifecycle, including key areas such as data mesh, data fabric, aspects of security, data privacy, and data rights and IPR related to data and AI models.

In the book, you’ll also discover:

  • How to reduce the risk of introducing bias into your artificial intelligence solutions and how to approach explainable AI (XAI)
  • The importance of efficient and reproducible data pipelines, including how to manage your company's data
  • An operational perspective on the development of AI models using the MLOps (Machine Learning Operations) approach, including how to deploy, run, and monitor models and ML pipelines in production using CI/CD/CT techniques that generate value in the real world
  • Key competences and toolsets in AI development, deployment and operations
  • What to consider when operating different types of AI business models

With a strong emphasis on deployment and operations of trustworthy and reliable AI solutions that operate well in the real world—and not just the lab—Operating AI is a must-read for business leaders looking for ways to operationalize an AI business model that actually makes money, from the concept phase to running in a live production environment.

Language: English
Publisher: Wiley
Release date: Apr 19, 2022
ISBN: 9781119833215

    Book preview

    Operating AI - Ulrika Jägare

    Introduction

    Artificial intelligence (AI) plays a critical role in optimizing the value gained from digital transformation. Across different business segments, companies seek to leverage new technologies for increased revenue or lower cost. But AI is much more than an accelerator for taking the digital transformation journey to another level and making it possible for teams to work smarter, do things faster, and turn previously impossible tasks into routine.

    Artificial intelligence has started to be seen as a key business enabler across more and more industries. Corporations are starting to view AI as a technology for future-proofing their business way beyond organizational efficiency. It's a revolutionary approach where AI becomes the foundation of the commercial portfolio, whether it's products, services, or some type of as-a-service setup. By embracing the full potential of AI, every company and organization in some sense becomes a technology company, whether or not that is the goal. But are companies in general ready for this massive transformation?

    I would argue that most companies are not ready for this massive transformation, but it's important to remember that neither are their customers. Remember that major technology shifts like the one that AI imposes hold a lot of promise but require a fundamental transformation to take place in order to gain the expected return on investment (ROI). This fundamental shift will not happen overnight and will definitely not proceed in a synchronized manner across different markets and business segments, nor across the public sector with all its various service functions.

    It's worth noting that the COVID-19 pandemic has accelerated the need for, and understanding of, the benefits of a fully digitalized workplace and society. However, keep in mind that just because the digitalization journey is speeding up, that doesn't necessarily mean that adding AI capabilities will be the next natural step to take.

    It's not as easy as it may seem to effectively deploy and leverage AI in the enterprise. To be successful, you can't focus only on the technical pieces—you also need to address aspects such as strategy, people, and ways of working, as well as how your AI solution is intended to run in production. This is crucial for breaking down the barriers between AI in development and AI in production, and for being able to quickly and seamlessly move AI models into production and operate increasing numbers of models on a continuous basis in a live setting.

    There is no easy fix for this, but by learning how to balance your AI investment while keeping an operational mind-set throughout, you will be more likely to succeed.

    This book is centered on the fact that operating AI is not the same as operating software. That is not just a statement, but a principle with many implications for what it means to embrace AI in your company or organization. By reading this book, you will gain insights into how to approach AI in your enterprise with operations in mind, and by doing so you will be much more likely to succeed with your objectives. An operational approach should be taken from the very start, when you build your AI foundation with reproducible model pipelines. In the development phase, consider operational factors such as the model's target environment and the actual use case, and you will be better positioned to build a solution that meets its objective when it's running in live operations.

    Another important aspect in this book involves truly addressing the data perspective as part of your strategic investments in AI. Remember that without the data, your AI solution cannot run. Understanding and caring for your data is vital, as well as making sure you have the data rights needed, which can sometimes be the hardest thing to manage as part of an operational setting. What you don't want to do is find out too late that the data you need isn't accessible or is owned by another party, or perhaps that the data pipeline you have invested in will not scale in production.

    This book will also focus on how to successfully deploy your models and operate your AI solution in live environments. You will learn how different model target environments can influence decisions throughout the whole AI life cycle: not only which deployment options you have, but also which data you need to train your model on, which AI technique you will benefit most from using, how to scale your solution over time, and how and why you need to monitor and maintain your model when it's operating in production.

    Finally, it's important to remember that AI is all about trust. In order for a company to rely on an AI solution to take over parts of its operations, make decisions, and take action based on identified insights, both management and employees must trust the AI solution. To ensure that trust, you need to think about the operational context, legal rights, and transparency and reliability aspects from the start. This is especially valid for commercial usage of AI. In order for your customers to trust your AI-based products and/or services, you must be able to explain how your AI solution works and what is actually going on. The less your customers understand about how an AI-based solution works, the more insecure they will feel about trusting it. Customers hate to buy a black box solution. Although more complex AI techniques like deep learning can be hard to explain even for the data scientists who build the solutions, there are ways to work with explainable AI (XAI), which will be further explored in this book.

    Since the main objective of your AI investment is to realize a business value, internal or commercial, it's fundamental to understand what can be expected from that investment. Most companies understand the difficulties involved in reaching their objectives, but they may not fully grasp how to best navigate these challenges for a specific industry or business model. This book helps you connect these pieces and apply an operational mind-set to the business perspective, setting you on the path to success.

    What Does This Book Cover?

    This book covers the following topics:

    Chapter 1: Balancing the AI Investment   There is no simple answer to how to succeed with your AI investment, but there are some fundamental aspects that should be driving your objectives and realization plans, and that includes a balanced approach to AI. In this chapter you will find out what that means and why it is important for your business. The chapter will start by defining AI and by sorting out what AI is in relation to other related concepts such as machine learning (ML), automation, and robotics, just to name a few. This chapter will also address why you need to put more effort into making your AI model operational than you put into developing your AI model and how to embrace an operational mind-set for AI.

    Chapter 2: Data Engineering Focused on AI   Treating data as a valuable business asset should be the main priority in any company, and it's the key to staying on top of what is going on in your company. Leveraging data will help you understand what is not working and why, as well as enable you to see what is coming. This chapter will present a structured way for you to get to know your data and will focus on the importance of working with production in mind. Furthermore, you will learn which data quality metrics are important and how to scale your data to succeed with your AI investment, as well as key competences in data engineering.

    Chapter 3: Embracing MLOps   In ML development the problem is seldom to technically develop, train, or implement ML models; instead, the main problem is mostly related to poor communication and lack of efficient cross-functional team collaboration. It might sound like an easy task to correct, but the fact remains that most AI projects do not make it to production due to this communication gap between the data scientists and the business. This chapter will introduce the most successful approach to tackle these problems: MLOps practices. You will learn that shifting the focus from building individual ML models to building ML pipelines is a game-changer. The chapter will also explain the importance of adopting a continuous learning approach. This chapter will also describe how to approach your AI/ML functional technology stack and ensure you have the right competences and toolsets for successful MLOps practices.

    Chapter 4: Deployment with AI Operations in Mind   It's important to remember that it's not until you deploy your models in a production setting that the value of AI can fully be realized. However, moving your models from the lab to production is far from an easy task. Successful model deployment is about a lot more than just running your model in another execution environment. When deploying AI models in production, you need to consider various areas spanning from legal rights and data access to managing retraining and redeployment of models in a live production setting. In this chapter you will learn how to handle model serving in practice and the role of the ML inference pipeline in this process. Furthermore, key success factors for industrializing AI will be outlined, as well as why it's equally important to focus attention on the cultural shift that needs to happen.

    Chapter 5: Operating AI Is Different from Operating Software   Observing and monitoring AI models in production is often an overlooked part of the ML life cycle, almost like an afterthought, when it should be seen as critical to a model's viability in the post-deployment phase. Because AI is built on continuous learning principles, it requires more operational support than traditional software. The feedback loop becomes fundamental, along with highly automated monitoring of model performance and data quality. This chapter will address the cornerstones of AI model monitoring and model scoring in production. Retraining in production using continuous training (CT) will be addressed, as well as how to efficiently handle model performance issues.

    Finally, the chapter includes reflections on why different model monitoring is needed for different stakeholders, as well as considerations regarding model monitoring toolsets.

    Chapter 6: AI Is All About Trust   Despite substantial investments in governance, many organizations still lack visibility into the risks that AI models pose and what, if any, steps have been taken to mitigate them. This is a serious problem, given the increasingly critical role AI models now play in supporting daily decision making. There is also the major reputational, operational, and financial damage that companies face when AI systems malfunction, expose personal data, or contain inherent biases. This chapter will address how to anonymize data and what that means for businesses. To gain trust in an AI solution, you also need to reduce the impact of bias, as well as be able to explain how a model arrived at a certain decision, which is further explored in this chapter. Finally, legal aspects of data rights and AI model rights are explored, including operational governance considerations related to data and AI.

    Chapter 7: Achieving Business Value from AI   As businesses from every sector start ramping up their efforts to integrate AI into their operational model, companies must invest immediately in AI solutions or risk falling behind. The question addressed in this chapter is how to do that successfully. The chapter starts by explaining the challenge of leveraging value from AI and then describes the key aspects of achieving and measuring successful AI business realization. The chapter concludes by explaining the business operational differences for various AI business models.

    How to Contact the Publisher

    If you believe you've found a mistake in this book, please bring it to our attention. At John Wiley & Sons, we understand how important it is to provide our customers with accurate content, but even with our best efforts an error may occur.

    In order to submit your possible errata, please email it to our Customer Service Team at [email protected] with the subject line Possible Book Errata Submission.

    How to Contact the Author

    We appreciate your input and questions about this book! DM me on LinkedIn @ulrika-jagare, on Twitter at @jagare_ulrika, or on Instagram @datarush.

    CHAPTER 1

    Balancing the AI Investment

    Making a strategic decision to invest in AI is not just like any other decision. It's not only the financial aspect of the investment you must consider but also the transformational power of AI for your company that needs to be understood. AI has the potential to fundamentally transform the business you are doing, and for some businesses it's even a question of survival to embrace this technology as fast as possible. Few examples in the market today show that businesses can achieve the expected value just by doing some AI experimentation on the side. However, a surprisingly large number of companies either don't know what investing in AI means or truly believe that investing in AI means hiring a bunch of data scientists to build AI models. This attitude needs to change if more companies are to gain the expected return on investment (ROI) from their AI investments.

    The reality of today is that AI is reshaping entire industries, making it possible to achieve previously impossible levels of scale through operational efficiencies and continuous learning as well as innovation. The reason for this is that AI automates the extraction of insights from data, detecting patterns in a way that would take weeks, months, or even years for humans to do—if at all.

    AI can be used to automate internal business processes and make them more efficient, as well as to develop new and enhanced products and services in the commercial dimension. It can be used to predict what a customer is most likely to buy and to automatically detect manufacturing inefficiencies or fraudulent behavior. A retailer can use AI to predict the volume of traffic in a store on a given day and use that prediction to optimize its staffing. A bank can use AI to infer the market value of a home, based on its size, characteristics, and neighborhood, which, in turn, lowers the cost of appraisals and expedites mortgage processing. Autonomous vehicles are another interesting area of applied AI. Not only are AI capabilities built into an autonomous vehicle, but the vehicle also includes sensors that capture and encode data about the world; the AI can be seen as a brain that reasons and makes decisions. It seems that the more use cases and business segments AI is applied to, the more ideas arise in terms of where and how it can be used.

    And we're just getting started with AI.

    The fact is that modern data management and software capabilities have progressed far enough to allow any organization to capture and use its data to build, train, and validate even the most complex predictive AI models. Many companies have successfully embedded predictive models in their core business capabilities, which has empowered them to build game-changing products and services that would otherwise have been unachievable. And in doing so, they've proven that artificial intelligence is changing the business landscape forever.

    However, although most of the business opportunity comes from adopting AI at scale, only a minority of enterprises tend to invest in AI across multiple business areas. One possible explanation could be that many business leaders are still exploring AI to better understand its benefits in their specific context. Just knowing that AI can solve problems that were previously unsolvable, and that AI can answer questions enterprises didn't even know to ask, isn't enough to go all in. On top of that, there are also misguided beliefs that AI can solve anything, and when it becomes apparent that it can't, confusion arises about the true business value of AI. Hence, experience shows that achieving business success from AI requires experimental and incremental approaches to adoption, but it should also be acknowledged that introducing AI at scale is a transformational and challenging task for most large enterprises.

    There is no silver bullet to succeed with your AI investment, but there are some fundamental aspects that should be driving your objectives and realization plans, and that includes a balanced approach to AI. In this chapter you will find out what that means and why it is important for your business. You'll learn why it's vital to approach your AI solution from an operational perspective from the get-go. The chapter will begin by defining AI and by sorting out what AI is in relation to other related concepts such as machine learning (ML), automation, and robotics, just to mention a few.

    Understanding the AI life cycle is key, and it will be explained in relation to some of the operational fundamentals you need to address in order to succeed with your AI investment. I'll also clarify the importance of operating AI in the context of the AI life cycle.

    Finally, this chapter will address why you need to put more effort into making your AI model operational than you put into developing it. Understanding and accepting this is a first major step toward embracing an operational mind-set for AI, which is vital in order to succeed with your AI investment.

    Defining AI and Related Concepts

    Artificial intelligence (AI) refers to the ability of a computer program or a robot to emulate humans' and animals' natural intelligence. This refers primarily to cognitive functions such as the ability to learn from experience, to understand natural languages, and to solve problems, but also to tasks such as planning a sequence of activities and generalizing between situations. As more and more companies start to realize how AI can benefit their specific business, the uses of AI expand by the minute. Some examples of areas where AI is currently being applied are:

    Voice and face recognition

    Language translation

    Chat bots

    Digital assistants

    Image recognition

    Recommendation engines

    Self-driving cars

    In relation to AI, it's worth mentioning the term data science. Data science can be defined as an interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data and that applies knowledge and actionable insights from data across a broad range of application domains.

    Artificial intelligence and data science are unfortunately often used interchangeably in the industry, which sometimes causes confusion since there are some differences between the concepts. Whereas data science is a broader term than AI and should be seen as a comprehensive procedure, AI is a set of modeling techniques that a data scientist uses to develop models. It's also worth noting that the AI in use today is artificial narrow intelligence. Under this form of intelligence, computer systems do not have full autonomy and consciousness like human beings; rather, they are only able to perform the tasks that they are trained for. However, some prioritized objectives within AI research include machine reasoning, knowledge representation, machine planning and learning, natural language processing for communication, computer vision, and the ability to move and manipulate objects. Keep in mind, though, that artificial general intelligence (AGI) and artificial consciousness (or singularity, as it is also referred to) are still in a conceptual stage, and their real-world use is far from mature. The theory of technological singularity refers to a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. Most researchers in the industry can't agree on when AGI will be ready; some estimate somewhere between 2040 and 2050 at the earliest.

    Machine learning (ML) is the use of computer algorithms that improve automatically through experience, based on patterns and deviations in data. ML is seen as a subset of AI. Machine learning algorithms build a mathematical model based on sample data, known as training data, to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications where it's difficult or unfeasible to develop conventional algorithms to perform the needed tasks.

    These basic algorithms for teaching a machine to complete tasks and classify like a human date back several decades. But if ML isn't new, why is there so much interest today? Well, the fact is that complex ML algorithms—for example, those using neural network techniques—need a lot of data and computing power to produce useful results. Today, we have more data than ever, and computing power is pervasive and cheap. The past few decades have seen massive scalability of data and information, allowing for much more accurate predictions than were ever possible in the long history of ML. Machine learning algorithms are therefore now better than ever and widely available in open source software. However, for simpler ML models the big-data revolution was more important than easy access to computational power.

    Here are some common usage scenarios for ML:

    Predicting a potential value

    Estimating a probability

    Classifying an object

    Grouping similar objects together

    Detecting relations

    Finding outliers

    There are many different types of ML algorithms, and each class works differently. In general, ML algorithms begin with an initial hypothetical model, determine how well this model fits a set of data, and improve the model iteratively. This training process continues until the algorithm learning is optimized or the user stops the process. Learning can be supervised, unsupervised, or semi-supervised (see Figure 1.1).

    Figure 1.1: Overview of supervised, unsupervised, and semi-supervised learning
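    To make the iterative "start with a model, measure the fit, improve" cycle concrete, here is a minimal sketch of a model trained by gradient descent on synthetic data. The data, learning rate, and stopping rule are illustrative assumptions, not anything prescribed by the book.

```python
# Minimal sketch of the iterative ML training loop: start from an initial
# "hypothetical" model, measure how well it fits the data, and improve it
# step by step until it stops learning. All values here are illustrative.
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=200)              # one input feature
y = 3.0 * X + 7.0 + rng.normal(0, 1, 200)     # noisy linear relationship

w, b = 0.0, 0.0            # initial model: y_hat = w * x + b
learning_rate = 0.01
prev_loss = float("inf")

for step in range(5000):
    y_hat = w * X + b
    error = y_hat - y
    loss = np.mean(error ** 2)                # how well the model fits the data
    # Gradients of the mean squared error with respect to w and b
    w -= learning_rate * 2 * np.mean(error * X)
    b -= learning_rate * 2 * np.mean(error)
    if prev_loss - loss < 1e-6:               # stop once improvement stalls
        break
    prev_loss = loss

print(f"stopped at step {step}: loss={loss:.3f}, w={w:.2f}, b={b:.2f}")
```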

    Supervised learning (SL) needs structured and labeled data to run and is best used for classification of data or for regression analysis, or both. Classification refers to the problem of identifying which category (sub-population) an observation belongs to. Regression analysis in the context of ML usually refers to building a prediction model.
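    As a concrete illustration of these two supervised tasks (my own sketch, not an example from the book), the short scikit-learn snippet below trains a classifier and a regressor on small synthetic, labeled datasets; the dataset sizes and model choices are assumptions made purely for illustration.

```python
# Hedged sketch of the two classic supervised-learning tasks: classification
# (predicting a category) and regression (predicting a numeric value).
# Uses scikit-learn with synthetic, labeled data; all settings are illustrative.
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split

# Classification: which category does an observation belong to?
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Regression: predict a continuous value from labeled examples.
X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
reg = LinearRegression().fit(X_train, y_train)
print("regression R^2:", reg.score(X_test, y_test))
```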

    Unsupervised learning (UL), in contrast, uses unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own. This technique is mostly used for clustering and anomaly detection. Clustering in this context refers to the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar to each other than to those in other groups (clusters). Anomaly detection is the identification of rare items, events, or observations that raise suspicion by differing significantly from the majority of the data. Typically, the anomalies translate to some kind of problem, such as bank fraud, network problems, medical problems, or even errors in a text. Anomalies are also referred to as outliers, novelties, noise, deviations, and exceptions.
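    Here is a similarly hedged sketch of the two unsupervised tasks just described, clustering and anomaly detection, using scikit-learn on synthetic data; the cluster count, contamination rate, and the made-up "fraud-like" points are illustrative assumptions.

```python
# Hedged sketch of the two unsupervised tasks mentioned above: clustering
# (grouping similar objects) and anomaly detection (flagging rare outliers).
# The data and parameters are illustrative assumptions, not from the book.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(300, 2))     # typical behavior
outliers = rng.uniform(low=6.0, high=9.0, size=(5, 2))     # rare, fraud-like points
X = np.vstack([normal, outliers])

# Clustering: no labels are given; the algorithm groups similar points itself.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Anomaly detection: -1 marks observations that differ from the majority.
flags = IsolationForest(contamination=0.02, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
print("anomalies flagged:", int((flags == -1).sum()))
```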

    Semi-supervised learning takes a middle ground. It uses a small amount of labeled data to guide learning on a larger set of unlabeled data. Semi-supervised learning is especially useful for medical imaging, where a small amount of labeled data can lead to a significant improvement in accuracy.
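    As a small illustration of this middle ground (again an assumption-laden sketch, not an example from the book), scikit-learn's SelfTrainingClassifier can combine a 5 percent labeled subset with the remaining unlabeled observations, which are conventionally marked with -1:

```python
# Hedged sketch of semi-supervised learning: a small labeled set guides
# learning on a much larger unlabeled set (unlabeled targets are marked -1).
# scikit-learn's SelfTrainingClassifier is used here purely as an illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Pretend only 5% of the observations were ever labeled.
rng = np.random.default_rng(0)
y_partial = y.copy()
unlabeled = rng.random(len(y)) > 0.05
y_partial[unlabeled] = -1

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)
print("accuracy on all data:", model.score(X, y))
```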

    Reinforcement learning (RL) is another technique that's turned out to be valuable for certain use cases. Reinforcement learning is an area of ML focused on how intelligent agents ought to take actions in an environment in order to maximize reward (see Figure 1.2). Reinforcement learning focuses on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).

    Figure 1.2: Reinforcement learning uses intelligent agents to make decisions.

    The RL training environment is typically stated in the form of a Markov decision process (MDP), because many RL algorithms for this context use dynamic programming techniques. In mathematics, an MDP is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying optimization problems solved via dynamic programming.
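    For readers who prefer to see this written out, the standard textbook formulation (an illustration in common notation, not a definition quoted from this book) looks like this:

```latex
% Standard MDP notation and the RL objective (illustrative, common-textbook form).
An MDP is a tuple $(\mathcal{S}, \mathcal{A}, P, R, \gamma)$, where
$P(s' \mid s, a)$ gives the probability of moving to state $s'$ after taking
action $a$ in state $s$, $R(s, a)$ is the reward for that transition, and
$\gamma \in [0, 1)$ discounts future rewards. The agent seeks a policy
$\pi(a \mid s)$ that maximizes the expected cumulative discounted reward:
\[
  J(\pi) = \mathbb{E}_{\pi}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}\, R(s_t, a_t) \right].
\]
```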

    The typical framing of an RL scenario is that a basic RL agent interacts with its environment in discrete time steps. At each time step, the agent receives the current state and reward. The agent then chooses an action from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state, and the reward associated with the transition is determined. The goal of an RL agent is to learn a policy that maximizes the expected cumulative reward.
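    That loop can be sketched in a few lines of code. The tiny "corridor" environment, the rewards, and the tabular Q-learning update below are all illustrative assumptions (Q-learning is just one common RL algorithm), not an implementation taken from the book.

```python
# Sketch of the agent-environment loop: at each time step the agent observes
# the state, picks an action, and the environment returns a new state and a
# reward. Tabular Q-learning on a made-up 5-state corridor, for illustration.
import numpy as np

N_STATES, ACTIONS = 5, [0, 1]        # action 0 = move left, 1 = move right
GOAL = N_STATES - 1                  # reaching the right end gives the reward

def step(state, action):
    """Environment: return (next_state, reward, done) for the chosen action."""
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))      # the agent's learned action values
alpha, gamma, epsilon = 0.1, 0.9, 0.2       # learning rate, discount, exploration

for episode in range(300):
    state, done = 0, False
    while not done:
        # Balance exploration (random action) against exploitation (best known action).
        action = int(rng.choice(ACTIONS)) if rng.random() < epsilon else int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Nudge the estimate toward the reward plus the discounted future value.
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

print("greedy action per state (1 = move right):", np.argmax(Q, axis=1))
```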

    So, how does deep learning (DL) relate to ML? Deep learning is defined as a subset of ML where artificial neural networks—algorithms built around the neural structure of the human brain—learn from data. In the same way that human beings learn from day-to-day events over time, a DL algorithm executes functions repeatedly, continuously learning and adjusting itself to improve accuracy. These are called DL algorithms because the neural networks have multiple (deep) hidden layers that enable learning of complex patterns in large amounts of data, as shown in Figure 1.3.

    Figure 1.3: Deep learning includes a neural network of hidden layers.

    Deep learning is useful because it performs well on tasks such as image and speech recognition, where other ML techniques perform poorly.
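    To illustrate the "many hidden layers" idea in code, here is a hedged sketch using scikit-learn's small multi-layer perceptron on synthetic data; real image or speech work would typically use a dedicated deep learning framework, and every setting here is an assumption made for illustration.

```python
# Hedged sketch of the idea behind deep learning: a neural network whose
# stacked hidden layers learn increasingly complex patterns from the data.
# scikit-learn's small MLP is used here only to illustrate the concept.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three hidden layers between the input and output layers (cf. Figure 1.3).
net = MLPClassifier(hidden_layer_sizes=(64, 64, 32), max_iter=500,
                    random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```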

    Another concept closely related to AI is automation. AI is often confused with automation, yet the two are fundamentally different. Whereas AI aims to mimic human intelligence in decisions and actions, automation focuses on streamlining repetitive, instructive tasks, usually with the objective of saving time and money and giving employees an opportunity to move on and upskill themselves so they can handle more complex tasks.

    So, what about robotics? A lot of people wonder if robotics is a subset of AI or if they are the same thing. Robotics is an interdisciplinary field that integrates computer science and engineering. It involves design, construction, operation, and use of robots. The goal of robotics is to design machines that can help humans. Robotics develops machines that can substitute for humans and replicate human actions. Robots can be used in many situations and for many purposes, but today many are used in dangerous environments, in manufacturing processes, or where humans cannot survive—for example, in space, under water, in high heat, and in the cleanup and containment of hazardous materials and radiation.

    Robots can take on any shape and form, but some are made to resemble humans in appearance. This is said to help in the acceptance of a robot for behaviors and tasks that are usually performed by people. Such robots attempt to replicate walking, lifting, speech, cognition, or any other human activity.

    So, does that mean robotics is a branch of AI? Well, it's surprisingly difficult to get experts to agree on exactly what constitutes a robot. Some people say that a robot must be able to think and make decisions. However, there is no standard definition of robot thinking. Requiring a robot to think suggests that it has some level of AI, and however you choose to define a robot, the fact is that robotics mostly involves designing,
