Generative AI - From Big Picture, To Idea, To Implementation
Ebook, 292 pages, 4 hours

About this ebook

Are you a business leader, manager, or executive? Do you feel confident and ready to embrace the potential of AI and stay ahead of the game in the fast-paced business world?

Our AI 101 book is designed exclusively for business leaders and executives like you. In the coming months and years, AI is set to revolutionize the business landscape, presenting both exciting opportunities and potential risks.

Executives, managers, and business leaders like you must be able to unlock the power of AI to empower their businesses for the future! But you need to be able to separate the hype from real business opportunities. And from real risks!

Equip yourself and your team with the knowledge needed to thrive in this AI-driven era. As a manager, you do NOT need to learn how to code or how to use machine learning or deep learning algorithms!

    1. Most AI-related Books teach you HOW to code and use the technical elements of AI.
    2. Here you learn the WHY of AI and the BUSINESS IMPLICATIONS of AI.
    3. And WHAT you as a manager have to do.

Not only on the AI implementation level, but (even more so) on the human level! Every AI project will fail if you don't know how to get your people on board, and you will know how to do this after completing this book.

Discover the Why, Impact, and Plan for AI Adoption in your business. Don't miss this chance to stay ahead in the AI revolution. Our short, professional, and focused chapters allow you to dive into specific areas of interest, tailoring your learning experience to suit your needs.

Unlock the secrets to harnessing AI's full potential for your business success! Secure your place at the forefront of the AI-driven future.

Language: English
Release date: Mar 21, 2024
ISBN: 9798224974313
    Book preview

    Generative AI - From Big Picture, To Idea, To Implementation - SADANAND PUJARI

    Framework to Assess an Organization's Maturity for Artificial Intelligence

    In this chapter, I would like to introduce a framework that you can use to assess an organization for its maturity to deploy artificial intelligence and for its maturity to derive value out of the AI deployment. You can do this for your organization or for your client's organization. I have curated these seven factors out of my experiences in deploying AI in my client organizations and in seeing the challenges faced by such organizations firsthand. Let's start with data availability. AI is hungry for data. When I say data, I mean large volumes of data. So data availability is a key requirement. It is not just data, but data at a very granular level and data at multiple levels. Let us say you are in an insurance company trying to detect fraud in motor claims.

    You need to know the factors that are driving fraud, like the age of the driver, the time of the accident, the place where the accident occurred, and the profile of the vehicle, and data for all these factors should be properly tied to past incidents of fraud. Many organizations may not be ready for this. Some may not even be tracking incidences of fraud. Even if fraud were tracked, data for all the factors may not be known, or the organization may not even know the comprehensive list of factors that drive fraud. Even if data for all the factors were available, the linkage of that data to past incidents of fraud may not be there. So if you are not ready with this level of data, you need to first build the pipeline, that is, the data pipeline.

    You can then start an initiative involving artificial intelligence in about a year's time, because building the pipeline takes time. There is no shortcut. OK, next is process standardization. Let's say you're trying to automate your invoice process. Your process needs to be standardized first before you pursue automation, and the process needs to be documented. You may say that you have lots of process documents, but the level of granularity required for automation purposes is vastly different. You need to know which icon in the app needs to be clicked. Is it a single click, a double click, or a right click? Where exactly in the invoice document does the invoice number appear? Is it in the third or fourth line from the top, or from the bottom?

    That is the level of granularity we are talking about here. These things may not be readily available within organizations. Many such process activities are simply intuitive to the process operators. So there are lots of challenges even in documenting a process to this level of granularity. So assess where you are. That's very important. The number one reason for many RPA (robotic process automation) deployments to fail is the lack of process standardization and the lack of granular-level process documentation. The next factor is forecast accuracy. Do you measure the forecast accuracy of your operational or financial processes? What is forecast accuracy? If, let's say, you tell your management that you will achieve 10 million this month in your inside sales process, are you achieving 10 million, or is it 10.1 million, or 10.9 million, or 8.9 million? These things matter for planning, because your supply chain can't under-plan or over-plan; both will have an adverse financial impact.
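    To make the forecast accuracy idea concrete, here is a minimal sketch (my own illustration, not from the book) of one common way to compute it: one minus the absolute percentage error against the actual result. The numbers reuse the 10-million inside-sales example and are purely illustrative.

```python
def forecast_accuracy(forecast: float, actual: float) -> float:
    """One common definition: 1 minus the absolute percentage error vs. the actual."""
    return 1 - abs(actual - forecast) / actual

# Illustrative numbers from the inside-sales example (in millions).
forecast = 10.0
for actual in (10.0, 10.1, 10.9, 8.9):
    acc = forecast_accuracy(forecast, actual)
    print(f"forecast {forecast}M vs actual {actual}M -> accuracy {acc:.1%}")
```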

    Forecasting is a process that can be applied in any industry, and not only to processes that have a clear financial linkage like the sales process. You can try it in complaint handling or in any back-end operations process too. In fact, forecasting can be among the first processes for which you try AI. The key point I would like to make is that if your forecasting process is already mature, it makes it that much easier to deploy AI. And every organization would want to have a high forecast accuracy, because a higher forecast accuracy means that your process is predictable, right? We all want predictability. The next factor is: does your organization have past experience in managing proof-of-concept deployments of new technologies? Nurturing a new initiative, especially a new tech initiative, involves multiple aspects and challenges, like change management.

    Pitching the initiative to your management, calculating the ROI, getting sign-off from the finance team, and tracking that number afterwards: all these experiences can be quite helpful when you're introducing a new technology like AI. Do you have a dedicated staff member to manage the AI deployment? That is the next factor. Even if you've outsourced the IT department to a service vendor, it makes sense to have a full-time staff member who is knowledgeable in AI to drive a high-tech initiative like AI. Also, an employee who is internal to your organization will be able to navigate the dynamics of your organization better, and you can fix accountability with that dedicated resource, too. The key here is that the dedicated resource should know enough about AI; otherwise it's not going to help.

    The next factor: does the organization, especially its top management, have a clear idea about what AI is and how it compares against parallel technologies like RPA, cloud, the Internet of Things (IoT), and even Industry 4.0? This is very important. There is a lot of confusion around these overlapping technologies, and due to rampant overselling, organizations with gullible executives who lack this knowledge are paying a heavy price. Last but not least, is top management ready to support AI? Maybe the organization has other priorities and the management team's attention is needed somewhere else. An AI deployment can't be run by IT alone. IT is only an enabler; you need the support of the entire organization. Evaluate this clearly. So use these seven factors to assess where your organization stands. These are subjective factors, no doubt about it, but they are very helpful for assessing where you stand today. You can even do this exercise as a team. I have shared this framework right at the beginning so you can use it and its factors as a reference as you go through the rest of the book.
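    Purely as an illustrative aid (not part of the book's material), here is a minimal sketch of how a team might turn the seven factors into a simple self-assessment scorecard. The factor names follow the chapter; the 1-to-5 scale and the example scores are assumptions.

```python
# Hypothetical self-assessment: rate each factor from 1 (weak) to 5 (strong).
factors = [
    "Data availability",
    "Process standardization and documentation",
    "Forecast accuracy maturity",
    "Experience with proof-of-concept deployments",
    "Dedicated, AI-knowledgeable staff",
    "Top management clarity about AI vs. parallel technologies",
    "Top management readiness to support AI",
]

# Example scores agreed on by the assessment team (illustrative only).
scores = {factor: 3 for factor in factors}
scores["Data availability"] = 2

average = sum(scores.values()) / len(scores)
print(f"Overall maturity score: {average:.1f} / 5")
for factor, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"  {score}/5  {factor}")
```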

    Strong AI Vs Weak AI

    The interest in artificial intelligence has increased tremendously in recent times due to ChatGPT and generative AI. Generative AI is the technology that fuels ChatGPT. But before understanding generative AI, let us understand what AI is. This understanding is very important, and it will help you avoid some fundamental mistakes in deployment. We are in the midst of change. Not a small change, but a quantum shift. We are in the fourth Industrial Revolution, also called Industry 4.0. Each industrial revolution brought with it new ways of doing business, new ways of working, and new technology. Industry 4.0 is driven by many technologies, notable among them machine learning, cloud, and advances in sensor technology that have made industrial equipment able to interact with one another and transmit data.

    Advances in sensor technology are fueling the industrial Internet of Things, or IoT. Other technologies like augmented reality are also important players in Industry 4.0. The third Industrial Revolution introduced computers and automation in a big way. The second Industrial Revolution played a pivotal role in the introduction of mass production and many other management concepts; the five-day workweek and the three eight-hour shifts in a day were also introduced during the second Industrial Revolution. Industry 1.0 was more about mechanization. Now we have generative AI, which is fueling the rise of ChatGPT and other applications. I am tempted to say that generative AI is heralding the fifth Industrial Revolution.

    I hope you appreciate the background I am trying to set, and it brings us to the fundamental question: what is AI? Keep this question in mind as we explore another one. We will answer all the questions shortly. Which movie comes to your mind when you think of AI? It could be any of a number of Hollywood or Indian movies. Now a related question: is AI all about what you see in movies, or is it something else? Well, what you see in movies is actually known as strong AI. Here computers are thinking at the level of human beings. We are not there yet. This type of strong AI is also referred to as artificial general intelligence or artificial superintelligence.

    If we are not at the level of strong AI, where are we today? We are actually at the level of weak AI. We call the current state of AI "weak" only to distinguish it from strong AI; it is not really weak. Weak AI is about solving problems by detecting patterns in data. This is the dominant mode of AI today. A little earlier I used the term pattern. So what is a pattern? We see patterns all around us. We see patterns even in our dresses. You may say that you are wearing a checked shirt, and the check is a pattern. So a pattern is something that gets repeated. But what about the frequency of repetition? Should it not be consistent? Yes, it has to be consistent. Is that all there is to a pattern? I said a pattern is something that gets repeated. That something is a characteristic. It could be a pattern in numbers or in images.

    So a pattern is a consistent, recurring characteristic. We use patterns to solve problems in weak AI. Weak AI is all about pattern recognition, and pattern recognition is a type of human intelligence. We are bringing that human intelligence into software, and hence the name artificial intelligence. We are not bringing other forms of human intelligence, like emotion, into the software. That is why the current state is called weak AI. Let us now explore the patterns in a visual: a line graph, or run chart, of sales in a company. Monthly data is plotted for five years, from 2010 to 2014. So what are the patterns in this visual? The peaks in any calendar year happen in the month of July. The lows, that is, the minimum points in a calendar year, are consistently seen in January.

    What other patterns are there in this visual? There is an increase and a decrease somewhere in the April-May time frame every year. The increase and subsequent decrease are more pronounced in some years and less pronounced in others. Are there any other patterns in this visual? There is one left: the peak is increasing at a nearly uniform rate. So merely by looking at the visual, we have identified these four patterns. Can we use this to forecast the future? Absolutely; we expect the patterns we observed to recur in the future data. So we identified the patterns and we used the patterns to predict the future. This is what happens in AI.

    In this scenario, the number of data points is small, so we were able to do this by visual observation. In real life, though, we use software for pattern recognition and for predicting the future. Whenever there is a discussion about AI, there is a mention of the term machine learning. What is machine learning and how does it relate to AI? While AI is about achieving human intelligence in a system or software, it is machine learning that drives AI. Machine learning is the engine that enables artificial intelligence; it is the core technology that drives AI. Deep learning is a more advanced form of machine learning, and deep learning is better suited for analysis involving images, audio, video, and text data.
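    As a minimal illustration of "using software for pattern recognition and prediction" (my own sketch, not from the book, with invented monthly sales figures), the code below derives the two patterns discussed above, the repeating yearly shape and the steady growth, and reuses them to project the next year.

```python
import statistics

# Hypothetical monthly sales (in millions) for 2010-2014, with a July peak,
# a January low, and a steady year-on-year increase. Numbers are invented.
years = [2010, 2011, 2012, 2013, 2014]
sales = {
    year: [5 + i + 3 * (m == 6) - 2 * (m == 0) + m * 0.1 for m in range(12)]
    for i, year in enumerate(years)
}

# Pattern 1: the average shape of a year (the seasonal pattern).
seasonal = [statistics.mean(sales[y][m] for y in years) for m in range(12)]

# Pattern 2: the average year-on-year growth of the yearly mean (the trend).
yearly_means = [statistics.mean(sales[y]) for y in years]
growth = statistics.mean(b - a for a, b in zip(yearly_means, yearly_means[1:]))

# Forecast 2015 by shifting the seasonal pattern up by the trend.
offset = yearly_means[-1] + growth - statistics.mean(seasonal)
forecast_2015 = [round(s + offset, 2) for s in seasonal]
print("Forecast for 2015, month by month:", forecast_2015)
```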

    ChatGPT uses deep learning. Finally, please remember that machine learning is actually at the intersection of three disciplines: math, programming, and domain knowledge. Why do you think math is important? We learned about patterns earlier. How are patterns represented? Patterns are represented as numbers. Even an image or text is represented as numbers before it is processed by a software program. The third aspect in machine learning is domain knowledge. Your ability to construct an algorithm to identify patterns is influenced by domain knowledge, and domain knowledge plays a very important role in machine learning. A machine learning application developed without adequate domain knowledge results in a low forecast accuracy. That is, the predictions are not accurate or reliable.
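    To make "even an image or text is represented as numbers" concrete, here is a tiny illustration of my own (not from the book): text becomes character codes, and a small grayscale image becomes a grid of pixel intensities.

```python
# Text as numbers: each character has a numeric code.
text = "AI"
print([ord(ch) for ch in text])  # [65, 73]

# A hypothetical 3x3 grayscale image as numbers: 0 = black, 255 = white.
image = [
    [0,   0,   255],
    [0,   255, 255],
    [255, 255, 255],
]
print(image)  # the "bright corner" pattern is nothing more than these numbers
```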

    Four Types of Data Analytics

    In this chapter, I would like to introduce the four types of data analysis approaches. This understanding is very important. Each type has a specific role in the analysis process, helping businesses make informed decisions and derive maximum value out of data analysis. We must understand what type of analysis is required for a given situation. Is it descriptive, diagnostic, predictive, or prescriptive? Each one calls for a different approach. Why is this relevant in the context of machine learning? Descriptive analysis helps you summarize and understand data. Diagnostic analysis provides insights into why certain patterns exist in data. Predictive analysis forecasts future trends, which is a core goal of many machine learning models.

    Prescriptive analysis offers suggestions on how to handle future scenarios, aiding the decision-making process. So machine learning is used widely in diagnostic, predictive, and prescriptive analysis; the opportunity for machine learning in descriptive analysis is really low. In the descriptive analysis category, we are essentially summarizing what has happened. In the context of personal health, it is like asking: what was my average weight gain this month? In the sales domain, it would translate to querying: what was our total sales last quarter? Moving on to diagnostic analysis, it revolves around understanding why something happened. If you noticed a sudden weight gain last week, you would be trying to find the reasons behind it. Similarly, in sales, if there was a significant surge in sales in the month of, let's say, January, the focus would be on pinpointing what fueled that surge.

    Next, we have predictive analysis. Here we are trying to forecast what could potentially happen in the future based on current data. In the health scenario, it would mean projecting your weight six months down the line based on your current habits. In the sales area, it is about leveraging existing trends to predict sales in the upcoming quarter. It is in predictive analysis that we will be using machine learning and deep learning solutions a lot. Lastly, we arrive at prescriptive analysis, where we focus on recommending actions you can take to achieve desired outcomes. In personal health, it is about outlining the lifestyle changes needed to maintain or reduce your weight. For sales, it would imply strategizing to enhance sales performance in the next quarter.
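    As a small illustration (my own sketch, with invented weight readings), the code below contrasts a descriptive question ("what was my average weight this month?") with a predictive one ("what will my weight be if the trend continues?"), using a plain linear trend as the assumed model.

```python
import statistics  # statistics.linear_regression requires Python 3.10+

# Hypothetical monthly weight readings (kg) for the last six months.
months = [1, 2, 3, 4, 5, 6]
weight = [78.0, 78.4, 79.1, 79.5, 80.2, 80.6]

# Descriptive: summarize what has happened.
print("Average weight over the period:", round(statistics.mean(weight), 1), "kg")

# Predictive: fit a straight-line trend and project six months ahead.
slope, intercept = statistics.linear_regression(months, weight)
projected = intercept + slope * 12
print("Projected weight at month 12:", round(projected, 1), "kg")
```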

    What Gets Measured Gets Improved

    If you want to reduce weight, what could you possibly do apart from exercising or cutting down on calories? What could be the simplest strategy to reduce your weight? Well, what about measuring your weight every day, preferably at the same time of the day? Research has shown that measuring your weight every day propels you to take action towards your goal. The philosophy behind that approach is: what gets measured gets improved. This philosophy is pivotal to machine learning, or to any data analysis. Staying with the weight example, what do you think could be responsible for being overweight? It could be a variety of factors: eating too much, taking medications like steroids, stress, not getting peaceful and adequate sleep every day, hormonal imbalance, and even genetics.

    So all the causes, from stress to medication to sleep, are driving the weight gain or overweight scenario, and all these causes are called independent variables in statistics. Weight is dependent on these causes or factors and is hence called the dependent variable. After all, there is a dependency, right? The dependent variable is known by many other names, like output variable, response variable, and labeled data. I could express the scenario of dependent and independent variables as an equation, y = f(x), or, with six causes, y = f(x1, x2, x3, x4, x5, x6). I'm sure many of you would have learnt about this equation in high school. This equation is fundamental in machine learning. Now consider an agriculture scenario.
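    Purely as an illustration of y = f(x1, ..., x6): the sketch below expresses weight as a function of the six factors just listed. The function form and every coefficient are invented for this example and are not from the book.

```python
# Hypothetical y = f(x1, ..., x6): weight as a function of six factors.
# All coefficients are made up purely to show the idea of dependence.
def weight(calories, steroid_dose, stress, sleep_hours, hormone_index, genetic_risk):
    return (
        60
        + 0.004 * calories        # eating too much pushes weight up
        + 0.8 * steroid_dose      # some medications add weight
        + 0.5 * stress            # stress score, e.g. 0-10
        - 0.6 * sleep_hours       # good sleep pulls weight down
        + 1.2 * hormone_index     # hormonal imbalance
        + 2.0 * genetic_risk      # genetics, e.g. 0-1
    )

print(round(weight(2500, 0, 4, 7, 1, 0.5), 1), "kg (illustrative only)")
```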

    If I apply fertilizer regularly, there will be a healthy plantation, represented by growth in the height of the plant. So what would be the dependent and independent variables? In this case, the height of the plant is the dependent variable because it depends on the fertilizer. The fertilizer does not depend on anything else, and hence we call it an independent variable. In an exam scenario, if you don't take enough rest or get enough sleep before the exam, you may not be able to concentrate and hence you may score poorly. You may actually fall asleep in the exam hall too; has that ever happened to you? So in this scenario, the grade is the dependent variable and the number of hours of sleep is the independent variable.

    In an automobile scenario, the carbon emission of a vehicle is driven by the volume and weight of the vehicle. So carbon emission is the dependent, or output, variable. In an insurance scenario, the charges to be paid by a customer for insuring himself or herself depend on many factors like age, body mass index (BMI), sex, smoker status, and so on. An insurance company will demand a higher premium from you if, in its assessment, you are likely to die sooner. So insurance companies go to great lengths, do a lot of analysis, and use machine learning techniques to determine the right price so that the risk is adequately covered. This is part of underwriting, or risk management, by an insurance company.

    Why is an understanding of dependent and independent variables important in machine learning? It is important because the kind of dependent variable, and its presence or absence in the data set, determines the kind of algorithms we will use. Consider two examples from the insurance industry. The insurer can use these factors to determine the insurance premium charges, which is a numeric output, or use the same factors to determine whether an insurance policy should be issued or not, which is a non-numeric output. Both scenarios are possible, and both can be assessed using the same set of independent variables. The kind of patterns that form a numeric output will differ from those of a non-numeric output.

    If the output or dependent variable is numeric, we will use a regression algorithm, and if the output is non-numeric, like a yes-or-no scenario, we will use a classification algorithm. In fact, regression and classification algorithms belong to a type of machine learning known as supervised learning. That is, we supervise the algorithms to find the patterns relevant to the objective, as represented by the dependent or output variable. We have understood supervised learning, but what is unsupervised learning? We use unsupervised learning when our dataset lacks a dependent variable. It is quite common in real life to encounter datasets without a dependent variable. Often, organizations don't possess information about the dependent variable linked to the various independent factors or variables.
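    The sketch below (my own illustration, with invented records, using scikit-learn) shows the distinction just described: the same independent variables [age, BMI, smoker] feed a regression model when the target is a numeric premium and a classification model when the target is a yes/no policy decision.

```python
from sklearn.linear_model import LinearRegression, LogisticRegression

# Invented insurance records: [age, BMI, smoker (1 = yes, 0 = no)].
X = [[25, 22, 0], [40, 28, 1], [35, 30, 0], [52, 27, 1], [60, 31, 0], [30, 24, 1]]

# Numeric dependent variable -> regression (premium amounts, made up).
premium = [1200, 3400, 2100, 4300, 3000, 2600]
reg = LinearRegression().fit(X, premium)
print("Predicted premium:", round(reg.predict([[45, 29, 1]])[0]))

# Non-numeric (yes/no) dependent variable -> classification (issue policy?).
issue_policy = [1, 1, 1, 0, 0, 1]  # 1 = issue, 0 = decline (made up)
clf = LogisticRegression().fit(X, issue_policy)
print("Issue policy?", "yes" if clf.predict([[45, 29, 1]])[0] == 1 else "no")
```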

    Collecting data for both dependent and independent variables takes a lot of time and effort. Many organizations simply don't have this detailed data readily available. So what is their solution? They begin with unsupervised learning and eventually transition to supervised learning. Generally speaking, supervised learning accuracy is higher, but that doesn't mean unsupervised learning is inferior. Unsupervised learning is very effective at detecting clusters and unearthing outliers in data. Outlier detection is very useful in areas like fraud detection.
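    Here is a minimal sketch (again my own, with invented motor-claim records) of unsupervised learning on data that has no dependent variable: KMeans groups the records into clusters, and IsolationForest flags the unusual record that might warrant a fraud review.

```python
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

# Invented motor-claim records: [claim amount, days since policy start].
claims = [[1200, 300], [1100, 280], [1300, 310], [900, 250],
          [950, 260], [1250, 305], [9500, 12]]  # the last one looks unusual

# Clustering: group similar claims together (no dependent variable needed).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(claims)
print("Cluster labels:", labels)

# Outlier detection: -1 marks records that do not fit the rest of the data.
flags = IsolationForest(random_state=0).fit_predict(claims)
print("Outlier flags:", flags)  # the 9500 claim should stand out
```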

    A different take on the types of algorithms

    If you're still not clear about the differences between supervised and unsupervised learning, or between classification and regression, this chapter will help you. Imagine I am inputting three images of apples for the system to process. All three apples are red in color, they have a green leaf at the top, and they are of a particular shape. Along with these three images, I provide a reference telling the system that all three images being fed into it are apples. That reference helps the computer so that, if a similar image is fed into the system in future, the system can understand that it is an apple. The system basically looks for the characteristics in an image.

    In this case, it is the color, the shape, and the presence of a green leaf. These three characteristics help to define an apple. If those three characteristics are present, the system will say it is an apple. If the characteristics are not present, as in another type of apple, the system will say it is not an apple. Look at the way the system has worded the response: it did not raise an error, it said "not an apple". All that the system knows is an apple, based on the characteristics we have defined, and everything else is "not an apple" for the system. In this case, the apple is pink in color and not wine red, hence it categorizes the pink-colored apple as "not an apple". If I feed in yet another type of apple, what do you think the system will say? It will still say "not an apple", because the protrusion of the green leaf is on the right side, while in the apples in our input it is on the left side.

    Humans can overlook this difference, but systems cannot. So this is supervised learning: we tell the system to learn to identify an object based on certain characteristics. In unsupervised learning, all the system can do is group the fruits into clusters such as apples, oranges, and bananas; anything that fits none of the groups is an outlier. So unsupervised learning is very helpful in customer segmentation and also in fraud detection. How is unsupervised learning useful in fraud detection? An outlier is a point of interest in any analysis. An outlier could be fraud. I'm not saying outliers are frauds, but suspicion is definitely there. An outlier is something that does not fit in with the rest of the data points. Supervised learning algorithms can be divided into regression and classification based on what I am trying to predict.
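    As a toy illustration of the apple example (my own sketch; the feature names and rules are assumptions invented for this illustration), the classifier below only knows the characteristics it was "supervised" on, so anything that deviates, even just a leaf on the other side, is labelled "not an apple".

```python
# Each image is reduced to the three characteristics from the text,
# plus the side the leaf protrudes from (a detail the system is strict about).
def classify(color: str, shape: str, leaf_side: str) -> str:
    is_apple = color == "red" and shape == "round" and leaf_side == "left"
    return "apple" if is_apple else "not an apple"

print(classify("red", "round", "left"))    # apple
print(classify("pink", "round", "left"))   # not an apple (wrong color)
print(classify("red", "round", "right"))   # not an apple (leaf on the wrong side)
```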
