

SVKM’S NMIMS Deemed to be UNIVERSITY

MUKESH PATEL SCHOOL OF TECHNOLOGY MANAGEMENT AND


ENGINEERING

A REPORT

ON

FUTURE TRENDS IN ARTIFICIAL INTELLIGENCE

NAME OF STUDENT : MR. DWAYNE SILVAPINTO


REG. NO. : MPSTMEBTech-202100109
STUDENT NO : 70322000146

Artificial intelligence in Facial Recognition


Despite a great deal of recent negative press, facial recognition technology is widely
regarded as central to the future of Artificial Intelligence because of its immense
popularity. It promises immense growth in 2020 and beyond.

Facial Recognition — Artificial Intelligence Application

A facial recognition system is a technology capable of identifying or verifying a
person from a digital image or a video frame from a video source. There are multiple
methods by which facial recognition systems work, but in general they work by
comparing selected facial features from a given image with faces within a database.
Facial recognition is also described as a biometric Artificial Intelligence based
application that can uniquely identify a person by analysing patterns based on the
person's facial textures and shape.
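
As a rough illustration of the comparison step described above, the following minimal
Python sketch matches a "probe" face embedding against a small database using cosine
similarity. The embeddings, names, and threshold are invented placeholders; a real
system would obtain the vectors from a trained face-recognition model rather than from
random numbers.

# Minimal sketch: verifying a face by comparing feature vectors (embeddings).
# The embeddings here are stand-ins; a real system would compute them with a
# trained face-recognition model (e.g. a deep CNN), not random numbers.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict, threshold: float = 0.6):
    """Return the best-matching identity, or None if below the threshold."""
    best_name, best_score = None, -1.0
    for name, stored in database.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy usage with random 128-dimensional embeddings
rng = np.random.default_rng(0)
db = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = db["alice"] + 0.05 * rng.normal(size=128)   # slightly perturbed "new photo"
print(identify(probe, db))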

Applications

Mobile platforms

Social media
Social media platforms have adopted facial recognition
capabilities to diversify their functionalities in order to
attract a wider user base amidst stiff competition from
different applications.


Founded in 2013, Looksery went on to raise money for its face modification app on
Kickstarter. After successful crowdfunding, Looksery launched in October 2014. The
application allows users to video chat with others through a special filter that
modifies the look of their faces. While there are image-augmenting applications such
as FaceTune and Perfect365, they are limited to static images, whereas Looksery
brought augmented reality to live video. In late 2015, SnapChat purchased Looksery,
which would then become its landmark lenses function.

SnapChat's animated lenses, which used facial recognition technology, revolutionized
and redefined the selfie by allowing users to add filters that change the way they
look. The selection of filters changes every day; some examples include one that
makes users look like an old and wrinkled version of themselves, one that airbrushes
their skin, and one that places a virtual flower crown on top of their head.

DeepFace is a deep learning facial recognition system created by a research group at
Facebook. It identifies human faces in digital images. It employs a nine-layer neural
net with over 120 million connection weights, and was trained on four million images
uploaded by Facebook users.

Face ID
Apple introduced Face ID on the flagship iPhone X as a biometric authentication
successor to Touch ID, a fingerprint-based system. Face ID has a facial recognition
sensor that consists of two parts: a "Romeo" module that projects more than 30,000
infrared dots onto the user's face, and a "Juliet" module that reads the pattern. The
pattern is sent to a local "Secure Enclave" in the device's central processing unit
(CPU) to confirm a match with the phone owner's face. The facial pattern is not
accessible by Apple. The system will not work with eyes closed, in an effort to
prevent unauthorized access.

The technology learns from changes in a user's appearance, and therefore works with
hats, scarves, glasses, many types of sunglasses, beards, and makeup.

Deployment in security services

Commonwealth
The Australian Border Force and New Zealand Customs Service have set up an automated
border processing system called SmartGate that uses face recognition, which compares
the face of the traveller with the data in the e-passport microchip. All Canadian
international airports use facial recognition as part of the Primary Inspection Kiosk
program, which compares a traveller's face to their photo stored on the ePassport.
This program first came to Vancouver International Airport in early 2017 and was
rolled out to all remaining international airports in 2018–2019. The Tocumen
International Airport in Panama operates an airport-wide surveillance system using
hundreds of live face recognition cameras to identify wanted individuals passing
through the airport.

United States
The U.S. Department of State operates one of the largest face recognition systems
in the world with a database of 117 million American adults, with photos typically
drawn from driver's license photos. Although it is still far from completion, it is being
put to use in certain cities to give clues as to who was in the photo. The FBI uses the
photos as an investigative tool, not for positive identification.


The FBI has also instituted its Next Generation Identification program to include face
recognition, as well as more traditional biometrics like fingerprints and iris scans,
which can pull from both criminal and civil databases. The federal Government
Accountability Office criticized the FBI for not addressing various concerns related
to privacy and accuracy.

China
Many public places in China are equipped with facial recognition systems, including
railway stations, airports, tourist attractions, expos, and office buildings.

Advantages and disadvantages

Compared to other biometric systems

One key advantage of a facial recognition system is that it is able to perform mass
identification, as it does not require the cooperation of the test subject to work.
Properly designed systems installed in airports, multiplexes, and other public places
can identify individuals among the crowd without passers-by even being aware of
the system.
However, compared to other biometric techniques, face recognition may not be the
most reliable and efficient. Quality measures are very important in facial recognition
systems as large degrees of variations are possible in face images. Factors such as
illumination, expression, pose and noise during face capture can affect the
performance of facial recognition systems. Among all biometric systems, facial
recognition has the highest false acceptance and rejection rates, thus questions
have been raised on the effectiveness of face recognition software in cases of
railway and airport security.

Weaknesses
Ralph Gross, a researcher at the Carnegie Mellon Robotics Institute, described one
obstacle in 2008 related to the viewing angle of the face: "Face recognition has been
getting pretty good at full frontal faces and 20 degrees off, but as soon as you go
towards profile, there've been problems." Besides pose variations, low-resolution
face images are also very hard to recognize. This is one of the main obstacles to
face recognition in surveillance systems.
Face recognition is less effective if facial expressions vary. A big smile can render
the system less effective. For instance, Canada in 2009 allowed only neutral facial
expressions in passport photos.


There is also inconsistency in the datasets used by researchers. Researchers may use
anywhere from several subjects to scores of subjects, and from a few hundred images
to thousands of images. It is important for researchers to make the datasets they
used available to each other, or at least to have a standard dataset.
Data privacy is the main concern when it comes to storing biometric data in
companies. Stores of face or biometric data can be accessed by third parties if they
are not stored properly or are hacked. Writing in Techworld in 2017, Parris adds,
"Hackers will already be looking to replicate people's faces to trick facial
recognition systems, but the technology has proved harder to hack than fingerprint or
voice recognition technology in the past."

Ineffectiveness
Critics of the technology complain that the London Borough of Newham scheme had,
as of 2004, never recognized a single criminal, despite several criminals in the
system's database living in the Borough and the system having been running for
several years. "Not once, as far as the police know, has Newham's automatic face
recognition system spotted a live target." This information seems to conflict with
claims that the system was credited with a 34% reduction in crime (hence why it was
rolled out to Birmingham also). However it can be explained by the notion that when
the public is regularly told that they are under constant video surveillance with
advanced face recognition technology, this fear alone can reduce the crime rate,
whether the face recognition system technically works or does not. This has been
the basis for several other face recognition based security systems, where the
technology itself does not work particularly well but the user's perception of the
technology does.
An experiment in 2002 by the local police department in Tampa, Florida, had
similarly disappointing results.
A system at Boston's Logan Airport was shut down in 2003 after failing to make any
matches during a two-year test period.
In 2014, Facebook stated that in a standardized two-option facial recognition test, its
online system scored 97.25% accuracy, compared to the human benchmark of
97.5%.
In 2018, a report by the civil liberties and rights campaigning organisation Big
Brother Watch revealed that two UK police forces, South Wales Police and the
Metropolitan Police, were using live facial recognition at public events and in
public spaces. In September 2019, South Wales Police's use of facial recognition was
ruled lawful.
Systems are often advertised as having accuracy near 100%; this is misleading as
the studies often use much smaller sample sizes than would be necessary for large
scale applications. Because facial recognition is not completely accurate, it creates
a list of potential matches. A human operator must then look through these potential
matches, and studies show the operators pick the correct match out of the list only
about half the time. This causes the issue of targeting the wrong suspect.

Deep learning
Deep learning is a class of machine learning algorithms that uses multiple layers to
progressively extract higher level features from the raw input. For example, in image
processing, lower layers may identify edges, while higher layers may identify the
concepts relevant to a human such as digits or letters or faces.
Deep learning architectures such as deep neural networks, deep belief
networks, recurrent neural networks and convolutional neural networks have been
applied to fields including computer vision, machine vision, speech
recognition, natural language processing, audio recognition, social network
filtering, machine translation, bioinformatics, drug design, medical image analysis,
material inspection and board game programs, where they have produced results
comparable to and in some cases surpassing human expert performance.
Artificial neural networks (ANNs) were inspired by information processing and
distributed communication nodes in biological systems. ANNs have various
differences from biological brains. Specifically, neural networks tend to be static and
symbolic, while the biological brain of most living organisms is dynamic (plastic) and
analog.
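
To make the idea of progressively higher-level features concrete, here is a minimal,
illustrative PyTorch sketch of a small convolutional network; the layer sizes and
input shape are arbitrary and are not taken from any system described in this report.

# Minimal sketch of a deep network whose successive layers extract
# progressively higher-level features (edges -> shapes -> class scores).
# Architecture and sizes are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # low-level features (edges)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # mid-level features (shapes, parts)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # high-level concepts (e.g. digit classes)
)

x = torch.randn(1, 1, 28, 28)   # one dummy 28x28 grayscale image
print(model(x).shape)           # torch.Size([1, 10])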

Applications

Automatic speech recognition


Method                                   Percent phone error rate (PER) (%)
Randomly Initialized RNN                 26.1
Bayesian Triphone GMM-HMM                25.6
Hidden Trajectory (Generative) Model     24.8
Monophone Randomly Initialized DNN       23.4
Monophone DBN-DNN                        22.4
Triphone GMM-HMM with BMMI Training      21.7
Monophone DBN-DNN on fbank               20.7
Convolutional DNN                        20.0


Large-scale automatic speech recognition is the first and most convincing successful
case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks that
involve multi-second intervals containing speech events separated by thousands of
discrete time steps, where one time step corresponds to about 10 ms. LSTM with
forget gates is competitive with traditional speech recognizers on certain tasks.
The initial success in speech recognition was based on small-scale recognition tasks
based on TIMIT. The data set contains 630 speakers from eight major dialects of
American English, where each speaker reads 10 sentences. Its small size lets many
configurations be tried. More importantly, the TIMIT task concerns phone-sequence
recognition, which, unlike word-sequence recognition, allows weak phone bigram
language models. This lets the strength of the acoustic modeling aspects of speech
recognition be more easily analyzed. The error rates listed in the table above,
including these early results and measured as percent phone error rates (PER), have
been summarized since 1991.
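
As an illustration of the kind of acoustic model discussed here, the following minimal
PyTorch sketch maps a sequence of audio feature frames to per-frame phone scores with
an LSTM; the feature dimension, hidden size, number of phone classes, and dummy data
are assumptions for illustration only.

# Minimal sketch of an LSTM acoustic model for phone-sequence recognition,
# in the spirit of the TIMIT experiments described above. All sizes are
# illustrative placeholders.
import torch
import torch.nn as nn

class PhoneRecognizer(nn.Module):
    def __init__(self, n_features=40, hidden=128, n_phones=61):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_phones)

    def forward(self, frames):            # frames: (batch, time, n_features)
        out, _ = self.lstm(frames)        # per-frame hidden states
        return self.classifier(out)       # per-frame phone scores

model = PhoneRecognizer()
dummy = torch.randn(2, 100, 40)           # 2 utterances, 100 frames of 40-dim features
print(model(dummy).shape)                 # torch.Size([2, 100, 61])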
The debut of DNNs for speaker recognition in the late 1990s, for speech recognition
around 2009–2011, and of LSTM around 2003–2007 accelerated progress in eight major
areas:

• Scale-up/out and accelerated DNN training and decoding
• Sequence discriminative training
• Feature processing by deep models with solid understanding of the underlying
mechanisms
• Adaptation of DNNs and related deep models
• Multi-task and transfer learning by DNNs and related deep models
• CNNs and how to design them to best exploit domain knowledge of speech
• RNN and its rich LSTM variants
• Other types of deep models including tensor-based models and integrated
deep generative/discriminative models.

All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox,
Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice
search, and a range of Nuance speech products) are based on deep learning.


Image recognition
A common evaluation set for image classification is the MNIST data set.
MNIST is composed of handwritten digits and includes 60,000 training examples and
10,000 test examples. As with TIMIT, its small size lets users test multiple
configurations. A comprehensive list of results on this set is available.
Deep learning-based image recognition has become "superhuman", producing more
accurate results than human contestants. This first occurred in 2011.
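
A minimal, illustrative training sketch on MNIST follows, assuming PyTorch and
torchvision are available; the architecture and hyperparameters are arbitrary choices,
not results from any particular study.

# Minimal sketch: one training pass of a small CNN over the MNIST set described
# above (60,000 training and 10,000 test images).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train = datasets.MNIST("data", train=True, download=True,
                       transform=transforms.ToTensor())
loader = DataLoader(train, batch_size=64, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 14 * 14, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:            # one pass over the training data
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()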

Deep learning-trained vehicles now interpret 360° camera views. Another example is
Facial Dysmorphology Novel Analysis (FDNA), used to analyze cases of human
malformation connected to a large database of genetic syndromes.

Visual art processing


Closely related to the progress that has been made in image recognition is the
increasing application of deep learning techniques to various visual art tasks. DNNs
have proven themselves capable, for example, of identifying the style period of a
given painting.

Natural language processing


Neural networks have been used for implementing language models since the early
2000s. LSTM helped to improve machine translation and language modelling.
Recent developments generalize word embedding to sentence embedding.
Google Translate (GT) uses a large end-to-end long short-term memory network. Google
Neural Machine Translation (GNMT) uses an example-based machine translation method in
which the system "learns from millions of examples." It translates "whole sentences
at a time, rather than pieces." Google Translate supports over one hundred languages.
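
As a toy illustration of generalizing from word embeddings to a sentence embedding,
the sketch below simply averages word vectors; the tiny vocabulary and 4-dimensional
vectors are invented, and real systems use large pretrained embeddings or LSTM and
Transformer encoders instead.

# Minimal sketch: a sentence embedding obtained by averaging word embeddings.
# The vocabulary and vectors are toy placeholders.
import numpy as np

word_vectors = {                      # toy "pretrained" word embeddings
    "the": np.array([0.1, 0.0, 0.2, 0.1]),
    "cat": np.array([0.9, 0.3, 0.1, 0.0]),
    "sat": np.array([0.2, 0.8, 0.0, 0.1]),
}

def sentence_embedding(sentence: str) -> np.ndarray:
    """Average the embeddings of known words (ignoring unknown ones)."""
    vectors = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vectors, axis=0) if vectors else np.zeros(4)

print(sentence_embedding("The cat sat"))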

Drug discovery and toxicology


Research has explored use of deep learning to predict the biomolecular targets, off-
targets, and toxic effects of environmental chemicals in nutrients, household
products and drugs.

AtomNet is a deep learning system for structure-based rational drug design. AtomNet
was used to predict novel candidate biomolecules for disease targets such as the
Ebola virus and multiple sclerosis.

Customer relationship management


Deep reinforcement learning has been used to approximate the value of
possible direct marketing actions, defined in terms of RFM variables. The estimated
value function was shown to have a natural interpretation as customer lifetime value.
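
For illustration, the sketch below computes the RFM (recency, frequency, monetary)
variables from a toy transaction log using pandas; a deep reinforcement learning agent
would then learn a value function over such states, which is beyond this sketch. The
data and column names are invented.

# Minimal sketch: deriving RFM state variables from a toy transaction log.
import pandas as pd

transactions = pd.DataFrame({
    "customer": ["a", "a", "b", "b", "b"],
    "date": pd.to_datetime(["2024-01-05", "2024-03-01", "2024-02-10",
                            "2024-02-20", "2024-03-15"]),
    "amount": [30.0, 45.0, 10.0, 25.0, 60.0],
})
today = pd.Timestamp("2024-04-01")

rfm = transactions.groupby("customer").agg(
    recency=("date", lambda d: (today - d.max()).days),  # days since last purchase
    frequency=("date", "count"),                          # number of purchases
    monetary=("amount", "sum"),                           # total spend
)
print(rfm)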

Recommendation systems
Recommendation systems have used deep learning to extract meaningful features
for a latent factor model for content-based music and journal recommendations.
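
The following minimal sketch shows a classical latent factor model, factorizing a
small user–item rating matrix with scikit-learn's NMF; the ratings are invented, and
the deep-learning variants mentioned above would instead learn the item factors from
content such as audio or text.

# Minimal sketch: a latent factor model via non-negative matrix factorization.
import numpy as np
from sklearn.decomposition import NMF

ratings = np.array([            # rows = users, columns = items
    [5, 3, 0, 1],               # 0 = not yet rated (treated as 0 here for simplicity)
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
user_factors = model.fit_transform(ratings)     # latent user preferences
item_factors = model.components_                # latent item attributes
print(np.round(user_factors @ item_factors, 1)) # reconstructed / predicted ratings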

Bioinformatics
In medical informatics, deep learning has been used to predict sleep quality based on
data from wearables and to predict health complications from electronic health
record data.


Medical Image Analysis


Deep learning has been shown to produce competitive results in medical applications
such as cancer cell classification, lesion detection, organ segmentation and image
enhancement.

Mobile advertising
Finding the appropriate mobile audience for mobile advertising is always challenging,
since many data points must be considered and analyzed before a target segment
can be created and used in ad serving by any ad server. Deep learning has been used
to interpret large, many-dimensioned advertising datasets.

Image restoration
Deep learning has been successfully applied to inverse problems such as denoising,
super-resolution, inpainting, and film colorization.

Financial fraud detection


Deep learning is being successfully applied to financial fraud detection and anti-
money laundering. "Deep anti-money laundering detection system can spot and
recognize relationships and similarities between data and, further down the road,
learn to detect anomalies or classify and predict specific events."
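
As a simple illustration of anomaly detection on transactions, the sketch below uses
an isolation forest (a non-deep method) as a stand-in for the deep anti-money-
laundering models described above; the transaction features and values are invented.

# Minimal sketch: flagging an unusual transaction pattern as an anomaly.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
normal = np.column_stack([rng.normal(50, 15, 500),    # typical amounts
                          rng.normal(5, 2, 500)])     # typical transactions per day
suspicious = np.array([[5000.0, 40.0]])               # one obviously unusual pattern

detector = IsolationForest(random_state=0).fit(normal)
print(detector.predict(suspicious))                   # -1 flags an anomaly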

Military
The United States Department of Defense applied deep learning to train robots in
new tasks through observation.


Artificial intelligence in Heavy Industry

Artificial intelligence, in modern terms, generally refers to computer systems that
mimic human cognitive functions. It encompasses independent learning and
problem-solving. While this type of general artificial intelligence has not been
achieved yet, most contemporary artificial intelligence projects are currently better
understood as types of machine-learning algorithms, which can be integrated with
existing data to understand, categorize, and adapt sets of data without the need for
explicit programming.

While the application of artificial intelligence in heavy industry is still in its early
stages, applications are likely to include optimization of asset management and
operational performance, as well as identifying efficiencies and decreasing
downtime.

Potential benefits

AI-driven machines ensure an easier manufacturing process, along with many other
benefits, at each new stage of advancement. Technology creates new potential for
task automation while increasing the intelligence of human and machine
interaction. Some benefits of AI include directed automation, 24/7 production, safer
operational environments, and reduced operating costs.

Directed automation
AI and robots can execute actions repeatedly without any error, and design more
competent production models by building automation solutions. They are also capable
of eliminating human errors and delivering superior levels of quality assurance on
their own.
24/7 production
While humans must work in shifts to accommodate sleep and mealtimes, robots can
keep a production line running continuously. Businesses can expand their production
capabilities and meet higher demands for products from global customers due to
boosted production from this round-the-clock work performance.

Safer operational environment


More AI means fewer human labourers performing dangerous and strenuous work.
Logically speaking, with fewer humans and more robots performing activities
associated with risk, the number of workplace accidents should dramatically decrease.
It also offers a great opportunity for exploration because companies do not have to
risk human life.

Condensed operating costs


With AI taking over day-to-day activities, a business will have considerably
lower operating costs. Rather than employing humans to work in shifts, they could
simply invest in AI. The only cost incurred would be from maintenance after the
machinery is purchased and commissioned.

Environmental impacts
Self-driving cars are potentially beneficial to the environment. They can be
programmed to navigate the most efficient route and reduce idle time, which could
result in less fossil fuel consumption and greenhouse gas (GHG) emissions. The
same could be said for heavy machinery used in heavy industry. AI can accurately
follow a sequence of procedures repeatedly, whereas humans are prone to
occasional errors.

Additional benefits of AI
AI and industrial automation have advanced considerably over the years. There has
been an evolution of many new techniques and innovations, such as advances in
sensors and the increase of computing capabilities. AI helps machines gather and
extract data, identify patterns, and adapt to new trends through machine
intelligence, learning, and speech recognition. It also helps to make quick
data-driven decisions, advance process effectiveness, minimize operational costs,
facilitate product development, and enable extensive scalability.

Potential negatives

High cost
Though the cost has been decreasing in the past few years, individual development
expenditures can still be as high as $300,000 for basic AI. Small businesses with a
low capital investment may have difficulty generating the funds necessary to
leverage AI. For larger companies, the price of AI may be higher, depending on how
much AI is involved in the process.

Reduced employment opportunities


Job opportunities will grow with the advent of AI; however, some jobs might be lost
because AI would replace them. Any job that involves repetitive tasks is at risk of
being replaced. In 2017, Gartner predicted 500,000 jobs would be created because
of AI, but also predicted that up to 900,000 jobs could be lost because of it. These
figures apply only to jobs within the United States.

AI decision-making
AI is only as intelligent as the individuals responsible for its initial programming. In
2014, an active shooter situation led to people calling Uber to escape the shooting
and surrounding area. Instead of recognizing this as a dangerous situation, the
algorithm Uber used saw a rise in demand and increased its prices.

Environmental impacts
AI trained to act on environmental variables might have erroneous algorithms, which
can lead to potentially negative effects on the environment. Algorithms trained on
biased data will produce biased results.

Effects of AI in the manufacturing industry


Landing.ai, a start-up formed by Andrew Ng, developed machine-vision tools that
detect microscopic defects in products at resolutions well beyond human vision.
The machine-vision tools use a machine-learning algorithm tested on small volumes
of sample images.
Generative design is a new process born from artificial intelligence. Designers or
engineers specify design goals (as well as material parameters, manufacturing
methods, and cost constraints) into the generative design software. The software
explores all potential permutations for a feasible solution and generates design
alternatives. The software also uses machine learning to test and learn from each
iteration which designs work and which fail. The process is said to effectively rent
50,000 computers in the cloud for an hour.
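
The following minimal sketch illustrates the generative-design idea with a random
search over two design parameters under a simple constraint; the objective function,
constraint, and parameter ranges are invented placeholders rather than anything used
by real generative design software.

# Minimal sketch of a generative-design style search: randomly propose design
# parameters, discard candidates that violate a constraint, keep the best scorers.
import random

def score(thickness: float, width: float) -> float:
    """Hypothetical objective: reward a stiffness proxy, penalize mass."""
    stiffness = thickness ** 3 * width
    mass = thickness * width
    return stiffness - 2.0 * mass

random.seed(0)
candidates = []
for _ in range(10_000):                               # explore many permutations
    t = random.uniform(1.0, 10.0)                     # thickness (mm)
    w = random.uniform(10.0, 100.0)                   # width (mm)
    if t * w <= 400:                                  # invented manufacturing/cost constraint
        candidates.append((score(t, w), t, w))

best = sorted(candidates, reverse=True)[:3]           # top design alternatives
for s, t, w in best:
    print(f"score={s:.1f}  thickness={t:.2f}  width={w:.2f}")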

Artificial intelligence in Transportation


The transportation domain is beginning to apply Artificial Intelligence (AI) in mission-
critical tasks (for example, self-driving vehicles carrying passengers) where the
reliability and safety of an AI system will be under question from the general public.
Major challenges in the transportation industry like capacity problems, safety,
reliability, environmental pollution, and wasted energy are providing ample
opportunity (and potential for high ROI) for AI innovation.

Applications

Self-driving vehicles

One of the most ground-breaking applications of AI innovation is autonomous vehicles.
Autonomous vehicles, a concept that was once merely a sci-fi fantasy, have now become
a practical reality. Although people were sceptical of this technology during its
developmental stages, driverless vehicles have already made their entry into the
transportation sector.

Autonomous taxis have already started operating in Tokyo. However, for safety
reasons, a driver currently sits in the car to take control of the taxi during an
emergency situation. According to the maker of this autonomous taxi, the technology
will result in reduced costs for taxi services, which can help expand public
transportation options in remote areas.

Similarly, US logistics companies are embracing autonomous trucks to reap numerous
benefits. According to a McKinsey report, 65 percent of goods are transported via
trucks globally. With autonomous trucks coming into the picture, maintenance and
administration expenses are expected to come down by about 45 percent.

For now, a majority of companies are still running their pilot projects, striving to make
self-driving vehicles flawless and safe for passengers. As this technology evolves,
self-driving vehicles will gain mass confidence and become mainstream in the
consumer realm.


Traffic management

Another transportation problem that people face on a daily basis is traffic congestion.
AI is now set to solve this issue too.

Sensors and cameras embedded along roads collect large volumes of traffic data. This
data is then sent to the cloud, where big data analytics and AI-powered systems
analyse it to reveal traffic patterns. Valuable insights like traffic predictions can
be gleaned from this processing. Commuters can be provided with important details
such as predicted congestion, accidents, or road blockages. People can also be
notified of the shortest route to their destination, helping them travel without the
hassles of traffic. This way, AI can be used not only to reduce unwanted traffic but
also to improve road safety and reduce wait times.
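
As a rough illustration of this kind of traffic prediction, the sketch below fits a
linear model that forecasts the next interval's vehicle count from the preceding
readings; the synthetic data stands in for real road-sensor feeds.

# Minimal sketch: predicting the next traffic count from recent sensor readings.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
counts = 100 + 50 * np.sin(np.arange(500) / 24) + rng.normal(0, 5, 500)  # fake hourly counts

window = 4                                   # use the last 4 readings as features
X = np.array([counts[i:i + window] for i in range(len(counts) - window)])
y = counts[window:]

model = LinearRegression().fit(X[:-50], y[:-50])    # train on all but the last 50 points
print("held-out R^2:", round(model.score(X[-50:], y[-50:]), 3))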

Tesla's complete self-driving system will use GPS technology to find the optimal
route to its given destination. If the car isn't given a destination, it can check
the owner's calendar to determine the best destination, or take the owner home.

Delay predictions

Another burning problem faced by air transport today is flight delays. The estimated
cost of flight delays in the US is 39 billion dollars, according to a study conducted
by researchers at the University of California, Berkeley. Along with the financial
loss, flight delays negatively impact passengers' flying experience. A negative
experience while flying can undermine a transport company's value, which can result
in an increased customer churn rate. To overcome these issues, AI comes to the air
transport industry's rescue.

Leveraging data lake technology and computer vision, the industry can offer
exceptional service to passengers by cutting down their wait times and enhancing
their journey experience. As anything from bad weather to a technical glitch can
cause flights to be delayed, it is important to give passengers updated flight
details in advance to avoid unnecessary waiting. With the help of computer vision
systems, continuous monitoring of airplanes can be carried out, eliminating unplanned
downtime. In addition, AI and machine learning components will process real-time
airplane data, historical records, and weather information. On-the-spot computation
will help reveal hidden patterns, which can help the air transport industry glean
useful insights into other factors that can cause flight delays and cancellations.
This information can be forwarded to passengers, helping them plan their schedules
accordingly.
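
For illustration, the sketch below trains a simple delay classifier on invented
features (a weather severity score, aircraft age, and departure hour); a production
system would use real historical records and live data feeds, and this feature set is
purely hypothetical.

# Minimal sketch: a flight-delay classifier trained on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 1000
weather = rng.uniform(0, 1, n)           # 0 = clear, 1 = severe
aircraft_age = rng.uniform(0, 25, n)     # years
hour = rng.integers(0, 24, n)            # scheduled departure hour
X = np.column_stack([weather, aircraft_age, hour])
delayed = (weather + 0.02 * aircraft_age + rng.normal(0, 0.2, n)) > 0.8   # synthetic label

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, delayed)
print(clf.predict_proba([[0.9, 20, 18]]))   # [P(not delayed), P(delayed)] for one risky flight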

Drone taxis

One of the most exciting and innovative AI applications in transportation is the
drone taxi. Pilotless helicopters present a unique solution to combat carbon
emissions, eliminate traffic congestion, and reduce the need for expensive
infrastructure construction plans. Drone taxis will also help people reach their
destination much sooner, minimizing their commute time.

Further, rising populations have put city planners under high pressure to ensure
smart urban planning and build infrastructure without compromising on declining
resources. Drone taxis could be a real solution to many of the concerns that these
city planners are striving to deal with. The recent demonstration of an autonomous
aerial vehicle in China, where 17 passengers experienced smart air mobility for the
first time, is a great indicator of similar future applications.

AI has indeed been one of the most astounding technological innovations of humankind.
However, despite every amazing invention we have seen until now, it is important to
note that we have only scratched the surface of AI and a lot more is yet to be
explored. The applications of AI in transportation mentioned above showcase just a
glimpse of the possibilities and opportunities that the technology can offer. Imagine
how amazing and exciting an AI-driven future would be!

Autonomous truck services


In October 2016, the world’s first successful autonomous truck delivery was
completed when an Uber truck carried 50,000 cans of Budweiser beer over a
distance of 120 miles from Fort Collins to Colorado Springs, CO. Now Uber’s
autonomous trucks are delivering goods throughout Arizona. Other AV companies
are following suit.

A report by the International Transport Forum claims autonomous delivery vehicles
will save costs, lower emissions and improve road safety, compared with trucks
operated by humans. New autonomous trucks will have the ability to perform a host of
delivery duties, including picking up garbage, delivering packages and food, and
numerous other services. All these services can be optimized through advanced
logistics for traffic flow.

Public transportation safety and usage optimization


Public transportation also stands to benefit from the use of AVs and the associated
logistics operations systems.

In Helsinki, Finland, a trial is underway in which an autonomous bus transports up to
a dozen passengers at a time along a quarter-mile route with restaurants and saunas.
The city is expected to expand the trial and provide autonomous bus services
throughout the city, in order to measure customer response and basic operations data.

“There’s a lot of demand to solve the last-mile problem,” said Harri Santamala, the
city’s project coordinator, referring to the challenge of transporting passengers
from centralized transit hubs to their final destinations. “I think this is something
we could do with automatic buses. On a real-time basis, we can adjust how they drive
and where they make the connection. We’ve learned with this pilot that you can be
flexible and synchronize with this technology. We could scale this up to the entire
fleet.”

Metro Magazine suggests numerous benefits to a municipal transit system powered by
autonomous buses:

• Trip-planning information is integrated across modes and agencies (public and
private), so the general public has the ability to evaluate their travel options with
comprehensive information on travel time, cost, environmental impact, and more.

• Real-time schedules for all transportation modes are centrally available.

• Vehicles and transit schedules are “right-sized” so fleets are used effectively
and there are no more empty buses.

• Fare payment is made electronically and only one payment is needed for each
whole trip.

• Travel times are generally predictable and well-communicated.

• Lower-income populations and people with disabilities have access to all of
these services.

Artificial intelligence in Cloud Computing


On a larger scale, AI capabilities are working in the business cloud computing
environment to make organizations more efficient, strategic, and insight-driven.
Cloud computing offers businesses more flexibility, agility, and cost savings by
hosting data and applications in the cloud.

Applications

The Role of AI and Cloud Computing


According to Statista, the global value of the AI market will surpass an estimated
$89 billion annually by 2025. A significant percentage of that value will occur as
artificial intelligence powers cloud computing and, in turn, as cloud computing acts
as an engine to increase the scope and impact AI can have in the larger market.

McKinsey recently conducted a study to explore how AI could impact value creation in
a range of industries. They estimate that across 19 business areas and more than 400
potential use cases, AI could create between $3.5 trillion and $5.8 trillion per year
in value. That figure is actually conservative, because it reflects only a specific
sub-segment of AI techniques. More broadly, McKinsey estimates the impact could be as
large as $15.4 trillion per year.

Deloitte, however, pointed out in an analysis that while AI has tremendous
capabilities to benefit companies, the need for technical talent and massive
infrastructure has made it less attainable for many organizations. That’s where the
cloud comes in. Deloitte notes, “The upshot is that these innovators are making it
easier for more companies to benefit from AI technology even if they lack top
technical talent, access to huge data sets, and their own massive computing power.
Through the cloud, they can access services that address these shortfalls—without
having to make big upfront investments. In short, the cloud is democratizing access
to AI by giving companies the ability to use it now.”

Let’s explore some of the most important and promising applications for AI and cloud
computing.

Powering a Self-Managing Cloud with AI


Artificial intelligence is being embedded into IT infrastructure to help streamline
workloads and automate repetitive tasks. Some have gone as far as predicting that
as AI becomes more sophisticated, private and public cloud instances will rely on
these AI tools to monitor, manage, and even self-heal when an issue occurs. Initially,
AI can be used to automate core workflows and then, over time, analytical
capabilities can create better processes that are largely independent. Routine
processes can be managed by the system itself, further helping IT teams capture the
efficiencies of cloud computing and allowing them to focus on higher-value strategic
activities.

Improving Data Management with AI


At the cloud level, artificial intelligence tools are also improving data management.
Consider the vast repositories of data that today’s businesses generate and collect,
as well as the process of simply managing that infrastructure: identifying data,
ingesting it, cataloging it, and managing it over time. Cloud computing solutions are
already using AI tools to help with specific aspects of the data process. In banking,
for example, even the smallest financial organization may need to monitor thousands
of transactions per day.

AI tools can help streamline the way data is ingested, updated, and managed, so
financial institutions can more easily offer accurate real-time data to clients. The
same process can also help flag fraudulent activity or identify other areas of risk.
Similar improvements can have a major impact on areas such as marketing,
customer service, and supply chain data management.

Getting More Done with AI–SaaS Integration


Artificial intelligence tools are also being rolled out as part of larger Software-as-a-
Service (SaaS) platforms to deliver more value. Increasingly, SaaS providers are
embedding AI tools into their larger software suites to offer greater functionality and
value to end users. Let’s explore one popular example: the customer relationship
management platform Salesforce and its Einstein AI tool. The value of a CRM is that
it captures a significant amount of customer data and makes it easier to track
customer relationships and personalize interactions. But the volume of data can be
overwhelming.

Salesforce introduced Einstein to help turn data into actionable insights businesses
can use to sell more, improve their sales strategies, and engage with customers. The
tools can help a business look for patterns in customer interactions, for example, to
help advise sales teams on which method (phone, email, or an in-person meeting) is
more likely to drive a conversion. It can also be used to make “next step”
recommendations based on the buying signals the tool perceives.

Utilizing Dynamic Cloud Services


AI as a service is also changing the ways businesses rely on tools. Consider a cloud-
based retail module that makes it easier for brands to sell their products. The module
has a pricing feature that can automatically adjust the pricing on a given product to
account for issues such as demand, inventory levels, competitor sales, and market
trends. Sophisticated analysis based on modeling, drawing on deep neural networks,
can give businesses much better command of their data, with important real-time
implications. An AI-powered pricing module such as this ensures that a company's
pricing will always be optimized. It's not just about making better use of data; it's
conducting that analysis and then putting it into action without the need for human
intervention.
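
A minimal sketch of such an automated pricing rule is shown below; the adjustment
formula, inputs, and caps are invented for illustration, and a production module
would replace the hand-written rule with a learned demand model.

# Minimal sketch: an automated pricing rule reacting to demand, inventory,
# and a competitor's price. All coefficients are invented placeholders.
def suggest_price(base_price: float, demand_index: float,
                  inventory_ratio: float, competitor_price: float) -> float:
    """demand_index > 1 means above-normal demand; inventory_ratio is stock vs. target."""
    price = base_price * (1 + 0.10 * (demand_index - 1))   # raise price with demand
    price *= (1 - 0.05 * (inventory_ratio - 1))            # discount when overstocked
    # never stray more than 10% above the competitor's price
    return round(min(price, competitor_price * 1.10), 2)

print(suggest_price(base_price=50.0, demand_index=1.4,
                    inventory_ratio=0.8, competitor_price=52.0))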


AI and cloud computing are transforming business at every level. From deeper
learning to near-complete automation of key processes, the potential is promising.
While there are some examples of this in the market now, a look at the landscape
suggests that this will only continue to grow in the years ahead. Begin to explore how
AI and cloud computing together could help you deliver better experiences, work
more efficiently, and capture the maximum value from the data and insights you
collect in the market.
