0109 Dwayne Silvapinto
A REPORT
ON
Applications
Mobile platforms
Social media
Social media platforms have adopted facial recognition
capabilities to diversify their functionality and attract a
wider user base amid stiff competition from rival
applications.
Face ID
Apple introduced Face ID on the flagship iPhone X as a biometric authentication
successor to Touch ID, a fingerprint-based system. Face ID has a facial
recognition sensor that consists of two parts: a "Romeo" module that projects more
than 30,000 infrared dots onto the user's face, and a "Juliet" module that reads the
pattern. The pattern is sent to a local "Secure Enclave" in the device's central
processing unit (CPU) to confirm a match with the phone owner's face. The facial
pattern is not accessible by Apple. The system will not work with eyes closed, in an
effort to prevent unauthorized access.
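Apple's actual matching algorithm runs inside the Secure Enclave and is not public. The general idea of template-based biometric verification, however, can be sketched as comparing a probe pattern against the enrolled template under a distance threshold. All vectors and the threshold below are invented for illustration; a real system works with tens of thousands of depth points, not four numbers.

```python
import math

def euclidean(a, b):
    """Distance between two depth-pattern feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def matches_enrolled(probe, enrolled, threshold=0.6):
    """Accept only if the probe pattern is close enough to the enrolled
    template. The threshold trades false accepts against false rejects."""
    return euclidean(probe, enrolled) < threshold

# Toy 4-dimensional "patterns" (hypothetical values).
enrolled = [0.12, 0.80, 0.45, 0.33]
same_user = [0.14, 0.78, 0.44, 0.35]
stranger = [0.90, 0.10, 0.70, 0.05]

print(matches_enrolled(same_user, enrolled))  # True
print(matches_enrolled(stranger, enrolled))   # False
```

Tightening the threshold lowers the false acceptance rate at the cost of more false rejections, the same trade-off discussed later for face recognition generally.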
Commonwealth
The Australian Border Force and New
Zealand Customs Service have set up
an automated border processing system
called SmartGate that uses face
recognition, which compares the face of
the traveller with the data in the e-
passport microchip. All Canadian
international airports use facial
recognition as part of the Primary
Inspection Kiosk program that compares
a traveller's face to the photo stored on
their ePassport. This program first came
to Vancouver International Airport in early 2017 and was rolled out to all remaining
international airports in 2018–2019. The Tocumen International Airport in Panama
operates an airport-wide surveillance system using hundreds of live face recognition
cameras to identify wanted individuals passing through the airport.
United States
The U.S. Department of State operates one of the largest face recognition systems
in the world, with a database of 117 million American adults drawn largely from
driver's license photos. Although the database is still far from complete, it is already
used in certain cities to generate leads about who appears in a photo. The FBI uses
the photos as an investigative tool, not for positive identification.
China
Many public places in China are equipped with facial recognition systems, including
railway stations, airports, tourist attractions, expos, and office buildings.
One key advantage of a facial recognition system is that it can perform mass
identification, as it does not require the cooperation of the test subject to work.
Properly designed systems installed in airports, multiplexes, and other public places
can identify individuals among the crowd, without passers-by even being aware of
the system.
However, compared to other biometric techniques, face recognition may not be the
most reliable and efficient. Quality measures are very important in facial recognition
systems as large degrees of variations are possible in face images. Factors such as
illumination, expression, pose and noise during face capture can affect the
performance of facial recognition systems. Among all biometric systems, facial
recognition has the highest false acceptance and rejection rates, so questions have
been raised about the effectiveness of face recognition software for railway and
airport security.
Weaknesses
In 2008, Ralph Gross, a researcher at the Carnegie Mellon Robotics Institute,
described one obstacle related to the viewing angle of the face: "Face recognition
has been getting pretty good at full frontal faces and 20 degrees off, but as soon as
you go towards profile, there've been problems." Besides the pose variations, low-
resolution face images are also very hard to recognize. This is one of the main
obstacles of face recognition in surveillance systems.
Face recognition is less effective if facial expressions vary. A big smile can render
the system less effective. For instance, in 2009 Canada began allowing only neutral
facial expressions in passport photos.
Ineffectiveness
Critics of the technology complain that the London Borough of Newham scheme had,
as of 2004, never recognized a single criminal, despite several criminals in the
system's database living in the Borough and the system having run for several
years. "Not once, as far as the police know, has Newham's automatic face
recognition system spotted a live target." This information seems to conflict with
claims that the system was credited with a 34% reduction in crime (hence why it was
rolled out to Birmingham also). However it can be explained by the notion that when
the public is regularly told that they are under constant video surveillance with
advanced face recognition technology, this fear alone can reduce the crime rate,
whether the face recognition system technically works or does not. This has been
the basis for several other face recognition based security systems, where the
technology itself does not work particularly well but the user's perception of the
technology does.
An experiment in 2002 by the local police department in Tampa, Florida, had
similarly disappointing results.
A system at Boston's Logan Airport was shut down in 2003 after failing to make any
matches during a two-year test period.
In 2014, Facebook stated that in a standardized two-option facial recognition test, its
online system scored 97.25% accuracy, compared to the human benchmark of
97.5%.
In 2018, a report by the civil liberties and rights campaigning organisation Big
Brother Watch revealed that two UK police forces, South Wales Police and
the Metropolitan Police, were using live facial recognition at public events and in
public spaces. In September 2019, South Wales Police's use of facial recognition
was ruled lawful.
Systems are often advertised as having accuracy near 100%; this is misleading as
the studies often use much smaller sample sizes than would be necessary for large
scale applications. Because facial recognition is not completely accurate, it creates a
list of potential matches. A human operator must then look through these potential
matches, and studies show that operators pick the correct match out of the list only
about half the time, which can lead to the wrong suspect being targeted.
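The candidate-list workflow can be sketched as a one-to-many search: rank every enrolled face by similarity to the probe and hand the top matches to a human reviewer. The gallery, vectors, and similarity measure (cosine) below are illustrative stand-ins for a real system's learned face embeddings.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def candidate_list(probe, gallery, k=3):
    """Rank every enrolled face by similarity to the probe and return
    the top-k candidates for a human operator to review."""
    scored = [(name, cosine(probe, vec)) for name, vec in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Hypothetical gallery of enrolled identities (toy 3-D embeddings).
gallery = {
    "id_001": [0.90, 0.10, 0.30],
    "id_002": [0.20, 0.80, 0.50],
    "id_003": [0.88, 0.15, 0.28],
    "id_004": [0.10, 0.90, 0.90],
}
probe = [0.85, 0.12, 0.31]
for name, score in candidate_list(probe, gallery, k=2):
    print(name, round(score, 3))
```

Note that the system never outputs a single definitive identity; it produces a ranked shortlist, which is exactly where the operator-error problem described above enters.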
Deep learning
Deep learning is a class of machine learning algorithms that uses multiple layers to
progressively extract higher level features from the raw input. For example, in image
processing, lower layers may identify edges, while higher layers may identify the
concepts relevant to a human such as digits or letters or faces.
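The "lower layers identify edges" idea can be made concrete with a single convolution. The sketch below hand-sets a vertical-edge kernel; in a trained deep network the kernel weights are learned from data rather than written by hand, and many such filters are stacked in layers.

```python
# A vertical-edge kernel (Sobel-like). In a trained network these
# weights would be learned, not hand-set as here.
KERNEL = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]

def convolve(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and
    sum the elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = sum(kernel[a][b] * image[i + a][j + b]
                      for a in range(kh) for b in range(kw))
            row.append(acc)
        out.append(row)
    return out

# A 5x5 toy image: dark on the left, bright on the right.
image = [[0, 0, 1, 1, 1]] * 5
edges = convolve(image, KERNEL)
# Responses are strongest where the dark/bright boundary sits.
print(edges[0])  # [4, 4, 0]
```

Higher layers would then combine many such edge responses into corners, textures, and eventually whole-object concepts such as digits or faces.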
Deep learning architectures such as deep neural networks, deep belief
networks, recurrent neural networks and convolutional neural networks have been
applied to fields including computer vision, machine vision, speech
recognition, natural language processing, audio recognition, social network
filtering, machine translation, bioinformatics, drug design, medical image analysis,
material inspection and board game programs, where they have produced results
comparable to and in some cases surpassing human expert performance.
Artificial neural networks (ANNs) were inspired by information processing and
distributed communication nodes in biological systems. ANNs have various
differences from biological brains. Specifically, neural networks tend to be static and
symbolic, while the biological brain of most living organisms is dynamic (plastic) and
analog.
Applications
Large-scale automatic speech recognition is the first and most convincing successful
case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks that
involve multi-second intervals containing speech events separated by thousands of
discrete time steps, where one time step corresponds to about 10 ms. LSTM with
forget gates is competitive with traditional speech recognizers on certain tasks.
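The role of the forget gate mentioned above can be sketched with a single scalar LSTM step. The weights below are toy values chosen for illustration; a real recognizer learns vector-valued weights by backpropagation through time.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM time step for scalar input and state. The forget gate f
    decides how much of the old cell state c_prev to keep; the input
    gate i decides how much of the new candidate g to write."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])   # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])   # input gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"]) # candidate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])   # output gate
    c = f * c_prev + i * g       # new cell state
    h = o * math.tanh(c)         # new hidden state
    return h, c

# Toy fixed weights (hypothetical); a real model learns these.
w = dict(wf=0.5, uf=0.1, bf=0.0, wi=0.6, ui=0.2, bi=0.0,
         wg=0.9, ug=0.3, bg=0.0, wo=0.7, uo=0.1, bo=0.0)

h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.25]:      # a short input sequence
    h, c = lstm_step(x, h, c, w)
```

Because the cell state is carried forward additively (c = f * c_prev + i * g), information can survive across the thousands of 10 ms time steps that separate speech events.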
The initial success in speech recognition was based on small-scale recognition tasks
based on TIMIT. The data set contains 630
speakers from eight major dialects of American
English, where each speaker reads 10
sentences. Its small size lets many configurations
be tried. More importantly, the TIMIT task
concerns phone-sequence recognition, which,
unlike word-sequence recognition, allows weak
phone bigram language models. This lets the
strength of the acoustic modeling aspects of
speech recognition be more easily analyzed. Error rates on this task, including these
early results, are measured as percent phone error rates (PER) and have been
tracked since 1991.
The debut of DNNs for speaker recognition in the late 1990s, for speech recognition
around 2009–2011, and of LSTM around 2003–2007 accelerated progress in eight
major areas, several of which are covered below.
Image recognition
A common evaluation set for image classification is the MNIST data set.
MNIST is composed of handwritten digits and includes 60,000 training examples and
10,000 test examples. As with TIMIT, its small size lets users test multiple
configurations. A comprehensive list of results on this set is available.
Deep learning-based image recognition has become "superhuman", producing more
accurate results than human contestants. This first occurred in 2011.
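The evaluation workflow that small benchmarks like MNIST enable — fit on a training split, report the error rate on a held-out test split — can be sketched with a 1-nearest-neighbour classifier. The toy 2-D vectors below are invented stand-ins for real digit images; this is the evaluation pattern, not an actual MNIST pipeline.

```python
import math

def nearest_neighbor(train, probe):
    """Classify a probe vector by the label of its closest training
    example (1-NN under Euclidean distance)."""
    best_label, best_dist = None, float("inf")
    for vec, label in train:
        d = math.dist(vec, probe)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Toy 2-D "images": two well-separated classes standing in for digits.
train = [([0.0, 0.1], "0"), ([0.2, 0.0], "0"),
         ([1.0, 0.9], "1"), ([0.8, 1.0], "1")]
test = [([0.1, 0.0], "0"), ([0.9, 0.95], "1")]

errors = sum(nearest_neighbor(train, vec) != label for vec, label in test)
error_rate = errors / len(test)
print(error_rate)  # 0.0 on this toy split
```

Published MNIST results follow exactly this scheme at scale: 60,000 training examples, 10,000 test examples, and a single test error rate per configuration.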
Machine translation
Neural machine translation systems translate whole sentences at a time, rather than
piece by piece. Google Translate supports over one hundred languages.
Recommendation systems
Recommendation systems have used deep learning to extract meaningful features
for a latent factor model for content-based music and journal recommendations.
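The latent factor idea can be sketched as matrix factorization: learn a small vector for each user and each item so that their dot products approximate the observed ratings, then use the same dot products to score unseen items. The ratings, dimensions, and learning rate below are all invented for illustration; production recommenders are far larger and often replace the plain dot product with a deep network.

```python
import random

random.seed(0)

# Observed (user, item, rating) triples on a 1-5 scale (toy data).
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.5),
           (1, 2, 2.0), (2, 1, 1.5), (2, 2, 5.0)]
n_users, n_items, k = 3, 3, 2

# Small random latent factor vectors for each user and item.
U = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
V = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]

def predict(u, i):
    """Predicted rating = dot product of user and item factors."""
    return sum(a * b for a, b in zip(U[u], V[i]))

# Plain SGD on squared error with a little L2 regularization.
lr, reg = 0.05, 0.01
for _ in range(2000):
    u, i, r = random.choice(ratings)
    err = r - predict(u, i)
    for f in range(k):
        U[u][f], V[i][f] = (U[u][f] + lr * (err * V[i][f] - reg * U[u][f]),
                            V[i][f] + lr * (err * U[u][f] - reg * V[i][f]))

# After training, predictions should sit near the observed ratings.
```

For content-based music or journal recommendation, the item factors are typically predicted from the content itself (audio, text) by a deep network rather than learned purely from ratings.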
Bioinformatics
In medical informatics, deep learning has been used to predict sleep quality from
wearable data and to predict health complications from electronic health record
data.
Mobile advertising
Finding the appropriate mobile audience for mobile advertising is challenging, since
many data points must be considered and analyzed before a target segment can be
created and used in ad serving by any ad server. Deep learning has been used to
interpret large, many-dimensioned advertising datasets.
Image restoration
Deep learning has been successfully applied
to inverse problems such as denoising, super-
resolution, inpainting, and film colorization.
Military
The United States Department of Defense applied deep learning to train robots in
new tasks through observation.
Heavy industry
While the application of artificial intelligence in heavy industry is still in its early
stages, applications are likely to include optimization of asset management and
operational performance, as well as identifying efficiencies and decreasing
downtime.
Potential benefits
AI-driven machines ensure an easier manufacturing process, along with many other
benefits, at each new stage of advancement. Technology creates new potential for
task automation while increasing the intelligence of human and machine
interaction. Some benefits of AI include directed automation, 24/7 production, safer
operational environments, and reduced operating costs.
Directed automation
AI and robots can execute actions repeatedly without error and can design more
competent production models by building automation solutions.
Environmental impacts
Self-driving cars are potentially beneficial to the environment. They can be
programmed to navigate the most efficient route and reduce idle time, which could
result in less fossil fuel consumption and greenhouse gas (GHG) emissions. The
same could be said for heavy machinery used in heavy industry. AI can accurately
follow a sequence of procedures repeatedly, whereas humans are prone to
occasional errors.
Additional benefits of AI
AI and industrial automation have advanced considerably over the years, with many
new techniques and innovations such as advances in sensors and increases in
computing capability.
Potential negatives
High cost
Though the cost has been decreasing in the past few years, individual development
expenditures can still be as high as $300,000 for basic AI. Small businesses with a
low capital investment may have difficulty generating the funds necessary to
leverage AI. For larger companies, the price of AI may be higher, depending on how
much AI is involved in the process.
AI decision-making
AI is only as intelligent as the individuals responsible for its initial programming. In
2014, an active shooter situation led to people calling Uber to escape the shooting
and surrounding area. Instead of recognizing this as a dangerous situation, the
algorithm Uber used saw a rise in demand and increased its prices.
Environmental impacts
AI trained to act on environmental variables might have erroneous algorithms, which
can lead to potentially negative effects on the environment. Algorithms trained on
biased data will produce biased results.
Developing such systems also involves running iteration after iteration to test which
iterations work and which fail, which is said to effectively rent 50,000 computers [in
the cloud] for an hour.
Applications
Self-driving vehicles
For now, a majority of companies are still running their pilot projects, striving to make
self-driving vehicles flawless and safe for passengers. As this technology evolves,
self-driving vehicles are expected to gain public confidence and become mainstream
in the consumer realm.
Traffic management
Another transportation problem that people face on a daily basis is traffic congestion.
AI is now set to solve this issue too.
Sensors and cameras embedded along roads collect large volumes of traffic data.
This data is sent to the cloud, where big data analytics and an AI-powered system
analyze it and reveal traffic patterns. Valuable insights such as traffic predictions
can be gleaned from this processing, and commuters can be alerted to predicted
congestion, accidents, or road blockages. People can also be notified of the shortest
route to their destination, helping them avoid the hassles of traffic. In this way, AI
can be used not only to reduce unwanted traffic but also to improve road safety and
reduce wait times.
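The shortest-route suggestion described above reduces, at its core, to a shortest-path search over a road graph whose edge weights reflect current travel times. The sketch below uses Dijkstra's algorithm on an invented toy network; in a live system the weights would be refreshed from the sensor data feeding the cloud.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm; edge weights stand in for current travel
    times in minutes. Returns (path, total_minutes)."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Walk the predecessor chain back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Toy road network (hypothetical); weights would come from live sensors.
roads = {
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 4)],
    "C": [("B", 1), ("D", 10)],
    "D": [],
}
path, minutes = shortest_route(roads, "A", "D")
print(path, minutes)  # ['A', 'C', 'B', 'D'] 7.0
```

Because the weights are current travel times rather than distances, the same search automatically routes commuters around congestion as conditions change.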
Delay predictions
Another pressing problem faced by air transport today is flight delays. Flight delays
cost an estimated 39 billion dollars per year in the US, according to a study
conducted by researchers at the University of California, Berkeley. Along with the
financial loss, flight delays negatively impact passengers' flying experience, which
can undermine a transport company's reputation and increase its customer churn
rate. AI can help the air transport industry overcome these issues.
Leveraging data lake technology and computer vision, the industry can offer better
service to passengers by cutting wait times and improving the journey experience.
Because anything from bad weather to a technical glitch can delay a flight, it is
important to update passengers on flight details in advance to avoid unnecessary
waiting. With computer vision systems, airplanes can be monitored continuously,
reducing unplanned downtime. AI and machine learning components can also
process real-time airplane data, historical records, and weather information;
on-the-spot computation can reveal hidden patterns and give the air transport
industry insight into other factors that cause flight delays and cancellations. This
information can be forwarded to passengers to help them plan their schedules
accordingly.
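A minimal version of such a delay predictor can be sketched as a historical route average scaled by a weather factor, with a threshold deciding when a warning is worth pushing to passengers. Every route, delay figure, and factor below is invented; a real system would replace this lookup with a model trained on the real-time and historical data described above.

```python
# Hypothetical historical average delay (minutes) per route, plus a
# crude weather multiplier -- a stand-in for a trained ML model.
HISTORICAL_DELAY = {
    ("JFK", "LAX"): 12.0,
    ("SFO", "ORD"): 25.0,
}
WEATHER_FACTOR = {"clear": 1.0, "rain": 1.4, "storm": 2.5}

def predict_delay(origin, dest, weather):
    """Estimated delay = historical route average scaled by weather."""
    base = HISTORICAL_DELAY.get((origin, dest), 15.0)  # fallback average
    return base * WEATHER_FACTOR[weather]

def notify(origin, dest, weather, threshold=30.0):
    """Build the message to push when a large delay is expected."""
    minutes = predict_delay(origin, dest, weather)
    if minutes >= threshold:
        return f"{origin}->{dest}: expect a delay of about {minutes:.1f} min"
    return f"{origin}->{dest}: on time"

print(notify("JFK", "LAX", "clear"))   # on time
print(notify("SFO", "ORD", "storm"))   # delay warning
```

The notification threshold encodes a product decision: alert too eagerly and passengers ignore the messages, too conservatively and they wait at the gate anyway.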
Drone taxis
Drones can perform a host of delivery duties, including picking up garbage and
delivering packages and food, among numerous other services. All these services
can be optimized through advanced logistics for traffic flow.
- Vehicles and transit schedules are "right-sized" so fleets are used effectively
and there are no more empty buses.
- Fare payment is made electronically, and only one payment is needed for each
whole trip.
- Lower-income people and people with disabilities have access to all of these
services.
Applications
Let’s explore some of the most important and promising applications for AI and cloud
computing.
For example, even the smallest financial organization may need to monitor
thousands of transactions per day.
AI tools can help streamline the way data is ingested, updated, and managed, so
financial institutions can more easily offer accurate real-time data to clients. The
same process can also help flag fraudulent activity or identify other areas of risk.
Similar improvements can have a major impact on areas such as marketing,
customer service, and supply chain data management.
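One simple way to flag fraudulent activity is to treat a new transaction as suspicious when its amount is a statistical outlier relative to the account's history. The z-score check below is a deliberately simplistic stand-in: the amounts are invented, and real systems combine many more signals (merchant, location, timing) than amount alone.

```python
import statistics

def is_suspicious(history, amount, z_threshold=3.0):
    """Flag a new transaction whose amount deviates from the account's
    past transactions by more than z_threshold standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold

# Hypothetical account history (transaction amounts in dollars).
history = [42.0, 38.5, 51.0, 45.2, 40.1, 39.9, 44.8]
print(is_suspicious(history, 47.0))    # False: ordinary purchase
print(is_suspicious(history, 2500.0))  # True: flag for review
```

As with facial recognition earlier, the flag is a lead for review, not a verdict: the threshold trades missed fraud against false alarms that annoy legitimate customers.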
Salesforce introduced Einstein to help turn data into actionable insights businesses
can use to sell more, improve their sales strategies, and engage with customers. The
tools can help a business look for patterns in customer interactions, for example, to
advise sales teams on which method (phone, email, or an in-person meeting) is
more likely to drive a conversion. It can also make "next step" recommendations
based on the buying signals the tool perceives.
AI and cloud computing are transforming business at every level. From deeper
learning to near-complete automation of key processes, the potential is promising.
While there are some examples of this in the market now, a look at the landscape
suggests that this will only continue to grow in the years ahead. Begin to explore how
AI and cloud computing together could help you deliver better experiences, work
more efficiently, and capture the maximum value from the data and insights you
collect in the market.