
Unit-1 Artificial Intelligence

1.1 Introduction of AI

A branch of Computer Science named Artificial Intelligence (AI) pursues creating computers and machines that are as intelligent as human beings. John McCarthy, the father of Artificial Intelligence, described AI as "the science and engineering of making intelligent machines, especially intelligent computer programs". Artificial Intelligence is a branch of science which deals with helping machines find solutions to complex problems in a more human-like fashion.
Artificial Intelligence has been defined in different ways by various researchers over the course of its evolution, such as: "Artificial Intelligence is the study of how to make computers do things which, at the moment, people do better."
There are other possible definitions, like: "AI is a collection of hard problems which can be solved by humans and other living things, but for which we don't have good algorithms for solving them", e.g., understanding spoken natural language, medical diagnosis, circuit design, learning, self-adaptation, reasoning, chess playing, proving mathematical theorems, etc.

• Data: Data is defined as symbols that represent properties of objects, events and their environment.
• Information: Information is a message that contains relevant meaning, implication, or input for decision and/or action.
• Knowledge: It is the (1) cognition or recognition (know-what), (2) capacity to act (know-how), and (3) understanding (know-why) that resides or is contained within the mind or brain.
• Intelligence: It is the ability to sense the environment, to make decisions, and to control action.

1.1.1 Concept:
Artificial Intelligence is one of the emerging technologies that tries to simulate human reasoning in AI systems. The art and science of bringing learning, adaptation and self-organization to the machine is the art of Artificial Intelligence. Artificial Intelligence is the ability of a computer program to learn and think. Artificial Intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. AI is built on three important concepts:
Machine learning: When you command your smartphone to call someone, or when you chat with a customer service chatbot, you are interacting with software that runs on AI. But this type of software is actually limited to what it has been programmed to do. However, we expect to soon have systems that can learn new tasks without humans having to guide them. The idea is to give them a large number of examples for any given chore, and they should be able to process each one and learn how to do it by the end of the activity.
Deep learning: The machine learning example given above is limited by the fact that humans still need to direct the AI's development. In deep learning, the goal is for the software to use what it has learned in one area to solve problems in other areas. For example, a program that has learned how to distinguish images in a photograph might be able to use this learning to seek out patterns in complex graphs.
Neural networks: These consist of computer programs that mimic the way the human brain
processes information. They specialize in clustering information and recognizing complex
patterns, giving computers the ability to use more sophisticated processes to analyze data.
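As a rough illustration of the neural-network idea, the following Python sketch (a minimal example that is not from the text; the weights and input values are invented placeholders, and a real network would learn its weights from data) passes an input pattern through one hidden layer of artificial "neurons":

```python
import numpy as np

# A minimal feed-forward network: 2 inputs -> 3 hidden units -> 1 output.
# The weights here are random placeholders; a trained network would have
# learned them from examples (e.g., via backpropagation).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # input-to-hidden weights
W2 = rng.normal(size=(3, 1))   # hidden-to-output weights

def sigmoid(z):
    # Squashing nonlinearity, loosely analogous to a neuron "firing".
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    hidden = sigmoid(x @ W1)   # each hidden unit combines all inputs
    output = sigmoid(hidden @ W2)
    return output

x = np.array([0.5, -1.2])      # an example input pattern
print(forward(x))              # the network's response to the pattern
```

The key point is the layered combination of simple units: sophistication comes from many such units clustering and recognizing patterns together, not from any one unit.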

1.1.2 Scope of AI:


The ultimate goal of artificial intelligence is to create computer programs that can solve problems and achieve goals as humans would. There is scope for developing machines in robotics, computer vision, language detection, game playing, expert systems, speech recognition and much more.
The following factors characterize a career in artificial intelligence:
• Automation
• Robotics
• The use of sophisticated computer software
Individuals considering pursuing a career in this field require specific education based on the
foundations of math, technology, logic and engineering perspectives. Apart from these, good
communication skills (written and verbal) are imperative to convey how AI services and tools
will help when employed within industry settings.
1.1.3 Components of AI
The core components and constituents of AI are derived from the concepts of logic, cognition and computation; the compound components, built up from the core components, are knowledge, reasoning, search, natural language processing, vision, etc.

Level                  | Core                                                        | Compound                              | Coarse components
-----------------------|-------------------------------------------------------------|---------------------------------------|------------------------------------------------------------------
Logic                  | Proposition, tautology, induction, model and temporal logic | Knowledge, reasoning, control, search | Knowledge-based systems, heuristic search, theorem proving
Cognition (functional) | Learning, adaptation, self-organization                     | Belief, desire, intention             | Multi-agent systems, co-operation, co-ordination, AI programming
Cognition (physical)   | Memory, perception                                          | Vision, utterance, speech             | Natural language processing, speech processing

The core entities are inseparable constituents of AI in that these concepts are fused at atomic
level. The concepts derived from logic are propositional logic, tautology, predicate
calculus, model and temporal logic. The concepts of cognitive science are of two types: one
is functional which includes learning, adaptation and self-organization, and the other is
memory and perception which are physical entities. The physical entities generate some
functions to make the compound components.

The compound components are made from some combination of the logic and cognition streams. These are knowledge, reasoning and control, generated from constituents of logic such as predicate calculus, induction and tautology, and some from cognition (such as learning and adaptation). Similarly, belief, desire and intention are models of mental states that are predominantly based on cognitive components and less on logic. Vision, utterance (vocal) and expression (written) are the combined effect of memory and the perceiving organs or body sensors such as the ears, eyes and vocal organs. The gross level contains the constituents at the third level, which are knowledge-based systems (KBS), heuristic search, automatic theorem proving, multi-agent systems, AI languages such as PROLOG and LISP, and natural language processing (NLP). Speech processing and vision are based mainly on the principle of pattern recognition.
AI Dimension: The philosophy of AI in a three-dimensional representation consists of logic, cognition and computation in the x-direction, and knowledge, reasoning and interface in the y-direction. The x-y plane is the foundation of AI. The z-direction consists of correlated systems of physical origin such as language, vision and perception, as shown in Fig. 1.2.
Fig. 1.2 Three dimensional model of AI

The First Dimension (Core)


The theory of logic, cognition and computation constitutes the fusion factors for the formation of one of the foundations on the coordinate x-axis. Philosophy, from its very inception, covered all the facets, directions and dimensions of human thinking output. Aristotle's theory of syllogism, Descartes' and Kant's critiques of pure reason, and the contributions of many other philosophers made knowledge based on logic. It was Charles Babbage and George Boole who demonstrated the power of computational logic. Although modern philosophers such as Bertrand Russell correlated logic with mathematics, it was Turing who developed the theory of computation for mechanization. In the 1960s, Marvin Minsky pushed the logical formalism to integrate reasoning with knowledge.

Cognition:
Computers became so popular in a short span of time for the simple reason that they adapted and projected the information processing paradigm (IPP) of human beings: sensing organs as input, mechanical movement organs as output and the central nervous system (CNS) in the brain as the control and computing device. Short-term and long-term memory were not distinguished by computer scientists but were, as a whole and in conjunction, termed memory.

At a deeper level, the interaction of stimuli with the stored information to produce new information requires the processes of learning, adaptation and self-organization. These functionalities in the information processing, at a certain level of abstraction of brain activities, demonstrate a state of mind which exhibits certain specific behaviour that qualifies as intelligence. Computational models were developed and incorporated in machines which mimicked these functionalities of human origin. The creation of such traits of human beings in computing devices and processes originated the concept of intelligence in machines as a virtual mechanism. These virtual machines were, in due course of time, termed artificially intelligent machines.
Computation
The theory of computation developed by Turing (finite state automata) was a turning point from mathematical models to logical computation. Chomsky's linguistic computational theory generated a model for syntactic analysis through a regular grammar.

The Second Dimension


The second dimension contains knowledge, reasoning and interface, which are the components of a knowledge-based system (KBS). Knowledge can be logical; it may be processed as information which is subject to further computation. This means that any item on the y-axis can be correlated with any item on the x-axis to make the foundation of any item on the z-axis. Knowledge and reasoning are difficult to prioritize: which occurs first, is knowledge formed first and then reasoning performed, or does reasoning exist first and knowledge form from it? An interface is a means of communication between one domain and another. Here, it connotes a different concept than the user's interface. The formation of a permeable membrane or transparent solid structure between two domains of different permittivity is termed an interface. For example, in the industrial domain, the robot is an interface. A robot exhibits all traits of human intelligence in its course of action to perform mechanical work. In a KBS, the user's interface is an example of the interface between the computing machine and the user. Similarly, a program is an interface between the machine and the user. The interface may also be between human and human, i.e. experts in one domain and experts in another domain. A human-to-machine interface is a program, and a machine-to-machine interface is hardware. These interfaces are in the context of computation and AI methodology.

The Third Dimension


The third dimension leads to the orbital or peripheral entities, which are built on the foundation of the x-y plane and revolve around it for development. The entities include information systems. NLP, for example, is formed on the basis of Chomsky's linguistic computation theory on the x-direction and the concepts of interface and knowledge on the y-direction. Similarly, vision has its basis in computational models such as clustering, pattern recognition and image processing algorithms on the x-direction and knowledge of the domain on the y-direction.

The third dimension is basically the application domain. Here, the nearer the entities are to the origin, the more concepts are required from the x-y plane. For example, consider information and automation: these entities are far from the origin on the z-direction, but contain some of the concepts of the cognition and computation models respectively on the x-direction, and concepts of knowledge (data), reasoning and interface on the y-direction.
In general, any quantity in any dimension is correlated with some entities on the other dimensions.
The implementation of the logical formalism was accelerated by the rapid growth of electronic technology in general and multiprocessing parallelism in particular.

1.1.4 Types of AI
Artificial Intelligence can be divided into various types; there are mainly two categorizations, one based on capabilities and the other based on functionality. The following flow diagram explains the types of AI.

Fig 1.3 Types of AI

AI type-1: Based on Capabilities
1. Weak AI or Narrow AI:
• Narrow AI is a type of AI which is able to perform a dedicated task with intelligence. The most common and currently available AI in the world of Artificial Intelligence is Narrow AI.
• Narrow AI cannot perform beyond its field or limitations, as it is only trained for one specific task. Hence it is also termed weak AI. Narrow AI can fail in unpredictable ways if pushed beyond its limits.
• Apple Siri is a good example of Narrow AI; it operates with a limited pre-defined range of functions.
• IBM's Watson supercomputer also comes under Narrow AI, as it uses an expert system approach combined with machine learning and natural language processing.
• Some examples of Narrow AI are playing chess, purchasing suggestions on e-commerce sites, self-driving cars, speech recognition, and image recognition.

2. General AI:
• General AI is a type of intelligence which could perform any intellectual task with efficiency, like a human.
• The idea behind general AI is to make a system which could be smarter and think like a human on its own.
• Currently, no system exists which could come under general AI and perform any task as perfectly as a human.
• Researchers worldwide are now focused on developing machines with General AI.
• As systems with general AI are still under research, it will take a lot of effort and time to develop such systems.

3. Super AI:
• Super AI is a level of intelligence of systems at which machines could surpass human intelligence and can perform any task better than a human, with cognitive properties. It is an outcome of general AI.
• Some key characteristics of super AI include the ability to think, to reason, to solve puzzles, to make judgments, to plan, to learn, and to communicate on its own.
• Super AI is still a hypothetical concept of Artificial Intelligence. Developing such systems in reality is still a world-changing task.

Artificial Intelligence type-2: Based on functionality


1. Reactive Machines
• Purely reactive machines are the most basic type of Artificial Intelligence.
• Such AI systems do not store memories or past experiences for future actions.
• These machines focus only on current scenarios and react to them with the best possible action.
• IBM's Deep Blue system is an example of a reactive machine.
• Google's AlphaGo is also an example of a reactive machine.

2. Limited Memory
• Limited memory machines can store past experiences or some data for a short period of time.
• These machines can use the stored data for a limited time period only.
• Self-driving cars are one of the best examples of Limited Memory systems. These cars can store the recent speed of nearby cars, the distance to other cars, speed limits, and other information needed to navigate the road.

3. Theory of Mind
• Theory of Mind AI should understand human emotions, people and beliefs, and be able to interact socially like humans.
• This type of AI machine has not yet been developed, but researchers are making lots of efforts and improvements towards developing such machines.

4. Self-Awareness
• Self-awareness AI is the future of Artificial Intelligence. These machines will be super intelligent and will have their own consciousness, sentiments, and self-awareness.
• These machines will be smarter than the human mind.
• Self-Awareness AI does not yet exist in reality; it is still a hypothetical concept.

1.1.5 Application of AI
AI has been dominant in various fields such as −
• Gaming: AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe, etc., where the machine can think of a large number of possible positions based on heuristic knowledge.
• Natural Language Processing: It is possible to interact with the computer that
understands natural language spoken by humans.
• Expert Systems: There are some applications which integrate machine, software,
and special information to impart reasoning and advising. They provide explanation
and advice to the users.
• Vision Systems: These systems understand, interpret, and comprehend visual input on the computer. For example,
o A spying aeroplane takes photographs, which are used to figure out spatial information or a map of the area.
o Doctors use a clinical expert system to diagnose patients.
o Police use computer software that can recognize the face of a criminal from the stored portrait made by a forensic artist.
• Speech Recognition: Some intelligent systems are capable of hearing and comprehending language in terms of sentences and their meanings while a human talks to them. They can handle different accents, slang words, noise in the background, changes in a human's voice due to cold, etc.
• Handwriting Recognition: The handwriting recognition software reads the text
written on paper by a pen or on screen by a stylus. It can recognize the shapes of the
letters and convert it into editable text.
• Intelligent Robots: Robots are able to perform the tasks given by a human. They
have sensors to detect physical data from the real world such as light, heat,
temperature, movement, sound, bump, and pressure. They have efficient processors,
multiple sensors and huge memory, to exhibit intelligence. In addition, they are
capable of learning from their mistakes and they can adapt to the new environment.

1.2 Data Visualization

Data visualization is the graphical representation of information and data.


OR The art of presenting your data and information as graphs, charts, or maps is known
as Data Visualization.
Data visualization is the process of using visual elements like charts, graphs, or maps to
represent data. It translates complex, high-volume, or numerical data into a visual
representation that is easier to process.
Modern businesses typically process large volumes of data from various data sources, such as
the following:
• Internal and external websites
• Smart devices
• Internal data collection systems
• Social media

But raw data can be hard to comprehend and use. Hence, data scientists prepare and present
data in the right context.

They give it a visual form so that decision-makers can identify the relationships between data
and detect hidden patterns or trends.

1.2.1 Data Types in Data Visualization


There are many types of data visualization. The most common are scatter plots, line graphs,
pie charts, bar charts, heat maps, area charts, choropleth maps and histograms.

1. Column Chart: Column charts are a straightforward, time-tested method of comparing several collections of data. A column chart may be used to track data sets across time.

2. Line Graph: A line graph is used to show trends, development, or changes through time.
As a result, it functions best when your data collection is continuous as opposed to having many
beginnings and ends.

3. Pie Chart: In a pie chart, a single, constant total is represented by the several categories that make up its parts. You will portray numerical quantities as percentages when you employ one. All of the various components should sum to one hundred percent when totaled.

4. Bar Chart: To compare data along two axes, use bar charts. One axis shows a visual representation of the categories or subjects being measured, while the other is numerical.

5. Heat Maps: A data visualization method that uses colors to denote values; great for seeing
trends in huge datasets.

6. Scatter Plot: The correlation between variables is examined using a scatter plot. The data are represented on the graph as dots, placed at the point where their two values meet.

7. Bubble Chart: A variant of the scatter plot in which the data points are depicted as bubbles, where the size and colour of each bubble provide extra information about the data point.

8. Funnel Chart: The principal purpose of a funnel chart is to graphically illustrate a sequential process from top to bottom. As the process flows down, the amount generally decreases, making the data set at the top of the process greater than at the bottom.

9. Radar Chart: Radar charts are a sort of data visualization that aids in the analysis of objects or categories in light of a variety of attributes. The radar chart consists of a circle with concentric rings, and the data are shown as dots on the chart. A shape is then formed by connecting the dots; each object or group has its own shape.

10. Tree Chart: An alternative to a table for precise numerical data is a tree chart, often known
as a tree diagram. The basic goal of a tree chart is to represent data as pieces of a larger whole
within a category.

11. Flow Chart: The flowchart is one extremely adaptable method of data display. Use mind maps for brainstorming, and use flowcharts to depict a process graphically or to show hierarchical data about objects or people.

12. Gauge: A gauge is a percentage visualization. The half-doughnut-like form has a few uses; the simplest is to display a percentage figure with an arrow pointing to it. If you have a small quantity of data to work with, this is a fantastic option.

13. Gantt Chart: Horizontal bar graphs are the basis for the Gantt chart; however, it differs significantly from them. Each item on the chart is represented by a rectangle that extends from left to right, and each varies in size depending on how long the activity takes to complete.

14. Venn Diagram: A Venn diagram is a data visualization that compares two or more objects
by emphasizing their similarities. The most typical Venn diagram design consists of two
overlapping circles.

15. Histogram: While a histogram and a bar graph are similar, they use distinct charting systems. A histogram is the ideal sort of data visualization for frequency-based analysis of data ranges.
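As a short illustration of two of these chart types, here is a Python sketch using matplotlib (the sample data is invented for the example): a bar chart comparing categories, and a histogram showing the frequency of values across ranges.

```python
import matplotlib.pyplot as plt
import numpy as np

# Invented sample data for illustration only.
categories = ["A", "B", "C", "D"]
sales = [23, 45, 12, 38]
measurements = np.random.default_rng(1).normal(loc=50, scale=10, size=200)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Bar chart: compares discrete categories along two axes.
ax1.bar(categories, sales)
ax1.set_title("Bar chart: sales by category")

# Histogram: shows how often values fall into each range (frequency).
ax2.hist(measurements, bins=15)
ax2.set_title("Histogram: frequency of measurements")

plt.tight_layout()
plt.show()
```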

1.2.2 Scales: Mapping Data Values to Aesthetics

Scales are marks on a visualization that tell you the range of values of the data presented. All data visualizations map data values onto quantifiable aesthetic features, such as shape, size, colour, position, orientation, font type, font size, and many others.
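The sketch below (a minimal example with invented data, assuming matplotlib) shows this mapping idea: one data column is mapped to marker size and another to colour, and the colour bar acts as a scale that lets the viewer read values back off the aesthetic.

```python
import matplotlib.pyplot as plt

# Invented data: each point has a position (x, y) plus two extra values
# that we map onto aesthetics -- one to marker size, one to colour.
x = [1, 2, 3, 4, 5]
y = [2, 4, 1, 5, 3]
population = [10, 40, 90, 160, 250]   # mapped to marker size
temperature = [5, 12, 18, 24, 30]     # mapped to colour

sc = plt.scatter(x, y, s=population, c=temperature, cmap="viridis")
plt.colorbar(sc, label="temperature")  # a scale: maps colour back to values
plt.xlabel("x position")
plt.ylabel("y position")
plt.show()
```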

1.2.3 Use of Coordinate Systems in Data Visualization

A coordinate system is a method for identifying the location of a point (for example, on the earth or on a chart). Most coordinate systems use two numbers, a coordinate pair, to identify the location of a point. The most widely used coordinate system for data visualization is the 2D Cartesian coordinate system, where each location is uniquely specified by an x and a y value.
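A minimal sketch (assuming matplotlib; the points are invented) showing how each location in the 2D Cartesian system is pinned down by a unique (x, y) pair:

```python
import matplotlib.pyplot as plt

# Each point is uniquely specified by an (x, y) pair in the
# 2D Cartesian coordinate system. Sample points are invented.
points = {"A": (1, 2), "B": (3, 4), "C": (-2, 1)}

for label, (x, y) in points.items():
    plt.scatter(x, y)
    plt.annotate(f"{label} ({x}, {y})", (x, y))

plt.axhline(0, color="grey", linewidth=0.5)  # the x-axis
plt.axvline(0, color="grey", linewidth=0.5)  # the y-axis
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```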

1.2.4 Use of colors to represent data values


Color is important in data visualization because it allows you to highlight certain pieces of
information and promote information recall.
Using different colors can separate and define different data points within a visualization so
that viewers can easily distinguish significant differences or similarities in values.
One common use for color in data visualization is using red for negative/loss and green for
positive/gains.
If you were instead to use a green arrow pointing down to show a loss and a red arrow pointing up to show gains, viewers would likely be confused, because the colours would contradict the convention they expect.
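A small example of this convention (assuming matplotlib; the profit figures are invented): bars are coloured green for gains and red for losses so the viewer can separate them at a glance.

```python
import matplotlib.pyplot as plt

# Invented monthly profit/loss figures for illustration.
months = ["Jan", "Feb", "Mar", "Apr", "May"]
profit = [120, -45, 80, -30, 150]

# Conventional encoding: green for gains, red for losses.
colors = ["green" if p >= 0 else "red" for p in profit]

plt.bar(months, profit, color=colors)
plt.axhline(0, color="black", linewidth=0.8)  # zero line separates gain from loss
plt.ylabel("Profit / loss")
plt.show()
```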

1.2.5 Representing Amounts, Distributions and Proportions

1.3 Data Storytelling

1.3.1 Introduction
Data storytelling is the ability to effectively communicate insights from a dataset using narratives and visualizations. It can be used to put data insights into context for your audience and to inspire action from them.

There are three key components to data storytelling:

1. Data: Thorough analysis of accurate, complete data serves as the foundation of your data story. Analyzing data using descriptive, diagnostic, predictive, and prescriptive analysis can enable you to understand its full picture.

2. Narrative: A verbal or written narrative, also called a storyline, is used to communicate insights gleaned from the data, the context surrounding it, and the actions you recommend and aim to inspire in your audience.

3. Visualizations: Visual representations of your data and narrative can be useful for
communicating its story clearly and memorably. These can be charts, graphs,
diagrams, pictures, or videos.

1.3.2 Ineffectiveness of Graphical representation of Data

Ineffective or misleading data visualizations can obscure insights and compromise the
integrity of your narrative.
This can lead to misinterpretation, a loss of credibility, and incorrect decision-making.

1.3.3 Explanatory Analysis

Explanatory analysis communicates a specific insight to a specific audience. It is commonly framed around three questions:
o Who: the audience you are communicating with
o What: what you want the audience to know or do
o How: how the data and visuals can be used to make that point

1.4 Concept of machine learning and deep learning


1.4.1 Machine Learning:
• Machine learning is a branch of science that deals with programming systems in such a way that they automatically learn and improve with experience. Here, learning means recognizing and understanding the input data and making wise decisions based on the supplied data.
• It is very difficult to cater to all the decisions based on all possible inputs. To tackle
this problem, algorithms are developed. These algorithms build knowledge from
specific data and past experience with the principles of statistics, probability theory,
logic, combinatorial optimization, search, reinforcement learning, and control
theory. The developed algorithms form the basis of various applications such as:
• Vision processing
• Language processing
• Forecasting (e.g., stock market trends)
• Pattern recognition
• Games
• Data mining
• Expert systems
• Robotics
Machine learning is a vast area, and it is quite beyond the scope of this tutorial to cover all its features. There are several ways to implement machine learning techniques; however, the most commonly used ones are supervised and unsupervised learning.

Supervised Learning: Supervised learning deals with learning a function from available
training data. A supervised learning algorithm analyzes the training data and produces an
inferred function, which can be used for mapping new examples. Common examples of
supervised learning include:
• classifying e-mails as spam,
• labeling webpages based on their content, and
• voice recognition.
There are many supervised learning algorithms such as neural networks, Support Vector Machines (SVMs), and Naive Bayes classifiers. Mahout implements the Naive Bayes classifier.
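Mahout is a Java library; as a language-neutral sketch of the same idea, the following Python example (assuming scikit-learn; the tiny e-mail dataset is invented) trains a Naive Bayes spam classifier on labelled examples and applies the inferred function to a new, unseen example:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny invented training set: e-mails labelled as spam (1) or not spam (0).
emails = [
    "win a free prize now",
    "meeting agenda for monday",
    "free money claim your prize",
    "project report attached",
]
labels = [1, 0, 1, 0]

# Turn each e-mail into word-count features, then fit a Naive Bayes model.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)
model = MultinomialNB()
model.fit(X, labels)

# The inferred function can now map a new example to a label.
new_email = vectorizer.transform(["claim your free prize"])
print(model.predict(new_email))  # -> [1], i.e. classified as spam
```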

Unsupervised Learning: Unsupervised learning makes sense of unlabeled data without having any predefined dataset for its training. Unsupervised learning is an extremely powerful tool for analyzing available data and looking for patterns and trends. It is most commonly used for clustering similar inputs into logical groups. Common approaches to unsupervised learning include:
• k-means
• self-organizing maps, and
• hierarchical clustering
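As a small sketch of the first approach in this list (assuming scikit-learn; the points are invented), k-means groups unlabeled data into clusters without ever seeing a label:

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabelled 2D points (invented): two loose groups, with no labels given.
points = np.array([
    [1.0, 1.1], [1.2, 0.9], [0.8, 1.0],   # one natural cluster
    [5.0, 5.2], [5.1, 4.8], [4.9, 5.0],   # another natural cluster
])

# k-means looks for k natural groupings without any training labels.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
assignments = kmeans.fit_predict(points)

print(assignments)             # cluster index assigned to each point
print(kmeans.cluster_centers_) # the discovered group centres
```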

1.4.2 Deep Learning


Deep learning is a subfield of machine learning in which the algorithms are inspired by the structure and function of the brain, and are called artificial neural networks.
Most of the value of deep learning today comes through supervised learning, i.e., learning from labelled data.
Each algorithm in deep learning goes through the same process. It includes a hierarchy of nonlinear transformations of the input that can be used to generate a statistical model as output.
Consider the following steps that define the machine learning process:
• Identify relevant data sets and prepare them for analysis.
• Choose the type of algorithm to use.
• Build an analytical model based on the algorithm used.
• Train the model on training data sets, revising it as needed.
• Run the model to generate test scores.
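An end-to-end sketch of these steps (assuming scikit-learn; the digits dataset and the small one-hidden-layer network are illustrative choices, not the text's prescribed ones):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Step 1: identify a relevant data set and prepare it for analysis.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Steps 2-3: choose an algorithm and build a model -- here a small
# neural-network (multi-layer perceptron) classifier with one hidden layer.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)

# Step 4: train the model; fitting iteratively revises its weights.
model.fit(X_train, y_train)

# Step 5: run the model on held-out data to generate a test score.
print("test accuracy:", model.score(X_test, y_test))
```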

Deep learning has evolved hand-in-hand with the digital era, which has brought about an
explosion of data in all forms and from every region of the world. This data, known simply
as big data, is drawn from sources like social media, internet search engines, e-commerce
platforms, and online cinemas, among others. This enormous amount of data is readily
accessible and can be shared through fintech applications like cloud computing.
However, the data, which is normally unstructured, is so vast that it could take decades for humans to comprehend it and extract the relevant information. Companies realize the incredible potential that can result from unraveling this wealth of information and are increasingly adopting AI systems for automated support.

Applications of Machine Learning and Deep Learning


• Computer vision, which is used for facial recognition, attendance marking through fingerprints, or vehicle identification through number plates.
• Information retrieval from search engines, such as text search and image search.
• Automated email marketing with specified target identification.
• Medical diagnosis of cancer tumors or anomaly identification for chronic diseases.
• Natural language processing for applications like photo tagging; the best-known example of this scenario is its use in Facebook.
• Online advertising.
