Artificial Intelligence


Faculty of Engineering

Department of Civil and Structural Engineering



KKKA 6424
INTELLIGENT URBAN TRAFFIC CONTROL SYSTEM

Ir. Dr. Riza Atiq Abdullah O.K. Rahmat

ASSIGNMENT 6
ARTIFICIAL INTELLIGENCE

DONE BY
MOHANAD JAAFAR TALIB P71085





Introduction to Artificial Neural Networks
What is an Artificial Neural Network?
- It is a computational system inspired by the
Structure
Processing Method
Learning Ability
of a biological brain
- Characteristics of Artificial Neural Networks
A large number of very simple, neuron-like processing
elements
A large number of weighted connections between the elements
Distributed representation of knowledge over the connections
Knowledge is acquired by the network through a learning process


Why Artificial Neural Networks?
- Massive Parallelism
- Distributed representation
- Learning ability
- Generalization ability
- Fault tolerance
Elements of Artificial Neural Networks
- Processing Units
- Topology
- Learning Algorithm
Processing Units


Node input: net_i = Σ_j w_ij · I_j
Node output: O_i = f(net_i)
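
To make the node computation concrete, the following is a minimal Python sketch of a single processing unit, assuming a sigmoid activation function f; the function names and the example values are illustrative only, not taken from the original notes:

import math

def sigmoid(net):
    # A common choice for the activation function f.
    return 1.0 / (1.0 + math.exp(-net))

def node_output(inputs, weights):
    # O_i = f(net_i), where net_i = sum_j w_ij * I_j
    net = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(net)

# Example: a unit with three inputs and three weights
print(node_output([1.0, 0.5, -1.0], [0.2, 0.8, 0.4]))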


Activation Function


- An example

Topology

Learning
- Learn the connection weights from a set of training examples
- Different network architectures require different learning algorithms
Supervised Learning
The network is provided with a correct answer (output) for every
input pattern
Weights are determined to allow the network to produce answers
as close as possible to the known correct answers
The back-propagation algorithm belongs to this category
Unsupervised Learning
Does not require a correct answer associated with each input pattern
in the training set
Explores the underlying structure in the data, or correlations
between patterns in the data, and organizes patterns into categories
from these correlations
The Kohonen algorithm belongs to this category
Hybrid Learning
Combines supervised and unsupervised learning
Some of the weights are determined through supervised learning
and the others are obtained through unsupervised learning


Computational Properties
A single hidden layer feed-forward network with arbitrary sigmoid
hidden layer activation functions can approximate arbitrarily well an
arbitrary mapping from one finite dimensional space to another

Practical Issues
- Generalization vs Memorization
How to choose the network size (free parameters)
How many training examples
When to stop training
Applications
- Pattern Classification
- Clustering/Categorization
- Function approximation
- Prediction/Forecasting
- Optimization
- Content-addressable Memory
- Control
Two Successful Applications
- Zipcode Recognition



- Text-to-voice translation (NETtalk)

The scope of this teaching package is to give a brief introduction to Artificial Neural
Networks (ANNs) for people who have no previous knowledge of them. We first
give a brief introduction to network models, and then describe ANNs in general
terms. As an application, we explain the backpropagation algorithm, since it
is widely used and many other algorithms are derived from it. The reader should
know algebra and the handling of functions and vectors. Differential calculus is
recommended, but not necessary. The contents of this package should be
understandable by people with a high school education. It would be useful for people
who are just curious about what ANNs are, or for people who want to become
familiar with them, so that when they study them more fully, they will already have
clear notions of ANNs. Also, people who only want to apply the backpropagation
algorithm without a detailed and formal explanation of it will find this material
useful. This work should not be seen as "Nets for dummies", but of course it is not
a treatise either. Much of the formality is skipped for the sake of simplicity. Detailed
explanations and demonstrations can be found in the referenced readings. The
included exercises complement the understanding of the theory. The on-line
resources are highly recommended for extending this brief introduction.
Networks
One efficient way of solving complex problems is to follow the maxim "divide
and conquer". A complex system may be decomposed into simpler elements in
order to understand it. Also, simple elements may be gathered to produce
a complex system (Bar-Yam, 1997). Networks are one approach to achieving this.
There are a large number of different types of networks, but they all are
characterized by the following components: a set of nodes, and connections
between nodes. The nodes can be seen as computational units. They receive inputs,
and process them to obtain an output. This processing might be very simple (such
as summing the inputs), or quite complex (a node might contain another
network...) The connections determine the information flow between nodes. They
can be unidirectional, when the information flows only in one sense, and
bidirectional, when the information flows in either sense. The interactions of nodes
through the connections lead to a global behaviour of the network, which cannot be
observed in the elements of the network. This global behaviour is said to be
emergent. This means that the abilities of the network supersede those of its
elements, making networks a very powerful tool.

Artificial neural networks
One type of network sees the nodes as artificial neurons. These are called
artificial neural networks (ANNs). An artificial neuron is a computational model
inspired by natural neurons. Natural neurons receive signals through synapses
located on the dendrites or membrane of the neuron. When the signals received are
strong enough (surpass a certain threshold), the neuron is activated and emits a
signal through the axon. This signal might be sent to another synapse, and might
activate other neurons.
The complexity of real neurons is highly abstracted when modelling artificial
neurons. These basically consist of inputs (like synapses), which are multiplied by
weights (strength of the respective signals), and then computed by a mathematical
function which determines the activation of the neuron. Another function (which
may be the identity) computes the output of the artificial neuron (sometimes in
dependence on a certain threshold). ANNs combine artificial neurons in order to
process information.
The higher a weight of an artificial neuron is, the stronger the input which is
multiplied by it will be. Weights can also be negative, so we can say that the signal
is inhibited by the negative weight. Depending on the weights, the computation of
the neuron will be different. By adjusting the weights of an artificial neuron we can
obtain the output we want for specific inputs. But when we have an ANN of
hundreds or thousands of neurons, it would be quite complicated to find by hand
all the necessary weights. Fortunately, there are algorithms that can adjust the weights
of the ANN in order to obtain the desired output from the network. This process of
adjusting the weights is called learning or training.
The number of types of ANNs and their uses is very high. Since the first neural
model by McCulloch and Pitts (1943), hundreds of different models considered
as ANNs have been developed. The differences between them might be in the
functions, the accepted values, the topology, the learning algorithms, etc. Also
there are many hybrid models where each neuron has more properties than the ones
we are reviewing here. For reasons of space, we will present only an ANN
which learns using the backpropagation algorithm (Rumelhart and McClelland,
1986) for learning the appropriate weights, since it is one of the most common
models used in ANNs, and many others are based on it.
Exercise
This exercise is to become familiar with artificial neural network concepts. Build
a network consisting of four artificial neurons. Two neurons receive inputs to the
network, and the other two give outputs from the network.
There are weights assigned with each arrow, which represent information flow.
These weights are multiplied by the values which go through each arrow, to give
more or less strength to the signal which they transmit. The neurons of this
network just sum their inputs. Since the input neurons have only one input, their
output will be the input they received multiplied by a weight. What happens if this
weight is negative? What happens if this weight is zero?
The neurons on the output layer receive the outputs of both input neurons,
multiplied by their respective weights, and sum them. They give an output which is
multiplied by another weight.
Now, set all the weights to be equal to one. This means that the information will
flow unaffected. Compute the outputs of the network for the following inputs:
(1,1), (1,0), (0,1), (0,0), (-1,1), (-1,-1)
Good. Now, choose weights among 0.5, 0, and -0.5, and set them randomly along
the network. Compute the outputs for the same inputs as above. Change some
weights and see how the behavior of the network changes. Which weights are
more critical (if you change those weights, the outputs will change more
dramatically)?
Now, suppose we want a network like the one we are working with, such that the
outputs should be the inputs in inverse order (e.g. (0.3,0.7)->(0.7,0.3)). Find
weights that achieve this.
That was an easy one! Another easy network would be one where the outputs
should be the double of the inputs.
Now, let's set thresholds on the neurons. That is, if the previous output of the
neuron (the weighted sum of its inputs) is greater than the threshold of the neuron, the
output of the neuron will be one, and zero otherwise. Set thresholds on a couple of
the already developed networks, and see how this affects their behavior.
Now, suppose we have a network which will receive as inputs only zeroes
and/or ones. Adjust the weights and thresholds of the neurons so that the output of
the first output neuron will be the conjunction (AND) of the network inputs (one
when both inputs are one, zero otherwise), and the output of the second output
neuron will be the disjunction (OR) of the network inputs (zero when both inputs are
zeroes, one otherwise). You can see that there is more than one network which will
give the requested result.
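
As a check on this part of the exercise, the following Python sketch shows one possible (by no means unique) choice of weights and thresholds that makes the first output neuron compute AND and the second compute OR; the particular values are only an illustration:

def step(weighted_sum, threshold):
    # Threshold unit: output 1 if the weighted sum exceeds the threshold, else 0.
    return 1 if weighted_sum > threshold else 0

def network(x1, x2):
    # The input neurons simply pass their input through (weight 1, no threshold).
    i1, i2 = x1, x2
    # Output neuron 1 (AND): with weights 1 and 1, a threshold of 1.5 only fires for (1, 1).
    out_and = step(1 * i1 + 1 * i2, threshold=1.5)
    # Output neuron 2 (OR): with weights 1 and 1, a threshold of 0.5 fires unless both inputs are 0.
    out_or = step(1 * i1 + 1 * i2, threshold=0.5)
    return out_and, out_or

for pattern in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(pattern, "->", network(*pattern))
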
Now, perhaps it is not so complicated to adjust the weights of such a small
network, but the capabilities of such a network are also quite limited. If we need a network of
hundreds of neurons, how would you adjust the weights to obtain the desired
output? There are methods for finding them, and we will now present the most
common one.
The Backpropagation Algorithm
The backpropagation algorithm (Rumelhart and McClelland, 1986) is used in
layered feed-forward ANNs. This means that the artificial neurons are organized in
layers, and send their signals forward, and then the errors are propagated
backwards. The network receives inputs by neurons in the input layer, and the
output of the network is given by the neurons on an output layer. There may be one
or more intermediate hidden layers. The backpropagation algorithm uses
supervised learning, which means that we provide the algorithm with examples of
the inputs and outputs we want the network to compute, and then the error
(difference between actual and expected results) is calculated. The idea of the
backpropagation algorithm is to reduce this error, until the ANN learns the training
data. The training begins with random weights, and the goal is to adjust them so
that the error will be minimal.
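
The following Python sketch illustrates this idea on the small XOR problem, using a 2-3-1 network of sigmoid units; it is a minimal illustration of the algorithm described above, not an implementation from the referenced texts, and the layer sizes, learning rate, and epoch count are arbitrary choices:

import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(epochs=20000, lr=0.5, n_hidden=3):
    # Train a small feed-forward network on XOR with stochastic gradient descent.
    random.seed(0)
    # Random initial weights; the last entry of each row is a bias weight.
    w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(n_hidden)]
    w_out = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

    for _ in range(epochs):
        for inputs, target in data:
            x = inputs + [1.0]                      # append the bias input
            # Forward pass: signals flow from the input layer to the output layer.
            h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
            hb = h + [1.0]                          # hidden activations plus bias
            o = sigmoid(sum(w * hi for w, hi in zip(w_out, hb)))
            # Backward pass: the error is propagated backwards as deltas.
            delta_o = (target - o) * o * (1 - o)
            delta_h = [delta_o * w_out[j] * h[j] * (1 - h[j]) for j in range(n_hidden)]
            # Weight updates reduce the error between actual and expected output.
            for j in range(n_hidden + 1):
                w_out[j] += lr * delta_o * hb[j]
            for j in range(n_hidden):
                for k in range(3):
                    w_hidden[j][k] += lr * delta_h[j] * x[k]
    return w_hidden, w_out

w_hidden, w_out = train_xor()
for inputs in ([0, 0], [0, 1], [1, 0], [1, 1]):
    x = inputs + [1.0]
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden] + [1.0]
    # The trained outputs should approach 0, 1, 1, 0 for the four patterns.
    print(inputs, round(sigmoid(sum(w * hi for w, hi in zip(w_out, h))), 2))
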
Genetic Algorithms
Genetic Algorithms were invented to mimic some of the processes observed in
natural evolution. Many people, biologists included, are astonished that life at the
level of complexity that we observe could have evolved in the relatively short time
suggested by the fossil record. The idea behind GAs is to use this power of evolution to
solve optimization problems. The father of the original Genetic Algorithm was
John Holland, who invented it in the early 1970s.

What are Genetic Algorithms?
Genetic Algorithms (GAs) are adaptive heuristic search algorithms based on the
evolutionary ideas of natural selection and genetics. As such they represent an
intelligent exploitation of a random search used to solve optimization problems.
Although randomized, GAs are by no means random, instead they exploit
historical information to direct the search into the region of better performance
within the search space. The basic techniques of the GAs are designed to simulate
processes in natural systems necessary for evolution, especially those that follow the
principles first laid down by Charles Darwin of "survival of the fittest". In
nature, competition among individuals for scarce resources results in the fittest
individuals dominating over the weaker ones.
Science arises from the very human desire to understand and control the world.
Over the course of history, we humans have gradually built up a grand edifice of
knowledge that enables us to predict, to varying extents, the weather, the motions
of the planets, solar and lunar eclipses, the courses of diseases, the rise and fall of
economic growth, the stages of language development in children, and a vast
panorama of other natural, social, and cultural phenomena. More recently we have
even come to understand some fundamental limits to our abilities to predict. Over
the eons we have developed increasingly complex means to control many aspects
of our lives and our interactions with nature, and we have learned, often the hard
way, the extent to which other aspects are uncontrollable.

The advent of electronic computers has arguably been the most revolutionary
development in the history of science and technology. This ongoing revolution is
profoundly increasing our ability to predict and control nature in ways that were
barely conceived of even half a century ago. For many, the crowning achievements
of this revolution will be the creation, in the form of computer programs, of new
species of intelligent beings, and even of new forms of life.

The goals of creating artificial intelligence and artificial life can be traced back to
the very beginnings of the computer age. The earliest computer scientists, Alan
Turing, John von Neumann, Norbert Wiener, and others, were motivated in large
part by visions of imbuing computer programs with intelligence, with the lifelike
ability to self-replicate, and with the adaptive capability to learn and to control
their environments. These early pioneers of computer science were as much
interested in biology and psychology as in electronics, and they looked to natural
systems as guiding metaphors for how to achieve their visions. It should be no
surprise, then, that from the earliest days computers were applied not only to
calculating missile trajectories and deciphering military codes but also to modeling
the brain, mimicking human learning, and simulating biological evolution. These
biologically motivated computing activities have waxed and waned over the years,
but since the early 1980s they have all undergone a resurgence in the computation
research community. The first has grown into the field of neural networks, the
second into machine learning, and the third into what is now called "evolutionary
computation," of which genetic algorithms are the most prominent example.



In the 1950s and the 1960s several computer scientists independently studied
evolutionary systems with the idea that evolution could be used as an optimization
tool for engineering problems. The idea in all these systems was to evolve a
population of candidate solutions to a given problem, using operators inspired by
natural genetic variation and natural selection.

In the 1960s, Rechenberg (1965, 1973) introduced "evolution strategies"
(Evolutionsstrategie in the original German), a method he used to optimize
real-valued parameters for devices such as airfoils. This idea was further
developed by Schwefel (1975, 1977). The field of evolution strategies has
remained an active area of research, mostly developing independently from the
field of genetic algorithms (although recently the two communities have begun to
interact). (For a short review of evolution strategies, see Bäck, Hoffmeister, and
Schwefel 1991.) Fogel, Owens, and Walsh (1966) developed "evolutionary
programming," a technique in which candidate solutions to given tasks were
represented as finite-state machines, which were evolved by randomly mutating
their state-transition diagrams and selecting the fittest. A somewhat broader
formulation of evolutionary programming also remains an area of active research
(see, for example, Fogel and Atmar 1993). Together, evolution strategies,
evolutionary programming, and genetic algorithms form the backbone of the field
of evolutionary computation.

Several other people working in the 1950s and the 1960s developed
evolution-inspired algorithms for
optimization and machine learning. Box (1957), Friedman (1959), Bledsoe (1961),
Bremermann (1962), and Reed, Toombs, and Baricelli (1967) all worked in this
area, though their work has been given little or none of the kind of attention or
follow-up that evolution strategies, evolutionary programming, and genetic
algorithms have seen. In addition, a number of evolutionary biologists used
computers to simulate evolution for the purpose of controlled experiments (see,
e.g., Baricelli 1957, 1962; Fraser 1957 a,b; Martin and Cockerham 1960).
Evolutionary computation was definitely in the air in the formative days of the
electronic computer.

Genetic algorithms (GAs) were invented by John Holland in the 1960s and were
developed by Holland and his students and colleagues at the University of
Michigan in the 1960s and the 1970s. In contrast with
evolution strategies and evolutionary programming, Holland's original goal was
not to design algorithms to solve specific problems, but rather to formally study the
phenomenon of adaptation as it occurs in nature and to develop ways in which the
mechanisms of natural adaptation might be imported into computer systems.
Holland's 1975 book Adaptation in Natural and Artificial Systems presented the
genetic algorithm as an abstraction of biological evolution and gave a theoretical
framework for adaptation under the GA. Holland's GA is a method for moving
from one population of "chromosomes" (e.g., strings of ones and zeros, or "bits")
to a new population by using a kind of "natural selection" together with the
genetics-inspired operators of crossover, mutation, and inversion. Each
chromosome consists of "genes" (e.g., bits), each gene being an instance of a
particular "allele" (e.g., 0 or 1). The selection operator chooses those chromosomes
in the population that will be allowed to reproduce, and on average the fitter
chromosomes produce more offspring than the less fit ones. Crossover exchanges
subparts of two chromosomes, roughly mimicking biological recombination
between two single-chromosome ("haploid") organisms; mutation randomly
changes the allele values of some locations in the chromosome; and inversion
reverses the order of a contiguous section of the chromosome, thus rearranging the
order in which genes are arrayed. (Here, as in most of the GA literature,
"crossover" and "recombination" will mean the same thing.)
Holland's introduction of a population-based algorithm with crossover,
inversion, and mutation was a major innovation. (Rechenberg's evolution strategies
started with a "population" of two individuals, one parent and one offspring, the
offspring being a mutated version of the parent; many-individual populations and
crossover were not incorporated until later. Fogel, Owens, and Walsh's
evolutionary programming likewise used only mutation to provide variation.)
Moreover, Holland was the first to attempt to put computational evolution on a
firm theoretical footing (see Holland 1975). Until recently this theoretical
foundation, based on the notion of "schemas," was the basis of almost all
subsequent theoretical work on genetic algorithms. In the last several years there
has been widespread interaction among researchers studying various evolutionary
computation methods, and the boundaries between GAs, evolution strategies,
evolutionary programming, and other evolutionary approaches have broken down
to some extent. Today, researchers often use the term "genetic algorithm" to
describe something very far from Holland's original conception. In this book I
adopt this flexibility. Most of the projects I will describe here were referred to by
their originators as GAs; some were not, but they all have enough of a "family
resemblance" that I include them under the rubric of genetic algorithms.


Expert system

In artificial intelligence, an expert system is a computer system that emulates the
decision-making ability of a human expert. Expert systems are designed to solve
complex problems by reasoning about knowledge, represented primarily as if-then
rules rather than through conventional procedural code. The first expert systems
were created in the 1970s and then proliferated in the 1980s. Expert systems were
among the first truly successful forms of AI software.

Edward Feigenbaum in a 1977 paper said that the key insight of early expert
systems was that "intelligent systems derive their power from the knowledge they
possess rather than from the specific formalisms and inference schemes they use"
(as paraphrased by Hayes-Roth et al.). Although, in retrospect, this seems a rather
straightforward insight, it was a significant step forward at the time. Until then,
research had been focused on attempts to develop very general-purpose problem
solvers such as those described by Newell and Simon.

Expert systems were introduced by the Stanford Heuristic Programming Project
led by Feigenbaum, who is sometimes referred to as the "father of expert systems".
The Stanford researchers tried to identify domains where expertise was highly
valued and complex, such as diagnosing infectious diseases (Mycin) and
identifying unknown organic molecules (Dendral).

Research on expert systems was also active in France. In the US the focus tended
to be on rule-based systems, first on systems hard coded on top of LISP
programming environments and then on expert system shells developed by vendors
such as Intellicorp. In France research focused more on systems developed
in Prolog. The advantage of expert system shells was that they were somewhat
easier for non-programmers to use. The advantage of Prolog environments was that
they weren't focused only on IF-THEN rules. Prolog environments provided a
much fuller realization of a complete First Order Logic environment.

In the 1980s, expert systems proliferated. Universities offered expert system
courses and two thirds of the Fortune 1000 companies applied the technology in
daily business activities. Interest was international with the Fifth Generation
Computer Systems project in Japan and increased research funding in Europe.

Software architecture
An expert system is an example of a knowledge-based system. Expert systems
were the first commercial systems to use a knowledge-based architecture. A
knowledge-based system is essentially composed of two sub-systems:
the knowledge base and the inference engine.

The inference engine is an automated reasoning system that evaluates the current
state of the knowledge-base, applies relevant rules, and then asserts new
knowledge into the knowledge base. The inference engine may also include
capabilities for explanation, so that it can explain to a user the chain of reasoning
used to arrive at a particular conclusion by tracing back over the firing of rules that
resulted in the assertion.

There are primarily two modes for an inference engine: forward
chaining and backward chaining. The different approaches are dictated by whether
the inference engine is being driven by the antecedent (left hand side) or the
consequent (right hand side) of the rule. In forward chaining an antecedent fires
and asserts the consequent. For example, consider the following rule:
R1: Man(x) => Mortal(x)
A simple example of forward chaining would be to assert Man(Socrates) to the
system and then trigger the inference engine. It would match R1 and assert
Mortal(Socrates) into the knowledge base.
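
A minimal Python sketch of this forward-chaining behaviour might look as follows; the representation of rules as (antecedent, consequent) pairs is an illustrative simplification, not the syntax of any particular expert system shell:

def forward_chain(facts, rules):
    # Repeatedly fire rules whose antecedent matches a fact, asserting the consequent.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for predicate, argument in list(facts):
                if predicate == antecedent and (consequent, argument) not in facts:
                    facts.add((consequent, argument))
                    changed = True
    return facts

rules = [("Man", "Mortal")]                      # R1: Man(x) => Mortal(x)
print(forward_chain({("Man", "Socrates")}, rules))
# asserts ('Mortal', 'Socrates') into the knowledge base
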
Backward chaining is a bit less straightforward. In backward chaining the system
looks at possible conclusions and works backward to see if they might be true. So
if the system was trying to determine if Mortal(Socrates) is true it would find R1
and query the knowledge base to see if Man(Socrates) is true. One of the early
innovations of expert systems shells was to integrate inference engines with a user
interface. This could be especially powerful with backward chaining. If the system
needs to know a particular fact but does not, it can simply generate an input screen
and ask the user if the information is known. So in this example, it could use R1 to
ask the user if Socrates was a Man and then use that new information accordingly.
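
A correspondingly minimal backward-chaining sketch, again with a deliberately simplified rule representation, could look like this; asking the user is modelled with a plain input() prompt:

def backward_chain(goal, facts, rules, ask_user):
    # Prove a goal if it is a known fact, derivable via a rule, or confirmed by the user.
    predicate, argument = goal
    if goal in facts:
        return True
    for antecedent, consequent in rules:
        if consequent == predicate and backward_chain((antecedent, argument),
                                                      facts, rules, ask_user):
            return True
    # Nothing in the knowledge base settles the goal, so ask the user.
    return ask_user(goal)

rules = [("Man", "Mortal")]                      # R1: Man(x) => Mortal(x)
facts = set()                                    # an empty knowledge base
ask = lambda goal: input("Is %s(%s) true? (y/n) " % goal).lower() == "y"
print(backward_chain(("Mortal", "Socrates"), facts, rules, ask))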

The use of rules to explicitly represent knowledge also enabled explanation
capabilities. In the simple example above if the system had used R1 to assert that
Socrates was Mortal and a user wished to understand why Socrates was mortal
they could query the system and the system would look back at the rules which
fired to cause the assertion and present those rules to the user as an explanation. In
English if the user asked "Why is Socrates Mortal?" the system would reply
"Because all men are mortal and Socrates is a man". A significant area for research
was the generation of explanations from the knowledge base in natural English
rather than simply by showing the more formal but less intuitive rules.

As Expert Systems evolved many new techniques were incorporated into various
types of inference engines. Some of the most important of these were:
Truth Maintenance. Truth maintenance systems record the dependencies in a
knowledge-base so that when facts are altered dependent knowledge can be
altered accordingly. For example, if the system learns that Socrates is no longer
known to be a man it will revoke the assertion that Socrates is mortal.
Hypothetical Reasoning. In hypothetical reasoning, the knowledge base can be
divided up into many possible views, also known as worlds. This allows the inference
engine to explore multiple possibilities in parallel. In this simple example, the
system may want to explore the consequences of both assertions, what will be
true if Socrates is a Man and what will be true if he is not?
Fuzzy Logic. One of the first extensions of simply using rules to represent
knowledge was to associate a probability with each rule. So, instead of asserting
that Socrates is mortal, the system asserts that Socrates may be mortal with some
probability value. Simple probabilities were extended in some systems with
sophisticated mechanisms for uncertain reasoning and combination of
probabilities.
Ontology Classification. With the addition of object classes to the knowledge
base a new type of reasoning was possible. Rather than reason simply about the
values of the objects the system could also reason about the structure of the
objects as well. In this simple example Man can represent an object class and
R1 can be redefined as a rule that defines the class of all men. These types of
special purpose inference engines are known as classifiers. Although they were
not widely used in expert systems, classifiers are very powerful for unstructured
volatile domains and are a key technology for the Internet and the
emerging Semantic Web.
Advantages
The goal of knowledge-based systems is to make the critical information required
for the system to work explicit rather than implicit.
In a traditional computer
program the logic is embedded in code that can typically only be reviewed by an
IT specialist. With an expert system the goal was to specify the rules in a format
that was intuitive and easily understood, reviewed, and even edited by domain
experts rather than IT experts. The benefits of this explicit knowledge
representation were rapid development and ease of maintenance.
Ease of maintenance is the most obvious benefit. This was achieved in two ways.
First, by removing the need to write conventional code many of the normal
problems that can be caused by even small changes to a system could be avoided
with expert systems. Essentially, the logical flow of the program (at least at the
highest level) was simply a given for the system: simply invoke the inference
engine. This also was a reason for the second benefit: rapid prototyping. With an
expert system shell it was possible to enter a few rules and have a prototype
developed in days rather than the months or years typically associated with complex
IT projects.
A claim for expert system shells that was often made was that they removed the
need for trained programmers and that experts could develop systems themselves.
In reality this was seldom if ever true. While the rules for an expert system were
more comprehensible than typical computer code they still had a formal syntax
where a misplaced comma or other character could cause havoc as with any other
computer language. In addition, as expert systems moved from prototypes in the lab
to deployment in the business world, issues of integration and maintenance became
far more critical. Inevitably, demands to integrate with and take advantage of large
legacy databases and systems arose. Accomplishing this integration required the
same skills as any other type of system.

Disadvantages
The most common disadvantage cited for expert systems in the academic literature
is the knowledge engineering problem. Obtaining the time of domain experts for
any software application is always difficult but for expert systems it was especially
difficult because the experts were by definition highly valued and in constant
demand by the organization. As a result of this problem a great deal of research
effort in the later years of expert systems was focused on tools for knowledge
acquisition, to help automate the process of designing, debugging, and maintaining
rules defined by experts. However, when looking at the life-cycle of expert
systems in actual use other problems seem at least as critical as knowledge
acquisition. These problems with expert systems were essentially the same
problems as any other large system: integration, access to large databases, and
performance.
Performance was especially problematic for early expert systems as they were built
using tools that featured interpreted rather than compiled code such as Lisp.
Interpreting provides an extremely powerful development environment but at the
cost that it is virtually impossible to obtain the levels of efficiency of the fastest
compiled languages of the time such as C. System and database integration were
difficult for early expert systems because the tools were mostly in
languages and platforms that were neither familiar to nor welcome in most corporate
IT environments: programming languages such as Lisp and Prolog, and hardware
platforms such as Lisp machines and personal computers. As a result, a great deal
of effort in the later stages of expert system tool development was focused on
integration with legacy environments such as COBOL, integration with large
database systems, and porting to more standard platforms. These issues were
resolved primarily by the client-server paradigm shift as PCs were gradually
accepted in the IT world as a legitimate platform for serious business system
development and as affordable minicomputer servers provided the processing
power needed for AI applications.


Fuzzy logic

Fuzzy logic is not as fuzzy as you might think and has been working quietly
behind the scenes for more than 20 years in more places than most admit. Fuzzy
logic is a rule-based system that can rely on the practical experience of an operator,
particularly useful to capture experienced operator knowledge. Fuzzy logic is a
form of artificial intelligence software; therefore, it would be considered a subset
of AI. Since it is performing a form of decision making, it can be loosely included
as a member of the AI software toolkit. Here's what you need to know to consider
using fuzzy logic to help solve your next application. It's not as fuzzy as you might
think.

Fuzzy logic has been around since the mid-1960s; however, it was not until the
1970s that a practical application was demonstrated. Since that time the Japanese
have traditionally been the largest producer of fuzzy logic applications. Fuzzy
logic has appeared in cameras, washing machines, and even in stock trading
applications. In the last decade the United States has started to catch on to the use
of fuzzy logic. There are many applications that use fuzzy logic, but fail to tell us
of its use. Probably the biggest reason is that the term fuzzy logic may have a
negative connotation.

Fuzzy logic can be applied to non-engineering applications as illustrated in the
stock trading application. It has also been used in medical diagnosis systems and in
handwriting recognition applications. In fact a fuzzy logic system can be applied to
almost any type of system that has inputs and outputs.

Fuzzy logic systems are well suited to nonlinear systems and systems that have
multiple inputs and multiple outputs. Any reasonable number of inputs and outputs
can be accommodated. Fuzzy logic also works well when the system cannot be
modeled easily by conventional means.

Many engineers are afraid to dive into fuzzy logic due to a lack of understanding.
Fuzzy logic does not have to be hard to understand, even though the math behind it
can be intimidating, especially to those of us who have not been in a math class for
many years.

Binary logic is either 1 or 0. Fuzzy logic is a continuum of values between 0 and
1. This may also be thought of as 0% to 100%. An example is the variable
YOUNG. We may say that age 5 is 100% YOUNG, 18 is 50% YOUNG, and 30 is
0% YOUNG. In the binary world everything below 18 would be 100% YOUNG,
and everything above would be 0% YOUNG.
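
A membership function that roughly reproduces these figures can be written in a few lines of Python; the linear shape and the breakpoints at ages 5 and 30 are just one plausible choice:

def young(age):
    # Degree of membership in the fuzzy set YOUNG, on a scale of 0.0 to 1.0.
    if age <= 5:
        return 1.0            # age 5 and below: 100% YOUNG
    if age >= 30:
        return 0.0            # age 30 and above: 0% YOUNG
    return (30 - age) / 25.0  # linear fall-off in between (age 18 gives roughly 0.5)

for age in (5, 18, 30):
    print(age, young(age))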

The design of a fuzzy logic system starts with a set of membership functions for
each input and a set for each output. A set of rules is then applied to the
membership functions to yield a crisp output value.
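
As a sketch of that design process, the toy Python controller below maps a temperature input to a crisp fan-speed output; the membership functions, the two rules, and the weighted-average defuzzification are all illustrative assumptions rather than a prescribed method:

def cold(temp_c):
    # Membership in COLD: 1 at or below 15 degC, falling linearly to 0 at 30 degC.
    return max(0.0, min(1.0, (30.0 - temp_c) / 15.0))

def hot(temp_c):
    # Membership in HOT: 0 at or below 20 degC, rising linearly to 1 at 35 degC.
    return max(0.0, min(1.0, (temp_c - 20.0) / 15.0))

def fan_speed(temp_c):
    # Two rules: IF cold THEN speed 10%, IF hot THEN speed 90%.
    # The crisp output is the membership-weighted average of the rule outputs.
    rules = [(cold(temp_c), 10.0), (hot(temp_c), 90.0)]
    total_weight = sum(weight for weight, _ in rules)
    if total_weight == 0:
        return 50.0  # defensive fallback if no rule fires at all
    return sum(weight * output for weight, output in rules) / total_weight

for t in (10, 25, 33):
    print(t, "degC ->", round(fan_speed(t), 1), "% fan speed")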

Fuzzy logic is a form of many-valued logic; it deals with reasoning that is
approximate rather than fixed and exact. Compared to traditional binary sets
(where variables may take on true or false values), fuzzy logic variables may have
a truth value that ranges in degree between 0 and 1. Fuzzy logic has been extended
to handle the concept of partial truth, where the truth value may range between
completely true and completely false. Furthermore, when linguistic variables are
used, these degrees may be managed by specific functions.
The term "fuzzy logic" was introduced with the 1965 proposal of fuzzy set
theory by Lotfi A. Zadeh. Fuzzy logic has been applied to many fields,
from control theory to artificial intelligence. Fuzzy logics had, however, been
studied since the 1920s, as infinite-valued logics, notably
by Łukasiewicz and Tarski.

Overview
Classical logic only permits propositions having a value of truth or falsity. The
notion that 1+1=2 is an absolute, immutable, mathematical truth. However,
there exist certain propositions with variable answers, such as asking various
people to identify a color. The notion of truth doesn't fall by the wayside, but rather
a means of representing and reasoning over partial knowledge is afforded, by
aggregating all possible outcomes into a dimensional spectrum.
Both degrees of truth and probabilities range between 0 and 1 and hence may
seem similar at first. For example, let a 100 ml glass contain 30 ml of water. Then
we may consider two concepts: empty and full. The meaning of each of them can
be represented by a certain fuzzy set. Then one might define the glass as being
0.7 empty and 0.3 full. Note that the concept of emptiness would be subjective and
thus would depend on the observer or designer. Another designer might equally
well design a set membership function where the glass would be considered full for
all values down to 50 ml. It is essential to realize that fuzzy logic uses truth degrees
as a mathematical model of the vagueness phenomenon while probability is a
mathematical model of ignorance.
