Artificial Intelligence & Learning Computers: R.Kartheek B.Tech Iii (I.T) V.R.S & Y.R.N Engineering College, Chirala
Artificial Intelligence & Learning Computers

By R. Kartheek
B.Tech III (I.T)
Artificial Intelligence
Starting from modest but over-ambitious efforts in the late 1950s, AI has grown
through its share of joys, disappointments and self-realizations. AI is the science
of creating machines that can think like humans and behave rationally; its ultimate
goal is to automate every machine.
The Turing Test, proposed by Alan Turing (1950), was designed to provide a
satisfactory definition of intelligence. Turing defined intelligent behavior as the
ability to achieve human-level performance in all cognitive tasks, sufficient to fool
an interrogator. Roughly speaking, the test he proposed is that the computer should
be interrogated by a human via a teletype, and passes the test if the interrogator
cannot tell if there is a computer or a human at the other end. His theorem (the
Church-Turing thesis) states that “Any effective procedure (or algorithm) can be
implemented through a Turing machine. “ Turing machines are abstract
mathematical entities that are composed of a tape, a read-write head, and a finite-
state machine. The head can either read or write symbols onto the tape, basically
an input-output device. The head can change its position, by either moving left or
right. The finite state machine is a memory/central processor that keeps track of
which of finitely many states it is currently in. By knowing which state it is currently
in, the finite state machine can determine which state to change to next, what
symbol to write onto the tape, and which direction the head should move.
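The components just described can be sketched as a small simulator. This is an illustrative toy, not Turing's formulation: the machine, its state names, and its transition table are invented for the example (it inverts a binary string and halts).

```python
# A minimal Turing machine simulator, illustrating the components described
# above: a tape, a read-write head, and a finite-state machine. The example
# machine below is invented for illustration: it inverts a binary string
# (0 -> 1, 1 -> 0) and then halts.

def run_turing_machine(transitions, tape, state="start", blank="_"):
    """Run until the machine enters the 'halt' state.

    transitions maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), +1 (right), or 0 (stay).
    """
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = write
        head += move
    return "".join(tape)

# Transition table for the bit-inverting machine.
invert = {
    ("start", "0"): ("start", "1", +1),  # write 1, move right
    ("start", "1"): ("start", "0", +1),  # write 0, move right
    ("start", "_"): ("halt", "_", 0),    # past the input: stop
}

print(run_turing_machine(invert, "1011"))  # prints 0100
```

Any algorithm, by the Church-Turing thesis, can in principle be expressed as such a transition table, though rarely this conveniently.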
No AI system can be called intelligent unless it learns and reasons like a human.
Reasoning derives new information from what is already known.
One of the major challenges in this field is that a word can have more than one
meaning, which can result in ambiguity.
E.g., consider the following sentence:
"Spirit is strong but flesh is weak."
When an AI system was made to translate this sentence into Russian and then back
into English, the following output was observed:
"Wine is strong but meat is rotten."
Thus we come across two main obstacles. First, it is not easy to take informal
knowledge and state it in the formal terms required by logical notation, particularly
when the knowledge is less than 100% certain. Second, there is a big difference
between being able to solve a problem “in principle” and doing so in practice.
Even problems with just a few dozen facts can exhaust the computational resources
of any computer unless it has some guidance as to which reasoning steps to try
first.
A problem may or may not have a solution, which is one reason debugging is among
the most challenging jobs programmers face. As Turing's halting problem shows, it
is impossible to create a program that can predict, for every given program,
whether it will ultimately terminate or not.
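The impossibility claim above is the halting problem. A sketch of the classic contradiction, with a hypothetical `halts` oracle that cannot actually be written (this is a proof sketch, not runnable code):

```python
# Sketch of the proof that a universal halting-checker cannot exist.
# `halts` is hypothetical: suppose, for contradiction, it could decide
# whether program(arg) eventually terminates.

def halts(program, arg):
    ...  # assumed perfect oracle; no such function can actually be written

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:
            pass  # oracle said "halts", so loop forever
    # oracle said "loops forever", so halt immediately

# Does paradox(paradox) halt? If halts says yes, it loops forever;
# if halts says no, it halts at once. Either answer is wrong, so no
# correct `halts` can exist.
```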
Early development in this area consisted of algorithms built on foundational
vocabularies and dictionary entries, and the limitations of these algorithms were
soon found. Later, formal systems were developed, containing axioms, rules and
theorems, together with an orderly form of representation.
For example, chess is a formal system. We use rules in our everyday lives, and
these rules accompany facts; rules are used to construct an efficient expert
system having artificial intelligence. Important components of such a system are
backward chaining, i.e. goal-driven reasoning that starts from the conclusion to
be proved and works backward, linking it to the rules and facts that support it;
explanation generation, i.e. producing an explanation of whatever the system has
understood; and the inference engine, i.e. the component that draws inferences
and replies to the problem.
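As a sketch of how backward chaining and an inference engine fit together, the following toy engine proves a goal by finding a rule whose conclusion matches it and recursively proving that rule's premises. The rule base and fact names are invented for illustration.

```python
# A tiny backward-chaining inference engine: to prove a goal, either find it
# among the known facts, or find a rule whose conclusion matches the goal and
# recursively prove all of that rule's premises.

RULES = [
    (["has_feathers", "lays_eggs"], "is_bird"),   # premises -> conclusion
    (["is_bird", "can_fly"], "can_migrate"),
]
FACTS = {"has_feathers", "lays_eggs", "can_fly"}

def prove(goal, facts=FACTS, rules=RULES):
    if goal in facts:                        # a known fact proves itself
        return True
    for premises, conclusion in rules:       # otherwise reason backward
        if conclusion == goal and all(prove(p, facts, rules) for p in premises):
            return True
    return False

print(prove("can_migrate"))  # True: is_bird follows first, then can_migrate
```

A real expert system would also record which rules fired, which is exactly what explanation generation reports back to the user.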
Reasoning
Reasoning is the use of stored information to answer questions and to draw new
conclusions from observations.
Learning
The most important requirement for an AI system is that it should learn from its
mistakes. The best way of teaching an AI system is by training and testing.
Training involves teaching the basic principles involved in doing a job. Testing
is the real test of the knowledge acquired by the system, wherein we give it
certain examples and test its intelligence. Examples can be positive or negative;
negative examples are "near misses" of the positive examples.
Natural Language Processing

The key feature of a machine having artificial intelligence is its ability to
communicate and interact with a human, and the only means for such communication
and interaction is language. The language used by the machine should be
understood by humans; an example of such a language is English.

Hurdles

There are various hurdles in the field of NLP, especially in speech processing,
which increase the complexity of the system. No two people on earth have exactly
the same accent and pronunciation, and this difference in style of communication
results in ambiguity.
Consider a mobile robot. There are two things a robot has to think about and
perform while moving from one place to another.
The first major problem is to find a collision-free path amidst obstacles, from
the robot's starting position to its destination. To avoid collisions, two things
can be done: 1) reduce the object to be moved to a point, and 2) grow the
obstacles by some extra space. This is the Minkowski method of path planning.
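The obstacle-growing idea can be sketched as follows, assuming a circular robot of radius r and axis-aligned rectangular obstacles; expanding a rectangle uniformly by r is a conservative approximation of the true Minkowski sum (which would have rounded corners).

```python
# Sketch of the "grow the obstacles, shrink the robot to a point" idea.
# Obstacles are axis-aligned rectangles given as (xmin, ymin, xmax, ymax);
# the robot is a disc of radius r.

def inflate(obstacle, r):
    """Expand a rectangle outward by the robot's radius r (conservative
    approximation of the Minkowski sum, which has rounded corners)."""
    xmin, ymin, xmax, ymax = obstacle
    return (xmin - r, ymin - r, xmax + r, ymax + r)

def point_blocked(p, obstacles, r):
    """A path point p is blocked iff it lies inside any inflated obstacle."""
    x, y = p
    for obs in obstacles:
        xmin, ymin, xmax, ymax = inflate(obs, r)
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return True
    return False

obstacles = [(2, 2, 4, 4)]
print(point_blocked((1.5, 3.0), obstacles, r=1.0))  # True: within 1.0 of the box
print(point_blocked((0.5, 0.5), obstacles, r=1.0))  # False: safely clear
```

With the obstacles inflated, the planner can treat the robot as a single point, which is exactly the simplification the text describes.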
Recognizing an object and matching it against the contents of an image library is
another method. It includes correspondence matching and depth understanding, edge
detection using the idea of zero crossings, and stereo matching for distance
estimation. For analysis, it also treats the robot as a point body.
The second major problem of path planning is to find the shortest path. The robot
has to calculate the Euclidean distance between the starting and ending points,
and then form algorithms for computing visibility graphs.
These algorithms have certain rules associated with them:
1. Join a smaller number of vertices, to reduce complexity.
2. Divide each object into triangles.
3. Put a node in each triangle and join all of them.
4. Discard unnecessary areas, because they might not contribute to the
shortest path.
5. Compute the minimum-link path and proceed.
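Once a visibility graph is available, the shortest path is found by an ordinary graph search with Euclidean edge weights. Below is a sketch using Dijkstra's algorithm over a small hand-made graph; the node coordinates and visibility edges are invented for illustration.

```python
# Shortest path over a visibility graph: nodes are the start, the goal, and
# obstacle vertices; edges join mutually visible nodes and are weighted by
# Euclidean distance. Dijkstra's algorithm then finds the shortest route.

import heapq
import math

def dijkstra(nodes, edges, start, goal):
    """nodes: name -> (x, y); edges: list of (name, name) visibility pairs."""
    graph = {n: [] for n in nodes}
    for u, v in edges:
        w = math.dist(nodes[u], nodes[v])   # Euclidean edge weight
        graph[u].append((v, w))
        graph[v].append((u, w))
    dist = {n: math.inf for n in nodes}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            return d
        if d > dist[u]:
            continue                         # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return math.inf

# A toy graph: two candidate routes from S to G around an obstacle.
nodes = {"S": (0, 0), "A": (3, 4), "B": (6, 0), "G": (6, 8)}
edges = [("S", "A"), ("S", "B"), ("A", "G"), ("B", "G")]
print(dijkstra(nodes, edges, "S", "G"))  # 10.0: S -> A -> G (5 + 5)
```

The hard part in practice is building the edge list, i.e. testing which pairs of vertices can see each other past the obstacles; the search itself is standard.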
Even so, the problem of deciding the shortest path remains. The robot might be a
bulky, huge object, and so cannot really be treated as a point. Secondly, a robot
is a mechanical body that cannot turn instantly; it has to follow a
wait-walk-wait-turn-wait-walk procedure, which is very time-consuming and so not
feasible. Therefore the shortest path should also have the minimum number of
turns associated with it.
For path planning, the robot takes a snapshot of the area it is going to cover.
This snapshot is processed in the ways mentioned above, and then the robot moves.
But the view changes with every step taken, so the robot would have to redo the
calculation at every step, which is very time-consuming and tedious. Experts
therefore decided to have the robot take a snapshot of only the viewable distance
and decide the path from that. This again is a problem, because the viewing
device has a certain distance limitation. These experts then came to the
conclusion that the robot should be given a fixed parameter, i.e. take a snapshot
of a fixed distance, say 10 meters, analyze it, and decide the shortest path.
Neural Networks

A single neuron (perceptron) can be trained to perform a particular logical
operation such as AND: it computes a weighted sum of its inputs and fires
(outputs 1) when that sum reaches a threshold θ. The Perceptron Training
Algorithm (PTA) absorbs the threshold as a weight on a constant -1 input, so the
firing condition becomes

    Σ Wi Xi + (-1) θ ≥ 0

Whatever the initial choice of weights, the PTA will eventually converge to
correct weight values, provided the function being trained is linearly separable.

The truth table for AND is:

    A  B  Y
    0  0  0
    0  1  0
    1  0  0
    1  1  1

so the weights must satisfy

    0·W1 + 0·W2 < θ
    0·W1 + 1·W2 < θ
    1·W1 + 0·W2 < θ
    1·W1 + 1·W2 ≥ θ

These constraints can be met (e.g. W1 = W2 = 1, θ = 1.5), so AND is linearly
separable and the PTA converges. For XOR the corresponding constraints are

    0·W1 + 0·W2 < θ
    0·W1 + 1·W2 ≥ θ
    1·W1 + 0·W2 ≥ θ
    1·W1 + 1·W2 < θ

which are contradictory: the second and third together give W1 + W2 ≥ 2θ, while
the first and fourth give θ > 0 and W1 + W2 < θ. XOR is therefore not linearly
separable, and the PTA cannot converge on it.
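The Perceptron Training Algorithm itself can be sketched in a few lines, with the threshold absorbed as a weight on a constant -1 input as in the inequality above; the learning rate and epoch limit are arbitrary illustrative choices.

```python
# Perceptron Training Algorithm for AND. The threshold theta is absorbed as
# a third weight attached to a constant -1 input, so the firing rule is
# w1*x1 + w2*x2 + w3*(-1) >= 0. Converges because AND is linearly separable;
# run on XOR's truth table, the same loop would never converge.

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def train_perceptron(data, lr=0.1, epochs=100):
    w = [0.0, 0.0, 0.0]                      # w1, w2, and absorbed threshold
    for _ in range(epochs):
        converged = True
        for (x1, x2), target in data:
            x = (x1, x2, -1)                 # constant -1 absorbs theta
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0
            if y != target:
                converged = False            # misclassified: nudge weights
                w = [wi + lr * (target - y) * xi for wi, xi in zip(w, x)]
        if converged:
            break                            # a full clean pass: done
    return w

w = train_perceptron(AND_DATA)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 - w[2] >= 0 else 0
print([predict(a, b) for (a, b), _ in AND_DATA])  # [0, 0, 0, 1]
```

Each misclassification moves the separating line toward the offending point, and the convergence theorem guarantees only finitely many such corrections are needed when a separating line exists.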
Conclusion
AI, combined with various techniques in neural networks, fuzzy logic and natural
language processing, will be able to revolutionize the future of machines,
transforming the mechanical devices that help humans into intelligent, rational
robots having emotions.
Expert systems like MYCIN can help doctors diagnose patients. AI systems can also
help us make airline enquiries and bookings using speech rather than menus.
Unmanned cars moving about in the city could become a reality with further
advancements in AI systems. Also, with the advent of VLSI techniques, FPGA chips
are being used to implement neural networks.
The future of AI in making intelligent machines looks incredible, but some kind
of spiritual understanding will have to be inculcated into the machines so that
their decision making is governed by principles and boundaries.