MIT 6.034 Fall 2010 Final Exam
Circle your TA and principal recitation instructor so that we can more easily identify
with whom you have studied:
Indicate the approximate percent of the lectures, mega recitations, recitations, and
tutorials you have attended so that we can better gauge their correlation with quiz
and final performance, and with attendance after the OCW videos go online. Your
answers have no effect on your grade.
There are 48 pages in this final examination, including this one. In addition, tear-
off sheets are provided at the end with duplicate drawings and data. As always,
open book, open notes, open just about everything.
Quiz 1, Problem 1, Rule Systems (50 points)
Kenny has designed two suits for the Soldier Design Competition, and he and Martin like to grapple in
the prototypes on Kresge Oval.
• Kendra insists the suits qualify as “deadly weapons” and Kenny should give them to her for
safekeeping.
• Kenny and Martin insist that they are examples of an “enhanced prosthesis” and that they
should be able to keep them.
The TAs decide to use Rule-Based Systems to resolve their dispute.
Rules:
P0 IF (AND ('(?x) is a Crazy Physicist',
'(?x) is an Engineer')
THEN ('(?x) builds a Weaponized Suit') )
Assertions:
A0: (Kenny is a Roboticist)
Part A: Backward Chaining (30 points)
Make the following assumptions about backward chaining:
• The backward chainer tries to find a matching assertion in the list of assertions. If no matching
assertion is found, the backward chainer tries to find a rule with a matching consequent. In case
none are found, then the backward chainer assumes the hypothesis is false.
• The backward chainer never alters the list of assertions; it never derives the same result twice.
• Rules are tried in the order they appear.
• Antecedents are tried in the order they appear.
Write all the hypotheses the backward chainer looks up in the database, in the order in which they
are looked up. The table has more lines than you need. We recommend that you use the space
provided on the next page to draw the goal tree that would be created by backward chaining from this
hypothesis. The goal tree will help us assign partial credit in the event you have mistakes on the
list.
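(For intuition only, here is a minimal Python sketch of a backward chainer that follows the assumptions above. The tuple-based statement encoding, the match/substitute helpers, and the example rule are illustrative assumptions rather than the exam's or the lab's actual code, and the sketch assumes every hypothesis is fully instantiated by the time it is looked up.)

    # Statements are tuples of words; tokens beginning with '?' are variables.
    def match(pattern, statement):
        """Return a bindings dict if the pattern unifies with a concrete statement, else None."""
        if len(pattern) != len(statement):
            return None
        bindings = {}
        for p, s in zip(pattern, statement):
            if p.startswith('?'):
                if bindings.setdefault(p, s) != s:
                    return None
            elif p != s:
                return None
        return bindings

    def substitute(pattern, bindings):
        """Fill in the variables of a pattern using a bindings dict."""
        return tuple(bindings.get(tok, tok) for tok in pattern)

    def backchain(hypothesis, assertions, rules, lookups):
        """Try to establish hypothesis, recording every hypothesis looked up, in order."""
        lookups.append(hypothesis)
        if hypothesis in assertions:                  # matching assertion found
            return True
        for antecedents, consequent in rules:         # rules tried in order
            bindings = match(consequent, hypothesis)  # rule with a matching consequent?
            if bindings is None:
                continue
            # Antecedents tried in order; all must succeed (AND), and the
            # rule is abandoned as soon as one antecedent fails.
            if all(backchain(substitute(a, bindings), assertions, rules, lookups)
                   for a in antecedents):
                return True
        return False                                  # nothing matched: assume false

    # Example shaped like rule P0 and assertion A0:
    rules = [([('?x', 'is', 'a', 'Crazy', 'Physicist'), ('?x', 'is', 'an', 'Engineer')],
              ('?x', 'builds', 'a', 'Weaponized', 'Suit'))]
    assertions = {('Kenny', 'is', 'a', 'Roboticist')}
    looked_up = []
    backchain(('Kenny', 'builds', 'a', 'Weaponized', 'Suit'), assertions, rules, looked_up)
    print(looked_up)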
Draw Goal Tree Here for Partial Credit
Part B: Forward Chaining (20 points)
Let's say, instead, our assertions list looked like this:
B1 (4 points)
CIRCLE any and all rules that match in the first iteration of forward chaining
P0 P1 P2 P3 P4
B2 (4 points)
What assertion(s) are added or deleted from the database, as a consequence of this iteration?
B3 (4 points)
CIRCLE any and all rules that match in the second iteration of forward chaining
P0 P1 P2 P3 P4
B4 (4 points)
What assertion(s) are added or deleted from the database, as a consequence of this iteration?
B5 (4 points)
You take the same assertions as at the beginning of problem B, above, and re-order them:
Now, you start over, and run forward chaining from the beginning, until no new assertions are added to
or deleted from the database. Is Gleb's laptop a Deadly Weapon?
Quiz 1, Problem 2, Search (50 points)
As you get close to graduating MIT, you decide to do some career planning. You create a graph of your
options where the start node is M = MIT and your goal node is R = Retire, with a bunch of options in
between. Your graph includes edge distances that represent, roughly, the “cost of transition” between
these careers (don't think too hard about what this means). You also have heuristic node-to-goal
distances which represent your preconceptions about how many more years you have to work until you
retire. For example, you think it will take 25 years to go from MIT (M) to retirement (R), 30 years
from Grad School (B), but only 2 years from Entrepreneur (E).
A1 (3 points)
Assume you want to retire after doing the least number of different jobs. Of all the basic search
algorithms you learned about (that is, excluding branch and bound and A*), which one should you apply
to the graph in order to find a path, with the least search effort, that has the minimum
number of nodes from M to R?
Tree:
Path:
A3 Beam Search with w=2 (15 points)
Now you are interested in finding a path and the associated distance. Try a Beam Search with a width
w=2, with an extended list. As before, you are looking for a path from M to R. Use the
“preconceptions” heuristic distances indicated in parentheses at each node.
Tree:
Path, if any:
Part B: Advanced Search (25 points)
B1 Branch and Bound with Extended List (15 points)
Use Branch and Bound search with an Extended List to find a path from M to R, as well as the
extended node list. Use this space to draw the corresponding tree and show your work.
Tree:
Path:
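(If you want to sanity-check a hand simulation, here is a minimal Python sketch of branch and bound with an extended list on a generic weighted graph. The dictionary graph format and the toy graph at the end are illustrative assumptions, not the exam's career graph.)

    import heapq

    def branch_and_bound_extended(graph, start, goal):
        """Always extend the cheapest partial path; never extend a node twice.
        graph maps each node to a dict of {neighbor: edge_distance}."""
        queue = [(0, [start])]               # (path length so far, path)
        extended = set()
        while queue:
            cost, path = heapq.heappop(queue)
            node = path[-1]
            if node == goal:                 # cheapest path to the goal found
                return cost, path
            if node in extended:             # already on the extended list: skip
                continue
            extended.add(node)
            for neighbor, distance in graph[node].items():
                if neighbor not in extended:
                    heapq.heappush(queue, (cost + distance, path + [neighbor]))
        return None

    # Hypothetical toy graph (NOT the exam's graph):
    toy = {'M': {'B': 3, 'E': 5}, 'B': {'R': 10}, 'E': {'R': 4}, 'R': {}}
    print(branch_and_bound_extended(toy, 'M', 'R'))   # -> (9, ['M', 'E', 'R'])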
B2 Thinking about Search (9 points)
Concisely explain why Branch and Bound with Extended List yields a different result than Beam
Search in this problem.
What can we say about the path found by Branch and Bound with Extended List? (We're looking for a
fairly strong statement here.)
Is there an algorithm that guarantees the same answer as Branch and Bound for the graph in this
problem, but can find the answer with fewer extended paths? If yes, what is that algorithm? If no,
explain why not.
B3 Permissible Heuristics (6 points)
Suppose you are asked to find the shortest path from H to G in the graphs below. For both of the
graphs explain why the heuristic values shown are not valid for A*. Note the differences in the graphs
at nodes F and E.
Reason(s):
Reason(s):
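(Reminder of the standard conditions, not specific to these graphs: a heuristic $h$ is admissible if it never overestimates the remaining distance to the goal, and consistent if it changes by no more than the edge length along any edge:

$h(n) \le d(n, G)$ for every node $n$ (admissible), and $|h(a) - h(b)| \le d(a, b)$ for every edge $(a, b)$ (consistent),

where $d(\cdot,\cdot)$ is the true shortest distance. A heuristic that violates these conditions can lead A*, especially A* with an extended list, to return a path that is not the shortest.)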
Quiz 2, Problem 1, Games (50 points)
Part A: Basics (15 points)
A1 Plain Old Minimax (7 points)
Perform Minimax on this tree. Write the minimax value associated with each node in the
box below, next to its corresponding node letter.
A= B= C= D= E= F= G=
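(For reference, a minimal recursive minimax in Python over a game tree written as nested lists; the tree format and the toy tree are illustrative assumptions, not the exam's figure.)

    def minimax(node, maximizing=True):
        """Return the minimax value of a node. Leaves are numbers; internal
        nodes are lists of children. The maximizer moves at the root."""
        if isinstance(node, (int, float)):                 # static value at a leaf
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # Hypothetical tiny tree: a MAX root over two MIN nodes.
    print(minimax([[3, 5], [2, 9]]))   # -> 3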
Part B: Alpha Beta (35 points)
Indicate in Next Move which of B or C you would go to from A and in Moving Towards which node in
the bottom row you are heading toward.
# of evaluations:______ List:______________________________________
B2: Preset Alpha-Beta (15 points)
Perform alpha-beta search, using initial values of alpha = 5 and beta = 8.
Indicate in Next Move which of B or C you would go to from A and in Moving Towards which node in
the bottom row you are heading toward.
# of evaluations:____ List:________________________________________________
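(To check pruning and evaluation counts by hand, here is a minimal alpha-beta sketch in Python with a static-evaluation counter. The nested-list tree format and the toy tree are illustrative assumptions; for a preset window like B2, pass alpha=5 and beta=8 at the root instead of minus and plus infinity.)

    def alphabeta(node, alpha, beta, maximizing, evaluated):
        """Alpha-beta over a nested-list tree; records every leaf it statically
        evaluates in `evaluated` so the prunes can be inspected."""
        if isinstance(node, (int, float)):
            evaluated.append(node)
            return node
        if maximizing:
            value = float('-inf')
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False, evaluated))
                alpha = max(alpha, value)
                if alpha >= beta:            # beta cutoff: prune remaining children
                    break
            return value
        else:
            value = float('inf')
            for child in node:
                value = min(value, alphabeta(child, alpha, beta, True, evaluated))
                beta = min(beta, value)
                if alpha >= beta:            # alpha cutoff
                    break
            return value

    leaves_seen = []
    print(alphabeta([[3, 5], [2, 9]], float('-inf'), float('inf'), True, leaves_seen))
    print(len(leaves_seen), leaves_seen)     # number and list of static evaluations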
Quiz 2, Problem 2, Constraint Satisfaction (50 points)
After taking 6.034, you decide to offer your services to incoming freshmen to help them set up their
course schedules. One particular freshman comes to you with four classes as well as an availability
schedule (grayed-out boxes represent reserved times).
[Availability grid: Time rows (10, 11, 12, ...) and MWF / TR columns; grayed-out boxes mark reserved times.]
For easier bookkeeping you adopt the following naming convention (L = Lecture, R = Recitation, # =
course number):
Part A: Picking a Representation (8 points)
In order to fill in this schedule, you decide to set it up as a CSP using meeting times as variables and
courses as the values in the domains. After filling in the domain table, this is what you see:
Variable Domain
10M L8, L21
11M L3, L8, L21
12M L3, L18, L8
1M
2M
3M L8
10T R3, L21
11T
12T R3
1T
2T R18
3T
What is wrong with the way that this problem is set up and why?
Part B: Applying Constraints (42 points)
You decide to switch to a new representation that uses the courses as variables and the times as values.
B1 (5 points)
The initial domains are given below. Cross out the values that are incompatible with Constraint (3).
Variable Domain
L3 11M 12M
L18 12M 1M
R18 11T 1T 2T
B2 (16 Points)
Run the DFS with forward checking only on your defined variables and the reduced domains you
found in Part B1 by applying Constraint (3).
L8
L3
R3
L21 (MWF)
L21 (TR)
L18
R18
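(To double-check a hand simulation, here is a minimal sketch of depth-first assignment with forward checking in Python. The variables, domains, and constraint function below are made-up placeholders, not the freshman's actual schedule.)

    def forward_check_dfs(variables, domains, consistent, assignment=None):
        """Assign variables in the given order. After each assignment, forward
        checking removes values inconsistent with it from the domains of the
        unassigned variables; backtrack if any domain becomes empty."""
        assignment = assignment or {}
        if len(assignment) == len(variables):
            return assignment
        var = variables[len(assignment)]                  # next variable, in order
        for value in domains[var]:
            assignment[var] = value
            reduced = {v: [x for x in domains[v] if consistent(var, value, v, x)]
                       for v in variables if v not in assignment}
            if all(reduced[v] for v in reduced):          # no empty domains
                result = forward_check_dfs(variables, {**domains, **reduced},
                                           consistent, assignment)
                if result is not None:
                    return result
            del assignment[var]                           # backtrack
        return None

    # Hypothetical example: two "courses" that may not share a time slot.
    variables = ['L8', 'L3']
    domains = {'L8': ['10M', '11M'], 'L3': ['10M', '12M']}
    no_clash = lambda v1, t1, v2, t2: t1 != t2
    print(forward_check_dfs(variables, domains, no_clash))   # -> {'L8': '10M', 'L3': '12M'}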
B3 (5 Points)
How many times did the algorithm need to backtrack?
B4 (10 Points)
It occurs to you that you may be able to accelerate the process of finding a solution if you were to
perform domain reduction with propagation through singletons before running the DFS. Fill in your
updated domain table with the results of your computations.
Variable Domain
L8
L3
R3
L21 (MWF)
L21 (TR)
L18
R18
B5 (6 Points)
Run DFS with constraint checking only on your updated domain table:
L8
L3
R3
L21 (MWF)
L21 (TR)
L18
R18
Quiz 3, Problem 1
After receiving yet another “Dear sir or madam...” email, you decide to construct a spam filter.
A1 (10 points)
Draw the decision boundary for 1-nearest-neighbor on the above diagram of the given
training data. Use the center of the faces as the positions of the training data points.
A2 (8 points)
How will 1-nearest-neighbor classify an email with 200 words of which 9 are the word “sir”?
Plot this point on the graph as X. (2 pts)
How will 3-nearest-neighbors classify an email with 600 words of which 7 are the word “madam”?
Plot this point on the graph as Y. (3 pts)
How will 5-nearest-neighbors classify an email with 500 words of which 25 are the word “madam”?
Plot this point on the graph as Z. (3 pts)
A3 (7 points)
List which points yield errors when performing leave-one-out cross validation using 1-nearest
neighbor classification. (3 pts)
Part B: ID Trees (25 points)
Realizing nearest neighbors may not be the best tool for building a spam filter, you decide to try
another classifier you learned about in 6.034: Identification Trees.
B1 (8 points)
It appears that the over-use of the words “sir or madam” seems to be a strong hint at an email being
spam.
What are the minimum-disorder decision boundary and its associated disorder when you consider only the
“Sir or Madam” dimension? You can use fractions, real numbers, and logarithms in your answer.
Approximate boundary:
Associated Disorder:
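(Reminder of the disorder formulas used in 6.034: for a set containing $P$ positive and $N$ negative samples, with $T = P + N$,

$D(\text{set}) = -\frac{P}{T}\log_2\frac{P}{T} - \frac{N}{T}\log_2\frac{N}{T}$,

and the average disorder of a test is the weighted sum over its branches, $D(\text{test}) = \sum_b \frac{T_b}{T}\, D(\text{branch } b)$, where $T_b$ is the number of samples sent down branch $b$.)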
B2 (8 points)
Suppose we were given the following additional information about our training set:
One of those emails might be important, and you don't want to risk missing a single one, so you re-label
all Anne Hunter emails in the training data set to be good emails. You are to find the best axis-parallel
test given the revised labeling of good email and spam.
(NOTE: Use the unlabeled graphs in the tear-off sheets section if you need it to visualize the
modified data).
B2.1 Which emails does your new test misclassify on the modified data? (4 pts)
B2.2 What is the disorder of your new test on the modified training data set?
Leave your answer as a function of fractions, real numbers, and logarithms. (4pts)
B3 (9 points)
Soon, you decide that your life goal is no longer to be a tool for a Harvard or Sloanie startup, so you
decide that all emails from Anne Hunter should be marked as spam. (Again, use the
unlabeled graphs in the tear-off sheets if you need them).
Given the revised determination of what is good email and spam, draw the disorder minimizing
identification tree that represents your fully trained ID-tree spam filter. You may use any
horizontal and vertical classifiers in the dimensions of word count and “sir or madam” occurrences.
Ties should be broken in order of horizontal then vertical classifiers.
Quiz 3, Problem 2, Neural Nets (50 Points)
A1. (7 pts) Using what you've learned from doing lab 5, write out the equation for $\frac{dP}{dw_{CE}}$ expressed
in terms of $o_i$, $d$, and/or any weights and constants in the network. ($o_i$ refers to the output of any
neuron in the network.)
A2. (7 pts) Write out the equation for $\frac{do_E}{dw_{XA}}$ expressed in terms of $\frac{do_i}{dw_{XA}}$, $o_i$, and/or any weights
and constants in the network.
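(Reminder of the chain-rule pattern from lab 5, stated under the usual assumptions that the neurons are sigmoid units, $P = -\tfrac{1}{2}(d - o_E)^2$ with $o_E$ the final output, and $w_{CE}$ is the weight on the wire from neuron C into the output neuron E:

$\frac{dP}{dw_{CE}} = (d - o_E)\, o_E (1 - o_E)\, o_C$.

For weights deeper in the network, each sigmoid contributes a factor $o_i(1 - o_i)$ times the input carried on the wire in question, and the derivative of the output is built up recursively by the chain rule, which is why A2 asks for an answer in terms of $\frac{do_i}{dw_{XA}}$.)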
Part B: Letter Recognition (20 pts)
You propose to use a neural network to recognize characters from a scanned page. Letters are
represented as binary images on a 1x1 unit grid. Assume that scaling and rotation have already been handled.
Because you want to start with something easy, you start with the problem of recognizing a character as
either possibly a T or definitely not a T. During training, each training sample consists of a random
point, (x, y), along with the desired 0 or 1 value: 1 if the underlying pixel at (x, y) is part of a T; 0 if
the pixel is part of a T's background.
You want to find the most compact network that will correctly handle the T problem, so you decide to
analytically work out the minimal network that will correctly classify a character as possibly a T or
definitely not a T.
Assuming you use the above network architecture, fill in the 7 missing weights in the table
that are required to accurately classify all points in the image for T. Your weights must be integer
weights or integer constraints on weights! Show your work for partial credit:
Answers:
W_XA = 0        W_XC = ____
W_YA = ____     W_YC = 0
W_A  = ____     W_C  = ____
W_XB = ____     W_AD = ____
W_YB = ____     W_BD = 2
W_B  = 2        W_CD = 2
                W_D  = 3
Show work here for partial credit:
Part C: Expressive Power of Neural Nets (16 pts)
Circle all the functions that these networks are theoretically able to fully learn; if your answer is no
for a function, indicate the lowest possible error rate. List of Functions:
X AND Y X=Y X = 0.5 AND Y = 0.5 X-Shape
How about when all initial weights are set to the same constant k?
Quiz 4, Problem 1, SVMs (50 points)
You ultimately want to come up with a classifier of the form below. More formulas are provided in
tear off sheets.
$h(\mathbf{x}): \sum_i \alpha_i y_i K(\mathbf{x}, \mathbf{x}_i) + b \ge 0$
Your TA, Yuan, suggests that you use this kernel:
$K(\mathbf{u}, \mathbf{v}) = u_1 v_1 + u_2 v_2 + |u_2 - u_1|\,|v_2 - v_1|$
A1 (5 pts): Note that $\Phi(\mathbf{u})$ is a vector that is a transform of $\mathbf{u}$, and the kernel is the dot product
$\Phi(\mathbf{u}) \cdot \Phi(\mathbf{v})$. Determine the number of dimensions in $\Phi(\mathbf{u})$, and then write the components of
$\Phi(\mathbf{u})$ in terms of $\mathbf{u}$'s $x_1$ and $x_2$ components, $u_1$ and $u_2$. Explain why it is better to use
$\Phi(\mathbf{u})$ rather than $\mathbf{u}$.

$\Phi(\mathbf{u})$'s components:

Why better:
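(One way to sanity-check a candidate transform is to verify numerically that its dot product reproduces the kernel. The specific phi in this Python sketch is a candidate guess written in for illustration, not an answer supplied by the exam.)

    import random

    def K(u, v):
        """Yuan's kernel: u1*v1 + u2*v2 + |u2 - u1| * |v2 - v1|."""
        return u[0]*v[0] + u[1]*v[1] + abs(u[1] - u[0]) * abs(v[1] - v[0])

    def phi(u):
        """Candidate transform to check against K (illustrative guess)."""
        return (u[0], u[1], abs(u[1] - u[0]))

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    for _ in range(5):
        u = (random.uniform(-5, 5), random.uniform(-5, 5))
        v = (random.uniform(-5, 5), random.uniform(-5, 5))
        assert abs(K(u, v) - dot(phi(u), phi(v))) < 1e-9
    print("candidate transform matches the kernel on random points")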
A2 (10 pts): Fill in the unshaded portion in the following table of kernel values.
A3 (10 pts): Now write out the full constraint equations you'll need to solve this SVM problem.
They should be in terms of $\alpha_i$, $b$, and constants. (Hint: The four data points lie on their appropriate
gutters.)
Constraint Equations
A4 (5 pts): Instead of solving the system of equations for alphas, suppose the alphas were magically
given to you as:
$\mathbf{w} =$
A5 (5 pts): What is the equation of the optimal SVM decision boundary, using the answers and values
from A4?
$h(\mathbf{x}) =$
A6 (5 pts): What is the width of the road defined by the optimal SVM decision boundary?
Width =
Part B: Kernel for k-Nearest Neighbors (10 pts)
A student noticed that Kernel methods can be used in other classification methods, such as k-nearest
neighbors. Specifically, one could classify an unknown point by summing up the fractional votes of
all data points, each weighted using a Radial Basis Kernel. The final output of an unknown point x
depends on the sum of weighted votes of data points xi in the training set, computed using the
function:
$K(\mathbf{x}, \mathbf{x}_i) = \exp\!\left(\frac{-\|\mathbf{x} - \mathbf{x}_i\|}{s^2}\right)$
Negatively labeled data points contribute -1 times the kernel value and positively labeled training data
points contribute +1 times the kernel value.
You may find the graphs provided in the tear off sheets helpful.
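(A minimal Python sketch of the voting scheme just described; the training points and the values of s are made-up placeholders.)

    import math

    def kernel_vote(x, training, s):
        """Classify x by summing +/-1 kernel-weighted votes over all training
        points. `training` is a list of ((x1, x2), label) with label +1 or -1."""
        def k(a, b):
            dist = math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
            return math.exp(-dist / s**2)
        total = sum(label * k(x, xi) for xi, label in training)
        return +1 if total >= 0 else -1

    # Hypothetical data: two negatives on the left, one positive on the right.
    data = [((0.0, 0.0), -1), ((1.0, 0.0), -1), ((5.0, 0.0), +1)]
    print(kernel_vote((4.0, 0.0), data, s=1.0))    # nearby positive dominates -> +1
    print(kernel_vote((4.0, 0.0), data, s=100.0))  # huge s: every vote is ~1, so the majority (-1) wins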
B1 (6 pts)
Using this approach, as s increases to infinity, what k does this correspond to in k-NN?
B2 (4 pts):
State a reason why you would prefer to use the SVM with Radial Basis Kernel solution rather than the
method of Weighted Nearest Neighbors with a Gaussian Kernel.
Quiz 4, Problem 2, Boosting (50 points)
Kenny wants to recognize faces in images. He comes up with a few things that he thinks will probably
work well as weak classifiers and decides to create an amalgam classifier based on his training set.
Then, given an image, he should be able to classify it as a FACE or NOT FACE. (When matched
against faces, the GRAY part of a classifier can be either WHITE or BLACK.)
Classifiers:
Name    Image Representation
A       Has Hair
B       Has Forehead
C       Has Eyes
D       Has Nose
E       Has Smile
F       Has Ear
[The template image for each classifier appears in the original figure.]
Data:
[Training images, each labeled FACE or NOT FACE, appear here in the original figure.]
Examples:
Here is how classifier E works. Note that gray means “don't care,” that is, it doesn't matter whether the
pixel in the same position is black or white.
[Worked examples showing classifier E overlaid on sample images, each evaluating to true or false.]
Part A: Pre-Boosting (10 points)
Part B: Boosting (25 points)
Synthesize boosting using only classifiers A, B, C, and E. For ties, choose alphabetically.
w4
w5
w6
w7
w8
eA
eB
eC
eE
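(Reminder of the boosting bookkeeping from 6.034, stated generically; the actual weights and errors must come from the training data in the figure. In each round, pick the weak classifier $h$ with the smallest weighted error $\epsilon$, give it the vote

$\alpha = \frac{1}{2}\ln\frac{1-\epsilon}{\epsilon}$,

and rescale the sample weights so the misclassified samples carry total weight $\tfrac{1}{2}$ and the correctly classified samples carry total weight $\tfrac{1}{2}$: $w_i \leftarrow \frac{w_i}{2\epsilon}$ if $h$ misclassifies sample $i$, and $w_i \leftarrow \frac{w_i}{2(1-\epsilon)}$ otherwise. The overall classifier is $H(x) = \mathrm{sign}\big(\sum_h \alpha_h\, h(x)\big)$.)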
B2 Amalgam Classifier (5 points)
What is the overall classifier after 4 rounds of boosting?
Assume the boosting setup in Part B occurs for 100 rounds, whether or not the overall classifier gets to
the point where all training samples are correctly classified. Place the four classifiers in the following
two bins.
Now, suppose you are working with just two classifiers, neither of which has a zero error rate.
Will boosting converge to an overall classifier that perfectly classifies all the training samples?
Yes No
Explain:
Quiz 5, Problem 1, Probability (50 points)
Consider the following inference net developed for students who graduate from MIT:
A1: What is the probability that a 6.034 student (W = true) had quality instruction (I = true) and
became successful in life (S = true), but did not have raw talent (T = false), yet was hardworking (H =
true) and confident (C = true)? Leave your answer unsimplified, in terms of constants from the
probability tables. (6 pts)
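(Whatever the exact topology of the net in the figure, recall that the joint probability of a complete assignment in a Bayes net factors into one conditional-probability-table entry per variable: $P(x_1, \ldots, x_n) = \prod_i P(x_i \mid \mathrm{parents}(x_i))$. So the answer to A1 is a product of six factors, one read from each node's table.)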
For A2-A3: Express your final answer in terms of expressions of probabilities that could be read
off the Bayes Net. You do not need to simplify down to constants defined in the Bayes Net tables.
You may use summations as necessary.
A2: What is the probability of success in life (S = true) given that a student has high quality instruction
(I = true)? (6 pts)
A3: What is the probability a student is hardworking (H = true), given that s/he was a 6.034 student
(W = true)? (10 pts)
Part B: The Naïve Crime Fighter (12 pts)
A murder has occurred in the quiet town of South Park. You have been provided the following table by
forensic experts:
Suspect: Professor Chaos
  Has a motive: 0.4
  Location: Fair = 0.1, School = 0.7, CW = 0.2
  Murder Weapon: Kindness = 0.3, Chili = 0.3, Moral Fiber = 0.2, Pure Evil = 0.2
  Degree of Suspicion: 0.3

Suspect: Mint Berry Crunch
  Has a motive: 0.3
  Location: Fair = 0.4, School = 0.4, CW = 0.3
  Murder Weapon: Kindness = 0.4, Chili = 0.1, Moral Fiber = 0.4, Pure Evil = 0.1
  Degree of Suspicion: 0.1

Suspect: The Dark Lord Cthulu
  Has a motive: 0.1
  Location: Fair = 0.3, School = 0.3, CW = 0.4
  Murder Weapon: Kindness = 0, Chili = 0.5, Moral Fiber = 0, Pure Evil = 0.5
  Degree of Suspicion: 0.4

Suspect: Scott Tenorman
  Has a motive: 0.9
  Location: Fair = 0.8, School = 0.1, CW = 0.1
  Murder Weapon: Kindness = 0.2, Chili = 0.5, Moral Fiber = 0.1, Pure Evil = 0.2
  Degree of Suspicion: 0.2
You have determined that the murderer did not have a motive, the murder took place at the Fair, and the
murder weapon was a bowl of chili. You've decided to use what you learned about Naïve Bayes to
help you determine who committed the murder.
The murderer is most likely:
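(A minimal Python sketch of the Naïve Bayes calculation using the table above, treating Degree of Suspicion as the prior; the dictionary layout is just a convenient encoding of the table.)

    # Evidence: no motive, location = Fair, weapon = chili.
    suspects = {
        'Professor Chaos':      {'prior': 0.3, 'motive': 0.4, 'Fair': 0.1, 'Chili': 0.3},
        'Mint Berry Crunch':    {'prior': 0.1, 'motive': 0.3, 'Fair': 0.4, 'Chili': 0.1},
        'The Dark Lord Cthulu': {'prior': 0.4, 'motive': 0.1, 'Fair': 0.3, 'Chili': 0.5},
        'Scott Tenorman':       {'prior': 0.2, 'motive': 0.9, 'Fair': 0.8, 'Chili': 0.5},
    }

    def score(name):
        p = suspects[name]
        # P(suspect) * P(no motive | suspect) * P(Fair | suspect) * P(chili | suspect)
        return p['prior'] * (1 - p['motive']) * p['Fair'] * p['Chili']

    for name in suspects:
        print(name, round(score(name), 4))
    print('Most likely:', max(suspects, key=score))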
Part C: Coin Tosses (16 pts)
You decide to reexamine the coin toss problem using model selection.
You have 2 types of coins:
1) Fair: P(heads) = 0.5
2) All-heads: P(heads) = 1.0
You have 2 possible models to describe your observed sequences of coin types and coin tosses.
1) Model 1: You have both types of coins and you draw one of them at random, with equal
probability, and toss it exactly once. You repeat both the drawing and tossing 3 times total.
2) Model 2: You have both types of coins and you draw one of them at random, with equal
probability, and toss it exactly 3 times.
Finally, you have the following observed data.
Toss 1 Toss 2 Toss 3
H H T
The following questions use your knowledge of model selection to determine which is most likely.
You decide to use the following criterion to weigh models:
$P(M) = \frac{1}{Z} \cdot \frac{1}{|\text{parameters}|}$, where $Z$ is a normalization constant.
|parameters| is defined as the number of cell entries in the CPTs of the Bayes Net representation.
C1. What is P(Model 1)? (3 pts) (Partial credit if you sketch the Models as Bayes Nets)
You've decided that the a priori model probability P(Model) to use should be uniform.
P(Model 1) = P(Model 2)
Under this assumption you decide to work out the most likely model, given the data, P(Model | Data).
C2 What is the most-likely model based on this fully observed data set: (10 pts)
P(Model 1 | Data)?
P(Model 2 | Data)?
Therefore the most likely model is: (circle one) Model 1 Model 2
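(A Python sketch of the likelihood computations behind C2; the per-coin probabilities below simply restate the two coin types and the two model descriptions above. To get P(Model | Data) you would multiply each likelihood by the model prior and normalize.)

    p_heads = {'fair': 0.5, 'all_heads': 1.0}

    def p_sequence(coin, tosses):
        """Probability of a toss sequence given one particular coin."""
        prob = 1.0
        for t in tosses:
            prob *= p_heads[coin] if t == 'H' else (1 - p_heads[coin])
        return prob

    tosses = ['H', 'H', 'T']

    # Model 1: a fresh coin is drawn (fair / all-heads, each with prob 1/2) before every toss.
    per_toss = [0.5 * p_sequence('fair', [t]) + 0.5 * p_sequence('all_heads', [t])
                for t in tosses]
    p_data_model1 = per_toss[0] * per_toss[1] * per_toss[2]

    # Model 2: one coin is drawn once, then tossed all three times.
    p_data_model2 = 0.5 * p_sequence('fair', tosses) + 0.5 * p_sequence('all_heads', tosses)

    print(p_data_model1)   # 0.75 * 0.75 * 0.25
    print(p_data_model2)   # 0.5 * 0.125 + 0.5 * 0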
Quiz 5, Problem 2, Near Miss (20 points)
Having missed many tutorials, lectures, and recitations, Stew is stuck trying to figure out who the
TAs in 6.034 are. You, who have been attending more faithfully, know who is who. Armed with your
knowledge of Near Miss concept learning, you decide to build a model that will help Stew figure
out who the TAs are.
The following table summarizes the training data about the current staff of 6.034. The Title attribute
is organized as a tree, with MEng and PhD both types of Student. Students and Faculty are grouped
under the type People-You-See-On-Campus.
Fill in the table to build a model of a TA. Mark an attribute as "?" if the property has been dropped.
Kenny
Martin
Yuan
Bob
Fill in the following table to build a model of a Faculty Member (FM). You may assume that Patrick,
Randy, Bob, and Lisa are faculty members.
Randy
Bob
Mich
Lisa
What class(es) would match these people given your TA model and your FM model? If neither, write N
in the Class(es) column.
Quiz 5, Problem 3, Big Ideas (30 points)
Circle the best answer for each of the following questions. There is no penalty for wrong answers, so it
pays to guess in the absence of knowledge.
4 The Genesis architecture (Winston's research focus) is best described, in part, as
1. A commitment to the strong story hypothesis
2. Primarily motivated by a desire to build more intelligent commercial systems
3. A commitment to rule-like information processing
4. A belief that the human species became gradually smarter over 100s of thousands of years.
5. None of the above
5 A transition frame
1. Focuses on movement along a trajectory
2. Focuses on the movement from childlike to adult thinking
3. Focuses on a small vocabulary of state changes
4. Provides a mechanism for inheriting slots from abstract frames, such as the disaster frame
5. None of the above
6 Reification is
1. The attempt to develop a universal representation
2. The tendency to attribute magical powers to particular mechanisms
3. The process by which ways of thinking are determined by macro and micro cultures
4. The process of using perceptions to answer questions too hard for rule-based systems
5. None of the above
10 Goal trees
1. Enable rule-based systems to avoid logical inconsistency
2. Enable rule-based systems to answer questions about their behavior
3. Are central to the subsumption architecture's ability to operate without environment models
4. Are central to the subsumption architecture's ability to cope with unreliable hardware
5. None of the above
MIT OpenCourseWare
http://ocw.mit.edu
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.