Lecture 1 PDF
● Regression
○ Error measure such as ‘mean squared error’
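For regression, mean squared error averages the squared differences between predictions and targets. A minimal sketch (function name and toy values are illustrative):

```python
def mean_squared_error(y_true, y_pred):
    """MSE: average of squared prediction errors."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [3.0, 5.0, 2.0]   # toy target values
y_pred = [2.5, 5.0, 4.0]   # toy predictions
mse = mean_squared_error(y_true, y_pred)  # (0.25 + 0 + 4) / 3
```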
Experience (E)
● Supervised Learning
○ Labelled data – (Data, target value)
○ Target value could be category/class labels, real value, real vector, etc.
○ Classification, Regression
● Unsupervised Learning
○ Only data, no labels
○ Dimensionality Reduction, ICA, Clustering
● Reinforcement Learning
○ No examples, but a reward function
○ Payoff based on actions
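The three paradigms above differ mainly in what the data looks like; a minimal sketch, with all values and names purely illustrative:

```python
# Supervised: labelled data as (input, target) pairs
supervised_data = [([1.2, 0.7], "cat"), ([0.3, 2.1], "dog")]

# Unsupervised: only data, no labels
unsupervised_data = [[1.2, 0.7], [0.3, 2.1], [1.1, 0.8]]

# Reinforcement: no labelled examples; instead a reward function
# assigns a payoff to actions taken in a state (toy logic below)
def reward(state, action):
    # +1 if the action "right" is taken when state is nonnegative
    # (or "left" when negative), else -1
    return 1 if (state >= 0) == (action == "right") else -1
```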
An Incomplete History of Learning
● Turing Test (1950)
○ Machines do very poorly
● Rosenblatt’s Perceptron (1960’s)
○ Kick-started the mathematical analysis of the learning process
○ Key idea behind Support Vector Machines (SVMs) and Neural Networks
● Construction of Fundamentals of Learning Theory (1960–70’s)
○ Focus on generalization capability of learning machines
■ Performance on unseen data
○ Regularization for ill-posed problems
■ e.g., linear equations for ill-conditioned matrices
● Neural Networks (1980’s)
○ Connectionism
○ Back-propagation [LeCun, ’86]
○ CNNs, RNNs
● SVMs (1990’s)
○ Margin Maximization
○ Kernel Methods to handle non-linearity
● Deep Learning (>2006)
○ Hinton, Bengio, LeCun at forefront
○ Abstract Representations
● (>2012) Craziness!!
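Regularization for ill-conditioned linear systems can be sketched with Tikhonov (ridge) regularization, which solves (AᵀA + λI)x = Aᵀb instead of Ax = b. A pure-Python sketch for a 2×2 system; the matrix and noise values below are illustrative:

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ x = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return [(e * d - b * f) / det, (a * f - e * c) / det]

def solve_ridge_2x2(a, b, c, d, e, f, lam):
    """Solve (A^T A + lam * I) x = A^T b, the regularized system."""
    aa, ab = a * a + c * c, a * b + c * d      # A^T A
    ba, bb = ab, b * b + d * d
    e2, f2 = a * e + c * f, b * e + d * f      # A^T b
    return solve_2x2(aa + lam, ab, ba, bb + lam, e2, f2)

# A = [[1, 1], [1, 1.0001]] is nearly singular; the exact solution of
# A x = [2, 2.0001] is [1, 1]. Perturb b by 1e-4:
x_plain = solve_2x2(1, 1, 1, 1.0001, 2, 2.0002)            # jumps far from [1, 1]
x_ridge = solve_ridge_2x2(1, 1, 1, 1.0001, 2, 2.0002, 1e-6)  # stays near [1, 1]
```

The unregularized solution swings wildly under a tiny perturbation of b, while the ridge term λ damps the near-null direction of A and keeps the answer stable.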
Recent Progress
● Google Search
● Computer Vision / Image Recognition
○ ImageNet
○ Convolutional Neural Networks
● Autonomous driving
● Speech Recognition
● Voice assistants
○ Apple’s Siri, Microsoft’s Cortana, Amazon’s Echo
● Language Translation
○ Google Translate
○ Unsupervised Translation
● Game Playing / Deep Reinforcement Learning
○ AlphaGo
ML vs DL
Next Class
● Learning Problems and the Empirical Risk Minimization
Framework
● Loss Functions for Classification and Regression
● Evaluation Metrics for Classification
Welcome Quiz
(Linear Algebra, Probability & Statistics, Vector Calculus)