
Discrete vs. Continuous: The Calculus of Images

This lecture discusses concepts from calculus that can be applied to digital images. It begins by introducing images as discrete functions defined over a rectangular domain. While images are discrete, continuous concepts like derivatives can be approximated using discretization techniques like finite differences. The lecture covers calculating partial derivatives of images to detect edges, approximating derivatives as filters, and using the gradient vector and its norm to enhance edges. Boundary conditions are also discussed for applying finite difference schemes to images.


Lecture 5: The Calculus of Images
Math 490
Prof. Todd Wittman
The Citadel

Discrete vs. Continuous

- Images are discrete objects. We are only given data at pixel locations.
- But many important geometric concepts are defined for continuous functions:
  - Derivatives and gradients
  - Area and volume
  - Curvature
  - Arc length

Images as Functions

- We can think of an image as a function of two variables f(x,y) defined on some rectangular domain Ω.
- We know the value of the function at integer locations, e.g. f(2,3).
- But what is the value at non-integer locations, like f(2.2, 3.4)?
- Our data exists at discrete integer pixel locations. But we can pretend that the values exist in between the pixels.
- This allows us to discuss continuous concepts like derivatives and integrals on images.

Discretization

- Discretization is the process of translating a mathematical concept defined for continuous objects (like functions) into an equivalent concept for discrete objects (like images).
- Example: the Riemann sum turns a continuous integral into a discrete sum:

  ∫_a^b f(x) dx  ≈  Σ_{i=1}^{n} f(x_i) Δx
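The Riemann-sum idea is easy to check numerically. A small NumPy sketch (the function f(x) = x² and the step size are illustrative choices, not from the lecture):

```python
import numpy as np

# Left-endpoint Riemann sum for f(x) = x^2 on [0, 1]; the exact integral is 1/3.
a, b, n = 0.0, 1.0, 1000
dx = (b - a) / n
x = a + dx * np.arange(n)          # left endpoints x_0, ..., x_{n-1}
riemann_sum = np.sum(x**2) * dx    # sum of f(x_i) * dx

print(riemann_sum)                 # close to 1/3
```

Shrinking Δx drives the sum toward the true integral, which is exactly the discretization idea applied in reverse on images, where Δx is fixed at one pixel.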

Finite Differences

- Let's start with a 1D signal f(x).
- Recall the definition of the derivative:

  f'(x) = lim_{h→0} [ f(x+h) − f(x) ] / h

- But we can't let h go to zero. The smallest h can become is 1, because our data points are 1 pixel apart.
- So we approximate the derivative with h=1:

  f'(x) ≈ f(x+1) − f(x)

- This type of approximation of the derivative is called a finite difference.

Boundary Conditions

- So for our signal f(x), we can approximate the derivative f' at each point by looking at the difference with the next point.
- Suppose our vector has length n: n = length(f);
- What happens when we reach the last point?

  f_x(1) = f(2) - f(1);
  f_x(2) = f(3) - f(2);
  f_x(3) = f(4) - f(3);
  ...
  f_x(n-1) = f(n) - f(n-1);
  f_x(n) = ???

- Why does this code not work? f_x = f(2:n) - f(1:n);
  (The lengths don't match: f(2:n) has n−1 entries but f(1:n) has n.)
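The length mismatch is easy to see in a NumPy translation of the same idea (the array f here is a made-up example):

```python
import numpy as np

f = np.array([3.0, 5.0, 4.0, 7.0, 6.0])
n = len(f)

# f[1:] has n-1 entries but f has n, so subtracting them raises an error,
# just like f(2:n) - f(1:n) fails in Matlab.
try:
    f_x = f[1:] - f
except ValueError as e:
    print("shapes do not match:", e)

# Dropping the last point gives the n-1 interior differences:
f_x_interior = f[1:] - f[:-1]
print(f_x_interior)   # [ 2. -1.  3. -1.]
```

The question of what to put in the missing last slot is exactly the boundary-condition problem the next slide addresses.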


Boundary Conditions

- Neumann boundary conditions assume that the derivative at the end of the data is zero:

  f_x(1) = f(2) - f(1);
  f_x(2) = f(3) - f(2);
  f_x(3) = f(4) - f(3);
  ...
  f_x(n) = f(n) - f(n) = 0;

- We can code this elegantly in one line of Matlab code using the colon operator. Just repeat the last entry:

  f_x = f([2:n,n]) - f(1:n);
  OR f_x = f([2:n,n]) - f;

Derivative of a Sine Wave

  x = 0:0.1:2*pi;
  f = sin(x);
  n = length(f);
  f_x = f([2:n,n]) - f;
  subplot(121); plot(x, f);
  subplot(122); plot(x, f_x);

- The plot of f_x has the shape of cos(x). (Since the spacing is Δx = 0.1 and we never divide by it, the amplitude is about 0.1 rather than 1.)
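The same sine-wave experiment in NumPy, checking that the Neumann boundary really gives a zero derivative at the last point:

```python
import numpy as np

x = np.arange(0, 2 * np.pi, 0.1)
f = np.sin(x)

# Forward difference with a Neumann boundary (repeat the last entry),
# mirroring the Matlab idiom f_x = f([2:n,n]) - f;
f_x = np.append(f[1:], f[-1]) - f

# The last entry is exactly zero; elsewhere f_x tracks 0.1*cos(x),
# since the spacing 0.1 was never divided out.
print(f_x[-1])
```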

Finite Difference Schemes

- There are several ways we could approximate the derivative f_x. Different approaches are called schemes.
- Forward Difference (h=1):

  f_x ≈ Dx⁺f = f(x+1) − f(x)
  f_x = f([2:n,n]) - f;

- Backward Difference (h=−1):

  f_x ≈ Dx⁻f = f(x) − f(x−1)
  f_x = f - f([1,1:n-1]);

- Central Difference (h=2):

  f_x ≈ Dx⁰f = [ f(x+1) − f(x−1) ] / 2
  f_x = ( f([2:n,n]) - f([1,1:n-1]) ) / 2;

Partial Derivatives

- An image is 2-dimensional, so we have a derivative in the x-direction and a derivative in the y-direction.
- Let f(x,y) be a grayscale image.
- The forward differences would give:

  f_x ≈ Dx⁺f = f(x+1, y) − f(x, y)
  f_y ≈ Dy⁺f = f(x, y+1) − f(x, y)

  [m,n] = size(f);
  f_x = f(:,[2:n,n]) - f;
  f_y = f([2:m,m],:) - f;

Partial Derivatives

- The derivative in the x-direction f_x locates vertical edges.
- The derivative in the y-direction f_y locates horizontal edges.

  A = imread('cameraman.tif');
  A = double(A);
  [m,n] = size(A);
  A_x = A(:,[2:n,n]) - A;
  A_y = A([2:m,m],:) - A;
  subplot(121); imagesc(A_x);
  subplot(122); imagesc(A_y);

Finite Differences as Filters

- You can think of a finite difference as a 3x3 linear filter applied to an image.
- Forward Difference (h=1): f_x ≈ Dx⁺f = f(x+1, y) − f(x, y)
- Backward Difference (h=−1): f_x ≈ Dx⁻f = f(x, y) − f(x−1, y)
- Central Difference (h=2): f_x ≈ Dx⁰f = [ f(x+1, y) − f(x−1, y) ] / 2
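To see the filter view concretely, here is a NumPy sketch that writes the forward difference as an explicit 3x3 kernel and checks, on an arbitrary test array, that it matches the shift-and-subtract version in the interior:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.random((5, 6))
m, n = u.shape

# Forward difference in x as an explicit 3x3 kernel:
# -1 on the center pixel, +1 on its right-hand neighbor.
K = np.array([[0, 0, 0],
              [0, -1, 1],
              [0, 0, 0]], dtype=float)

out = np.zeros((m, n))
for i in range(1, m - 1):          # correlate K over the interior pixels
    for j in range(1, n - 1):
        out[i, j] = np.sum(K * u[i - 1:i + 2, j - 1:j + 2])

# Shift-and-subtract version with a Neumann boundary, as in the slides.
fx = u[:, list(range(1, n)) + [n - 1]] - u

print(np.allclose(out[1:-1, 1:-1], fx[1:-1, 1:-1]))   # True
```

The two versions only differ at the boundary, where the kernel has no neighbor to read.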


Other Derivative Approximations

- The Prewitt filter uses more pixels, so it is less sensitive to noise. But it de-emphasizes values near the center.

  u_x ≈ [ -1  0  1 ]      u_y ≈ [  1  1  1 ]
        [ -1  0  1 ]            [  0  0  0 ]
        [ -1  0  1 ]            [ -1 -1 -1 ]

- The Sobel filter gives more emphasis to changes around the center pixel.

  u_x ≈ [ -1  0  1 ]      u_y ≈ [  1  2  1 ]
        [ -2  0  2 ]            [  0  0  0 ]
        [ -1  0  1 ]            [ -1 -2 -1 ]

- There are many other finite difference schemes, like upwind and minmod. Each has pros and cons.

The Gradient

- The gradient is a 2D vector listing the values of the partial derivatives at each point:

  ∇u = ( u_x , u_y )

- The gradient always points in the direction of maximum positive change (dark to light).
- [Figure: gradient arrow pointing from a dark region (value 100) toward a light region (value 250).]

Norm of Gradient

- The norm (magnitude) of the gradient vector tells us the total amount of change at each pixel:

  |∇u| = sqrt( u_x² + u_y² )

- The norm of the gradient is large at edges of the image and zero in flat (single color) regions.
- [Figure: dark region (100) meeting a light region (250); the gradient norm is large along the boundary.]

Edge Detector

- We use the norm of the gradient to detect edges.

  P = imread('pout.tif');
  P = double(P);
  [m,n] = size(P);
  Px = P(:,[2:n,n]) - P;
  Py = P([2:m,m],:) - P;
  N = sqrt(Px.^2 + Py.^2);
  imagesc(N);

- Note the .^ for pointwise exponents.
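The same gradient-norm edge detector in NumPy, run on a small synthetic image (a bright square standing in for pout.tif):

```python
import numpy as np

# Synthetic image: a bright 4x4 square (250) on a black background.
P = np.zeros((8, 8))
P[2:6, 2:6] = 250.0
m, n = P.shape

# Forward differences with Neumann boundaries (repeat last row/column),
# mirroring P(:,[2:n,n]) - P and P([2:m,m],:) - P in Matlab.
Px = P[:, list(range(1, n)) + [n - 1]] - P
Py = P[list(range(1, m)) + [m - 1], :] - P

# Gradient norm: large at the square's edges, zero in flat regions.
N = np.sqrt(Px**2 + Py**2)

print(N.max())
```

The largest response appears at the bottom-right corner of the square, where both Px and Py jump at once, giving 250·√2.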

Second Derivatives

- To approximate a second derivative, we take a finite difference of a finite difference:

  u_xx ≈ Dx⁻( Dx⁺ u )

- Note we use one forward and one backward difference.
- Why not just use 2 forward differences?

3 Ways to Code u_xx

1.) Forward then Backward Difference: u_xx ≈ Dx⁻( Dx⁺ u )

  Dplus = u(:,[2:n,n]) - u;
  u_xx = Dplus - Dplus(:,[1,1:n-1]);

2.) Backward then Forward Difference: u_xx ≈ Dx⁺( Dx⁻ u )

  Dminus = u - u(:,[1,1:n-1]);
  u_xx = Dminus(:,[2:n,n]) - Dminus;

3.) Write out the formula: u_xx ≈ u(x+1, y) − 2u(x, y) + u(x−1, y)

  u_xx = u(:,[2:n,n]) - 2*u + u(:,[1,1:n-1]);
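A quick NumPy check that the three versions agree away from the boundary (the test array is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random((6, 7))
m, n = u.shape

fwd = list(range(1, n)) + [n - 1]    # column shift +1, Neumann at the right
bwd = [0] + list(range(n - 1))       # column shift -1, Neumann at the left

# 1) Forward then backward difference
Dplus = u[:, fwd] - u
uxx1 = Dplus - Dplus[:, bwd]

# 2) Backward then forward difference
Dminus = u - u[:, bwd]
uxx2 = Dminus[:, fwd] - Dminus

# 3) Written-out formula u(x+1) - 2u(x) + u(x-1)
uxx3 = u[:, fwd] - 2 * u + u[:, bwd]

# All three agree on interior columns; only the boundary treatment differs.
print(np.allclose(uxx1[:, 1:-1], uxx3[:, 1:-1]))
```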


Second Derivatives

  [m,n] = size(u);
  % Second derivative in x: u_xx
  u_xx = u(:,[2:n,n]) - 2*u + u(:,[1,1:n-1]);
  % Second derivative in y: u_yy
  u_yy = u([2:m,m],:) - 2*u + u([1,1:m-1],:);
  % Diagonal derivative u_xy
  u_xy = ( u([2:m,m],[2:n,n]) + u([1,1:m-1],[1,1:n-1])
         - u([1,1:m-1],[2:n,n]) - u([2:m,m],[1,1:n-1]) ) / 4;

The Laplacian

- The Laplacian is the sum of the second derivatives:

  Δu = u_xx + u_yy

- Recall we implemented a Laplacian filter.
- We use the Laplacian to locate edges. Subtracting the Laplacian sharpens the image.
- [Figure: original image + (subtracted Laplacian) = sharpened image.]
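A sanity check in NumPy: for the surface u(x,y) = x² + y² the continuous Laplacian is exactly 4, and the discrete formulas reproduce that exactly in the interior (with 1-pixel spacing, the second difference of x² is exactly 2):

```python
import numpy as np

# Test surface u(x,y) = x^2 + y^2, whose continuous Laplacian is 4.
yy, xx = np.mgrid[0:8, 0:10].astype(float)
u = xx**2 + yy**2
m, n = u.shape

fwd_c = list(range(1, n)) + [n - 1]   # Neumann index lists, as in the slides
bwd_c = [0] + list(range(n - 1))
fwd_r = list(range(1, m)) + [m - 1]
bwd_r = [0] + list(range(m - 1))

u_xx = u[:, fwd_c] - 2 * u + u[:, bwd_c]
u_yy = u[fwd_r, :] - 2 * u + u[bwd_r, :]
lap = u_xx + u_yy                      # discrete Laplacian

print(lap[2, 3])   # 4.0
```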

Double Integrals

- The double integral of u(x,y) over the domain Ω is

  ∬_Ω u(x,y) dx dy

- The discrete approximation is simply a double summation of all values of u(x,y):

  d = sum(sum(u));

- Sometimes we get lazy and vectorize the variables as x⃗ = (x,y) so we can write a single integral:

  ∫_Ω u(x⃗) dx⃗

- But don't let this fool you, it's still a double sum!

Measuring Noise

- We measured the noise levels last week using SNR and RMSE.
- But these statistics require an ideal noise-free image, which in general we don't have.
- We would like a way to judge how much noise an image contains without requiring a magical perfect image.

Total Variation

- The Total Variation (TV) energy of an image u(x,y) is found by adding up the norm of the gradient (Rudin-Osher-Fatemi, 1992):

  TV(u) = ∬_Ω |∇u| dx dy

- We interpret TV as the total amount of jumps (variation) in the image.
- Or if we vectorize x⃗ = (x,y), we can write

  TV(u) = ∫_Ω |∇u| dx⃗

1D TV

- The 1-dimensional version for a function f(x) on [a,b] is

  TV(f) = ∫_a^b | f'(x) | dx

- Ex: Calculate the TV value of a sine wave on [0, 2π].
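A worked solution to the example, splitting the integral where cos x changes sign:

```latex
TV(\sin) = \int_0^{2\pi} |\cos x| \, dx
         = \int_0^{\pi/2} \cos x \, dx
           - \int_{\pi/2}^{3\pi/2} \cos x \, dx
           + \int_{3\pi/2}^{2\pi} \cos x \, dx
         = 1 + 2 + 1 = 4
```

This matches the intuition of TV as total variation: the sine wave rises by 1, falls by 2, and rises by 1, for a total movement of 4.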


2D TV

- Ex: Calculate the TV energy for the image below.
- Assume the image is 100x200 pixels and the gray square is 30x30 pixels.
- [Figure: a bright square (value 250) on a gray background (value 100) inside the domain Ω.]

The Co-Area Formula

- Co-Area Formula: The TV norm is equal to the perimeter of each shape times the jump at the perimeter.
- Suppose we have a circle of radius 5 pixels on a dark gray background. Calculate the TV energy value.
- [Figure: a circle (value 150) on a dark gray background (value 20) inside the domain Ω.]
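Worked answers via the co-area formula (perimeter × jump), assuming the intensity values shown in the figures (square: 250 on a 100 background; circle: 150 on a 20 background):

```latex
TV(\text{square}) = (4 \cdot 30)\,(250 - 100) = 120 \cdot 150 = 18000
TV(\text{circle}) = (2\pi \cdot 5)\,(150 - 20) = 10\pi \cdot 130 = 1300\pi \approx 4084
```

Note the image dimensions (100x200) don't enter: only the boundaries of the shapes contribute, since |∇u| = 0 in the flat regions.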

Noise on TV

- Again suppose we have a circle of radius 5 pixels on a dark gray background.
- Let's add one noise pixel with a value 250.
- [Figure: a circle (value 150) on a dark gray background (value 20), plus a single bright noise pixel.]

Measuring Noise

- TV does not tell us exactly how much noise is in the image, but if we have a version of an image with a high TV value then it is probably noisy.

Your Very Own TV

- Ex: Write a function that calculates the TV energy value of a grayscale image:

  TV(u) = ∬_Ω |∇u| dx dy

- We'll have to discretize this energy, so really we'll be computing an approximation of the TV energy.

Curvature

- The curvature of a surface u(x,y) measures how quickly the unit normal vector to its level curves is changing:

  κ = ∇ · ( ∇u / |∇u| )

- Ex: Write a function that computes the curvature matrix of a grayscale image. Watch out for division by zero!
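One way the TV exercise could be sketched in NumPy, using forward differences with Neumann boundaries (the function name and test images are ours, and this is only one of several reasonable discretizations):

```python
import numpy as np

def tv_energy(u):
    """Approximate TV(u), the sum over pixels of the gradient norm,
    using forward differences with Neumann boundaries."""
    u = np.asarray(u, dtype=float)
    m, n = u.shape
    ux = u[:, list(range(1, n)) + [n - 1]] - u   # forward difference in x
    uy = u[list(range(1, m)) + [m - 1], :] - u   # forward difference in y
    return np.sum(np.sqrt(ux**2 + uy**2))

# A flat image has zero TV; a unit step across 4 rows has TV 4
# (jump of 1 along an edge of length 4, matching the co-area formula).
flat = np.full((5, 5), 7.0)
step = np.zeros((4, 4)); step[:, 2:] = 1.0
print(tv_energy(flat), tv_energy(step))   # 0.0 4.0
```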
