Image Compression Using SVD
Introduction
Data compression is an important application of linear algebra. The need to minimize the amount of digital information stored and transmitted is an ever-growing concern in the modern world, and the singular value decomposition (SVD) is an effective tool for reducing data storage and transfer. This report explores image compression and noise reduction through the use of the singular value decomposition on image matrices.
The SVD of a matrix $A \in \mathbb{C}^{m \times n}$ is

$$A = U \Sigma V^*$$

where $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ are unitary, and $\Sigma \in \mathbb{R}^{m \times n}$ is diagonal with non-negative diagonal elements ordered in non-increasing order.
A matrix $A \in \mathbb{C}^{m \times n}$ maps $\mathbb{C}^n$ to $\mathbb{C}^m$. The SVD reduces $A$ to the clearest representation of that map: the columns of $U$ and $V$ form orthonormal bases of $\mathbb{C}^m$ and $\mathbb{C}^n$ respectively, and the diagonal elements of $\Sigma$ are the amplification factors of the map.
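As a concrete illustration (a NumPy sketch, not part of the original report), these properties of the decomposition can be verified numerically:

```python
import numpy as np

# Decompose a small random matrix and check the SVD properties.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

U, s, Vh = np.linalg.svd(A)          # A = U @ diag(s) @ Vh

# Columns of U and V form orthonormal bases.
assert np.allclose(U.T @ U, np.eye(4))
assert np.allclose(Vh @ Vh.T, np.eye(3))

# Singular values are non-negative and non-increasing.
assert np.all(s >= 0) and np.all(np.diff(s) <= 0)

# The factors reconstruct A exactly.
S = np.zeros((4, 3))
np.fill_diagonal(S, s)
assert np.allclose(A, U @ S @ Vh)
```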
As an example, consider

$$A = \begin{pmatrix} -2 & 11 \\ -10 & 5 \end{pmatrix}, \qquad A^*A = \begin{pmatrix} 104 & -72 \\ -72 & 146 \end{pmatrix}$$

$$\det(A^*A - \lambda I) = (104-\lambda)(146-\lambda) - 72^2 = \lambda^2 - 250\lambda + 10000 = 0$$

$$\lambda_1 = 200, \qquad \lambda_2 = 50$$

For $\lambda_1 = 200$, $(A^*A - \lambda_1 I)v_1 = 0$:

$$\left(\begin{array}{cc|c} -96 & -72 & 0 \\ -72 & -54 & 0 \end{array}\right)$$

Thus the elements of $v_1$ must satisfy $72 v_{12} = -96 v_{11}$, i.e. $v_{12} = -\tfrac{4}{3} v_{11}$, or

$$v_1 = \begin{bmatrix} 1 \\ -4/3 \end{bmatrix} v_{11}$$

To normalize the eigenvector, let $v_{11} = \dfrac{1}{\sqrt{(4/3)^2 + 1}} = \dfrac{3}{5}$, so that

$$v_1 = \begin{bmatrix} 0.6 \\ -0.8 \end{bmatrix}$$

In the same way $v_2 = \begin{bmatrix} -0.8 \\ -0.6 \end{bmatrix}$ is obtained.

Since $A v_i = \sigma_i u_i$ with $\sigma_1 = \sqrt{\lambda_1} = \sqrt{200}$,

$$u_1 = \frac{1}{\sqrt{200}} \begin{pmatrix} -2 & 11 \\ -10 & 5 \end{pmatrix} \begin{bmatrix} 0.6 \\ -0.8 \end{bmatrix} = \begin{bmatrix} -\sqrt{2}/2 \\ -\sqrt{2}/2 \end{bmatrix}$$
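The hand computation above can be checked with a short NumPy sketch (not part of the original report):

```python
import numpy as np

# The 2x2 matrix from the worked example.
A = np.array([[-2.0, 11.0],
              [-10.0, 5.0]])

# Eigenvalues of A*A should be 200 and 50.
w = np.linalg.eigvalsh(A.T @ A)      # returned in ascending order
assert np.allclose(sorted(w, reverse=True), [200.0, 50.0])

# Singular values of A are their square roots.
s = np.linalg.svd(A, compute_uv=False)
assert np.allclose(s, [np.sqrt(200), np.sqrt(50)])

# u1 = A v1 / sigma1 with v1 = [0.6, -0.8].
v1 = np.array([0.6, -0.8])
u1 = A @ v1 / np.sqrt(200)
assert np.allclose(u1, [-np.sqrt(2) / 2, -np.sqrt(2) / 2])
```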
The 2-norm of the error matrix (the discarded portion of $A$) relative to the 2-norm of $A$ is

$$\frac{\|A - A_k\|_2}{\|A\|_2} = \frac{\sigma_{k+1}}{\sigma_1}$$
The root mean square error of an approximation is

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i,j} \left(A_{ij} - (A_k)_{ij}\right)^2}{mn}}$$
The compression ratio is the ratio of the original to the approximate matrix storage:

$$CR = \frac{mn}{k(m+n+1)}$$
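The three quantities above can be computed for a rank-$k$ truncation in a few lines; the NumPy sketch below (matrix size and rank are illustrative) also confirms that the relative 2-norm of the discarded part equals $\sigma_{k+1}/\sigma_1$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 60, 48, 5
A = rng.standard_normal((m, n))

U, s, Vh = np.linalg.svd(A, full_matrices=False)
Ak = U[:, :k] * s[:k] @ Vh[:k, :]    # rank-k truncation

# Relative 2-norm of the error equals sigma_{k+1} / sigma_1.
rel = np.linalg.norm(A - Ak, 2) / np.linalg.norm(A, 2)
assert np.isclose(rel, s[k] / s[0])

# Root mean square error of the approximation.
rmse = np.sqrt(np.sum((A - Ak) ** 2) / (m * n))

# Compression ratio: mn stored values vs k(m + n + 1).
cr = m * n / (k * (m + n + 1))
```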
Example
Compress this $600 \times 480$ color image:
Figure 1: Original Image
Code
clear
clc

a = imread('a.jpg');
[m, n, d] = size(a);
kmax = floor(m*n/(m+n+1));   % largest rank that still compresses
da = double(a);

% Preallocate one set of SVD factors per color channel
U = zeros(m, m, d); S = zeros(m, n, d); V = zeros(n, n, d);
e = zeros(kmax, d); cr = zeros(kmax, 1); rmse = zeros(kmax, d);

% SVD of each color channel
for i = 1:d
    [U(:,:,i), S(:,:,i), V(:,:,i)] = svd(da(:,:,i));
end

for k = 1:kmax
    ca = zeros(m, n, d);
    cr(k) = m*n/(k*(m+n+1));                 % compression ratio
    for i = 1:d
        cai = zeros(m, n, d);
        % Rank-k approximation of channel i
        [ca(:,:,i), cai(:,:,i)] = deal(U(:,1:k,i)*S(1:k,1:k,i)*V(:,1:k,i)');
        e(k,i) = S(k+1,k+1,i)/S(1,1,i);      % relative 2-norm error
        rmse(k,i) = sqrt(sum(sum((da(:,:,i)-ca(:,:,i)).^2))/(m*n));
        imwrite(uint8(cai), sprintf('%dk%d.jpg', k, i));  % single channel
    end
    imwrite(uint8(ca), sprintf('%dk.jpg', k));            % full color
end

figure
p = plot(1:kmax, e);
set(p, {'Color'}, {'red'; 'green'; 'blue'})
xlabel('Approximation Rank k')
ylabel('Relative 2-Norm')
xlim([1 kmax])
legend('Red', 'Green', 'Blue')
grid on

figure
p = plot(1:kmax, rmse);
set(p, {'Color'}, {'red'; 'green'; 'blue'})
xlabel('Approximation Rank k')
ylabel('RMS Error')
xlim([1 kmax])
legend('Red', 'Green', 'Blue')
grid on

figure
plot(1:kmax, cr)
xlabel('Approximation Rank k')
ylabel('Compression Ratio')
xlim([1 kmax])
grid on
Results
The reconstructed images of the lower-rank approximations $k = 5, 10, 50$ and $100$ are shown in Figure 2, and the exact values of their properties are given in Table 1. On visual inspection, approximations become acceptable around $A_{50}$ and indistinguishable from the original around $A_{100}$. The relative 2-norm, RMS error and compression ratio versus the approximation rank are shown in Figures 3, 4 and 5 respectively. The relative 2-norm decays roughly exponentially as the approximation rank increases, and the error is less than 1.5% for $k > 50$ in all three color channels. As the compression-ratio formula shows, the ratio is generally higher for larger matrices.
Noise Reduction
Consider the SVD of a noisy matrix. The smallest singular values, which mainly represent noise, are discarded to de-noise the matrix. Segmenting the matrix into blocks and applying the SVD to each block separately gives better de-noising results than applying it to the entire matrix, since each block is affected by the noise differently.
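A minimal NumPy sketch of this block-wise de-noising idea (the block size, rank threshold, and test image here are illustrative assumptions, not values from the report):

```python
import numpy as np

def denoise_block(block, k):
    """Keep only the k largest singular values of one block."""
    U, s, Vh = np.linalg.svd(block, full_matrices=False)
    return U[:, :k] * s[:k] @ Vh[:k, :]

def denoise(image, block=16, k=3):
    """Apply rank-k truncation independently to each block."""
    m, n = image.shape
    out = np.empty_like(image, dtype=float)
    for i in range(0, m, block):
        for j in range(0, n, block):
            out[i:i+block, j:j+block] = denoise_block(
                image[i:i+block, j:j+block].astype(float), k)
    return out

# Low-rank test image plus noise: de-noising should reduce the error.
rng = np.random.default_rng(2)
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
den = denoise(noisy)
assert np.linalg.norm(den - clean) < np.linalg.norm(noisy - clean)
```

Because each block keeps only its own few dominant singular directions, noise energy spread over the remaining directions is discarded block by block, which is why the segmented approach adapts to locally varying noise.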