Rudin Osher Fatemi
North-Holland
A constrained optimization type of numerical algorithm for removing noise from images is presented. The total
variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed
using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time
dependent partial differential equation on a manifold determined by the constraints. As t → ∞ the solution converges to a
steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be
state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image. The technique could
be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of
the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto
the constraint set.
time dependent nonlinear PDE, where the constraints are determined by the noise statistics.

Traditional methods attempt to reduce/remove the noise component prior to further image processing operations. This is the approach taken in this paper. However, the same TV/L^1 philosophy can be used to design hybrid algorithms combining denoising with other noise sensitive image processing tasks.

2. Nonlinear partial differential equations based denoising algorithms

Let the observed intensity function u_0(x, y) denote the pixel values of a noisy image for x, y ∈ Ω. Let u(x, y) denote the desired clean image, so

$$u_0(x, y) = u(x, y) + n(x, y), \qquad (2.1)$$

where n is the additive noise. We, of course, wish to reconstruct u from u_0. Most conventional variational methods involve a least squares L^2 fit, because this leads to linear equations. The first attempt along these lines was made by Phillips [2] and later refined by Twomey [3,4] in the one-dimensional case. In our two-dimensional continuous framework, their constrained minimization problem is

$$\text{minimize} \int_\Omega (u_{xx} + u_{yy})^2 \, dx \, dy, \qquad (2.2a)$$

subject to constraints involving the mean,

$$\int_\Omega u \, dx \, dy = \int_\Omega u_0 \, dx \, dy, \qquad (2.2b)$$

and the standard deviation,

$$\int_\Omega (u - u_0)^2 \, dx \, dy = \sigma^2. \qquad (2.2c)$$

The resulting linear system is now easy to solve using modern numerical linear algebra. However, the results are again disappointing (but better than the MEM with the same constraints) - see e.g. ref. [5].

The L^1 norm is usually avoided, since the variation of expressions like ∫ |u| dx produces singular distributions as coefficients (e.g. δ functions) which cannot be handled in a purely algebraic framework. However, if L^2 and L^1 approximations are put side by side on a computer screen, it is clear that the L^1 approximation looks better than the "same" L^2 approximation. The "same" means subject to the same constraints. This may be at least partly psychological; however, it is well known in shock calculations that the L^1 norm of the gradient is the appropriate space. This is basically the space of functions of bounded total variation: BV. For free, we get the removal of spurious oscillations, while sharp signals are preserved in this space.

In ref. [6] the first author introduced a novel image enhancement technique, called the shock filter. It has an analogy with shock wave calculations in computational fluid mechanics. The formation of discontinuities without oscillations and the relevance of the TV norm were explored there.

In a paper written by the first two authors [7], the concept of total variation preserving enhancement was further developed. Finite difference schemes were developed there which were used to enhance mildly blurred images significantly while preserving the total variation of the original image.

Additionally, in ref. [8], Alvarez, Lions and Morel devised an interesting stable image restoration algorithm based on mean curvature motion; see also ref. [9]. The mean curvature is just the Euler-Lagrange derivative of the total variation.

We therefore state that the space of BV functions is the proper class for many basic image processing tasks.

Thus, our constrained minimization problem is:

$$\text{minimize} \int_\Omega \sqrt{u_x^2 + u_y^2} \, dx \, dy, \qquad (2.3a)$$

subject to constraints involving u_0.
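In discrete form the objects in (2.3) are simple to evaluate. The following NumPy sketch is our own illustration, not code from the paper; it assumes a uniform grid spacing h, forward differences, and the ½ normalization of the variance constraint (2.3c):

```python
import numpy as np

def total_variation(u, h=1.0):
    """Discrete analogue of (2.3a): integral of sqrt(u_x^2 + u_y^2)."""
    ux = np.diff(u, axis=1) / h              # forward difference in x
    uy = np.diff(u, axis=0) / h              # forward difference in y
    # restrict both to the common (N-1) x (M-1) block so they align
    return np.sum(np.sqrt(ux[:-1, :] ** 2 + uy[:, :-1] ** 2)) * h * h

def constraint_residuals(u, u0, sigma, h=1.0):
    """Residuals of the mean constraint (2.3b) and the noise constraint (2.3c)."""
    mean_gap = (u.sum() - u0.sum()) * h * h
    noise_gap = 0.5 * np.sum((u - u0) ** 2) * h * h - sigma ** 2
    return mean_gap, noise_gap
```

For a piecewise constant image the total variation equals the jump height times the length of the edge, which is why spurious oscillations are penalized while sharp edges are not.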
L.I. Rudin et al. / Noise removal algorithms 261
In our work so far we have taken the same two constraints as above:

$$\int_\Omega u \, dx \, dy = \int_\Omega u_0 \, dx \, dy. \qquad (2.3b)$$

This constraint signifies the fact that the white noise n(x, y) in (2.1) is of zero mean, and

$$\int_\Omega \tfrac{1}{2}(u - u_0)^2 \, dx \, dy = \sigma^2, \qquad (2.3c)$$

where σ > 0 is given. The second constraint uses a priori information that the standard deviation of the noise n(x, y) is σ.

Thus we have one linear and one nonlinear constraint. The method is totally general as regards the number and shape of constraints.

We arrive at the Euler-Lagrange equations

$$0 = \frac{\partial}{\partial x}\left(\frac{u_x}{\sqrt{u_x^2 + u_y^2}}\right) + \frac{\partial}{\partial y}\left(\frac{u_y}{\sqrt{u_x^2 + u_y^2}}\right) - \lambda_1 - \lambda_2(u - u_0) \quad \text{in } \Omega, \text{ with} \qquad (2.4a)$$

$$\frac{\partial u}{\partial n} = 0 \quad \text{on the boundary of } \Omega = \partial\Omega. \qquad (2.4b)$$

The solution procedure uses a parabolic equation with time as an evolution parameter, or equivalently, the gradient descent method. This means that we solve

$$u_t = \frac{\partial}{\partial x}\left(\frac{u_x}{\sqrt{u_x^2 + u_y^2}}\right) + \frac{\partial}{\partial y}\left(\frac{u_y}{\sqrt{u_x^2 + u_y^2}}\right) - \lambda(u - u_0) \quad \text{for } t > 0, \ (x, y) \in \Omega, \qquad (2.5a)$$

with u(x, y, 0) given and ∂u/∂n = 0 on ∂Ω. As t increases, we approach a denoised version of our image.

We must compute λ(t). We merely multiply (2.5a) by (u − u_0) and integrate by parts over Ω. If steady state has been reached, the left side of (2.5a) vanishes. We then have

$$\lambda = -\frac{1}{2\sigma^2} \int_\Omega \left[ \sqrt{u_x^2 + u_y^2} - \frac{(u_0)_x u_x + (u_0)_y u_y}{\sqrt{u_x^2 + u_y^2}} \right] dx \, dy. \qquad (2.6)$$

This gives us a dynamic value λ(t), which appears to converge as t → ∞. The theoretical justification for this approach comes from the fact that it is merely the gradient-projection method of Rosen [14].

We again remark that (2.5a) with λ = 0 and the right part multiplied by |∇u| was used in ref. [8] as a model for smoothing and edge detection. Following ref. [9] we note that this equation moves each level curve of u normal to itself with normal velocity equal to the curvature of the level surface divided by the magnitude of the gradient of u. Our additional constraints are needed to prevent distortion and to obtain a nontrivial steady state.

We remark that Geman and Reynolds, in a very interesting paper [10], proposed minimizing various nonlinear functionals of the form

$$\int_\Omega \varphi\!\left(\sqrt{u_x^2 + u_y^2}\right) dx \, dy.$$

The numerical method in two spatial dimensions is a simple explicit finite difference scheme. Let

$$x_i = ih, \quad y_j = jh, \quad i, j = 0, 1, \ldots, N, \ Nh = 1, \qquad (2.7a)$$

$$t_n = n\,\Delta t, \quad n = 0, 1, \ldots, \qquad (2.7b)$$

and let u_{ij}^n denote the approximation to u(x_i, y_j, t_n), with Δ_+^x u_{ij} = u_{i+1,j} − u_{ij}, Δ_−^x u_{ij} = u_{ij} − u_{i−1,j}, and similarly in y. With

$$m(a, b) = \text{minmod}(a, b) = \left(\frac{\operatorname{sgn} a + \operatorname{sgn} b}{2}\right) \min(|a|, |b|),$$

the scheme is

$$u_{ij}^{n+1} = u_{ij}^n + \frac{\Delta t}{h}\left\{ \Delta_-^x\!\left[\frac{\Delta_+^x u_{ij}^n}{\left((\Delta_+^x u_{ij}^n)^2 + (m(\Delta_+^y u_{ij}^n, \Delta_-^y u_{ij}^n))^2\right)^{1/2}}\right] + \Delta_-^y\!\left[\frac{\Delta_+^y u_{ij}^n}{\left((\Delta_+^y u_{ij}^n)^2 + (m(\Delta_+^x u_{ij}^n, \Delta_-^x u_{ij}^n))^2\right)^{1/2}}\right]\right\} - \Delta t\, \lambda^n (u_{ij}^n - u_0(ih, jh)), \qquad (2.8a)$$

for i, j = 1, ..., N, with reflection at the boundary and with λ^n obtained from the discrete version of (2.6). The modified initial data are chosen so that the constraints are both satisfied initially, i.e. the perturbation φ has mean zero and L^2 norm one. A step size restriction is imposed for stability:

$$\frac{\Delta t}{h^2} \leq c. \qquad (2.9d)$$
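Assembled from (2.6)-(2.9), the whole time stepping fits in a few lines. The sketch below is our own NumPy rendering, not the authors' code: the reflecting boundary treatment, the small eps guarding the division by the gradient magnitude, and the parameter choices are illustrative assumptions, and λ^n is a direct discretization of (2.6):

```python
import numpy as np

def minmod(a, b):
    """m(a, b) in (2.8a): (sgn a + sgn b)/2 * min(|a|, |b|)."""
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def tv_denoise(u0, sigma, h, dt, steps, eps=1e-12):
    """Explicit scheme (2.8a) with the dynamic multiplier of (2.6)."""
    u0 = np.asarray(u0, dtype=float)
    u = u0.copy()
    p0 = np.pad(u0, 1, mode="edge")
    u0x = (p0[1:-1, 2:] - p0[1:-1, 1:-1]) / h     # forward differences of u0
    u0y = (p0[2:, 1:-1] - p0[1:-1, 1:-1]) / h
    for _ in range(steps):
        p = np.pad(u, 1, mode="edge")              # reflection at the boundary
        upx = (p[1:-1, 2:] - p[1:-1, 1:-1]) / h    # Delta_+^x u / h
        umx = (p[1:-1, 1:-1] - p[1:-1, :-2]) / h   # Delta_-^x u / h
        upy = (p[2:, 1:-1] - p[1:-1, 1:-1]) / h
        umy = (p[1:-1, 1:-1] - p[:-2, 1:-1]) / h
        # regularized fluxes u_x / |grad u| with minmod limiting, cf. (2.8a)
        fx = upx / np.sqrt(upx**2 + minmod(upy, umy)**2 + eps)
        fy = upy / np.sqrt(upy**2 + minmod(upx, umx)**2 + eps)
        # backward differences of the fluxes (zero flux through the boundary)
        div = ((fx - np.pad(fx, ((0, 0), (1, 0)))[:, :-1])
               + (fy - np.pad(fy, ((1, 0), (0, 0)))[:-1, :])) / h
        # dynamic Lagrange multiplier: discrete version of (2.6)
        gm = np.sqrt(upx**2 + upy**2 + eps)
        lam = -(h * h / (2.0 * sigma**2)) * np.sum(gm - (u0x * upx + u0y * upy) / gm)
        u = u + dt * (div - lam * (u - u0))
    return u
```

At steady state the divergence term balances λ(u − u_0), so λ acts as the Lagrange multiplier enforcing (2.3c); the restriction Δt/h² ≤ c of (2.9d) governs the choice of dt.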
3. Results
Fig. 1. (a) "Bars". (b) Plot of (a). (c) Plot of noisy "bars", SNR = 1.0. (d) Noisy "bars", SNR = 1.0. (e) Plot of the reconstruction from (d). (f) TV reconstruction from (d). (g) Plot of the reconstruction error.
Fig. 2. (a) Plot of fig. 1a plus noise, SNR = 0.5. (b) Noisy fig. 1a, SNR = 0.5. (c) Plot of the reconstruction from (b). (d) TV reconstruction from (b). (e) Plot of the reconstruction error.
Fig. 3. (a) "Resolution Chart". (b) Noisy "Resolution Chart", SNR = 1.0. (c) Wiener filter reconstruction from (b). (d) TV
reconstruction from (b).
from fig. 3a. This is a 38 by 38 pixel, 256 gray level, black and white original image. Fig. 1a shows the original signal. Fig. 1b shows its intensity plot. Fig. 1c shows the intensity of the noisy signal with additive Gaussian white noise, signal to noise ratio SNR = 1. Fig. 1d shows the noisy signal. Fig. 1e shows a graph of the recovered sharp signal and fig. 1f shows the recovered signal. Finally, fig. 1g shows the error, which is fairly "hollow": it is zero both within the original steps and also beyond a few pixels outside of them. Fig. 2a shows the intensity plot of a noisy signal when SNR = 0.5, twice as much Gaussian white noise as signal. Fig. 2b shows the noisy
Fig. 4. (a) "Airplane". (b) Noisy "Airplane", SNR = 1.0. (c) Wiener filter reconstruction from (b). (d) TV reconstruction from (b).
image. Fig. 2c shows the intensity plot of the recovered signal and fig. 2d shows the recovered image. Finally, fig. 2e shows the almost "hollow" error.

It appears that our denoising procedure beats the capability of the human eye - see figs. 1b, 2b and 2c.

The remaining figures are 256 gray level standard black and white images taken from the USC IPI image data base. Fig. 3a shows the original 256 × 256 pixel resolution chart. Fig. 3b shows the result of adding Gaussian white noise, SNR = 1. Fig. 3c shows the result of using a Wiener filter, where the power spectrum was estimated from fig. 3b. Finally, fig. 3d shows the result of our denoising algorithm. Notice fig. 3c has a
Fig. 5. (a) "Tank". (b) Noisy "Tank", SNR = 1.0. (c) Wiener filter reconstruction from (b). (d) TV reconstruction from (b).
lot of background noise, which makes it problematic for automatic processing. Fig. 4a shows a 256 × 256 airplane in the desert (clean image). Fig. 4b shows the result of adding Gaussian white noise, SNR = 1. Fig. 4c shows the result of a Wiener filter denoising, with the true power spectrum estimated from the noisy image via a moving average. Fig. 4d shows the result of a denoising via our algorithm. Fig. 5a shows the original 512 × 512 picture of a tank. Fig. 5b shows the result of adding Gaussian white noise, SNR = 1. Fig. 5c shows a Wiener filter denoising with spectrum estimates from fig. 5b. Fig. 5d shows our algorithm applied to the same window. Notice that the discontinuities are much clearer in the last case. Also, the Wiener restoration has oscillatory artifacts.

Our recent experiments indicate that the use of more constraints (information about the noise and the image) in this method will yield more details of the solution in our denoising procedure.

References

[1] B.R. Frieden, Restoring with maximum likelihood and maximum entropy, J. Opt. Soc. Am. 62 (1972) 511.
[2] D.L. Phillips, A technique for the numerical solution of certain integral equations of the first kind, J. ACM 9 (1962) 84.
[3] S. Twomey, On the numerical solution of Fredholm integral equations of the first kind by the inversion of the linear system produced by quadrature, J. ACM 10 (1963) 97.
[4] S. Twomey, The application of numerical filtering to the solution of integral equations encountered in indirect sensing measurements, J. Franklin Inst. 297 (1965) 95.
[5] B.R. Hunt, The application of constrained least squares estimation to image restoration by digital computer, IEEE Trans. Comput. 22 (1973) 805.
[6] L. Rudin, Images, numerical analysis of singularities and shock filters, Caltech, C.S. Dept. Report #TR:5250:87 (1987).
[7] S. Osher and L.I. Rudin, Feature oriented image enhancement using shock filters, SIAM J. Num. Anal. 27 (1990) 919.
[8] L. Alvarez, P.L. Lions and J.M. Morel, Image selective smoothing and edge detection by nonlinear diffusion, SIAM J. Num. Anal. 29 (1992) 845.
[9] S. Osher and J. Sethian, Fronts propagating with curvature dependent speed: Algorithms based on a Hamilton-Jacobi formulation, J. Comput. Phys. 79 (1988) 12.
[10] D. Geman and G. Reynolds, Constrained restoration and the recovery of discontinuities, preprint (1990).
[11] E. Fatemi, S. Osher and L.I. Rudin, Removing noise without excessive blurring, Cognitech Report #5 (12/89), delivered to DARPA US Army Missile Command under contract #DAAH01-89-C-0768.
[12] L.I. Rudin and S. Osher, Reconstruction and enhancement of signals using non-linear non-oscillatory variational methods, Cognitech Report #7 (3/90), delivered to DARPA US Army Missile Command under contract #DAAH01-89-C-0768.
[13] Y. Dodge, Statistical data analysis based on the L1 norm and related methods (North-Holland, Amsterdam, 1987).
[14] J.B. Rosen, The gradient projection method for nonlinear programming, Part II, nonlinear constraints, J. Soc. Indust. Appl. Math. 9 (1961) 514.