Deblurring, or deconvolution, aims to recover the unknown image from the observed data based on the degradation model introduced above. When the convolution kernel is unknown, or only an estimate of it is available, the recovery problem is called blind deconvolution. Throughout this module, we assume that the kernel is known and that the noise is either Gaussian or impulsive. When the kernel equals the Dirac delta, the recovery becomes a pure denoising problem. In the rest of this section, we review TV-based variational models for image restoration and introduce the notation needed for the analysis.
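As an illustration, the degradation process described above (blurring followed by additive Gaussian noise) can be simulated as in the following sketch. The toy image, the normalized box kernel, the noise level, and the use of FFT-based periodic convolution are all assumptions made for this example, not specifics taken from the module:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "true" image: a bright square on a dark background (assumed example data).
u = np.zeros((64, 64))
u[24:40, 24:40] = 1.0

# A simple normalized 5x5 box-blur kernel, embedded in an image-sized array
# so that the convolution can be computed with FFTs under periodic boundaries.
k = np.zeros((64, 64))
k[:5, :5] = 1.0 / 25.0

# Periodic (circular) convolution: the blurred image K u, via the Fourier domain.
Ku = np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(k)))

# Additive Gaussian noise, then the blurry and noisy observation.
omega = 0.01 * rng.standard_normal(u.shape)
f = Ku + omega

print(f.shape)  # (64, 64)
```

Because the kernel sums to one, circular convolution preserves the total intensity of the image, which gives a quick sanity check on the simulation.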
The TV regularization was first proposed by Rudin, Osher and Fatemi for image denoising, and was subsequently extended to image deblurring. The TV of an image $u$ defined on a domain $\Omega$ is given by

$$\mathrm{TV}(u) = \int_\Omega |\nabla u|\, dx.$$
When $\nabla u$ does not exist, the TV is defined through a dual formulation, which is equivalent to the integral definition when $u$ is differentiable. We point out that, in practical computation, discrete forms of the regularization are always used, in which differential operators are replaced by certain finite difference operators. We refer to TV regularization and its variants as TV-like regularization. In comparison to Tikhonov-like regularization, the homogeneous penalty on image smoothness in TV-like regularization better preserves sharp edges and object boundaries, which are usually the most important features to recover. Variational models combining TV regularization with an $\ell_2$ fidelity term have been widely studied in image restoration. For an $\ell_1$ fidelity term with TV regularization, geometric properties of the model have also been analyzed. The superiority of TV over Tikhonov-like regularization for recovering images containing piecewise smooth objects has likewise been established in the literature.
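To make the discrete form concrete, the following sketch computes an isotropic discrete TV using forward finite differences. The boundary handling (repeating the last row and column so that the boundary differences vanish) is one common convention, assumed here for illustration:

```python
import numpy as np

def total_variation(U):
    """Discrete isotropic TV with forward differences and replicated
    boundary handling: a minimal sketch, not the module's exact scheme."""
    # Forward differences in the vertical and horizontal directions;
    # appending the last row/column makes the boundary differences zero.
    dv = np.diff(U, axis=0, append=U[-1:, :])
    dh = np.diff(U, axis=1, append=U[:, -1:])
    # Sum over pixels of the Euclidean norm of the local gradient.
    return np.sum(np.sqrt(dv**2 + dh**2))

# A piecewise-constant image: TV equals the jump height times the
# length of the discrete edge set, as expected for a sharp edge.
U = np.zeros((8, 8))
U[:, 4:] = 1.0
print(total_variation(U))  # 8.0: a unit jump crossed by 8 rows
```

For such piecewise-constant images the TV depends only on the jump, not on how smooth the flat regions are, which is exactly the edge-preserving behavior described above.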
Besides Tikhonov and TV-like regularization, there are other well-studied regularizers in the literature, e.g. the Mumford-Shah regularization. In this module, we concentrate on TV-like regularization. We derive fast algorithms, study their convergence, and examine their performance.
As before, we let $\|\cdot\|$ denote the 2-norm. In practice, we always discretize an image defined on the domain $\Omega$ and vectorize the two-dimensional digitized image into a long one-dimensional vector. We assume that $\Omega$ is a square region in $\mathbb{R}^2$. Specifically, we first discretize the image into a digital image represented by an $n \times n$ matrix $U$. Then we vectorize $U$ column by column into a vector $u \in \mathbb{R}^{n^2}$, i.e.

$$u_k = U_{ij}, \qquad k = (j-1)n + i, \quad i, j = 1, \dots, n,$$

where $u_k$ denotes the $k$th component of $u$, $U_{ij}$ is the component of $U$ at the $i$th row and $j$th column, and the index $k$ is determined by $i$ and $j$. Other quantities, such as the convolution kernel, the additive noise, and the observation, are all discretized correspondingly. We now present the discrete forms of the previously presented equations. The discrete form of the degradation model is

$$f = Ku + \omega,$$
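The column-major vectorization and its index map can be checked numerically, as in this short sketch; the sample image and the chosen pixel indices are arbitrary:

```python
import numpy as np

# Column-major (Fortran-order) vectorization of an n-by-n image,
# matching u_k = U_{ij} with k = (j-1)*n + i (1-based indices).
n = 4
U = np.arange(n * n).reshape(n, n)   # toy image with distinct entries
u = U.flatten(order="F")             # stack the columns of U on top of each other

# Check the index map for a sample pixel (i, j), here 1-based (3, 2):
i, j = 3, 2
k = (j - 1) * n + i
print(u[k - 1] == U[i - 1, j - 1])  # True
```

NumPy's `order="F"` flag performs exactly the column-by-column stacking used in the text, so no explicit loop over columns is needed.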
where in this case $u$, $\omega$, and $f$ are all vectors representing, respectively, the discrete forms of the original image, the additive noise, and the blurry and noisy observation, and $K$ is a convolution matrix representing the kernel. The gradient at pixel $i$ is replaced by a certain first-order finite difference. Let $D_i \in \mathbb{R}^{2 \times n^2}$ be a first-order local finite difference matrix at pixel $i$ in the horizontal and vertical directions. For example, when the forward finite difference is used (with suitable adjustments at the boundary), we have

$$D_i u = \begin{pmatrix} u_{i+n} - u_i \\ u_{i+1} - u_i \end{pmatrix},$$

since under column-major vectorization the horizontal neighbor of pixel $i$ is pixel $i+n$ and the vertical neighbor is pixel $i+1$.
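The neighbor offsets implied by column-major vectorization can be verified numerically. The sketch below assumes, purely for illustration, periodic boundary conditions, so that `np.roll` wraps the indices at the image edges:

```python
import numpy as np

# Forward finite differences on the vectorized image: a sketch assuming
# periodic boundaries (np.roll wraps around at the edges).
n = 4
U = np.arange(n * n, dtype=float).reshape(n, n)
u = U.flatten(order="F")   # column-major vectorization

# Under column-major ordering, the horizontal neighbor of pixel i is
# i + n and the vertical neighbor is i + 1, so the two rows of every
# local matrix D_i stack into two global difference vectors:
Dh_u = np.roll(u, -n) - u   # horizontal component of D_i u, all i at once
Dv_u = np.roll(u, -1) - u   # vertical component of D_i u, all i at once

# Interior pixels agree with differences taken directly on the 2-D image.
print(Dh_u[0] == U[0, 1] - U[0, 0])  # True
print(Dv_u[0] == U[1, 0] - U[0, 0])  # True
```

Computing all pixel differences at once, rather than forming each small $D_i$ explicitly, is the usual way these operators are applied in practice.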