Estimating the Degradation Function

When restoring a degraded image, a critical first step is to estimate the degradation function that caused the loss of quality. There are three principal ways to estimate this function: observation, experimentation, and mathematical modeling. Because the true degradation function is rarely known exactly, restoration based on an estimate obtained through one of these methods is often called blind deconvolution.

1. Estimation by Image Observation

When only the degraded image is available, without prior knowledge of the degradation function (denoted as H), one approach to estimate H involves gathering information directly from the image itself. This method assumes that the degradation occurred through a linear, position-invariant process (meaning that the distortion affects all parts of the image similarly and consistently).

If the image appears blurred, for example, the observer can select a small, well-defined portion of the image that includes distinct structures such as objects or high-contrast areas (i.e., parts of the image where the signal is strong and noise is minimal). This is important because high-contrast regions allow for more accurate analysis by minimizing the interference of noise in the estimation process.

The next step is to process this selected subimage to achieve a result that is as close as possible to the unblurred or original version of the image. Techniques like sharpening filters can be applied to this small area, and in some cases, manual adjustments may be made to improve clarity.

Let the selected subimage from the degraded image be represented by g_s(x, y), and let the processed subimage, which serves as an estimate of the original unblurred image, be denoted as f_s(x, y).

Equation-Based Estimation

From this point, assuming noise is negligible due to the careful selection of a strong-signal region, we can use the following relationship to estimate the degradation function for that area:

H_s(u, v) = \frac{G_s(u, v)}{F_s(u, v)}

Where:

  • H_s(u, v) is the degradation function for the subimage in the frequency domain.
  • G_s(u, v) represents the Fourier transform of the degraded subimage.
  • F_s(u, v) is the Fourier transform of the processed (sharpened or restored) subimage.

Once this estimation is made, the characteristics of the resulting degradation function can be extended to the entire image. This relies on the assumption of position invariance in the degradation process, meaning the degradation function can be applied consistently across the entire image.
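
As a minimal sketch of this computation (assuming NumPy; the function name and the regularizing constant eps are illustrative additions, not part of the equation above), the ratio can be evaluated directly with a two-dimensional FFT:

```python
import numpy as np

def estimate_hs(g_sub, f_sub, eps=1e-8):
    # Fourier transforms of the degraded subimage g_s(x, y) and of the
    # manually processed (deblurred) estimate f_s(x, y).
    Gs = np.fft.fft2(g_sub)
    Fs = np.fft.fft2(f_sub)
    # H_s(u, v) = G_s(u, v) / F_s(u, v).  The eps term is an added
    # safeguard against division by near-zero spectral values, which
    # the idealized equation ignores.
    return Gs / (Fs + eps)
```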

Example: Gaussian Shape Estimation

For example, if a radial plot of H_s(u, v) resembles a Gaussian curve, that observation can be used to construct a degradation function for the full image that retains the same basic shape. The extended function is then used in the overall restoration process, with methods such as Wiener filtering or inverse filtering that are designed to reverse the degradation.
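
A rough sketch of how such an extension might be coded, assuming the radial profile really is a decaying Gaussian (the function name, the normalized-frequency grid, and the validity threshold are all illustrative choices):

```python
import numpy as np

def extend_gaussian_h(hs, full_shape):
    # Normalized frequency grid (cycles per sample) for the subimage
    # spectrum, so the fitted width is independent of image size.
    u = np.fft.fftfreq(hs.shape[0])[:, None]
    v = np.fft.fftfreq(hs.shape[1])[None, :]
    r2 = u**2 + v**2

    # Least-squares fit of log|H_s| = -r2 / (2 sigma^2): the model is
    # linear in r2, so a single slope determines sigma.  Assumes |H_s|
    # actually decays with frequency.
    mag = np.abs(hs)
    valid = mag > 1e-6                   # avoid taking log of ~0
    slope = np.sum(np.log(mag[valid]) * r2[valid]) / np.sum(r2[valid] ** 2)
    sigma2 = -1.0 / (2.0 * slope)

    # The same Gaussian shape evaluated on the full image's frequency grid.
    U = np.fft.fftfreq(full_shape[0])[:, None]
    V = np.fft.fftfreq(full_shape[1])[None, :]
    return np.exp(-(U**2 + V**2) / (2.0 * sigma2))
```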

Use in Specific Circumstances

The method of estimation by observation is a laborious and time-intensive process, making it suitable only for special cases, such as the restoration of historically significant images. It is generally not practical for large-scale or real-time applications but can be highly effective when attention to detail and precision are required.

2. Estimation by Modeling

[Figure: Illustration of the atmospheric turbulence model]

Degradation modeling has been instrumental in solving image restoration problems by mathematically representing the various factors that lead to image degradation. These models provide a framework for understanding how to reverse or mitigate the effects of degradation. In some cases, degradation models can even account for environmental factors, such as atmospheric turbulence, that significantly affect image quality.

Hufnagel and Stanley’s Atmospheric Turbulence Model

One of the most well-known models in this context is the Hufnagel and Stanley (1964) model, which addresses image degradation caused by atmospheric turbulence. The model is given by the equation:

H(u, v) = e^{-k(u^2 + v^2)^{5/6}}

Where:

  • H(u, v) is the degradation function in the frequency domain.
  • k is a constant based on the severity of the turbulence.
  • u and v represent the spatial frequency variables.

This equation closely resembles the Gaussian low-pass filter (LPF), the primary difference being the exponent of 5/6 in place of 1. In both cases the model describes a blurring effect whose severity depends on the environmental conditions. Varying the value of k simulates different degrees of turbulence, as illustrated in Figure 5.25 of Gonzalez and Woods:

  • k = 0.0025 (severe turbulence)
  • k = 0.001 (mild turbulence)
  • k = 0.00025 (low turbulence)

Each of these values simulates a different intensity of atmospheric turbulence, typically demonstrated on images of size 480 × 480 pixels. The model is often used to approximate how real-world atmospheric conditions degrade image quality.
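
A brief sketch of the model in NumPy (the function names are hypothetical; centering the frequency grid and shifting it back afterward is one common convention):

```python
import numpy as np

def turbulence_h(shape, k):
    # Centered frequency coordinates, matching the convention that
    # (u, v) are measured from the center of the spectrum.
    M, N = shape
    U, V = np.meshgrid(np.arange(M) - M // 2,
                       np.arange(N) - N // 2, indexing="ij")
    # Hufnagel-Stanley model: H(u, v) = exp(-k * (u^2 + v^2)^(5/6)).
    H = np.exp(-k * (U**2 + V**2) ** (5.0 / 6.0))
    # Shift back so H lines up with np.fft.fft2's uncentered layout.
    return np.fft.ifftshift(H)

def apply_turbulence(image, k=0.001):
    # Degrade the image by pointwise multiplication in the frequency domain.
    H = turbulence_h(image.shape, k)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
```

Calling apply_turbulence(image, k=0.0025) on a 480 × 480 image reproduces the severe-turbulence case listed above.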

Derivation of the Degradation Function for Motion Blur

Another critical form of degradation is motion blur, which occurs when an image is captured while the sensor or the object is in motion. To model this mathematically, we consider the case where the image undergoes uniform linear motion during the time of image acquisition.

Assume the image undergoes planar motion described by time-dependent displacement components x₀(t) and y₀(t). The blurred image is then obtained by integrating over the total exposure time T:

g(x, y) = \int_0^T f[x - x_0(t),\, y - y_0(t)]\, dt

Where:

  • g(x, y) is the blurred image.
  • f(x, y) is the original (undegraded) image.
  • x₀(t) and y₀(t) are the time-dependent displacement components of motion in the x and y directions.
  • T is the total exposure time.
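
The integral above can be approximated numerically by averaging shifted copies of the image over discrete time steps. The sketch below assumes uniform motion in the x-direction with total displacement a in pixels; the function name, step count, and circular boundary handling are illustrative choices:

```python
import numpy as np

def motion_blur_spatial(image, a=12, steps=64):
    # Discrete approximation of the exposure integral for uniform motion
    # x0(t) = a * t / T in the x-direction (y0(t) = 0).
    acc = np.zeros(image.shape, dtype=float)
    for i in range(steps):
        t = i / (steps - 1)            # normalized time t / T in [0, 1]
        dx = int(round(a * t))         # displacement x0(t) in pixels
        # np.roll wraps around at the borders -- a circular-boundary
        # assumption that matches the periodicity of the discrete FFT.
        acc += np.roll(image, dx, axis=1)
    # Averaging scales the integral by 1/T, keeping the overall
    # brightness of the blurred image unchanged.
    return acc / steps
```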

The corresponding Fourier transform of the blurred image is given by:

G(u, v) = \int_0^T F(u, v)\, e^{-j2\pi(u x_0(t) + v y_0(t))}\, dt

This equation relates the degraded image in the frequency domain, G(u, v), to the original image F(u, v) and the motion of the scene. Because F(u, v) does not depend on t, it can be moved outside the integral, giving G(u, v) = F(u, v) H(u, v), where the degradation function H(u, v) is defined as:

H(u, v) = \int_0^T e^{-j2\pi(u x_0(t) + v y_0(t))}\, dt

This degradation function captures the effect of motion blur on the image, and its exact form depends on the nature of the motion. For example, if the image undergoes uniform motion in the x-direction only, with x₀(t) = at/T and y₀(t) = 0, so that the image travels a total distance a during the exposure, the degradation function simplifies to:

H(u, v) = \frac{T}{\pi u a} \sin(\pi u a)\, e^{-j\pi u a}

This is a sinc-shaped function of u: its magnitude falls off as the spatial frequency u increases, and it vanishes entirely at u = n/a for every nonzero integer n. At those frequencies the blurred image retains no information about the original, which is why motion blur cannot be inverted exactly at those points.
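
A small sketch of this function in NumPy, assuming u is measured in cycles per pixel and a is the total displacement in pixels (the function name and parameter defaults are illustrative). Multiplying F(u, v) by this H(u, v) approximately reproduces the blur of the spatial-domain sketch shown earlier:

```python
import numpy as np

def motion_blur_h(shape, a=12.0, T=1.0):
    # Frequencies along x in cycles per pixel; a is the total
    # displacement in pixels, so the product u * a is dimensionless.
    M, N = shape
    u = np.fft.fftfreq(N)
    # np.sinc(x) = sin(pi x) / (pi x), so T * np.sinc(u * a) equals
    # T / (pi u a) * sin(pi u a) and evaluates cleanly to T at u = 0.
    h_row = T * np.sinc(u * a) * np.exp(-1j * np.pi * u * a)
    # The function varies only with u, so replicate the row over all v.
    return np.tile(h_row, (M, 1))
```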

References

  1. Hufnagel, R. E., & Stanley, N. R. (1964). "Modulation Transfer Function Associated with Image Transmission through Turbulent Media." Journal of the Optical Society of America, 54(1), 52-61.
  2. Gonzalez, R. C., & Woods, R. E. (2008). Digital Image Processing (3rd ed.). Prentice Hall.
  3. Wiener, N. (1949). Extrapolation, Interpolation, and Smoothing of Stationary Time Series. MIT Press.
