Neighborhood operation

From Wikipedia, the free encyclopedia

In computer vision and image processing, a neighborhood operation is a commonly used class of computations on image data, in which the data is processed according to the following pseudocode:

Visit each point p in the image data and do {
    N = a neighborhood or region of the image data around the point p
    result(p) = f(N)
}
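The loop above can be sketched in a few lines of Python. This is a minimal illustration, assuming a 2-D image stored as a list of lists and a square neighborhood of radius 1 (3×3); the result is computed only where the neighborhood fits inside the image:

```python
def neighborhood_op(image, f, radius=1):
    """Apply f to the (2*radius+1)^2 neighborhood around each interior point."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            # Gather the neighborhood N around the point p = (y, x).
            N = [image[y + dy][x + dx]
                 for dy in range(-radius, radius + 1)
                 for dx in range(-radius, radius + 1)]
            out[y][x] = f(N)
    return out

# Example: local mean over a 3x3 neighborhood.
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
neighborhood_op(img, lambda N: sum(N) / len(N))
```

Passing a different `f` (median, maximum, local variance, and so on) changes the operation without touching the traversal code.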

This general procedure can be applied to image data of arbitrary dimensionality. Also, the image data on which the operation is applied does not have to be defined in terms of intensity or color; it can be any type of information which is organized as a function of spatial (and possibly temporal) variables.

The result of applying a neighborhood operation to an image is again something which can be interpreted as an image; it has the same dimensionality as the original data. The value at each image point, however, does not have to be directly related to intensity or color. Instead it is an element of the range of the function f, which can be of arbitrary type.

Normally the neighborhood N is of fixed size and is a square (or a cube, depending on the dimensionality of the image data) centered on the point p. Also the function f is fixed, but may in some cases have parameters which can vary with p, see below.

In the simplest case, the neighborhood N may be only the single point p itself. This type of operation is often referred to as a point-wise operation.
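A hypothetical point-wise example is thresholding, where the function reads only the pixel itself:

```python
def threshold(image, level):
    # Point-wise operation: the neighborhood N is the single point p itself.
    return [[255 if v > level else 0 for v in row] for row in image]

threshold([[10, 200], [30, 250]], 128)  # -> [[0, 255], [0, 255]]
```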

Examples

The most common examples of a neighborhood operation use a fixed function f which in addition is linear, that is, the computation is a linear shift-invariant operation. In this case, the neighborhood operation corresponds to convolution. A typical example is convolution with a low-pass filter, where the result can be interpreted as local averages of the image data around each image point. Other examples are computations of local derivatives of the image data.
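In the linear case, f(N) is a fixed weighted sum over the neighborhood. A minimal sketch, using a 3×3 box (local-average) kernel and computing the output only where the kernel fully overlaps the image:

```python
def convolve(image, kernel):
    # 'Valid' convolution: output only where the kernel fits inside the image.
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            # f(N) is a fixed weighted sum over the neighborhood: a linear,
            # shift-invariant operation.
            s = sum(kernel[j][i] * image[y + j][x + i]
                    for j in range(kh) for i in range(kw))
            row.append(s)
        out.append(row)
    return out

box = [[1 / 9] * 3 for _ in range(3)]   # low-pass (local average) kernel
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
convolve(img, box)  # each output value is the local average of a 3x3 region
```

Swapping the box kernel for, say, a derivative kernel yields local derivatives with the same traversal.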

It is also rather common to use a fixed but non-linear function f. This includes median filtering and the computation of local variances. The Nagao–Matsuyama filter is an example of a complex local neighbourhood operation that uses variance as an indicator of the uniformity within a pixel group. The result is similar to convolution with a low-pass filter, with the added effect of preserving sharp edges.
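A non-linear choice of f, such as the median, fits the same pattern. A sketch over a 1-D signal for brevity, with the boundary samples left unchanged:

```python
from statistics import median

def median_filter_1d(signal, radius=1):
    # f(N) = median(N): non-linear, removes impulse noise while tending to
    # preserve step edges better than a linear average.
    out = list(signal)
    for i in range(radius, len(signal) - radius):
        out[i] = median(signal[i - radius:i + radius + 1])
    return out

median_filter_1d([1, 1, 9, 1, 1])  # the impulse 9 is removed
```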

There is also a class of neighborhood operations in which the function f has additional parameters which can vary with p:

Visit each point p in the image data and do {
    N = a neighborhood or region of the image data around the point p
    result(p) = f(N, parameters(p))
}

This implies that the result is not shift invariant. Examples are adaptive Wiener filters.
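A full adaptive Wiener filter is beyond the scope of a short sketch, but the pattern of position-dependent parameters can be shown with a toy example: a per-point weight (a hypothetical `weights` array, one entry per sample) blends each value with its local mean:

```python
def adaptive_smooth_1d(signal, weights):
    # parameters(p) = weights[p]: the function applied to the neighborhood
    # varies with the point p, so the result is not shift invariant.
    out = list(signal)
    for i in range(1, len(signal) - 1):
        local_mean = (signal[i - 1] + signal[i] + signal[i + 1]) / 3
        w = weights[i]                      # varies with the point p
        out[i] = w * local_mean + (1 - w) * signal[i]
    return out

# Smooth fully at index 1, not at all at index 2.
adaptive_smooth_1d([0, 3, 3, 0], [0, 1, 0, 0])
```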

Implementation aspects

The pseudocode given above suggests that a neighborhood operation is implemented in terms of an outer loop over all image points. However, since the results are independent, the image points can be visited in arbitrary order, or can even be processed in parallel. Furthermore, in the case of linear shift-invariant operations, the computation of result(p) at each point is a summation of products between the image data and the filter coefficients. The implementation of this neighborhood operation can then place the summation loop outside the loop over all image points.
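The reordered loop structure can be sketched for a 1-D signal: the outer loop runs over filter coefficients (taps) and the inner loop over image positions, accumulating into the output array:

```python
def convolve_tapwise(signal, kernel):
    # Summation loop moved outside the loop over image positions: one full
    # pass over the signal per filter coefficient, accumulating into out.
    n, k = len(signal), len(kernel)
    out = [0.0] * (n - k + 1)
    for j, coeff in enumerate(kernel):      # outer loop: filter taps
        for x in range(len(out)):           # inner loop: image positions
            out[x] += coeff * signal[x + j]
    return out

convolve_tapwise([1, 2, 3, 4], [0.5, 0.5])  # -> [1.5, 2.5, 3.5]
```

This ordering produces the same result as the point-major loop; it can be preferable when each pass maps well onto vectorized or parallel hardware.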

An important issue related to neighborhood operations is how to deal with the fact that the neighborhood N becomes more or less undefined for points p close to the edge or border of the image data. Several strategies have been proposed:

  • Compute the result only for points p for which the corresponding neighborhood is well-defined. This implies that the output image will be somewhat smaller than the input image.
  • Zero padding: Extend the input image sufficiently by adding extra points outside the original image which are set to zero. The loops over the image points described above visit only the original image points.
  • Border extension: Extend the input image sufficiently by adding extra points outside the original image which are set to the image value at the closest image point. The loops over the image points described above visit only the original image points.
  • Mirror extension: Extend the input image sufficiently by mirroring the image at the image boundaries. This method is less sensitive to local variations at the image boundary than border extension.
  • Wrapping: The image is tiled, so that going off one edge wraps around to the opposite side of the image. This method assumes that the image is largely homogeneous, for example a stochastic image texture without large textons.
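The four extension strategies can be illustrated for a 1-D signal (the 2-D case applies the same rule along each axis):

```python
def extend(signal, radius, mode):
    """Extend a 1-D signal by `radius` samples on each side."""
    if mode == "zero":
        # Zero padding: extra points are set to zero.
        left = right = [0] * radius
    elif mode == "border":
        # Border extension: repeat the closest original sample.
        left, right = [signal[0]] * radius, [signal[-1]] * radius
    elif mode == "mirror":
        # Mirror extension: reflect the signal at each boundary.
        left = signal[1:radius + 1][::-1]
        right = signal[-radius - 1:-1][::-1]
    elif mode == "wrap":
        # Wrapping: treat the signal as periodic (tiled).
        left, right = signal[-radius:], signal[:radius]
    else:
        raise ValueError(mode)
    return left + signal + right

s = [1, 2, 3, 4]
extend(s, 2, "zero")    # [0, 0, 1, 2, 3, 4, 0, 0]
extend(s, 2, "border")  # [1, 1, 1, 2, 3, 4, 4, 4]
extend(s, 2, "mirror")  # [3, 2, 1, 2, 3, 4, 3, 2]
extend(s, 2, "wrap")    # [3, 4, 1, 2, 3, 4, 1, 2]
```

After extension, the loops over image points described above visit only the original image points, and every neighborhood is well-defined.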
