I created this tutorial and this Gizmo because I often have trouble remembering the various matrices. This guide collects the most popular articles about image manipulation in general.
You can download the Gizmo for Nuke here:
Download from Nukepedia
Useful Links
 http://beej.us/blog/data/convolution-image-processing
 http://web.pdx.edu/~jduh/courses/Archive/geog481w07/Students/Ludwig_ImageConvolution.pdf
 http://aishack.in/tutorials/image-convolution-examples/
 http://en.wikipedia.org/wiki/Kernel_(image_processing)
 http://lodev.org/cgtutor/filtering.html
 http://homepages.inf.ed.ac.uk/rbf/HIPR2/mean.htm
 http://code.tutsplus.com/tutorials/image-filtering-in-python--cms-29202
 http://setosa.io/ev/image-kernels/
 http://docs.gimp.org/en/plug-in-convmatrix.html
 http://studentguru.gr/b/jupiter/archive/2009/10/14/creating-an-image-processing-library-with-c-part-1
 http://www.songho.ca/dsp/convolution/convolution2d_example.html
 http://www.unit.eu/cours/videocommunication/Linear_filtering.pdf
Overview
In image processing, a kernel, convolution matrix, or mask is a small matrix used for blurring, sharpening, embossing, edge detection, and more. These effects are achieved by computing a convolution between the kernel and the image: each output pixel becomes a weighted sum of the input pixels around it, with the weights taken from the kernel.
You can use kernels of different sizes: 3x3, 5x5, 7x7, and so on. We will consider only 3x3 matrices; they are the most commonly used, and they are enough for all the effects covered here.
In the following formula, points are written as [y,x] instead of [x,y], simply because that is how the matrices are declared and used in the library. If we apply the convolution operation at a pixel (y,x) of the input image I, the resulting pixel in the output image O is:

O[y,x] = ( Σ(i=0..L-1) Σ(j=0..L-1) I[y + i - L/2, x + j - L/2] * F[i,j] ) / D + Offset

The filter F should be normalized (the sum of all weights equals 1); otherwise we divide O[y,x] by a divisor (or factor) D, where D is the sum of all coefficients in the filter matrix. Some filters, like edge-detection ones, have coefficients that sum to 0; in those cases we must avoid division by D = 0 and use D = 1 instead. Finally, sometimes we also add a constant value (Offset) to the result O[y,x].
So, in general, the final formula for computing a pixel at (y,x) is the one above, where:
I = image. The data value of the pixel at position i,j
F = filter matrix. The coefficient of the convolution kernel at position i,j
D = divisor. The sum of the coefficients of the convolution kernel, or 1 if that sum is 0
L = the dimension of the kernel (L = 3 for a 3x3 kernel)
First, flip the kernel in both the horizontal and vertical directions. Then move it over the input array. When the kernel is centered (aligned) exactly on the sample we are interested in, multiply the kernel values by the overlapping input values and sum the products. The output value at that sample is this sum, i.e. the convolution formula described above.
At http://www.songho.ca/dsp/convolution/convolution2d_example.html you can find one of the best examples of 2D convolution.
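The procedure just described can be sketched in a few lines of Python. This is a minimal illustration, not code from any particular library; the function name and signature are made up for this example, and out-of-bounds samples are simply treated as zero (real implementations offer several border policies).

```python
def convolve2d(image, kernel, divisor=1, offset=0):
    """Convolve a 2D list-of-lists image with a kernel, per the formula above."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    # Flip the kernel both horizontally and vertically (true convolution).
    flipped = [row[::-1] for row in kernel[::-1]]
    pad_y, pad_x = kh // 2, kw // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    yy, xx = y + i - pad_y, x + j - pad_x
                    # Samples outside the image contribute nothing.
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += image[yy][xx] * flipped[i][j]
            # Divide by D and add the constant Offset, as in the formula.
            out[y][x] = acc / divisor + offset
    return out
```

A handy sanity check: with the identity kernel (all zeros, 1 in the centre) the output equals the input.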
Here is a simple example showing how to apply a 3x3 edge-detection matrix to an image.
On the left is the image matrix: each pixel is marked with its value. The element at coordinates [2, 2] is the central element, shown in red. The kernel action area has a blue border. In the middle is the kernel and, on the right, the convolution result.
Here is what happened: the filter read, successively from left to right and from top to bottom, all the pixels of the kernel action area. It multiplied each of them by the corresponding kernel value and added the results. The initial pixel becomes: (47*0)+(22*1)+(25*0) + (52*1)+(51*4)+(50*1) + (35*0)+(47*1)+(49*0) = 375. (The filter doesn't work on the image itself but on a copy.)
The result of the previous calculation is divided by the divisor. You will usually use 1, which leaves the result unchanged, or 9 or 25 (depending on the matrix size), which gives the average of the pixel values. Here the divisor is the sum of the coefficients: 0 + 1 + 0 + 1 + 4 + 1 + 0 + 1 + 0 = 8.
The offset is added to the division result. This is useful if the result may be negative. The offset itself may also be negative. In this case the offset is 0.
So the final value of the pixel is 375 / divisor = 375 / 8 = 46.875, which rounds to 47.
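The arithmetic of this worked example can be checked with a short Python snippet (values copied from the example above):

```python
# The 3x3 kernel action area and the kernel from the worked example.
area   = [[47, 22, 25],
          [52, 51, 50],
          [35, 47, 49]]
kernel = [[0, 1, 0],
          [1, 4, 1],
          [0, 1, 0]]

# Multiply each pixel by the corresponding kernel value and add the results.
weighted = sum(area[i][j] * kernel[i][j] for i in range(3) for j in range(3))
# The divisor is the sum of the kernel coefficients.
divisor = sum(sum(row) for row in kernel)
result = weighted / divisor

print(weighted)       # 375
print(divisor)        # 8
print(round(result))  # 47
```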
In Nuke
In Nuke you can introduce a Matrix Node from here: Filter → Matrix... and select the dimension of the matrix. In this case 3x3.
If Normalize is deselected, the pixel value in the example above will be 375; if it is selected, the result is normalized, i.e. divided by the divisor, giving 46.875.
Wherever you find this text field, you can copy-paste the values directly into the Matrix 3x3 node in Nuke.
Matrices
 Original Image
 01. Identity
The identity function is a function that always returns the same value that was used as its argument; in equations, f(x) = x.
The function that assigns every real number to itself is called the identity function and is usually denoted by I. So, the function f: R → R defined by f(x) = x for all x in R is the identity function. The domain and range of the identity function are both equal to R. The identity function is bijective (both injective and surjective). For example, the identity function on a set A is the function that does nothing to each element of A. Under composition, an identity function is "neutral": if f is any function from X to Y, then f ∘ I = f and I ∘ f = f.
https://en.wikipedia.org/wiki/Identity_function
0  0  0 
0  1  0 
0  0  0 
 02. Smoothing - 3x3 convolution kernel
https://en.wikipedia.org/wiki/Box_blur
1  1  1 
1  1  1 
1  1  1 
 03. Gaussian Blur
A Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function. It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination. Gaussian smoothing is also used as a preprocessing stage in computer vision algorithms in order to enhance image structures at different scales—see scale-space representation and scale-space implementation.
Mathematically, applying a Gaussian blur to an image is the same as convolving the image with a Gaussian function. This is also known as a two-dimensional Weierstrass transform. By contrast, convolving by a circle (i.e., a circular box blur) would more accurately reproduce the bokeh effect. Since the Fourier transform of a Gaussian is another Gaussian, applying a Gaussian blur has the effect of reducing the image's high-frequency components; a Gaussian blur is thus a low-pass filter.
https://en.wikipedia.org/wiki/Gaussian_blur
0  1  0 
1  4  1 
0  1  0 
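Note that these weights sum to 8, so the kernel must be divided by 8 (or Normalize enabled in Nuke) to preserve overall brightness. A quick Python check:

```python
# The 3x3 Gaussian-approximation kernel from above; its weights sum
# to 8, so dividing every weight by 8 normalizes the filter.
kernel = [[0, 1, 0],
          [1, 4, 1],
          [0, 1, 0]]
total = sum(sum(row) for row in kernel)
normalized = [[w / total for w in row] for row in kernel]
print(total)             # 8
print(normalized[1][1])  # 0.5
```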
 04. Blur
https://en.wikipedia.org/wiki/Kernel_(image_processing)
1/16 * 
1  2  1 
2  4  2 
1  2  1 
 05. Motion Blur
To understand this convolution matrix, look at the identity matrix. The identity matrix (sometimes ambiguously called a unit matrix) of size n is the n × n square matrix with ones on the main diagonal and zeros elsewhere. It is denoted by In, or simply by I if the size is immaterial or can be trivially determined from the context. Used as a convolution kernel, this diagonal of ones smears each pixel along the diagonal, producing a diagonal motion-blur effect.
https://en.wikipedia.org/wiki/Identity_matrix
1  0  0 
0  1  0 
0  0  1 
 06. Average / Mean filter
Mean filtering is a simple, intuitive and easy to implement method of smoothing images, i.e. reducing the amount of intensity variation between one pixel and the next. It is often used to reduce noise in images.
The idea of mean filtering is simply to replace each pixel value in an image with the mean ('average') value of its neighbors, including itself. This has the effect of eliminating pixel values which are unrepresentative of their surroundings. Mean filtering is usually thought of as a convolution filter. Like other convolutions it is based around a kernel, which represents the shape and size of the neighborhood to be sampled when calculating the mean. Often a 3×3 square kernel is used, although larger kernels (e.g. 5×5 squares) can be used for more severe smoothing. (Note that a small kernel can be applied more than once in order to produce a similar but not identical effect as a single pass with a large kernel.)
https://homepages.inf.ed.ac.uk/rbf/HIPR2/mean.htm
https://www.cs.auckland.ac.nz/courses/compsci373s1c/PatricesLectures/Image%20Filtering.pdf
http://matlabtricks.com/post-11/moving-average-by-convolution
1/9  1/9  1/9 
1/9  1/9  1/9 
1/9  1/9  1/9 
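Because every weight is 1/9, convolving with this kernel just averages the 3x3 neighborhood. A minimal Python illustration on a single neighborhood:

```python
# Mean filter applied to one 3x3 neighborhood: each 1/9 weight
# contributes equally, so the result is the average of the nine pixels.
neighborhood = [[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]]
mean = sum(sum(row) for row in neighborhood) / 9
print(mean)  # 50.0
```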
 07. Antialiasing filter (1)
In digital signal processing, spatial antialiasing is the technique of minimizing the distortion artifacts known as aliasing when representing a high-resolution image at a lower resolution. Antialiasing is used in digital photography, computer graphics, digital audio, and many other applications.
In computer graphics, antialiasing improves the appearance of polygon edges, so they are not "jagged" but are smoothed out on the screen. However, it incurs a performance cost for the graphics card and uses more video memory. The level of antialiasing determines how smooth polygon edges are (and how much video memory it consumes).
http://www.nukepedia.com/gizmos/filter/antialiasingfilter
https://en.wikipedia.org/wiki/Spatial_antialiasing
http://cs.boisestate.edu/~alark/cs464/lectures/AntiAliasing.pdf
http://community.foundry.com/discuss/topic/108394
0  1  0 
1  2  1 
0  1  0 
 08. Antialiasing filter (2)
1  2  1 
2  4  2 
1  2  1 
 09. Sharpen
The sharpen kernel emphasizes differences in adjacent pixel values. This makes the image look more vivid.
Sharpening enhances the definition of edges in an image. Whether your images come from a digital camera or a scanner, most images can benefit from sharpening. The degree of sharpening needed varies depending on the quality of the digital camera or scanner. Keep in mind that sharpening cannot correct a severely blurred image.
0  -1  0 
-1  5  -1 
0  -1  0 
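A property worth noting: the sharpen kernel's coefficients sum to 1, so perfectly flat regions pass through unchanged while local differences are amplified. A small Python check on a flat patch:

```python
# The sharpen kernel's weights sum to 1 (5 - 4 = 1), so a perfectly
# flat 3x3 patch comes out unchanged.
flat = [[80, 80, 80],
        [80, 80, 80],
        [80, 80, 80]]
kernel = [[ 0, -1,  0],
          [-1,  5, -1],
          [ 0, -1,  0]]
out = sum(flat[i][j] * kernel[i][j] for i in range(3) for j in range(3))
print(out)  # 80
```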
 10. Intensified Sharpen
-1  -1  -1 
-1  9  -1 
-1  -1  -1 
 11. Edge Detect
Edge detection includes a variety of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The same problem of finding discontinuities in one-dimensional signals is known as step detection, and the problem of finding signal discontinuities over time is known as change detection. Edge detection is a fundamental tool in image processing, machine vision and computer vision, particularly in the areas of feature detection and feature extraction.
The purpose of detecting sharp changes in image brightness is to capture important events and changes in properties of the world. It can be shown that under rather general assumptions for an image formation model, discontinuities in image brightness are likely to correspond to: discontinuities in depth, discontinuities in surface orientation, changes in material properties and variations in scene illumination.
https://en.wikipedia.org/wiki/Edge_detection
0  -1  0 
-1  4  -1 
0  -1  0 
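Edge-detection kernels have coefficients that sum to 0, so a perfectly flat region produces no response at all; only brightness changes survive. A small Python check, using the common Laplacian-style form of the kernel:

```python
# Laplacian-style edge-detection kernel: weights sum to 0, so a flat
# region gives a response of exactly 0.
flat = [[50, 50, 50],
        [50, 50, 50],
        [50, 50, 50]]
kernel = [[ 0, -1,  0],
          [-1,  4, -1],
          [ 0, -1,  0]]
flat_response = sum(flat[i][j] * kernel[i][j] for i in range(3) for j in range(3))
print(flat_response)  # 0

# A patch with a bright centre produces a strong response.
spot = [[50, 50, 50],
        [50, 90, 50],
        [50, 50, 50]]
spot_response = sum(spot[i][j] * kernel[i][j] for i in range(3) for j in range(3))
print(spot_response)  # 160
```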
 12. Edge Enhance
Edge enhancement is an image processing filter that enhances the edge contrast of an image or video in an attempt to improve its acutance (apparent sharpness).
The filter works by identifying sharp edge boundaries in the image, such as the edge between a subject and a background of a contrasting color, and increasing the image contrast in the area immediately around the edge. This has the effect of creating subtle bright and dark highlights on either side of any edges in the image, called overshoot and undershoot, leading the edge to look more defined when viewed from a typical viewing distance.
https://en.wikipedia.org/wiki/Edge_enhancement
https://softwarebydefault.com/tag/edgeenhance/
0  0  0 
-1  1  0 
0  0  0 
 13. Emboss
The emboss kernel (similar to the Sobel kernel, and the two names are sometimes used interchangeably) gives the illusion of depth by emphasizing the differences of pixels in a given direction; in this case, along a line from the top left to the bottom right.
Image embossing is a computer graphics technique in which each pixel of an image is replaced either by a highlight or a shadow, depending on light/dark boundaries on the original image. Low contrast areas are replaced by a gray background. The filtered image will represent the rate of color change at each location of the original image. Applying an embossing filter to an image often results in an image resembling a paper or metal embossing of the original image, hence the name.
https://en.wikipedia.org/wiki/Image_embossing
https://docs.gimp.org/en/pluginemboss.html
-2  -1  0 
-1  1  1 
0  1  2 
 14. Outline
An outline kernel (also called an "edge" kernel) is used to highlight large differences in pixel values. A pixel whose neighbors have close to the same intensity will appear black in the new image, while one whose neighbors differ strongly will appear white.
-1  -1  -1 
-1  8  -1 
-1  -1  -1 
 15. Top sobel
Sobel kernels are used to show only the differences in adjacent pixel values in a particular direction.
The Sobel operator, sometimes called the Sobel–Feldman operator or Sobel filter, is used in image processing and computer vision, particularly within edge detection algorithms, where it creates an image emphasising edges.
https://en.wikipedia.org/wiki/Sobel_operator
http://homepages.inf.ed.ac.uk/rbf/HIPR2/sobel.htm
1  2  1 
0  0  0 
-1  -2  -1 
 16. Bottom sobel
-1  -2  -1 
0  0  0 
1  2  1 
 17. Right sobel
-1  0  1 
-2  0  2 
-1  0  1 
 18. Left sobel
1  0  -1 
2  0  -2 
1  0  -1 
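In practice the horizontal and vertical Sobel responses are usually combined into a gradient magnitude, sqrt(Gx² + Gy²). A Python sketch on a 3x3 patch containing a vertical step edge (the horizontal-difference kernel responds strongly, the vertical one not at all):

```python
import math

# A vertical step edge: dark on the left, bright on the right.
patch = [[0, 0, 255],
         [0, 0, 255],
         [0, 0, 255]]
# Sobel kernels for horizontal and vertical differences.
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
sobel_y = [[ 1,  2,  1],
           [ 0,  0,  0],
           [-1, -2, -1]]

gx = sum(patch[i][j] * sobel_x[i][j] for i in range(3) for j in range(3))
gy = sum(patch[i][j] * sobel_y[i][j] for i in range(3) for j in range(3))
magnitude = math.hypot(gx, gy)
print(gx, gy)     # 1020 0
print(magnitude)  # 1020.0
```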
 19. Roberts Cross convolution kernels
The Roberts Cross operator performs a simple, quick-to-compute, 2D spatial gradient measurement on an image. It thus highlights regions of high spatial frequency, which often correspond to edges. In its most common usage, the input to the operator is a grayscale image, as is the output. Pixel values at each point in the output represent the estimated absolute magnitude of the spatial gradient of the input image at that point.
In theory, the operator consists of a pair of 2×2 convolution kernels as shown below. One kernel is simply the other rotated by 90°. This is very similar to the Sobel operator.
http://homepages.inf.ed.ac.uk/rbf/HIPR2/roberts.htm
https://en.wikipedia.org/wiki/Roberts_cross
1  0 
0  -1 

0  1 
-1  0 
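A minimal Python sketch of the Roberts Cross on a single 2x2 window, combining the two diagonal kernel responses into a gradient magnitude:

```python
import math

# One 2x2 window of a grayscale image, with a dark corner.
window = [[100, 100],
          [100,   0]]
# The two Roberts Cross kernels, applied directly.
gx = window[0][0] * 1 + window[1][1] * -1  # kernel [[1, 0], [0, -1]]
gy = window[0][1] * 1 + window[1][0] * -1  # kernel [[0, 1], [-1, 0]]
magnitude = math.hypot(gx, gy)
print(gx, gy)     # 100 0
print(magnitude)  # 100.0
```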