Image processing captures objects on a two-dimensional plane as numerical data and analyzes that data. As a result, image processing is widely used in automated inspection as an alternative to visual inspection. This section introduces CCD (pixel) sensors, the foundation of image processing, along with the basics of image processing.
When a picture is taken with a camera, light reflected from the target passes through the lens and forms an image on the sensor. When a pixel on the sensor receives this light, it generates an electric charge corresponding to the light intensity. This charge is then converted into an electric signal, which yields the light intensity (gray level) received by each pixel.
Use of pixel data for image processing
Individual pixel data (in the case of a standard black-and-white camera)
In many vision sensors, each pixel outputs its data in 256 levels (8 bits) according to the light intensity it receives. In monochrome (black-and-white) processing, black is defined as “0” and white as “255”, so the light intensity received by each pixel can be converted into numerical data. This means that every pixel of a CCD has a value between 0 (black) and 255 (white). For example, a gray that is an exactly even mixture of black and white is converted into approximately “127”.
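As a rough sketch of this quantization, the following Python snippet maps a normalized light intensity to an 8-bit gray level; the 0.0–1.0 normalization range and the function name are illustrative assumptions, not taken from any particular sensor.

```python
# Minimal sketch of 8-bit quantization: a pixel's light intensity,
# normalized here to the range 0.0 (no light) to 1.0 (full brightness),
# is mapped to one of 256 gray levels (0 = black, 255 = white).

def intensity_to_gray_level(normalized_intensity: float) -> int:
    clamped = max(0.0, min(1.0, normalized_intensity))  # keep the value in a valid range
    return int(clamped * 255)

print(intensity_to_gray_level(0.0))  # 0   -> black
print(intensity_to_gray_level(0.5))  # 127 -> mid gray, as in the text above
print(intensity_to_gray_level(1.0))  # 255 -> white
```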
An image is a collection of 256-level data
Image data captured with a CCD is the collection of data from all the pixels that make up the CCD, with each pixel's data reproduced as one of 256 gray levels.
As in the example above, image data is represented by a value between 0 and 255 for each pixel. Image processing finds features in an image by applying a variety of calculation methods to this per-pixel numerical data, as shown below.
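To make this concrete, here is a minimal sketch in Python (using NumPy, which the article itself does not assume) that represents a tiny image as a two-dimensional array of 0–255 pixel values and applies one simple per-pixel calculation, binarization at a fixed threshold; the pixel values and the threshold of 128 are made up for illustration and stand in for only one of many possible calculation methods.

```python
import numpy as np

# A tiny 4 x 4 "image": each element is one pixel's 8-bit gray level
# (0 = black, 255 = white), mirroring the 256-level data described above.
image = np.array([
    [  0,  32,  64,  96],
    [ 32,  64,  96, 128],
    [ 64,  96, 128, 192],
    [ 96, 128, 192, 255],
], dtype=np.uint8)

# One simple per-pixel calculation: binarization.
# Pixels at or above the (hypothetical) threshold become white, the rest black.
THRESHOLD = 128
binary = np.where(image >= THRESHOLD, 255, 0).astype(np.uint8)

print(image.shape)                        # (4, 4) -> 16 pixels in total
print(f"average gray level: {image.mean():.1f}")
print(binary)                             # bright regions stand out as 255
```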