Vision systems often exhibit brightness variations across the full image frame, for example due to uneven illumination, distortions in the optical path or pixel-to-pixel differences of the sensor. These variations negatively affect algorithms, such as object detection or code reading, that implement the tasks of a vision system.
“Flat-field correction is a technique used to improve quality in digital imaging. The goal is to remove artifacts from 2-D images that are caused by variations in the pixel-to-pixel sensitivity of the detector and/or by distortions in the optical path. It is a standard calibration procedure in everything from pocket digital cameras to giant telescopes.
Flat fielding refers to the process of compensating for different gains and dark currents in a detector. Once a detector has been appropriately flat-fielded, a uniform signal will create a uniform output (hence flat-field). This then means any further signal is due to the phenomenon being detected and not a systematic error.”
Wikipedia: https://en.wikipedia.org/wiki/Flat-field_correction [05.06.2018]
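The quoted definition can be made concrete with the standard flat-field formula: corrected = (raw − dark) · m / (light − dark), where m is the mean of the dark-subtracted light frame. The following sketch illustrates this math only; the function name flatFieldCorrect and the use of plain std::vector buffers are illustrative and not part of the Baumer GAPI API, which performs this correction internally.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative flat-field correction (not Baumer GAPI code):
// corrected = (raw - dark) * m / (light - dark),
// where m is the mean of the dark-subtracted light frame.
std::vector<double> flatFieldCorrect(const std::vector<double>& raw,
                                     const std::vector<double>& dark,
                                     const std::vector<double>& light) {
    // Mean of (light - dark), used to normalize the per-pixel gain
    double m = 0.0;
    for (std::size_t i = 0; i < raw.size(); ++i) {
        m += light[i] - dark[i];
    }
    m /= static_cast<double>(raw.size());

    std::vector<double> corrected(raw.size());
    for (std::size_t i = 0; i < raw.size(); ++i) {
        corrected[i] = (raw[i] - dark[i]) * m / (light[i] - dark[i]);
    }
    return corrected;
}
```

After this correction, a uniform signal on the sensor produces a uniform output, which is exactly the "flat field" property described above.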
It is important to understand the limitations of the algorithm. If the brightness difference across an image exceeds roughly 25 % to 30 %, the algorithm might not give the expected results; this indicates a problem with the choice of components, and you should consider changing components of the system.
For best results the measurements should be taken in the real environment of the complete vision system setup.
It is important to understand that only static effects can be reduced. This means that if you change the lens, the aperture or the light sources, you need to re-calibrate the system.
After setting up the vision system and choosing the desired aperture and light settings, the example guides you through creating two data sets: one for the darkest possible image (dark-field frame) and one for the brightest image (light frame).
The example takes several images for each set and averages them to reduce the influence of sensor noise. The averaged images are then used to calculate the data needed to correct images taken by the vision system.
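The averaging step can be sketched as a simple pixel-wise mean over the captured frames. The function averageFrames below is a hypothetical illustration, not part of the Baumer GAPI API:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative pixel-wise average of several frames of equal size,
// used to suppress random sensor noise in the reference images.
std::vector<double> averageFrames(const std::vector<std::vector<double>>& frames) {
    std::vector<double> avg(frames[0].size(), 0.0);
    for (const std::vector<double>& frame : frames) {
        for (std::size_t i = 0; i < avg.size(); ++i) {
            avg[i] += frame[i];
        }
    }
    for (double& v : avg) {
        v /= static_cast<double>(frames.size());
    }
    return avg;
}
```

Because random noise averages out while static shading effects do not, the averaged references isolate exactly the systematic part that the correction is meant to remove.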
You need a clean, white target for the light reference; any artefacts (such as dirt or even the texture of paper) might remain visible after the calibration! The target must cover the whole surface you want to calibrate. A calibrated target is ideal but not strictly necessary.
The dark-field measurement is an optional step, needed only to achieve full flat-field correction. If you require just a shading correction, you can skip this step. The measurement is only necessary once for a specific camera.
If a non-ideal target is used (e.g. paper), structures or dirt on the target may become visible in the corrected image. In this case the Baumer GAPI box and median filters can help reduce the unwanted artefacts. The filters should only be used as a last resort, as they negatively affect the shading correction.
An example of how to use the filters:

    // Query the current box-filter radius and its valid range for the light reference
    bo_uint r    = m_pShading->GetFilter(BGAPI2::Ext::Sc::Shading::BoxFilter, true);
    bo_uint rMin = m_pShading->GetFilterMin(BGAPI2::Ext::Sc::Shading::BoxFilter, true);
    bo_uint rMax = m_pShading->GetFilterMax(BGAPI2::Ext::Sc::Shading::BoxFilter, true);

    // Set median-filter radius for light reference
    m_pShading->SetFilter(BGAPI2::Ext::Sc::Shading::MedianFilter, true, 1);

    // Set box-filter radius for light reference
    m_pShading->SetFilter(BGAPI2::Ext::Sc::Shading::BoxFilter, true, 2);
The references taken in the measurement step can now be used to correct the images taken by your Vision System. Usually you want to include this correction in your application; the example will show you what to do.
The example can perform two different corrections: if you provide both a dark-field reference and a light reference, a flat-field correction is calculated. If you only provide the light reference, a shading correction is calculated instead.
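The shading-only branch differs from the full flat-field correction in that no dark frame is subtracted; the gain map is built from the light reference alone. The sketch below illustrates this variant; the function name shadingCorrect is hypothetical and not part of the Baumer GAPI API, which applies the correction internally once the references are set:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative shading correction (light reference only, no dark frame):
// corrected = raw * mean(light) / light.
std::vector<double> shadingCorrect(const std::vector<double>& raw,
                                   const std::vector<double>& light) {
    // Mean of the light reference, used to normalize the per-pixel gain
    double m = 0.0;
    for (std::size_t i = 0; i < light.size(); ++i) {
        m += light[i];
    }
    m /= static_cast<double>(light.size());

    std::vector<double> corrected(raw.size());
    for (std::size_t i = 0; i < raw.size(); ++i) {
        corrected[i] = raw[i] * m / light[i];
    }
    return corrected;
}
```

Without the dark-frame subtraction, offset errors (dark current) remain uncorrected, which is why the dark-field measurement is needed for full flat-field correction.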
Please contact our Technical & Application Support Center with any questions.
Phone: +49 3528 4386 845
E-mail: [email protected]