Resolving power

Greg Blackman finds out that in order to acquire a high-quality image from a machine vision camera, a lot depends on the quality of the optics

Machine vision excels at measuring dimensions that are indiscernible to the human eye; measuring valves for a car engine to manufacturing tolerances is a classic example. While camera sensor resolutions have been steadily increasing over the years, it is not simply the number of Megapixels the camera delivers that determines the quality of the image. Lighting also plays a big role, as does the lens, and it is largely the quality of the lens that determines whether you get the most out of the camera's specifications.

‘The lens has a big impact on what resolution the system will be able to image and how small a defect it will be able to pick up,’ states Nicholas James, product line manager at optics specialist Edmund Optics.

As camera sensor technology has advanced, pixels have become smaller. Five or so years ago, 1.3 Megapixel sensors were considered high resolution and in a 1/2-inch format the pixel sizes were relatively large. Cameras are now available at 5 and 10 Megapixels in 1/3-inch formats, so the pixel sizes have become minute (a 10 Megapixel, 1/2-inch CMOS sensor can have pixels as small as 2µm). Mark Williamson, director of corporate market development at vision distributor Stemmer Imaging, explains that one of the challenges in machine vision is to differentiate which lenses are suitable for which sensors. ‘It’s not only the circle size of the lens, but how good the lens is at resolving the image onto small pixels,’ he says. ‘A 5 Megapixel camera might cost £250, say, but the pixels might be so small that optically it’s very difficult to achieve the maximum resolution of the sensor unless you pay for an expensive lens, which can be around £800.’

The resolution of a lens is characterised by its modulation transfer function (MTF), which describes how much contrast the lens preserves at a given spatial frequency, expressed in line pairs per millimetre (a line pair being one black and one white line). As alternating black and white lines become closer together, the transition from fully black to fully white eventually blurs, so that the lens can only reach a level of grey before having to switch to the next transition. Any detail in the image below this limit will be lost. Hence, a low-resolution lens will still capture everything in the field of view, but with less detail, because the optics cannot translate the change from black to white quickly enough.
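The scale of the problem can be sketched numerically (an illustrative calculation, not from the article): a sensor cannot resolve finer than its Nyquist limit of two pixels per line pair, so the lens must deliver at least that spatial frequency at usable contrast.

```python
# Sketch: the sensor's Nyquist limit sets the minimum lens resolution needed.
# One line pair needs at least two pixels, so the limiting spatial frequency
# in line pairs per millimetre is 1000 / (2 * pixel_pitch_um).

def nyquist_lp_per_mm(pixel_pitch_um: float) -> float:
    """Sensor Nyquist frequency in line pairs per millimetre."""
    return 1000.0 / (2.0 * pixel_pitch_um)

# The 2 micron pixels mentioned in the article demand a far sharper lens
# than the larger pixels of an older 1.3 Megapixel, 1/2-inch sensor.
print(nyquist_lp_per_mm(5.0))  # 100.0 lp/mm
print(nyquist_lp_per_mm(2.0))  # 250.0 lp/mm
```

This is why a cheap 5 Megapixel camera with tiny pixels can demand a lens costing several times the price of the camera: the required lp/mm figure climbs as the pixel pitch shrinks.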

‘With cameras being released with smaller pixels, lenses have to have higher MTF to maximise the sensor,’ Williamson notes. ‘Cheaper CCD cameras will typically have smaller pixel sizes to shrink the sensor. However, if you want to get the same image quality at the maximum resolution, you’ll need a more expensive lens.’

Optical distortions

Any optical system will have a certain degree of distortion, much of which can be designed out; high-quality machine vision lenses correct for many distortions. One of the more common types is barrel distortion, in which magnification decreases with distance from the optical axis, creating a fisheye-type effect. This is more prominent, according to Raf Slotwinski, business development manager at Alrad Instruments, in wide-angle lenses with short focal lengths, although he adds that low-distortion lenses are available at a price. Alrad, based in the UK, represents a number of machine vision lens manufacturers.

Vignetting is another, whereby less light reaches the sensor at the edges of the image. It can be digitally corrected by flat-field correction, but this reduces the dynamic range of the system. Vignetting can result from imaging at the edge of the lens's field of view, or from the mechanics of the lens creating a shading effect.
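The flat-field correction mentioned here can be sketched as follows (a minimal toy illustration, not a description of any particular product): a reference image of a uniformly lit white target records the vignetting pattern, and each raw frame is rescaled by it.

```python
# Sketch of flat-field (vignetting) correction: divide each raw frame by a
# normalised reference image of a uniformly lit white target. Pixel values
# near the darkened edges are scaled up, which is also why the usable
# dynamic range of the system is reduced.

def flat_field_correct(raw, flat):
    """Correct vignetting given a flat-field reference (lists of floats)."""
    mean_flat = sum(flat) / len(flat)
    return [r * mean_flat / f for r, f in zip(raw, flat)]

# Toy 1-D example: the edges of the flat field receive half the light, so a
# uniformly bright scene is captured with darkened edges.
flat = [0.5, 1.0, 1.0, 0.5]
raw = [50.0, 100.0, 100.0, 50.0]
print(flat_field_correct(raw, flat))  # [75.0, 75.0, 75.0, 75.0]
```

After correction the scene reads as uniform, but the edge pixels have been amplified, gain and noise included, which is the dynamic-range cost the article refers to.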

A third distortion is sensor shading. Image sensors have micro-lenses covering each pixel to funnel all the light into the sensitive part of the pixel. Moving out towards the edges of the sensor, the light entering the micro-lenses will be at different angles and therefore will have different focal points, which effectively equates to less light reaching the pixels.

Higher quality lenses will generally have more lens elements, with some exceeding 10 optical elements, to correct for various distortions. ‘Minimising distortion is important for machine vision when making precise measurements of an object,’ notes Williamson.

Imaging at different wavelengths can also create artefacts in the image that need to be ironed out by the optics. ‘As the wavelength alters, the optics diffract the light differently,’ states Williamson. This can potentially be a problem for colour imaging, as the red passing through the optic will diffract differently to the blue, resulting in colour banding at the corners of the image. This is most obvious with a three-chip camera, because there will be a prism diffracting the light as well. Three-chip cameras should use a 3CCD lens to avoid this, says Williamson, which has a specific optical setup to achieve the same level of diffraction throughout the lens.

Infrared light will again come to focus at a different point, and switching between infrared and visible imaging requires an infrared-corrected lens, although Williamson notes there is generally not a big demand for this in machine vision. A standard lens will still operate in the infrared, but with a slightly different focal point compared to working in the visible.

Working in the shortwave infrared (SWIR) band, the lensing becomes significantly more difficult due to the lack of glasses available, according to James of Edmund Optics. ‘Glasses for the visible are common, which makes designing and optimising the lens easier. It can be significantly more complicated in the SWIR,’ he says. SWIR imaging has the advantage of identifying defects that visible systems wouldn’t be able to pick out, such as bruising on fruit. In addition, certain materials, such as some plastics, which are opaque in the visible, appear transparent under SWIR lighting, meaning contents of plastic bottles can be inspected, for example.

‘There are fewer glasses available with strong indices in the SWIR to balance the lens design,’ James continues. ‘The light passing through an optic will be diffracted differently depending on the wavelength. To try and balance that with the glasses available is a difficult proposition, which ends up requiring more elements and more exotic glasses for SWIR wavelengths.’

Telecentric lenses

The lens most closely associated with machine vision is the telecentric lens, or the so-called ‘measurement lens’. Standard lenses will image from one point, so that at the edges of the field of view the camera will be looking from a slight angle. With telecentric lenses, all the light entering the lens is parallel, which means the lens delivers the same view irrespective of the distance – the view of the object is from a single point flat on. This is important when measuring an object.

‘Telecentric lenses are very useful when measuring objects with different heights or making measurements of holes,’ explains Williamson. ‘However, the lenses can be expensive as they need a lot of glass.’ To receive all the light in parallel, the front element of a telecentric lens has to be at least as wide as the field of view.

With a standard lens, closer objects appear larger than those further away. A telecentric lens flattens the perspective, so that an object's apparent size does not depend on its distance from the lens. Pins protruding from a circuit board, for instance, will all be viewed head-on with no variation in size, which is important when making accurate measurements. In addition, because the light entering the lens is parallel, telecentric designs eliminate perspective distortion and increase the depth of focus, according to Slotwinski at Alrad.
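The size error a standard (entocentric) lens introduces can be estimated with a simple pinhole model (an illustrative sketch assuming thin-lens geometry, not a lens design calculation): apparent size scales inversely with distance, whereas a telecentric lens holds magnification constant.

```python
# Sketch: in a pinhole/entocentric model, the image of an object of size s
# at distance d from the lens measures roughly s * f / d (f = focal length),
# so closer objects look bigger. A telecentric lens instead applies a fixed
# magnification m, independent of working distance.

def entocentric_image_size_mm(object_mm, distance_mm, focal_mm):
    return object_mm * focal_mm / distance_mm

def telecentric_image_size_mm(object_mm, magnification):
    return object_mm * magnification

# A 10 mm feature viewed at 95 mm versus 105 mm with a 25 mm lens
# (hypothetical numbers for illustration):
near = entocentric_image_size_mm(10.0, 95.0, 25.0)
far = entocentric_image_size_mm(10.0, 105.0, 25.0)
print(round(100 * (near - far) / far, 1))  # about a 10.5 per cent size error
```

A 10 mm height difference across a part thus produces a measurable size error with a standard lens, while the telecentric model returns the same value at any distance within its telecentric range.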

Schott, through its subsidiary Moritex, provides telecentric lenses incorporating coaxial lighting. ‘In high-end machine vision applications, customers want to use high-power lighting to maximise the speed of inspection and the rate of throughput,’ explains Hiroaki Tomono, responsible for Schott’s machine vision business in Europe. Tomono adds that if the lens system is not designed to be used with high-power illumination, then potentially internal reflections within the optics can reduce the image quality.

Moritex began by designing and developing illumination systems for machine vision and other markets. It subsequently built on this knowledge by combining imaging optics with its illumination systems.

‘Designing the optics in combination with the illumination is a big advantage for developing high-performance machine vision lenses,’ states Tomono. Schott-Moritex has an advanced lens simulation system for designing optics, which incorporates both illumination and optical design data.

The optical and illumination axes are the same with the telecentric lenses. The light travels through the optics and reflects from the surface under inspection back through the lens to the sensor, all on the same optical axis. In this way, according to Tomono, an image can be captured without reflections from a surface blinding the camera. In addition, distortions within a telecentric lens are low because the optical beam from the lens is always parallel. Schott’s Advanced Optical Glass technology supports the development of Moritex’s optics to maximise the image quality.

Designing for the application

Megapixel lenses might incorporate aspheres to make it easier to compensate for some of the optical aberrations in the system. ‘An asphere will usually allow the number of elements in a system to be reduced, thereby making the lens smaller, or allow higher resolution with the same number of elements,’ explains James of Edmund Optics. ‘However, aspheric lenses are usually more expensive than standard lenses by five or 10 times the price.’

Multi-conjugate designs are also available for high resolutions within a given focal length. A traditional fixed-focal length lens will work from 100mm to infinity, for example. A multi-conjugate version of that design would work from 100-400mm, for instance, another from 400-1,000mm, and a third from 1,000mm to infinity. ‘If you’re trying to design a lens that works from very close to very far away, you have to balance the resolution so that it works over that entire focal range, which means the resolution is not as high as it could be in any particular region,’ explains James. ‘If you have a lens that will operate just from 100-400mm, it will have a much higher resolution peak in that region, although the lens won’t resolve objects well outside of that working distance range.’

As in most cases, the application will dictate the optical specifications. ‘People often don’t value the cost of optics,’ comments Williamson at Stemmer Imaging. ‘They pay for a camera and think any lens will do. Lenses can actually solve a problem; if you get the right lens it can make your life so much easier.’

Williamson adds that using filters correctly can also make a big difference. ‘If you don’t need colour, illuminate with a single wavelength and filter out all other extraneous light,’ he says. This removes the effect of different wavelengths having different focal lengths. ‘Illuminating with one wavelength and filtering out all the other light will result in a far sharper image.’


Laser triangulation

Laser triangulation is a common technique in machine vision, used to acquire a 3D height profile of a part under inspection. The typical setup involves a camera imaging the distortions in a projected laser line as the part moves across the line’s path.
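The geometry behind this can be sketched as follows (a simplified model with an assumed viewing angle, not any vendor's calibration method): a surface step shifts the laser line sideways in the camera's view, and the height is recovered from that shift.

```python
import math

# Sketch of laser triangulation geometry: the laser projects a line straight
# down and the camera views it at angle theta from the vertical. A surface
# step of height h shifts the line laterally in the camera's view by
# d = h * tan(theta), so the height is recovered as h = d / tan(theta).

def height_from_shift_mm(shift_mm: float, camera_angle_deg: float) -> float:
    """Surface height from the observed line displacement (simplified model)."""
    return shift_mm / math.tan(math.radians(camera_angle_deg))

# A 1 mm lateral shift seen at a 45 degree viewing angle corresponds to
# roughly a 1 mm height step:
print(height_from_shift_mm(1.0, 45.0))
```

Repeating this for every pixel along the line, frame by frame as the part moves past, builds up the 3D height profile.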

Various optical systems are available to shape the laser light for 3D triangulation, creating light patterns, like lines, crosshairs, or grids of points. ‘The optics are typically refractive or a combination of refractive and diffractive elements, depending on the type of pattern,’ says Wallace Latimer, product manager at Coherent, which produces optics for this area. ‘You can create high intensity illumination fields that are highly structured for vision applications.’

The optical systems are made up of a prism and an aspheric cylinder lens, which distribute the light in a controlled and uniform fashion, important for machine vision. The optic converts the laser’s Gaussian beam into a top-hat profile, which has fairly steep edges with most of the power contained within the line. ‘Imaging systems will incorporate any illumination falloff or intensity variation into the images,’ explains Latimer. ‘This results in potentially false edges, false measurements, low resolution, or a dynamic range that’s outside the limits of the camera. A stable illumination profile is therefore important to make accurate measurements.’
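The difference between the two profiles can be illustrated numerically (a toy sketch with arbitrary units, not Coherent's design data): across the usable width of the line, a Gaussian falls off substantially while an idealised top-hat stays at full intensity.

```python
import math

# Toy comparison of beam profiles across a line (illustrative only).
# A Gaussian falls to roughly 61% of peak at +/- 1 sigma, while an idealised
# top-hat holds full intensity across the line and drops sharply at its edges.

def gaussian(x: float, sigma: float = 1.0) -> float:
    return math.exp(-x * x / (2.0 * sigma * sigma))

def top_hat(x: float, half_width: float = 1.0) -> float:
    return 1.0 if abs(x) <= half_width else 0.0

# Relative intensity at the edge of a +/- 1 unit inspection window:
print(round(gaussian(1.0), 3))  # 0.607 of peak for the Gaussian
print(top_hat(1.0))             # 1.0 for the top-hat
```

An imaging system working from the Gaussian line would see nearly 40 per cent less signal at the edges of the window, exactly the kind of illumination falloff Latimer warns gets folded into the measurement.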

Commercial semiconductor laser diodes show wide variation in their beam profile and divergence, says Latimer. In machine vision, the light therefore has to be tightly controlled to achieve accurate measurements.

The optics for these laser diodes may be a relatively niche area for Coherent, but Latimer states there is a lot of value in them, in the higher accuracy and repeatability they bring to machine vision systems.