The changing face of microscopy


Light microscopy is an old tool, but now a range of new technologies and innovative applications have pushed the boundaries of what can be done with light microscopy. Stephen Mounsey looks for these developments in some unexpected places

When we think of light microscopy, we think of slides of tissue samples placed carefully under an instrument, while a diligent researcher in a white coat peers down the eyepiece and makes notes. We usually think of it as a technique for the life sciences. However, these perceptions are becoming ever more distant from the exciting reality of light microscopy today. First, virtual microscopy allows the researcher to be on a different continent from the microscope, and at a different point in time. Second, advances in adaptive optics now allow microscopes to overcome the limits of aberrations, meaning that clear, crisp images can be formed through even the murkiest of media. Third, light microscopy has graduated to become an important measurement technology, able to image in 3D and even look around corners.

Just around the corner

Confocal microscopy is one of the more advanced light microscopy techniques available, and is routinely used in laboratories to view and illuminate a certain focal plane without illuminating the material above and below it. This is useful in in-vivo biological imaging applications, as it can enable users to image through several millimetres of tissue. However, confocal microscopy has also proven its worth beyond the life sciences, within materials science and engineering in particular.

Professor Richard Leach, principal research scientist at the UK’s National Physical Laboratory, is leading a team working to standardise some of those processes, particularly those pertaining to examining material surfaces. ‘At the moment, most people are still in the realms of measuring surface roughness by dragging a stylus across the sample surface, and reporting a Ra value or something like that,’ says Leach, noting that Ra is the standard arithmetic-mean measure of surface roughness. ‘That’s how most industries still measure surfaces, but the trend today is to go towards a more functionally significant understanding of the surface. If you, for example, want to affect the fluid flow across the surface, or the tribology, or the way that light interacts with the surface, then you might want to impart some sort of three-dimensional structure onto the surface, as opposed to just trying to make the surface as flat as possible. The 3D patterning, or the 3D structuring – even in a random sense – is therefore becoming hugely important. More and more industries are starting to realise that you can seriously affect the function of a device or component by changing its surface.’ Currently, surface profiling is done with a stylus, says Leach, but this may take many hours and the contact of the stylus may damage the surface. ‘Where people were using stylus instruments before, they’re starting to use optical systems, and confocal is one of the systems they’re looking at.’
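For context, the Ra value Leach mentions is simply the arithmetic mean of the absolute deviations of a measured profile from its mean line. A minimal sketch, using made-up profile heights:

```python
# Ra: the arithmetic mean of absolute deviations of a measured height
# profile from its mean line. The profile values here are hypothetical.
def roughness_ra(heights):
    mean = sum(heights) / len(heights)
    return sum(abs(h - mean) for h in heights) / len(heights)

profile = [0.2, -0.1, 0.4, -0.3, 0.1, -0.2, 0.3, -0.4]  # micrometres
print(round(roughness_ra(profile), 3))  # → 0.25
```

A stylus instrument reports exactly this kind of single-number summary, which is why it discards the three-dimensional structural information Leach argues industry now needs.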

Confocal systems provide a way of profiling the surface, alongside other optical techniques such as white light interferometers (also known as coherent scanning interferometers), focus variation systems, point autofocus, or other systems. ‘All of these optical systems use the principles of light microscopy,’ explains Leach. ‘They use some kind of objective lens, and gather scattered light. Because you’re bouncing light off the surface, there are a lot of limitations. The way the light reflects limits the slope angles you can detect, and the ability to detect sharp edges is very limited. We think of a stylus instrument as a reference instrument in the sense that it’s like rolling a ball across the surface; as long as you know the size of the ball – of the stylus tip – you can understand what your stylus is doing. This is much more difficult with an optical instrument, as you have to solve Maxwell’s equations, which is more difficult than working out what it’s like to roll a ball across a surface,’ he says, noting that this explains why optical systems often give different results to those obtained by stylus measurement. This is why the NPL is working towards these calibration standards and good practice guides, which will be released in the near future. The group is also in the process of developing 3D calibration objects that can be used to check the angular and length responses of the instruments. The group is using an Olympus Lext OLS4000 confocal microscope as a typical example of the instrument class.

Professor Richard Leach of the UK’s National Physical Laboratory uses an Olympus Lext OLS4000 confocal microscope in his project to create standards for characterising surface roughness.

Despite the progress with calibration, the problem remains a largely mathematical one: ‘Even though you can calibrate the x, y, and z axes for one of these instruments, that doesn’t really mean you can measure the parameters of a complicated surface with any degree of confidence,’ explains Leach. ‘We need to measure the transfer function of the instrument, which is very difficult and which essentially means solving an inverse problem. We have to go backwards from what the microscope measures in order to find out what the surface should look like, removing the optical artefacts on the way. Nobody has ever been able to correct that, but we think we can do it.’
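In the linear case, the inverse problem Leach describes can be pictured as the instrument’s transfer function multiplying the spatial-frequency spectrum of the surface, so ‘going backwards’ amounts to a deconvolution. A toy illustration with a hypothetical Gaussian transfer function and Wiener-style regularisation – a stand-in for the idea, not NPL’s actual method:

```python
import numpy as np

# Hypothetical 1D surface and a Gaussian low-pass "instrument transfer
# function" -- a toy stand-in for a real optical instrument's response.
n = 256
x = np.arange(n)
surface = np.sin(2 * np.pi * x / 32) + 0.3 * np.sin(2 * np.pi * x / 8)

freq = np.fft.fftfreq(n)
H = np.exp(-(freq / 0.1) ** 2)          # transfer function (attenuates fine detail)
measured = np.fft.ifft(np.fft.fft(surface) * H).real

# Wiener-style inverse: divide by H, regularised so that frequencies the
# instrument barely transmits do not blow up the estimate.
eps = 1e-3
estimate = np.fft.ifft(
    np.fft.fft(measured) * np.conj(H) / (np.abs(H) ** 2 + eps)
).real

print(np.max(np.abs(estimate - surface)))  # small residual error
```

The hard part in practice is that H must first be measured for the real instrument, and – as Leach notes for the non-linear case – the true relationship between surface and image is not a simple product at all.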

If the team is indeed able to iron out the kinks in this process, the implications could be far-reaching. So far, says Leach, the mathematical and computational problems have almost been solved for the linear case. Non-linear behaviour is more challenging, but also more promising: ‘If we can solve the non-linear problem, which means looking at the scattered light from the surface, then we can start measuring things that other people believe can’t be measured,’ says Leach, giving the examples of vertical side walls and undercut structures. ‘It’s like digging a hole in the floor and seeing a cave open up beneath you. As long as some of the scattered light goes up the aperture, then you can do it.’ Applications would exist in the microfluidics industry, says Leach, as this sort of technique would allow the side walls of a channel to be profiled.

Adaptive optics

Elsewhere, comparable optical ingenuity has been applied to the problem of correcting the aberrations encountered when imaging through a variable medium, such as the air or a liquid. Adaptive optics has been used in astronomy for some time, as it provides a way of compensating for the disturbances in the atmosphere. Michael Feinberg, director of product marketing at Boston Micromachines, explains how the company has applied its expertise in adaptive optics to microscopy: ‘We create MEMS deformable mirrors used for aberration correction. The devices are used to correct for anything in the optical path that is causing the image to become unclear due to misalignment or temperature differences.’

In the adaptive optical system, explains Feinberg, the deformable mirror acts as a wavefront corrector, and a wavefront detector or sensor must also be used: ‘It’s like a camera with modifications, able to detect what type of aberrations are happening between when the light leaves the [imaging device] and when it comes back. The light comes in, the wavefront sensor detects the aberration, and then using a control algorithm, it then commands the mirror to make an adjustment.

‘Using a feedback loop, the system comes to a point where the aberrations are mostly removed, and a more detailed image can be obtained.’
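The feedback loop Feinberg describes can be caricatured in a few lines: the wavefront sensor reads the residual aberration, and an integrator controller nudges the mirror command until that residual dies away. All numbers below are hypothetical, and a real system would work with Zernike modes or per-actuator commands rather than three abstract coefficients:

```python
# Toy closed-loop adaptive-optics sketch. The "sensor" reads the residual
# wavefront error (aberration minus current mirror shape), and an
# integrator control law drives the mirror command toward cancelling it.
aberration = [0.8, -0.5, 0.3]   # incoming wavefront error (arbitrary units)
mirror = [0.0, 0.0, 0.0]        # current deformable-mirror correction
gain = 0.5                      # integrator gain of the control loop

for step in range(20):
    residual = [a - m for a, m in zip(aberration, mirror)]   # sensor reading
    mirror = [m + gain * r for m, r in zip(mirror, residual)]  # command update

converged = all(abs(a - m) < 1e-3 for a, m in zip(aberration, mirror))
print(converged)  # → True
```

With a gain below one, each pass removes a fixed fraction of the remaining error, which is why the loop ‘comes to a point where the aberrations are mostly removed’ rather than correcting them in a single step.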

One of the main applications for the system is in retinal scanners, as the adaptive optical component allows a crisp, microscopic image to be formed despite the aberrations of the fluid in the eye. ‘The application is to look through a live eye at the retina at the back – at the photoreceptors – to see if there’s damage there and to hopefully detect any diseases of that kind,’ he says, adding that age-related macular degeneration (AMD) is one such disease, alongside diabetic retinopathy (blindness caused by diabetes). Devices based on the mirror are at the pre-clinical trial stage in various research labs, including UC Berkeley, the University of Rochester, and Indiana University. Before the introduction of this adaptive optical approach, Feinberg says, there was no comparable solution: ‘Researchers couldn’t get clear enough images to see what they really wanted to see. They were looking for something that could enable that type of imaging, and luckily we could provide the enabling component to achieve that.’

Virtual microscopy

Hamamatsu has introduced a microscopy system, branded the NanoZoomer, able to digitise whole microscopy slides at very high resolution. Paul Cormack, sales engineer for the device at Hamamatsu, explains the utility of his company’s system: ‘In the past, people have taken a snapshot of areas of interest on a pathology slide, and emailed that to someone; the image is limited to a resolution of so many pixels by so many pixels. In the case of the NanoZoomer device, we’re actually digitising the whole slide at high resolution, so that the viewer can zoom in to areas and structures of interest on the slide, as you would do with a microscope.’ The system offers the equivalent resolution of a 40x plan apochromatic objective lens – a research grade microscope objective.

‘There are a number of different application areas for the technology,’ explains Cormack. ‘It creates what we call a digital slide, or a virtual slide, meaning that the user sees the same image on the screen as they would get if they projected the microscope image onto the screen, but it’s an electronic file.’ Cormack notes that digital slides can be used in teaching: ‘You can create multiple identical copies of one slide, and distribute these over a network to allow anytime, anywhere learning. The slides are of sufficient quality for making a diagnosis, and can easily be used in multidisciplinary team meetings.’ Teaching aside, the applications are mainly in the field of histopathology – identifying diseases by studying cell samples. ‘In the standard system we use halogen type lighting, but it is feasible to use other types of light source as well,’ he says, noting that fluorescence imaging is a possibility, and with it a range of applications in materials science.

Retinal images such as this are crucial for identifying diseases such as macular degeneration and diabetic retinopathy. Principles of light microscopy and adaptive optics allowed for improvements in the resolution of such scans.

The system relies on the use of a line scan camera coupled with the time-delay and integration (TDI) technology perfected by Hamamatsu. TDI is a method of reading high resolution data from a CCD sensor at high speed by synchronising the scanning speed of the system with the rate at which charge is read out of the CCD. ‘It’s really more akin to a machine vision application, with a continuously moving robotic stage and the TDI line-scanner digitising the image as it’s moving. One of the advantages of TDI is that you can go very quickly, while still getting a very high signal-to-noise ratio. It also works in low light applications, such as fluorescence imaging, and we use the same sensor for both bright field and fluorescent images.’ Cormack explains that the TDI approach to line-scanning is capable of even greater speeds than are currently offered by the NanoZoomer: ‘Currently, the bottleneck for us is actually computer technology; standard PCs just can’t handle the volume of data generated in a period of time, so we’re a bit constrained by PC technology at the moment, but as that develops our systems will get faster.’
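The synchronisation at the heart of TDI can be sketched very simply: each sample line’s charge packet is shifted down the sensor in step with the stage motion, so the same line is exposed once per TDI stage and the signal is multiplied by the stage count, while read noise is added only once at readout. A toy model, not Hamamatsu’s implementation:

```python
# Toy TDI model: charge packets ride down the sensor in sync with the
# moving sample, so each output line accumulates n_stages exposures of
# the same sample line. Scene brightness values are hypothetical.
def tdi_scan(scene, n_stages):
    output = []
    packets = []                        # charge packets, newest first
    for t in range(len(scene) + n_stages - 1):
        packets.insert(0, 0)            # a new packet enters with the motion
        # packet i tracks sample line t - i; expose it again if that
        # line is still within the scene
        packets = [c + (scene[t - i] if 0 <= t - i < len(scene) else 0)
                   for i, c in enumerate(packets)]
        if len(packets) == n_stages:    # packet has crossed every stage
            output.append(packets.pop())
    return output

print(tdi_scan([1, 2, 3], 2))  # → [2, 4, 6]: each line summed over 2 stages
```

If the shift rate and the stage speed fall out of step, a packet starts integrating neighbouring lines instead of its own – which is exactly the blurring Cormack warns about below.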

Although offering good scanning speed, TDI is a difficult process to implement, as Cormack says: ‘One of the issues with TDI technology is that you have to sync it very precisely with sample movement, otherwise you get blurring of the image,’ he says. Recognising this difficulty, Hamamatsu has begun offering products to allow TDI implementation in OEM applications: ‘We’ve created what we call a TDI engine, which is a robotic stage with light sources that can be specified by the customers, and a TDI sensor. It’s literally the skeleton of the NanoZoomer, but with options to change the light sources or to use, for example, multi-well plates,’ he says.

The techniques of virtual microscopy have the potential to revolutionise the way histopathology is taught and undertaken. Confocal microscopy for surface profiling is also a non-typical application of the principles of light microscopy, as is retinal imaging. It seems that only time will tell what the next step for light microscopy will be.