Lighting the way for astronomy


Sydney Institute for Astronomy researchers have developed a new type of wavefront sensor that leverages a ‘photonic lantern’ and deep learning. (Norris et al.)

Researchers have developed a wavefront sensor using a ‘photonic lantern’ that could optimise humanity’s viewing of the cosmos

Adaptive optics have become critical to advancing modern astronomy thanks to their ability to counteract the rapid blurring caused by Earth’s turbulent atmosphere. They are key to capturing starlight clearly on telescope sensors in, for example, the imaging of exoplanets, newly forming planetary systems, dying stars and active galactic nuclei.

These optics also offer advantages in fields where any type of distorted media hinders the detection and/or manipulation of the desired optical signal, such as free-space optical communications, remote sensing, in-vivo imaging and the manipulation of living cells. 

For astronomy, adaptive optics consist of a deformable mirror situated at the telescope pupil plane, which rapidly applies corrections to incident wavefronts, cancelling out the effect of atmospheric turbulence. Modern deformable mirrors consist of thousands of electrically driven actuators, each applying a small deformation to the mirror surface on timescales of milliseconds. The performance of such systems therefore depends largely on how accurately the current state of the wavefront is known, a task usually accomplished by a wavefront sensor (in conjunction with various reconstruction algorithms).
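To make the control idea concrete, here is a minimal sketch of a closed-loop integrator in Python (a toy model, not the control code of any real adaptive optics system; the actuator count, loop gain and turbulence model are all illustrative placeholders). At each millisecond-scale step, the sensor measures the residual wavefront and the deformable-mirror command is nudged towards cancelling it.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actuators = 32 * 32     # illustrative actuator grid, flattened to a vector
gain = 0.5                # integrator loop gain (illustrative)
n_steps = 500             # millisecond-scale loop iterations

atmosphere = rng.normal(size=n_actuators)   # stand-in for the turbulent phase
dm_command = np.zeros(n_actuators)          # current deformable-mirror shape

for step in range(n_steps):
    # The turbulence evolves slowly between frames (a simple AR(1) stand-in).
    atmosphere = 0.99 * atmosphere + 0.05 * rng.normal(size=n_actuators)

    # The wavefront sensor measures the residual: turbulence minus the current correction.
    residual = atmosphere - dm_command

    # Integrator control law: nudge the mirror towards cancelling the residual.
    dm_command += gain * residual

print('uncorrected RMS:', np.std(atmosphere))
print('corrected RMS:  ', np.std(atmosphere - dm_command))
```

After a few hundred iterations the corrected residual settles well below the uncorrected turbulence, which is essentially what a real system does thousands of times per second, provided the sensor measurement is accurate.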

However, according to researchers from the Sydney Institute for Astronomy in Australia, current adaptive optics systems are limited by their wavefront sensors. This is because the sensors must sit in an optical plane that is not common to the science image, and because they are also insensitive to certain wavefront-error modes.

The researchers have therefore developed a wavefront sensor leveraging a ‘photonic lantern’ and deep learning that can be placed at the same focal plane as the science image. 

Persisting problems

Adaptive optics systems have conventionally used a wavefront sensor positioned in a separate pupil plane rather than at the image plane. This is because, while the goal of these systems is to produce the optimal image in the instrument’s focal plane, the current state of the wavefront cannot easily be determined from the focal-plane image alone: the measured image (usually obtained by a CCD or CMOS sensor) contains information only on the intensity of the beam, not the phase information crucial to measuring the incident wavefront. However, according to the researchers, systems that rely solely on pupil-plane wavefront sensors have some significant disadvantages.

‘Firstly, they are subject to non-common path aberrations – differences between the wavefront seen by the wavefront sensor and that used to make the image – due to the non-common optical components traversed by the wavefront-sensing and science beams,’ explained Barnaby Norris, lead author on the Nature Communications paper describing the developed wavefront sensor. ‘Since these aberrations are not seen by the wavefront sensor, they are not corrected, and this is currently the main limiting factor in the performance of high-contrast, extreme-adaptive optics systems in astronomy.

‘Another major disadvantage is that there exist some highly detrimental aberrations to which pupil-plane wavefront sensors are insensitive,’ Norris added, ‘specifically the so-called “low wind” and “island” effects. These arise due to phase discontinuities across the secondary-mirror support structure in the telescope pupil, exacerbated by thermal effects that these structures create when the wind is low,’ he said. ‘Since they take the form of sudden steps in phase across regions obscured by the mirror support structures in the pupil plane, they are virtually invisible to a pupil-plane wavefront sensor. However, they have an extremely strong effect in the image plane, and are also a limiting factor in the performance of adaptive optics systems.’

For these reasons, a focal-plane wavefront sensor (FP-WFS) has long been desired. Placing the wavefront sensor at the focal plane, rather than at a non-common pupil plane, eliminates non-common path error and enables sensitivity to wavefront errors not visible in the pupil plane.

However, as previously noted, the image at the focal plane does not contain sufficient information for wavefront reconstruction, since it contains only intensity information and lacks the phase component, leading to degeneracies.
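This degeneracy is easy to see numerically. The toy model below (Python, monochromatic, with an idealised circular pupil; it is not the authors’ simulation code) shows that an even aberration such as defocus and the same aberration with its sign flipped produce identical focal-plane intensity patterns, so an intensity-only focal-plane image cannot tell them apart.

```python
import numpy as np

n = 256
x = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(x, x)
r2 = xx**2 + yy**2

pupil = (r2 <= 1.0).astype(float)          # circular telescope pupil
defocus = 2.0 * np.pi * (2.0 * r2 - 1.0)   # defocus: an even (symmetric) phase mode

def psf(phase):
    """Focal-plane intensity of the pupil field, via a zero-padded Fourier transform."""
    field = pupil * np.exp(1j * phase)
    focal = np.fft.fftshift(np.fft.fft2(field, s=(4 * n, 4 * n)))
    return np.abs(focal) ** 2

# The detector records only intensity, so +defocus and -defocus look the same.
print(np.allclose(psf(defocus), psf(-defocus)))   # True: a sign degeneracy
```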

Previous FP-WFS designs have relied on introducing further perturbations to the wavefront to break these degeneracies, or on linear approximations (making them unsuited to large phase errors) or slow, non-real-time methods. They are also poorly suited to injecting the image into single-mode fibres, which is extremely important for major science goals such as the spectrographic characterisation of exoplanet atmospheres.

Overcoming limitations

The scientists have therefore developed a type of FP-WFS that directly measures the phase, as well as the intensity, of the image, without any linear approximations or active modulation. 

Their FP-WFS uses a monolithic photonic mode converter known as a ‘photonic lantern’ to determine the complex amplitude of the telescope point-spread function (the two-dimensional distribution of light in the telescope focal plane). This is done via the conversion of multimode light into a set of single-mode outputs (see figure 1). The desired wavefront information can then be determined by simply measuring the intensity of each of the single-mode outputs, which are also well suited to injection, via single-mode fibres, into a high-dispersion, diffraction-limited spectrograph – ideal for exoplanet characterisation.

Figure 1: a) Schematic of a multi-core photonic lantern, showing how the phase and intensity of the input field at the multimode fibre end-face evolve into an array of uncoupled, single-mode cores with different intensities. b) Results of three simulations demonstrating the concept of the photonic lantern wavefront sensor and its ability to measure both amplitude and phase. (Image: Norris et al.)

The photonic lantern wavefront sensor (PL-WFS) addresses the limitations of previous FP-WFS designs by placing the multimode region of a photonic lantern at the focal plane, which deterministically remaps the combination of mode-fields in the multimode region to a set of intensities produced at several single-mode outputs.

Since the modes excited in the multimode region are a function of both the amplitude and the phase of the incident wavefront, the output intensities contain non-degenerate wavefront information and the wavefront can be reconstructed. The researchers achieve this reconstruction using a deep neural network.
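As a rough intuition for why the lantern’s output intensities still encode phase, consider a toy forward model (purely illustrative: the transfer matrix below is a random unitary stand-in, not a measured or modelled lantern). The focal-plane field is described by a handful of modal coefficients, a fixed complex matrix maps those to the single-mode output amplitudes, and only the squared magnitudes of the outputs are detected. Because the remapping happens before the intensity is taken, the relative phases between modes still change the detected counts.

```python
import numpy as np

rng = np.random.default_rng(1)

n_modes = 19      # modes supported by the multimode region (illustrative)
n_outputs = 19    # single-mode outputs (a lossless lantern has one per mode)

# Stand-in for the lantern's complex transfer matrix: a random unitary here;
# a real device's matrix is fixed by its geometry.
q, _ = np.linalg.qr(rng.normal(size=(n_outputs, n_modes))
                    + 1j * rng.normal(size=(n_outputs, n_modes)))
transfer = q

def lantern_outputs(mode_coeffs):
    """Detected intensities at the single-mode outputs for given modal amplitudes."""
    output_amplitudes = transfer @ mode_coeffs
    return np.abs(output_amplitudes) ** 2

# Two input fields with identical modal intensities but different modal phases...
flat = np.ones(n_modes) / np.sqrt(n_modes)
phased = flat * np.exp(1j * np.linspace(0.0, np.pi, n_modes))

# ...give different output intensities, so the phase information is not lost.
print(np.allclose(lantern_outputs(flat), lantern_outputs(phased)))   # False
```

Because the detected intensities are a quadratic, nonlinear function of the incident field, inverting the mapping calls for something more flexible than a simple matrix inversion, which is where the deep neural network comes in.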

‘These deep learning methods have recently exploded in popularity across many fields of science and engineering,’ remarked Norris. ‘In essence, a neural network learns the relationship between the inputs (wavefront phase) and outputs (the intensities of the single-mode core lantern outputs) of the system. Then, given a new, previously unseen set of outputs, it can infer what the input is.’ 

Such methods can perform the required inferences extremely quickly, according to the researchers, with currently available frameworks able to perform highly complex, true nonlinear inferences with sub-millisecond latency. 
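For illustration, a network of this kind can be sketched in a few lines of PyTorch (the layer widths, number of lantern outputs and number of recovered wavefront modes below are placeholders, not the architecture reported in the paper): a small fully connected network maps the measured output intensities to a set of low-order wavefront coefficients.

```python
import torch
from torch import nn

n_outputs = 19   # lantern single-mode output intensities (illustrative)
n_modes = 9      # low-order wavefront (e.g. Zernike) coefficients to recover

# A small fully connected network: intensities in, wavefront coefficients out.
model = nn.Sequential(
    nn.Linear(n_outputs, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, n_modes),
)

loss_fn = nn.MSELoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(intensities, true_coeffs):
    """One supervised update on a batch of known wavefronts and their measured intensities."""
    optimiser.zero_grad()
    predicted = model(intensities)
    loss = loss_fn(predicted, true_coeffs)
    loss.backward()
    optimiser.step()
    return loss.item()

# Random placeholder data standing in for a labelled calibration batch.
batch = torch.randn(64, n_outputs)
labels = torch.randn(64, n_modes)
print(train_step(batch, labels))
```

Training would use many examples of known wavefronts (from simulation or a calibration source) paired with the corresponding lantern intensities; once trained, inference is a single forward pass, consistent with the sub-millisecond latencies mentioned above.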

Putting it to the test

Simulations validated the principle of the PL-WFS, and laboratory demonstrations confirmed its operation. 

For instance, using the PL-WFS the researchers accurately reconstructed, from an intensity-only focal-plane measurement, a distorted input wavefront with more than 180 degrees of phase variation across the field, to a precision of better than half a degree. Under laboratory conditions, this new type of wavefront sensor was shown to successfully measure a range of aberrations typically produced by distorted media. Furthermore (and more importantly, according to the researchers), the novel sensor shows similar or better performance than standard pupil-plane wavefront sensors, while avoiding their drawbacks related to non-common path aberrations.

The researchers are now looking to use the device in a closed-loop configuration, wherein wavefront errors are corrected in real time, and to introduce wavefront errors using a basis more similar to that of a turbulent medium. Following that, the device can be tested in an on-sky deployment at an astronomical telescope. Eventually, the researchers intend the PL-WFS to form a key component in the increasingly complex set of sensors within modern adaptive optics systems, paving the way for advanced imaging and characterisation of exoplanets, their atmospheres and surface composition, and the detection of biological signatures.

Further information
This article summarises some of the information presented in Nature Communications 11: ‘An all-photonic focal-plane wavefront sensor’: Barnaby R. M. Norris, Jin Wei, Christopher H. Betters, Alison Wong & Sergio G. Leon-Saval: www.nature.com/articles/s41467-020-19117-w

--

 

Featured product: Shack-Hartmann wavefront sensors from Imagine Optic

For testing complex optical systems

Based on 25 years’ expertise in wavefront sensing, Imagine Optic have developed the Haso family of Shack-Hartmann wavefront sensors to serve a wide range of optical metrology applications, covering wavelengths from the UV all the way out to the SWIR.

Haso wavefront sensors are robust, high-accuracy, versatile tools offering real-time measurement at up to several kHz, independent measurement of wavefront and intensity, and a wide dynamic range. They are suitable for any beam profile, and are insensitive to vibrations. The sensors can be used to accurately measure beam distortion, curvature and wavefront aberrations introduced by optical components such as lenses. In addition, MTF and PSF can be measured.

Haso wavefront sensors can be used as standalone devices but can also be coupled with an R-Flex collimator and light source to form a turn-key system that can be used to characterise optical components. Haso wavefront sensors are optimised for both production and R&D environments and can be used in optics quality control, in optical system alignment, and in surface measurement. 

Further information: www.imagine-optic.com/products/metrology