Reach for the stars

Nick Morris finds photonics products are helping scientists better understand the universe around us

Although the basic optical theory and design of most telescopes deviate little from those used in the 17th century, the precision to which they can be built is orders of magnitude greater. This is driven primarily by continuous advances in precision optics and associated photonics products from companies such as Optical Surfaces, which makes many components used in astronomical instruments, including ultra-precise mirrors, prisms and aspherical optics. Advances in optical design software from companies like Lambda Research give astronomers the tools with which to model their devices.

Ground-based observation has a major drawback: images are blurred and smeared by turbulence in the atmosphere. If you have ever seen a mirage on the horizon on a hot summer's day you will have seen this distorting effect first hand. The resolution achievable with traditional telescopes is limited by this atmospheric turbulence. Space-based observatories, such as the Hubble Space Telescope, do not suffer this problem, but they are extremely expensive to construct (of the order of $1.5bn), and when things go wrong, repairs can be extremely awkward and, again, very expensive. Day-to-day running, servicing and other expenses relating to the Hubble telescope cost NASA $230-250m a year.

Scientists now have at their disposal a number of solutions to the problems associated with ground-based astronomy. These include adaptive optical systems, visible light interferometry and selective imaging. All these methods rely heavily upon advanced optical systems and precise imaging techniques.

Adaptive optics measures distortions caused by turbulence in the atmosphere and alters the shape of the telescope mirror to compensate for and correct these distortions. One such example is the Laser Guide Star adaptive optics system, developed by a team at UCLA in the United States. This system uses a laser to project a 'virtual star' image into the field of view of the telescope. The telescope 'sees' the image of the star as distorted by atmospheric turbulence. Variations in the image of the virtual star are tracked by the adaptive optics system, which compares the original, undistorted image with that seen by the telescope. The adaptive optics varies the shape of a mirror within the telescope until the initial and resultant image of the virtual star match. At this point the telescope can see clearly through the distorting atmosphere to the stars beyond.
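The correction loop at the heart of such a system can be sketched in a few lines. The sketch below is purely illustrative – the actuator count, loop gain and iteration count are hypothetical, and a real system measures the wavefront with a dedicated sensor rather than being handed the distortion directly:

```python
import numpy as np

def close_ao_loop(distortion, gain=0.5, n_iters=50):
    """Iteratively shape a deformable mirror to cancel a measured wavefront error.

    `distortion` is a (hypothetical) wavefront error across the mirror's
    actuators, in arbitrary phase units. Each pass, the loop measures the
    residual error still visible in the guide-star image and nudges the
    mirror to cancel it - a simple proportional feedback loop.
    """
    mirror = np.zeros_like(distortion)
    for _ in range(n_iters):
        residual = distortion + mirror   # what the wavefront sensor sees
        mirror -= gain * residual        # proportional correction step
    return mirror, distortion + mirror   # final mirror shape, final residual

rng = np.random.default_rng(0)
turbulence = rng.normal(0.0, 1.0, size=16)   # toy 16-actuator wavefront error
shape, residual = close_ao_loop(turbulence)
# The residual shrinks geometrically, by (1 - gain) per iteration, until the
# corrected image of the virtual star matches the undistorted one.
```

The loop gain trades speed of convergence against stability, just as in any feedback controller; real systems also have to keep up with turbulence that changes on millisecond timescales.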

'We have worked for years on techniques for beating the distortions in the atmosphere and producing high-resolution images,' said Andrea Ghez, professor of physics and astronomy at UCLA.

The laser virtual star technique has already been used to good effect to peer into the very centre of our galaxy. The system was fitted to the 10m Keck II telescope at the W.M. Keck observatory in Hawaii. The telescope's gaze was trained towards the centre of the Milky Way, targeted on the so-called 'supermassive' black hole that is thought to lie there. Ghez and her team have been able to study material as it falls towards the black hole. They hope to use their observations to understand how the black hole grows, and in so doing, better understand the history of the galaxy.

To see fine detail of celestial bodies, such as the surface of stars, you need to use a telescope with a large aperture, with a size in the order of tens of metres. However, it is very difficult to engineer telescopes of this size: to build a rigid mirror of an appropriate size is almost impossible, not to mention prohibitively expensive. Images obtained with such a large telescope would also fall foul of atmospheric turbulence, as described above. Scientists working at Cambridge University in the UK first encountered this problem working with radio frequency telescopes. They developed a method, called aperture synthesis, by which a number of smaller telescopes can imitate a single large one. In this method scientists record the signals from a number of small telescopes as they are moved relative to each other, and as the rotation of the earth changes their positions relative to the source. A computer records and collates the different signals, and images are created by analysing the interference between them; hence this method of data extraction is called interferometry.

This method is widely used in radio astronomy, and is now being applied to optical astronomy. The precision involved in constructing an aperture synthesis telescope for visible light is much greater than that of a radio telescope, due to the relatively tiny wavelength of the visible portion of the electromagnetic spectrum as compared with the wavelengths used for radio astronomy.
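The resolution gain from synthesising a large aperture follows from a quick back-of-the-envelope calculation: a single dish resolves detail of roughly 1.22λ/D, while a pair of telescopes resolves λ/B, where B is the baseline between them. The figures below (a 1m dish, a 100m baseline, 550nm light) are illustrative, not the parameters of any particular instrument:

```python
# Angular resolution (in radians) of a single dish versus a two-element
# interferometer. All numbers are illustrative.
wavelength = 550e-9          # visible light, metres
dish_diameter = 1.0          # a single 1m telescope
baseline = 100.0             # separation of the two telescopes, metres

single_dish_res = 1.22 * wavelength / dish_diameter   # Rayleigh criterion
interferometer_res = wavelength / baseline            # fringe spacing of the pair

# The synthesised aperture resolves detail ~100x finer than either dish alone.
improvement = single_dish_res / interferometer_res
```

This is why the baseline, not the size of the individual mirrors, sets the resolving power of an aperture synthesis array – and why the tiny wavelengths of visible light make the mechanical tolerances so much tighter than in radio astronomy.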

This precision demands meticulously designed optical systems – in order for the light from different telescopes to interfere usefully, each beam must travel exactly the same optical path length. In the Cambridge Optical Aperture Synthesis Telescope (COAST), this is achieved by a system of trolleys carrying mirrors that move along a precisely arranged system of parallel tracks. Lasers track the exact position of each mirror, which is adjusted by an electromagnetic coil mounted on it. This allows minute adjustments to be made independently to the position of each mirror in the beam path. From here, the signals are combined and then recorded by a CCD detector.

Aperture synthesis telescopes allow astronomers to see the sky in much more detail than other methods, but it is still extremely difficult to see faint objects directly, such as extra-solar planets – planets orbiting other stars. Astronomers estimate that there could be as many as 10 billion planetary systems in our galaxy alone – yet, to date, only a few hundred have been discovered. So far astronomers have been able to infer the existence of hundreds of such planets by tracking the changing intensity of the light from the mother-star during 'transits'. These occur when a planet passes in front of its parent star, temporarily obscuring some of its light. This can be detected from the earth as a slight dimming of the star's luminosity. The dimming can be as little as one per cent of the total intensity. Other phenomena can also cause changes in the star's luminosity, so extremely accurate measurements are needed.
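The principle of picking such a dip out of noisy photometry can be illustrated with a toy light curve. Everything here – the noise level, the smoothing window, the 0.5 per cent detection threshold – is a hypothetical sketch, not any survey's actual pipeline:

```python
import numpy as np

def find_transit(flux, window=5, threshold=0.005):
    """Return the index of the deepest dip in a normalised light curve.

    A toy version of transit detection: smooth the flux with a moving
    average to suppress noise, then report the deepest point if it sits
    more than `threshold` below the baseline brightness of 1.0.
    """
    kernel = np.ones(window) / window
    smoothed = np.convolve(flux, kernel, mode="valid")  # avoids edge artefacts
    dip = int(np.argmin(smoothed))
    if 1.0 - smoothed[dip] > threshold:
        return dip + window // 2   # map back to an index in the original curve
    return None

# Simulated star: a flat light curve with 0.1% noise and a 1% transit dip.
rng = np.random.default_rng(1)
flux = 1.0 + rng.normal(0.0, 0.001, 500)
flux[240:260] -= 0.01                      # the planet crosses the stellar disc
transit_index = find_transit(flux)
```

The smoothing is what makes a one per cent dip stand out from measurement noise; real searches must also rule out the other phenomena, such as stellar variability, that can mimic it.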

SuperWASP uses an array of Andor cameras.

SuperWASP (Wide Angle Search for Planets) is an extra-solar planet detection programme run by a consortium of eight academic institutions: Cambridge University, the Instituto de Astrofisica de Canarias, the Isaac Newton Group of telescopes, Keele University, Leicester University, the Open University, Queen's University Belfast and St Andrews University. SuperWASP-North is an observatory on the island of La Palma, in the Canary Islands, hosted by the Isaac Newton Group of telescopes. It consists of eight Andor iKon-L large-area CCD cameras, each housing a vacuum-sealed, TE-cooled e2v 42-40 sensor with 2k x 2k pixels. The system has a wide field of view – 2,000 times greater than a conventional astronomical telescope. The cameras continuously photograph the night sky, each capturing up to 50,000 stars per image (this many are needed to stand any chance of detecting transiting planets), amounting to more than 40GB of observational data per night. Once sufficient observations have been made (over several months), searches can be run for the changes in brightness that might indicate the presence of a planet.

However, it is very difficult to see extra-solar planets directly: the star and the orbiting planet occupy a very narrow angular region, and the light from the star can be up to a million times brighter than the light reflected from the planet. This means that to viewers on earth the light from the star swamps any that may be reflected from the planet.

To overcome this effect, astronomers use a method called coronagraphy, by which the light coming directly from the star is blocked out to allow objects close to it to be seen more clearly. The simplest way of doing this is by placing a mask at some point in the telescope beam path. Unfortunately, due to adverse diffraction effects, this method cannot reveal features close to the star. Another, much more promising approach is to use interferometry, where unwanted light is rejected by destructive interference. This is the method used by the Achromatic Interfero Coronagraph (AIC), developed at the Observatoire de la Côte d'Azur. This uses a Michelson-Fourier interferometer (a device that splits the light beam into two parts, which travel different path lengths before being recombined) with a 'cat's eye' lens in one arm. This inverts one half of the beam, and puts the two halves out of phase with one another, which has the effect of eliminating on-axis light while preserving the rest of the signal. So, when the AIC is pointed directly at a star, the light of the star is removed from the resulting image, allowing astronomers to see objects, such as planets, in the field of space around the central star.
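The nulling principle can be captured in a couple of lines of complex arithmetic. This is a deliberately simplified, hypothetical model – a real instrument deals with extended wavefronts and achromatic phase shifts, not single complex amplitudes:

```python
import numpy as np

def nulled_intensity(offset_phase):
    """Intensity after recombining a beam with its inverted copy.

    `offset_phase` is the extra phase (in radians) a source acquires through
    being off-axis. An on-axis star (offset 0) interferes destructively and
    is nulled; an off-axis planet survives recombination. A toy model of
    interferometric coronagraphy.
    """
    direct_arm = np.exp(1j * 0.0)                  # unmodified half of the beam
    inverted_arm = -np.exp(1j * offset_phase)      # half inverted by the 'cat's eye'
    return abs(direct_arm + inverted_arm) ** 2 / 4.0   # normalised output

star = nulled_intensity(0.0)            # on-axis light cancels completely
planet = nulled_intensity(np.pi / 2)    # off-axis light is partially transmitted
```

The star's amplitude meets its own negative and vanishes, while the planet's extra phase spoils the cancellation – which is exactly why an interferometric null can reveal features far closer to the star than a simple occulting mask.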

The team working at the Observatoire de la Côte d'Azur uses a Jade SWIR camera from Cedip to calibrate and refine the AIC. When testing the system in the lab, the team was able to use data gathered with the Jade SWIR to detect small optical quality defects within the AIC. The data also provided constraints on the instrument's internal adjustments and on its insertion into the optical train of the system.

A group working at the Institute of Astronomy at Cambridge University has developed a new system of ground-based observation that can acquire images of a quality approaching that of the Hubble telescope, without adaptive optics and at a vastly reduced cost. The method, called 'Lucky imaging', uses fast CCD detectors to take many pictures. Low-quality images are rejected, leaving only the best photographs, which are combined to create the final image. Although rejecting a number of images represents a loss in light-gathering efficiency, the high resolutions obtained make up for this.
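The select-and-stack step can be sketched as follows. The frames here are synthetic one-dimensional star profiles whose width varies randomly, standing in for frame-to-frame seeing; the 10 per cent keep fraction and scoring by peak brightness are illustrative choices, not the exact LuckyCam pipeline:

```python
import numpy as np

def lucky_select(frames, keep_fraction=0.1):
    """Average only the sharpest fraction of frames.

    Sharpness is scored by each frame's peak value - a rough proxy for the
    Strehl ratio, since a well-corrected star image concentrates its light
    into a bright core.
    """
    scores = [f.max() for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]       # indices of the sharpest frames
    return np.mean([frames[i] for i in best], axis=0)

# 100 synthetic 'frames': a star profile blurred by a randomly varying amount.
rng = np.random.default_rng(2)
x = np.arange(32) - 16
frames = []
for _ in range(100):
    width = rng.uniform(1.0, 6.0)             # seeing changes frame to frame
    profile = np.exp(-(x / width) ** 2)
    frames.append(profile / profile.sum())    # same total flux in every frame
lucky_stack = lucky_select(frames)
naive_stack = np.mean(frames, axis=0)         # stacking everything blurs the core
```

Averaging every frame smears the sharp moments together with the blurred ones; keeping only the lucky fraction preserves a tight, bright stellar core at the cost of discarded light.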

Until recently such an approach would not have been possible, as the LuckyCam system relies upon being able to capture extremely faint signals of only a few photons in some cases. Such a method requires detectors that are almost perfectly efficient, and virtually noiseless. Images must be taken very quickly to prevent dynamic blurring. However, increasing the readout speed of a CCD camera usually has to be traded off against an associated increase in readout noise. Such noise could mask the very faint objects that the astronomer is trying to see.

The Cambridge team uses a new generation of CCD cameras, called Low Light Level CCDs or L3CCDs, developed by e2v technologies. These have a noiseless electron multiplication stage, which allows even very faint signals to be amplified without a proportional increase in signal noise. These detectors were first developed for use in surveillance and security applications, and in other environments where the amount of available light is limited. L3CCDs are sensitive enough to detect the faintest stars in the sky while operating at the very high camera speeds used for Lucky imaging.
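The benefit of multiplying the signal before the readout stage shows up in a simple signal-to-noise estimate. The numbers below (five detected photons, 50 electrons of read noise, a gain of 1,000) are illustrative rather than e2v specifications, and the sketch deliberately ignores the multiplication process's excess-noise factor:

```python
import numpy as np

def snr(photons, read_noise, em_gain=1.0):
    """Signal-to-noise ratio of a faint source on a CCD, with optional EM gain.

    With electron multiplication the signal is amplified *before* the readout
    amplifier, so the effective read noise shrinks by the gain factor, leaving
    only the Poisson (shot) noise of the light itself. Excess noise from the
    multiplication register is ignored here for simplicity.
    """
    shot_noise = np.sqrt(photons)                  # Poisson noise of the signal
    effective_read_noise = read_noise / em_gain    # read noise referred to input
    return photons / np.sqrt(shot_noise**2 + effective_read_noise**2)

conventional_snr = snr(photons=5, read_noise=50)                 # signal buried
emccd_snr = snr(photons=5, read_noise=50, em_gain=1000)          # shot-noise limited
```

At fast frame rates a conventional CCD's read noise swamps a five-photon signal entirely; with high gain the same exposure approaches the shot-noise limit, which is what makes Lucky imaging's short, faint exposures usable at all.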

Advances in optical technology give astronomers the tools to view the skies in ever-greater detail, and in so doing to better understand the universe in which we live. Eventually observations may help scientists understand how the universe has evolved, and what its fate might be.