The Fraunhofer Institute for Laser Technology ILT has installed a new high-performance computing cluster within its Centre for Nanophotonics. The cluster will allow researchers to simulate laser-based production processes across a wide range of time and length scales, including novel techniques in micro- and nanophotonics.
Process variables for industrial applications are difficult to optimise through direct measurement in the micrometre-scale process zones, owing to the tiny dimensions and very high temperatures involved. Computer simulations therefore offer an attractive alternative for those looking to optimise performance; simulations are easier to automate and often more cost-effective than experiments, while also allowing fluctuations and measurement uncertainties to be excluded or specifically taken into account. Additionally, simulations of laser-based production processes tend to be multi-scale problems, in which a large extent of the component has to be resolved at a very high spatial resolution. While this would be difficult to account for in an experimental set-up, it is easy to model in silico.
According to the ILT, macro processing applications such as steel plate cutting are also relying increasingly on simulation-based methods in order to control small-scale effects and expand the process limits. To optimise expulsion of the molten metal during laser cutting, for example, boundary layer phenomena of supersonic gas flows in the kerf are analysed in detail.
The large number of grid points required by these simulations exceeded the capacity of conventional workstations in terms of both processing time and storage space, and so the institute required a purpose-built supercomputer for these applications. The funding provided by the state of North Rhine-Westphalia for the new Centre for Nanophotonics in Aachen allowed the Fraunhofer ILT to build a cluster capable of handling simulations of these multi-scale tasks. The final stage of the high-power computer system was installed and started up in November 2010.
The cluster is based on a heterogeneous architecture consisting of both multi-core processors and nodes based on the Nvidia CUDA framework, a system that allows parts of the calculations to be performed on specialised graphics processing units (GPUs). This modern concept is particularly suitable for the massively parallel execution of frequently recurring calculation steps.
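To give a flavour of the kind of "frequently recurring calculation step" that maps well onto a GPU, the sketch below shows a minimal CUDA kernel applying a simple 1-D relaxation stencil to a temperature-like field, with every grid point updated by its own thread. This is purely illustrative: the kernel, field, and grid sizes are hypothetical and not taken from the ILT's simulation codes.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical stencil update: each GPU thread handles one grid point
// independently, so the whole field is updated in parallel.
__global__ void relaxStep(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1) {
        // simple 1-D diffusion/relaxation stencil (illustrative only)
        out[i] = 0.25f * in[i - 1] + 0.5f * in[i] + 0.25f * in[i + 1];
    }
}

int main() {
    const int n = 1 << 20;  // ~1 million grid points
    float *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = (i == n / 2) ? 1.0f : 0.0f;

    // Launch enough 256-thread blocks to cover every grid point.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    relaxStep<<<blocks, threads>>>(in, out, n);
    cudaDeviceSynchronize();

    printf("centre value after one step: %f\n", out[n / 2]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

In a real multi-scale simulation the same idea applies per time step on 3-D fields: the per-point arithmetic is identical everywhere, which is exactly the pattern that benefits from thousands of GPU threads running in lockstep.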