We use deep learning to reconstruct the nanostructures of porous materials from FIB-SEM image stacks.

Deep Learning for 3D Reconstruction of Highly Porous Structures From FIB-SEM Nano Tomograms

Better Understanding of Structure-Property Relations by 3D Imaging and Deep Learning

Modern materials such as gas diffusion layers for fuel cells, electrodes for lithium-ion batteries, filter media, or ceramic materials with active components have complex, multiscale structures that strongly influence the macroscopic material behavior. 3D images of these structures provide a deeper understanding of how structure and properties are related. We contribute to this understanding with new deep learning methods.

On the nanoscale, structures in the 5–100 nm range can be imaged three-dimensionally using the FIB-SEM serial sectioning technique. A Focused Ion Beam (FIB) precisely cuts the structure of interest, and the cut surface is then imaged with a Scanning Electron Microscope (SEM). The surface is ablated further, and the new cut surface is imaged again. Several hundred sectional images yield a volume data set.

Zirconia Sample
© Sören Höhn, Fraunhofer IKTS
Figure 1: SEM image of a nanoporous zirconia sample.
Random packing of cylinders
© Fraunhofer ITWM
Figure 2: Example of synthetic SEM image data from realizations of stochastic geometry models; random packing of cylinders.
Cox-Boolean model of spheres
© Fraunhofer ITWM
Figure 3: Example of synthetic SEM image data from realizations of stochastic geometry models; Cox-Boolean model of spheres.

Shine-through Artifacts Complicate the Reconstruction

In the case of high porosity, however, the 3D structure reconstructed from the 2D sections does not correspond to the real structure, because the individual SEM images show more than the actual cut surfaces. Owing to the SEM's high depth of field, structural regions lying behind the cut surface also become visible through the pores and appear just as bright as the cut surface itself. This effect, in which solid structures from deeper layers are visible through the pores, is called the shine-through artifact. Reconstructing the 3D structure and analyzing it quantitatively is therefore challenging for porous structures. Several reconstruction methods have been developed at Fraunhofer ITWM. However, they are still tailor-made for specific structures and contrast ratios, not easy to parameterize, and susceptible to typical imaging artifacts such as streaks due to cutting or charging in the case of non-conductive materials.
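The shine-through effect can be illustrated with a toy model (a sketch for intuition, not the physically correct SEM simulation used for training): for each pixel of a cut at depth z, solid material up to a few voxel layers behind the cut contributes to the image, dimmed by an assumed decay factor, with nearer material occluding deeper material.

```python
import numpy as np

def shine_through_slice(vol, z, depth=5, decay=0.8):
    """Toy simulation of one SEM image with shine-through artifacts.

    vol   : binary 3D array (z, y, x), True = solid material
    z     : index of the cut surface
    depth : how many layers behind the cut still shine through (assumption)
    decay : brightness attenuation per layer of depth (assumption)
    """
    img = np.zeros(vol.shape[1:])
    for d in range(depth):
        if z + d >= vol.shape[0]:
            break
        # Solid voxels at depth d, but only where no nearer material occludes them
        layer = vol[z + d] & (img == 0)
        img[layer] = decay ** d
    return img
```

In this sketch, a pore whose back wall lies two layers behind the cut would still appear with 64 percent of full brightness, which is exactly why pores and cut surfaces are hard to distinguish in the raw images.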

Sketch of the U-Net3D architecture used
© Fraunhofer ITWM
Figure 4: Sketch of the U-Net3D architecture used; the gray boxes represent convolutional layers, each followed by batch normalization and ReLU activation; red: max-pooling layer; blue: corresponding upsampling layer; yellow: convolutional layer with softmax activation.

Deep Learning as a Solution Approach for Complex 3D Segmentation Tasks

Machine learning is increasingly used to solve complex segmentation tasks, including on 3D images. However, Convolutional Neural Networks (CNNs) need many correctly segmented (here: reconstructed) images for the training phase. We therefore trained a CNN (U-Net3D) using only synthetically generated FIB-SEM images. U-Net3D consists of two paths; the first, the encoder, resembles a classical CNN, and all convolutional layers are activated with the Rectified Linear Unit (ReLU) (see Figure 4).

Once the narrowest layer, the bottleneck of the network, has been passed, the symmetric second path expands the feature maps again. In the bottleneck, a dropout layer with probability 0.5 serves as an indirect augmentation. The expanding path enables precise localization at the original image dimensions by using upsampling layers instead of pooling layers and by concatenating the output with the corresponding layer in the encoder. Finally, an additional convolutional layer with softmax activation yields probabilities of class membership. Figure 4 contains a sketch of our architecture, including the number of layers.
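As a sketch of how these pieces fit together, the following minimal U-Net3D in PyTorch mirrors the description above: convolutional blocks with batch normalization and ReLU, max-pooling in the encoder, dropout with probability 0.5 in the bottleneck, upsampling plus skip connections in the decoder, and a final convolution with softmax. The number of levels and the channel widths here are illustrative assumptions, not the exact configuration from the publication.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3D convolutions, each followed by batch normalization and ReLU
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1),
        nn.BatchNorm3d(c_out),
        nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, 3, padding=1),
        nn.BatchNorm3d(c_out),
        nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, n_classes=2, base=16):
        super().__init__()
        # Encoder: classical CNN with max-pooling between levels
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        # Bottleneck with dropout as indirect augmentation
        self.bottleneck = nn.Sequential(
            conv_block(base * 2, base * 4), nn.Dropout3d(0.5)
        )
        # Decoder: upsampling instead of pooling, plus skip connections
        self.up = nn.Upsample(scale_factor=2, mode="trilinear",
                              align_corners=False)
        self.dec2 = conv_block(base * 4 + base * 2, base * 2)
        self.dec1 = conv_block(base * 2 + base, base)
        # Final 1x1x1 convolution; softmax yields class probabilities
        self.head = nn.Conv3d(base, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        return torch.softmax(self.head(d1), dim=1)
```

The concatenation in the decoder is what links each upsampled feature map to its counterpart in the encoder, giving the precise localization mentioned above.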

Training With Synthetically Generated Images

First, we create a geometry, e.g., a system of overlapping spheres. This sphere system is discretized directly, so the resulting 3D image is immediately available as ground truth. Second, a stack of SEM images is simulated in a physically correct way. Figures 2 and 3 show two examples of synthetic SEM image data from realizations of stochastic geometry models. With these data, the CNN learns to decide which pixels actually belong to the foreground, i.e., to the material. We then presented FIB-SEM stacks of real structures to this CNN trained solely on synthetic data. It segmented two real data sets as well as the best tailor-made classical methods did. Figure 5 shows the reconstruction result of the trained model for the nanoporous zirconia sample. The next development goal is to train the CNN on synthetic FIB-SEM images of many different structures; such a CNN could then also correctly segment new, unknown structures.
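The first step can be sketched in a few lines of NumPy, assuming illustrative parameters (grid size, sphere count, radius range): a Boolean model of overlapping spheres is discretized directly on a voxel grid, and the resulting binary volume is the ground truth.

```python
import numpy as np

# Boolean model: union of randomly placed overlapping spheres,
# discretized directly on a cubic voxel grid.
rng = np.random.default_rng(42)
N = 64                               # edge length of the grid in voxels (assumption)
zz, yy, xx = np.mgrid[0:N, 0:N, 0:N]
ground_truth = np.zeros((N, N, N), dtype=bool)
for _ in range(40):                  # number of spheres is an assumption
    cz, cy, cx = rng.uniform(0, N, size=3)
    r = rng.uniform(4, 8)            # radii in voxels, assumed range
    ground_truth |= (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
```

From such a volume, the physically based SEM simulation then generates the synthetic image stack that the CNN is trained on.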

Model for the nanoporous zirconium dioxide sample
© Fraunhofer ITWM
Figure 5: Reconstruction result of the trained model for the nanoporous zirconia sample; green: correctly segmented foreground pixels (true positive); yellow: pixels incorrectly classified as foreground (false positive); red: pixels incorrectly classified as background (false negative).
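The color coding in Figure 5 corresponds to the standard confusion classes, which can be computed directly from boolean masks; `confusion_map` is a hypothetical helper name for illustration.

```python
import numpy as np

def confusion_map(pred, gt):
    """Per-voxel confusion classes for a binary segmentation.

    pred, gt : boolean arrays of equal shape, True = foreground (material)
    """
    return {
        "true_positive": pred & gt,     # green in Figure 5
        "false_positive": pred & ~gt,   # yellow in Figure 5
        "false_negative": ~pred & gt,   # red in Figure 5
    }
```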

References

  • Fend, C.; Moghiseh, A.; Redenbach, C.; Schladitz, K.:
    Reconstruction of highly porous structures from FIB-SEM using a deep neural network trained on synthetic images.
    Journal of Microscopy, 281, no. 1, pp. 16–27, 2021.