Description
Three-dimensional X-ray virtual histology (XVH) techniques offer a powerful alternative to traditional 2D histology by enabling non-invasive, volumetric imaging of biological tissues without time-consuming sectioning or staining. However, XVH approaches inherently produce the greyscale images characteristic of X-ray tomography and lack the biochemical specificity of conventional histological stains. In parallel, the field of digital pathology has seen the emergence of deep-learning-driven ‘virtual staining’, in which neural networks simulate the effects of histological dyes on unstained optical images. In this study, we bridge these two domains by applying cross-modality image translation techniques to generate artificially stained 3D X-ray virtual histology data. As part of our multimodal correlative characterisation studies of biodegradable metal implants in bone, we performed multiple synchrotron-based X-ray micro-CT and toluidine blue histology measurements sequentially on the same samples. We co-registered more than 50 pairs of micro-CT and histology images, which were then used to train a patch-based, modified CycleGAN network suited to limited paired data. Outputs were successfully optimised to replicate the colours of the histology images while retaining most high-contrast, high-resolution features. On combined metrics of mean squared error, structural similarity, and Fréchet inception distance, this model outperformed both Pix2Pix and standard CycleGAN outputs. Once trained, the model can be applied to an input stack of X-ray CT slices to rapidly generate a virtually stained 3D dataset, opening up new possibilities for multimodal correlative tissue analysis.
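As an illustration of the pixel-level metrics mentioned above, the sketch below computes mean squared error and a simplified, global (single-window) form of the structural similarity index with NumPy. The function names and the non-windowed SSIM formulation are ours for illustration only; the paper's actual evaluation pipeline is not specified here, and in practice a windowed implementation (e.g. `skimage.metrics.structural_similarity`) and a pretrained-Inception FID implementation would be used.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return float(np.mean((a - b) ** 2))

def global_ssim(a, b, data_range=255.0):
    """Simplified SSIM computed over the whole image as one window.

    The standard SSIM slides a Gaussian window across the image and
    averages local scores; this single-window version only illustrates
    the luminance/contrast/structure comparison at the heart of the metric.
    """
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    # Stabilising constants from the original SSIM definition
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov_ab = ((a - mu_a) * (b - mu_b)).mean()
    return float(
        ((2 * mu_a * mu_b + c1) * (2 * cov_ab + c2))
        / ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))
    )
```

Lower MSE and higher SSIM indicate closer agreement between a virtually stained slice and its co-registered histology ground truth; identical images give an MSE of 0 and an SSIM of 1.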