Description
Whole-body radiography is an important procedure in standardised, comprehensive mouse phenotyping pipelines, yet quantitative skeletal analysis remains largely manual and time-consuming.
We present an automated bone length quantification workflow based on DeepLabCut, a deep learning toolbox originally developed for markerless pose estimation in animals. We apply this workflow to dorsoventral radiographs from the International Mouse Phenotyping Consortium (IMPC), a dataset comprising more than 100,000 X-ray images. The model was trained on 276 randomly selected IMPC images, ensuring a representative sample of image qualities and anatomical presentations. To address the substantial variability in image quality and anatomical orientation, we implemented a pre-processing pipeline that combines multiple image enhancement steps with automated rotation, ensuring consistent anatomical alignment across all X-rays.
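To illustrate, the following is a minimal sketch of such a pre-processing step, assuming CLAHE-based contrast enhancement and a PCA-derived rotation angle; the specific enhancement parameters and the orientation heuristic are illustrative assumptions, not the exact pipeline used.

```python
# Sketch of an assumed pre-processing step (not the exact IMPC pipeline):
# contrast enhancement with CLAHE, then rotation so that the principal
# body axis of the animal is approximately vertical.
import cv2
import numpy as np

def preprocess_radiograph(path: str) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Contrast-limited adaptive histogram equalisation to normalise
    # exposure differences between X-ray images (parameters are assumptions).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)

    # Estimate the animal's main axis from the foreground pixels via
    # PCA (SVD of centred pixel coordinates) and rotate the image so
    # this axis lies roughly vertical.
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)
    coords = np.column_stack([xs, ys]).astype(np.float32)
    mean = coords.mean(axis=0)
    _, _, vt = np.linalg.svd(coords - mean, full_matrices=False)
    angle = np.degrees(np.arctan2(vt[0, 1], vt[0, 0]))  # main axis vs. x-axis

    h, w = img.shape
    center = (float(mean[0]), float(mean[1]))
    rot = cv2.getRotationMatrix2D(center, angle - 90.0, 1.0)
    return cv2.warpAffine(img, rot, (w, h), flags=cv2.INTER_LINEAR)
```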
Projected bone lengths were calculated as Euclidean distances between predicted anatomical landmarks. An interactive Streamlit-based application allows flexible image selection, visualisation of predictions, cohort-based statistical analysis, and manual quality control of landmark predictions. To further improve model performance, an active learning strategy is planned that selects maximally diverse new training samples; this iterative approach aims to increase the robustness of landmark predictions.
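As a minimal sketch of the length calculation, assuming landmarks are available as 2D pixel coordinates: the landmark names below are hypothetical, and in practice a pixel-to-millimetre calibration factor would also be applied.

```python
# Sketch of the bone length computation; the landmark names
# ("femur_proximal", "femur_distal") are hypothetical examples.
import numpy as np

def bone_length(landmarks: dict, start: str, end: str) -> float:
    """Projected bone length as the Euclidean distance between two
    predicted 2D landmarks, given as {name: (x, y)} in pixels."""
    p, q = np.asarray(landmarks[start]), np.asarray(landmarks[end])
    return float(np.linalg.norm(p - q))

landmarks = {"femur_proximal": (412.3, 958.1), "femur_distal": (430.7, 1102.4)}
print(bone_length(landmarks, "femur_proximal", "femur_distal"))  # length in pixels
```

For the planned active learning step, one standard way to select maximally diverse samples is farthest-point (k-centre greedy) sampling in a feature space; the sketch below is an assumption about how such a selection could look, not the project's confirmed strategy.

```python
# Hypothetical sketch of diversity-based sample selection via
# farthest-point (k-centre greedy) sampling in a feature space.
import numpy as np

def select_diverse(features: np.ndarray, k: int) -> list[int]:
    """Greedily pick k row indices of `features` that are maximally
    spread out; `features` could be image embeddings, one row per image."""
    n = features.shape[0]
    chosen = [int(np.random.default_rng(0).integers(n))]  # arbitrary seed point
    dists = np.linalg.norm(features - features[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))  # farthest from everything chosen so far
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return chosen
```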
Initial results suggest that the workflow enables reproducible and sensitive measurements of projected bone lengths across large datasets, offering a promising approach for quantitative skeletal phenotyping and phenodeviance detection in mouse models.