Fully Automated 3D Framework Accurately Maps Fruit Microstructure

Applied to apple and pear fruit, the model achieved high segmentation accuracy, outperforming a previous 2D deep learning approach and a traditional watershed algorithm.

The 3D structure of plant tissues underlies vital metabolic processes, yet traditional microscopy methods demand extensive sample preparation and offer only small fields of view. X-ray micro-CT has recently enabled non-destructive 3D imaging of plant samples, but quantifying tissue morphology remains complex due to overlapping features and low image contrast, and existing segmentation techniques often fail to separate parenchyma cells, vascular tissues, or stone cell clusters. Recent advances in deep learning have transformed image analysis in medicine and biology, suggesting new opportunities for plant research. Against this backdrop, a deep learning–based approach is needed to achieve accurate, automated 3D segmentation of plant tissues from native X-ray micro-CT images.

A study (DOI: 10.1016/j.plaphe.2025.100087) published in Plant Phenomics on 5 July 2025 by Pieter Verboven's team at KU Leuven provides the first fully automated framework for labeling and quantifying plant tissue architecture, paving the way for faster and more precise studies of plant physiology and storage behavior.

The research employed a 3D panoptic segmentation framework built upon the 3D extension of Cellpose and a 3D Residual U-Net to achieve complete labeling of fruit tissue microstructure from X-ray micro-CT images. The model simultaneously performed instance segmentation, predicting intermediate gradient fields in X, Y, and Z to separate individual parenchyma cells, and semantic segmentation, classifying voxels into cell matrix, pore space, vasculature, or stone cell clusters. It was trained on apple and pear datasets with synthetic data augmentation involving morphological dilation and erosion, grey-value assignment, and Gaussian noise addition, and benchmarked against a 2D instance segmentation model and a marker-based watershed algorithm.
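The study's exact augmentation parameters are not given in this summary, but the operations it names (morphological dilation and erosion, grey-value assignment, and Gaussian noise addition) can be sketched roughly as below. This is a minimal illustration assuming a 3D instance label map where each parenchyma cell carries its own integer label and 0 marks pore space; the structuring elements, grey values, noise level, and the helper name `augment_volume` are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of a synthetic augmentation step for micro-CT-like volumes.
# Assumes `labels` is a 3D integer array: 0 = pore space, >0 = individual cells.
import numpy as np
from scipy import ndimage


def augment_volume(labels, rng=None):
    """Create a synthetic grey-value volume from an instance label map."""
    rng = np.random.default_rng() if rng is None else rng

    # 1. Randomly dilate or erode the cell matrix to vary apparent wall thickness.
    matrix = labels > 0
    struct = ndimage.generate_binary_structure(3, 1)
    if rng.random() < 0.5:
        matrix = ndimage.binary_dilation(matrix, structure=struct,
                                         iterations=int(rng.integers(1, 3)))
    else:
        matrix = ndimage.binary_erosion(matrix, structure=struct,
                                        iterations=int(rng.integers(1, 3)))

    # 2. Assign grey values: brighter cell matrix, darker pores, jittered per volume.
    volume = np.where(matrix, rng.uniform(0.6, 0.8), rng.uniform(0.1, 0.3)).astype(np.float32)

    # 3. Add Gaussian noise and a slight blur to mimic scanner noise and partial-volume effects.
    volume += rng.normal(0.0, 0.05, size=volume.shape).astype(np.float32)
    volume = ndimage.gaussian_filter(volume, sigma=0.7)
    return np.clip(volume, 0.0, 1.0)
```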

Evaluation using the Aggregated Jaccard Index (AJI) and Dice Similarity Coefficient (DSC) showed that the 3D model outperformed all previous approaches, reaching AJIs of 0.889 for apple and 0.773 for pear, compared with 0.861/0.732 for the 2D model and 0.715/0.631 for the watershed benchmark. The model segmented pore spaces and cell matrices almost perfectly and successfully identified vasculature (DSC 0.506 in apple; 0.789 in pear) and stone cell clusters (IoU 0.683; DSC 0.810; precision 0.798; recall 0.836). Visual validation confirmed accurate detection of vascular bundles in 'Kizuri' and 'Braeburn' apples and smooth, realistic segmentation of stone cell clusters in 'Celina' and 'Fred' pears (DSC up to 0.90). However, additional data augmentation and targeted subsets did not enhance performance, likely due to dataset imbalance and domain shifts. Morphometric analysis further validated model accuracy, with vasculature widths ranging from 70 to 780 μm and stone cell clusters showing variable dimensions and sphericity (0.68–0.74). Overall, the 3D deep learning model provided the most complete, automated, and contrast-free approach to date for quantifying plant tissue microstructure.
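The paper's evaluation code is not reproduced here, but the two headline metrics can be computed from label volumes roughly as follows. The helpers `dice` and `aggregated_jaccard_index` are illustrative implementations of the commonly used DSC definition and the greedy best-IoU matching behind AJI, not the study's own scripts.

```python
# Sketch of DSC (per semantic class) and AJI (per instance map) from 3D label arrays.
import numpy as np


def dice(pred_mask, true_mask):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    denom = pred_mask.sum() + true_mask.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0


def aggregated_jaccard_index(pred_labels, true_labels):
    """AJI for instance label maps (0 = background): each ground-truth instance
    is matched to its best-IoU prediction; unmatched predictions penalize the score."""
    pred_ids = [i for i in np.unique(pred_labels) if i != 0]
    true_ids = [i for i in np.unique(true_labels) if i != 0]
    used, c, u = set(), 0, 0
    for t in true_ids:
        t_mask = true_labels == t
        best_iou, best_inter, best_union, best_p = 0.0, 0, t_mask.sum(), None
        for p in pred_ids:
            p_mask = pred_labels == p
            inter = np.logical_and(t_mask, p_mask).sum()
            if inter == 0:
                continue
            union = np.logical_or(t_mask, p_mask).sum()
            iou = inter / union
            if iou > best_iou:
                best_iou, best_inter, best_union, best_p = iou, inter, union, p
        c += best_inter
        u += best_union
        if best_p is not None:
            used.add(best_p)
    # Predicted instances never matched to any ground truth count fully against the score.
    for p in pred_ids:
        if p not in used:
            u += (pred_labels == p).sum()
    return c / u if u > 0 else 1.0
```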

This 3D deep learning–based model provides plant scientists with a powerful, non-destructive tool for studying how microscopic structures influence water, gas, and nutrient transport. It can drastically accelerate "human-in-the-loop" analysis, reducing manual labor while improving accuracy in tissue characterization. In fruit research, the model helps reveal how cellular arrangements determine texture, storability, and susceptibility to physiological disorders such as browning or watercore. More broadly, the technology offers a scalable framework for studying tissue development, ripening, and stress responses across diverse crops. Its compatibility with standard X-ray micro-CT instruments makes it an accessible solution for integrating artificial intelligence into plant anatomy and food science research.

Journal reference:

Van Doorselaer, L., et al. (2025). Panoptic segmentation for complete labeling of fruit microstructure in 3D micro-CT images with deep learning. Plant Phenomics. https://doi.org/10.1016/j.plaphe.2025.100087
