M24-213L: ASA: Learning Anatomical Consistency, Sub-volume Spatial Relationships and Fine-grained Appearance for CT Images – Researchers at Arizona State University have developed a self-supervised learning approach that enhances segmentation performance on 3D computed tomography images by learning anatomical consistency, sub-volume order, and fine-grained appearance features. Exploiting the symmetry and inherent spatial patterns of medical images, ASA combines sub-volume order prediction, volume appearance recovery, and global and local feature alignment via a student-teacher network to train models efficiently without large annotated datasets. Please see Pang et al. – Springer Nature – 2024 for additional information.
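To make the pretext task concrete, below is a minimal sketch of sub-volume order prediction, one of the three objectives named above: a volume is split into sub-volumes, shuffled, and a network is trained to classify which permutation was applied. The architecture, sizes, and names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a sub-volume order-prediction pretext task.
# All names (OrderPredictor, N_SUBVOLS, etc.) are illustrative placeholders.
import itertools
import torch
import torch.nn as nn

N_SUBVOLS = 4
PERMS = list(itertools.permutations(range(N_SUBVOLS)))  # 24 possible orders

class OrderPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # Tiny 3D encoder shared across sub-volumes.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # Classify which of the 24 permutations was applied.
        self.head = nn.Linear(16 * N_SUBVOLS, len(PERMS))

    def forward(self, subvols):  # subvols: (B, N_SUBVOLS, 1, D, H, W)
        feats = [self.encoder(subvols[:, i]) for i in range(N_SUBVOLS)]
        return self.head(torch.cat(feats, dim=1))

def make_batch(volume):  # volume: (1, D, H, W), D divisible by N_SUBVOLS
    parts = list(torch.chunk(volume, N_SUBVOLS, dim=1))  # split along depth
    perm_id = torch.randint(len(PERMS), (1,)).item()
    shuffled = torch.stack([parts[i] for i in PERMS[perm_id]])
    return shuffled.unsqueeze(0), torch.tensor([perm_id])

model = OrderPredictor()
x, y = make_batch(torch.randn(1, 32, 64, 64))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()  # self-supervised: the label is the permutation itself
```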
M24-296L & M25-166L: ACE: Anatomically Consistent Embeddings via Composition and Decomposition – Researchers at Arizona State University have developed a novel self-supervised learning method called ACE, which enforces dual consistency on both global macro-structures and local fine-grained details within medical images through contrastive learning and matrix matching. By using grid-wise image cropping for precise patch matching, ACE excels at capturing the compositional and decompositional features of anatomical structures. Validated on chest X-ray datasets, it shows superior robustness, transferability, and clinical potential across classification, segmentation, and key-point detection tasks. Please see Zhou et al. – IEEE – 2025 for additional information.
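The grid-wise cropping idea can be illustrated with a short sketch: because crop offsets are snapped to the patch grid, the overlapping grid cells of two views correspond exactly, which yields ground-truth matches for a local contrastive loss alongside a global one. The encoder, grid size, and temperature below are placeholder assumptions, not ACE's actual components.

```python
# Hedged sketch of grid-wise patch matching for dual (local + global)
# consistency; all names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH = 16  # grid cell size in pixels

encoder = nn.Conv2d(1, 32, kernel_size=PATCH, stride=PATCH)  # stand-in ViT

def grid_crop(img, row, col, rows, cols):
    """Crop snapped to the patch grid: offsets are multiples of PATCH."""
    y, x = row * PATCH, col * PATCH
    return img[:, :, y:y + rows * PATCH, x:x + cols * PATCH]

img = torch.randn(1, 1, 8 * PATCH, 8 * PATCH)   # chest X-ray stand-in
view_a = grid_crop(img, 0, 0, 6, 6)             # covers grid cells [0:6, 0:6]
view_b = grid_crop(img, 2, 2, 6, 6)             # covers grid cells [2:8, 2:8]

za = encoder(view_a).flatten(2).squeeze(0).T    # (36, 32) patch embeddings
zb = encoder(view_b).flatten(2).squeeze(0).T

# Overlap is full-grid cells [2:6, 2:6]; translate to each view's local
# indices to obtain the ground-truth patch matching.
idx_a = [r * 6 + c for r in range(2, 6) for c in range(2, 6)]
idx_b = [r * 6 + c for r in range(0, 4) for c in range(0, 4)]

# Local consistency: matched cells should be nearest neighbours (InfoNCE).
sim = F.normalize(za[idx_a], dim=1) @ F.normalize(zb[idx_b], dim=1).T
local_loss = F.cross_entropy(sim / 0.1, torch.arange(len(idx_a)))

# Global consistency: whole-view embeddings of the two crops should agree.
global_loss = 1 - F.cosine_similarity(za.mean(0), zb.mean(0), dim=0)
loss = local_loss + global_loss
```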
M25-126L: Autodidactic Dense Anatomical Models – Researchers at Arizona State University have developed a self-supervised learning framework, Adam-v2, which leverages a three-branch architecture (localizability, composability, and decomposability) to model the hierarchical and compositional nature of anatomical structures in medical images. Adam-v2 improves performance on medical imaging tasks such as segmentation and disease classification while demonstrating strong generalizability and an emergent understanding of anatomical layouts. Please see Ma et al. – Nature – 2025 for additional information.
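As a rough illustration of a three-branch objective, the skeleton below pairs a shared backbone with simple stand-ins for the three properties: a contrastive loss for localizability, a parts-to-whole regression for composability, and a whole-to-parts regression for decomposability. Layer sizes, losses, and weights are assumptions, not the paper's design.

```python
# Illustrative skeleton of a three-branch objective; placeholders throughout.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())  # -> (B, 16)

img = torch.randn(2, 1, 64, 64)
parts = [img[:, :, :32, :], img[:, :, 32:, :]]        # split whole into parts

z_whole = backbone(img)
z_parts = torch.stack([backbone(p) for p in parts])   # (2 parts, B, 16)

# Localizability: the same image under two augmentations should embed
# closer than different images (a simple contrastive stand-in).
z_aug = backbone(img + 0.05 * torch.randn_like(img))
sim = F.normalize(z_whole, dim=1) @ F.normalize(z_aug, dim=1).T
loc_loss = F.cross_entropy(sim / 0.1, torch.arange(len(img)))

# Composability: part embeddings should compose into the whole's embedding.
compose = nn.Linear(2 * 16, 16)
comp_loss = F.mse_loss(compose(z_parts.permute(1, 0, 2).flatten(1)), z_whole)

# Decomposability: the whole's embedding should predict each part's.
decompose = nn.Linear(16, 2 * 16)
dec_loss = F.mse_loss(decompose(z_whole).view(-1, 2, 16),
                      z_parts.permute(1, 0, 2))

loss = loc_loss + comp_loss + dec_loss
```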
M25-141L: Foundation X: Integrating Classification, Localization, and Segmentation through Lock-Release Pretraining Strategy for Chest X-ray Analysis – Researchers at Arizona State University have developed an end-to-end framework, named Foundation X, which draws on diverse expert-level annotations from various public datasets to train a single foundation model capable of classification, localization, and segmentation. By making extensive use of these annotations, Foundation X achieves significant performance gains and excels at cross-dataset and cross-task learning, enhancing organ localization and segmentation. Please see Islam et al. – WACV – 2025 for additional information.
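A minimal sketch of how a lock-release cycle might look, assuming it means freezing (locking) the task heads not used by the current dataset while releasing the active one and keeping the shared backbone trainable; all module names, shapes, and losses below are hypothetical.

```python
# Hedged sketch of a lock-release training cycle over heterogeneous
# datasets; names and dimensions are placeholders.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
heads = nn.ModuleDict({
    "classification": nn.Linear(16, 14),   # e.g. 14 thorax diseases
    "localization":   nn.Linear(16, 4),    # box-regression stand-in
    "segmentation":   nn.Linear(16, 64),   # mask-logit stand-in
})

def lock_release(active):
    """Lock every head except the active task's; the backbone stays live."""
    for name, head in heads.items():
        for p in head.parameters():
            p.requires_grad = name == active

opt = torch.optim.Adam(list(backbone.parameters()) +
                       list(heads.parameters()), lr=1e-4)

# Round-robin over datasets, each annotated for a single task.
for task in ["classification", "localization", "segmentation"]:
    lock_release(task)
    x = torch.randn(2, 1, 64, 64)          # batch from that task's dataset
    target = torch.zeros(2, heads[task].out_features)
    loss = nn.functional.mse_loss(heads[task](backbone(x)), target)
    opt.zero_grad(); loss.backward(); opt.step()
```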
M25-222L: Ark+: Accruing and Reusing Knowledge for Superior and Robust Foundation Models – Researchers at Arizona State University have developed a framework, Ark+, designed to build high-performing foundation models for medical imaging by leveraging knowledge accrued from multiple public datasets with heterogeneous expert annotations. Ark+ supports distributed training across proprietary data sources, adapts to various model architectures and imaging modalities, and ensures robustness against biases to deliver reliable diagnostic performance. Please see Ma et al. – Springer Nature – 2023 for additional information.
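One plausible reading of knowledge accrual and reuse is a shared student with one output head per dataset, plus an exponentially averaged teacher that preserves earlier datasets' knowledge as training cycles through them. The sketch below follows that reading only; dataset names, label counts, and the EMA rate are all placeholders.

```python
# Hedged sketch of cross-dataset knowledge accrual with an EMA teacher;
# not Ark+'s actual design, just one interpretation of the description.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
teacher = copy.deepcopy(student)            # accrues knowledge via EMA
heads = nn.ModuleDict({"chestxray14": nn.Linear(16, 14),    # placeholder
                       "chexpert":    nn.Linear(16, 13)})   # placeholder
opt = torch.optim.Adam(list(student.parameters()) +
                       list(heads.parameters()), lr=1e-4)

for name, head in heads.items():            # cycle through datasets
    x = torch.randn(2, 1, 64, 64)           # batch with that dataset's labels
    y = torch.randint(0, 2, (2, head.out_features)).float()
    task_loss = F.binary_cross_entropy_with_logits(head(student(x)), y)
    # Consistency with the teacher keeps earlier datasets' knowledge alive.
    with torch.no_grad():
        t_feat = teacher(x)
    loss = task_loss + F.mse_loss(student(x), t_feat)
    opt.zero_grad(); loss.backward(); opt.step()
    # Teacher is an exponential moving average of the student.
    with torch.no_grad():
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(0.99).add_(0.01 * sp)
```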
M25-253L: Benchmarking and Boosting of 3D Segmentation Models – Researchers at Arizona State University have developed a comprehensive benchmark that identifies the top performers for 3D single-task segmentation with limited data. It also employs pretraining strategies to develop a multi-task model capable of joint learning from multiple heterogeneous datasets. Pretrained on 16 public datasets comprising 3,000 CT scans annotated for 25 organs and 6 tumors, this model outperforms both Swin UNETR and CLIP-driven Universal.
M26-007L: Test Suite for Chest Radiography – Researchers at Arizona State University have developed a test suite that provides an extensive evaluation framework for foundation models applied to chest radiography. The suite focuses on five critical tasks: novelty detection, segmentation with limited data, anatomical structure matching, pattern retrieval in health and disease, and anatomy understanding. It benchmarks eight state-of-the-art large-scale medical models, leveraging diverse datasets such as COVIDxCXR-3 and ChestX-ray14, to assess their performance, adaptability, and clinical potential in medical imaging.
M26-008L: LAnD: Local Anatomical and Disease Feature Learning Model – Researchers at Arizona State University have developed a framework that employs an alternating knowledge distillation approach between anatomical and disease expert models, enabling effective learning without large datasets, human annotations, or complex pretext tasks. It features a shared student network that synthesizes insights from both experts, leading to improved disease discrimination, anatomical comprehension, and robustness against challenges such as gender-imbalanced data, and making it a versatile and robust solution for comprehensive medical image analysis.
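The alternating distillation scheme can be sketched as follows: a shared student mimics a frozen anatomy expert on even steps and a frozen disease expert on odd steps, with a separate projection per expert so one backbone can match both. The experts here are random stand-ins for pretrained models, and all names are illustrative.

```python
# Sketch of alternating feature distillation from two frozen experts into
# one shared student; placeholder architectures throughout.
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_net():
    return nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())

anatomy_expert, disease_expert = tiny_net().eval(), tiny_net().eval()
student = tiny_net()
# Separate projection per expert so one backbone can mimic both teachers.
proj = nn.ModuleDict({"anatomy": nn.Linear(16, 16),
                      "disease": nn.Linear(16, 16)})
opt = torch.optim.Adam(list(student.parameters()) +
                       list(proj.parameters()), lr=1e-4)

experts = {"anatomy": anatomy_expert, "disease": disease_expert}
for step in range(4):
    name = "anatomy" if step % 2 == 0 else "disease"   # alternate teachers
    x = torch.randn(2, 1, 64, 64)                      # unlabeled images
    with torch.no_grad():
        target = experts[name](x)                      # expert's features
    loss = F.mse_loss(proj[name](student(x)), target)  # feature distillation
    opt.zero_grad(); loss.backward(); opt.step()
```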
M26-080L: Cross-Modal Knowledge Distillation for Chest Radiographic Diagnosis – Researchers at Arizona State University have developed a cross-modal knowledge distillation framework, CREED, for chest radiographic diagnosis. Through three learning objectives, (1) embedding reconstruction to preserve fine-grained language information, (2) diagnostic classification to bridge the modality gap, and (3) KL-divergence minimization to enforce alignment between vision and language embeddings, CREED achieves expert-level performance in chest radiographic diagnosis. Please see Ma et al. – Conference Proceeding – 2025 for additional information.
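The three objectives can be shown on synthetic tensors, as in the hedged sketch below: a vision encoder regresses report embeddings (reconstruction), predicts labels (classification), and minimizes a KL term between the two embedding distributions. The encoder, embedding sizes, and softmax-based alignment are assumptions, not CREED's actual design.

```python
# Minimal sketch of three cross-modal objectives; sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

vision = nn.Sequential(nn.Conv2d(1, 32, 3, 2, 1), nn.ReLU(),
                       nn.AdaptiveAvgPool2d(1), nn.Flatten())  # -> (B, 32)
recon_head = nn.Linear(32, 128)   # reconstruct the text embedding
cls_head = nn.Linear(32, 14)      # diagnostic labels

x = torch.randn(2, 1, 64, 64)                 # chest X-rays (stand-in)
text_emb = torch.randn(2, 128)                # report embeddings (stand-in)
labels = torch.randint(0, 2, (2, 14)).float()

v = vision(x)
# (1) Embedding reconstruction: regress the report embedding from the image.
recon = recon_head(v)
recon_loss = F.mse_loss(recon, text_emb)
# (2) Diagnostic classification bridges the modality gap with label signal.
cls_loss = F.binary_cross_entropy_with_logits(cls_head(v), labels)
# (3) KL-divergence aligns the two embedding distributions (softmax over the
# feature dimension is one simple way to make them comparable).
kl_loss = F.kl_div(F.log_softmax(recon, dim=1),
                   F.softmax(text_emb, dim=1), reduction="batchmean")
loss = recon_loss + cls_loss + kl_loss
loss.backward()
```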
M26-082L: Foundation Model Evaluation Suite for Chest Radiography – Researchers at Arizona State University have developed a comprehensive test suite that offers a systematic framework for assessing foundation models on six critical chest radiography tasks: novelty detection (recognizing abnormality via anomaly detection), organ and lesion segmentation, few-shot segmentation, anatomical structure matching, retrieval of anatomical patterns, and anatomical understanding without excessive training. By evaluating multiple recent models across diverse datasets and settings, it provides insights into their clinical strengths and limitations, fostering robust generalization in healthcare applications.