Broder, J. S. Diagnostic Imaging for the Emergency Physician (ed. Broder, J. S.) Ch. 5, 185–296 (W. B. Saunders, 2011).
Çallı, E., Sogancioglu, E., van Ginneken, B., van Leeuwen, K. G. & Murphy, K. Deep learning for chest X-ray analysis: a survey. Med. Image Anal. 72, 102125 (2021).
Tajbakhsh, N., Roth, H., Terzopoulos, D. & Liang, J. Guest editorial annotation-efficient deep learning: the holy grail of medical imaging. IEEE Trans. Med. Imaging 40, 2526–2533 (2021).
Hosny, A., Parmar, C., Quackenbush, J., Schwartz, L. H. & Aerts, H. J. Artificial intelligence in radiology. Nat. Rev. Cancer 18, 500–510 (2018).
Zhou, Y. et al. A foundation model for generalizable disease detection from retinal images. Nature 622, 156–163 (2023).
Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T. J. & Zou, J. A visual–language foundation model for pathology image analysis using medical Twitter. Nat. Med. 29, 2307–2316 (2023).
Christensen, M., Vukadinovic, M., Yuan, N. & Ouyang, D. Vision–language foundation model for echocardiogram interpretation. Nat. Med. 30, 1481–1488 (2024).
Tiu, E. et al. Expert-level detection of pathologies from unannotated chest X-ray images via self-supervised learning. Nat. Biomed. Eng. 6, 1399–1406 (2022).
Zhang, X., Wu, C., Zhang, Y., Xie, W. & Wang, Y. Knowledge-enhanced visual-language pre-training on chest radiology images. Nat. Commun. 14, 4542 (2023).
Sellergren, A. B. et al. Simplified transfer learning for chest radiography models using less data. Radiology 305, 454–465 (2022).
Azizi, S. et al. Robust and data-efficient generalization of self-supervised machine learning for diagnostic imaging. Nat. Biomed. Eng. 7, 756–779 (2023).
Xu, S. et al. ELIXR: towards a general purpose X-ray artificial intelligence system through alignment of large language models and radiology vision encoders. Preprint at arxiv.org/abs/2308.01317 (2023).
Basdevant, A. et al. Towards a framework for openness in foundation models: proceedings from the Columbia Convening on openness in artificial intelligence. Preprint at arxiv.org/abs/2405.15802 (2024).
Ma, D., Pang, J., Gotway, M. B. & Liang, J. Foundation Ark: accruing and reusing knowledge for superior and robust performance. In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention (eds Greenspan, H. et al.) 651–662 (Springer, 2023).
Liu, Z. et al. Swin transformer: hierarchical vision transformer using shifted windows. In Proc. IEEE/CVF International Conference on Computer Vision (eds Hassner, T. et al.) 10012–10022 (IEEE, 2021).
Velan, S. S. Benchmarking and Boosting Localizers for Chest X-rays. Master’s thesis, Arizona State Univ. (2024).
Saravanan, M. Benchmarking and Boosting of 3D Segmentation Models. Master’s thesis, Arizona State Univ. (2024).
Islam, N. U. et al. Foundation X: integrating classification, localization, and segmentation through lock-release pretraining strategy for chest X-ray analysis. In Proc. IEEE/CVF Winter Conference on Applications of Computer Vision (eds Biswas, S. et al.) 3647–3656 (IEEE, 2025).
Wang, X. et al. ChestX-ray8: hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (eds Cucchiara, R. et al.) 2097–2106 (IEEE, 2017).
Pérez-García, F. et al. Exploring scalable medical image encoders beyond text supervision. Nat. Mach. Intell. 7, 119–130 (2025).
Ma, D. et al. Benchmarking and boosting transformers for medical image classification. In Proc. MICCAI Workshop on Domain Adaptation and Representation Transfer (eds Kamnitsas, K. et al.) 12–22 (Springer, 2022).
Cho, K. et al. CheSS: chest X-ray pre-trained model via self-supervised contrastive learning. J. Digit. Imaging 36, 902–910 (2023).
Kang, M. et al. Label-assemble: leveraging multiple datasets with partial labels. In Proc. 20th International Symposium on Biomedical Imaging (eds Salvado, O. et al.) 1–5 (IEEE, 2023).
Lee, J. et al. Deep learning for rare disease: a scoping review. J. Biomed. Inform. 135, 104227 (2022).
Wang, Y., Yao, Q., Kwok, J. T. & Ni, L. M. Generalizing from a few examples: a survey on few-shot learning. ACM Comput. Surv. 53, 1–34 (2020).
Holste, G. et al. CXR-LT: multi-label long-tailed classification on chest X-rays. PhysioNet 5, 19 (2023).
Zhou, S. K. et al. A review of deep learning in medical imaging: imaging traits, technology trends, case studies with progress highlights, and future promises. Proc. IEEE 109, 820–838 (2021).
Wang, D. et al. A real-world dataset and benchmark for foundation model adaptation in medical image classification. Sci. Data 10, 574 (2023).
Zhang, L. et al. Generalizing deep learning for medical image segmentation to unseen domains via deep stacked transformation. IEEE Trans. Med. Imaging 39, 2531–2540 (2020).
Cohen, J. P. et al. TorchXRayVision: a library of chest X-ray datasets and models. In Proc. International Conference on Medical Imaging with Deep Learning (eds Konukoglu, E. et al.) 231–249 (PMLR, 2022).
Glocker, B., Jones, C., Roschewitz, M. & Winzeck, S. Risk of bias in chest radiography deep learning foundation models. Radiol. Artif. Intell. 5, e230060 (2023).
Seyyed-Kalantari, L., Zhang, H., McDermott, M. B., Chen, I. Y. & Ghassemi, M. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nat. Med. 27, 2176–2182 (2021).
Larrazabal, A. J., Nieto, N., Peterson, V., Milone, D. H. & Ferrante, E. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proc. Natl Acad. Sci. USA 117, 12592–12594 (2020).
Irvin, J. et al. CheXpert: a large chest radiograph dataset with uncertainty labels and expert comparison. In Proc. AAAI Conference on Artificial Intelligence, Vol. 33 (eds Hentenryck, P. V. & Zhou, Z. H.) 590–597 (AAAI, 2019).
Wang, L., Lin, Z. Q. & Wong, A. COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci. Rep. 10, 19549 (2020).
Liu, F. et al. A medical multimodal large language model for future pandemics. npj Digit. Med. 6, 226 (2023).
Xiao, J., Bai, Y., Yuille, A. & Zhou, Z. Delving into masked autoencoders for multi-label thorax disease classification. In Proc. IEEE/CVF Winter Conference on Applications of Computer Vision (eds Crandall, D. et al.) 3588–3600 (IEEE, 2023).
Van der Maaten, L. & Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008).
Acosta, J. N., Falcone, G. J., Rajpurkar, P. & Topol, E. J. Multimodal biomedical AI. Nat. Med. 28, 1773–1784 (2022).
Soenksen, L. R. et al. Integrated multimodal artificial intelligence framework for healthcare applications. npj Digit. Med. 5, 149 (2022).
Ye, M., Fang, X., Du, B., Yuen, P. C. & Tao, D. Heterogeneous federated learning: state-of-the-art and research challenges. ACM Comput. Surv. 56, 1–44 (2023).
Nguyen, H. Q. et al. VinDr-CXR: an open dataset of chest X-rays with radiologist’s annotations. Sci. Data 9, 429 (2022).
Stein, A. et al. RSNA Pneumonia Detection Challenge. Kaggle (2018).
Jaeger, S. et al. Two public chest X-ray datasets for computer-aided screening of pulmonary diseases. Quant. Imaging Med. Surg. 4, 475 (2014).
Johnson, A. E. et al. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Sci. Data 6, 317 (2019).
Tajbakhsh, N. et al. Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans. Med. Imaging 35, 1299–1312 (2016).
Zawacki, A. et al. SIIM-ACR pneumothorax segmentation. Kaggle (2019).
Sogancioglu, E. et al. Nodule detection and generation on chest X-rays: NODE21 challenge. IEEE Trans. Med. Imaging 43, 2839–2853 (2024).
Goldbaum, M., Kermany, D. & Zhang, K. Labeled optical coherence tomography (OCT) and chest X-ray images for classification. Mendeley Data (2018).
Liu, Y., Wu, Y.-H., Ban, Y., Wang, H. & Cheng, M.-M. Rethinking computer-aided tuberculosis diagnosis. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (eds Liu, C. et al.) 2646–2655 (IEEE, 2020).
Khosla, P. et al. Supervised contrastive learning. In Proc. 33rd Advances in Neural Information Processing Systems (eds Larochelle, H. et al.) 18661–18673 (Curran Associates, 2020).
Oquab, M. et al. DINOv2: learning robust visual features without supervision. Trans. Mach. Learn. Res. (2024).
Xie, Z. et al. SimMIM: a simple framework for masked image modeling. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (eds Dana, K. et al.) 9653–9663 (IEEE, 2022).
Chen, X., Fan, H., Girshick, R. & He, K. Improved baselines with momentum contrastive learning. Preprint at arxiv.org/abs/2003.04297 (2020).
Cohen, J. P., Hashir, M., Brooks, R. & Bertrand, H. On the limits of cross-domain generalization in automated X-ray prediction. In Proc. Medical Imaging with Deep Learning (eds Arbel, T. et al.) 136–155 (PMLR, 2020).
Unal, I. Defining an optimal cut-point value in ROC analysis: an alternative approach. Comput. Math. Methods Med. 2017, 3762651 (2017).
Jennewein, D. M. et al. The Sol supercomputer at Arizona State University. In Proc. Practice and Experience in Advanced Research Computing (eds Sinkovits, R. & Romanella, A.) 296–301 (ACM, 2023).
Song, C., Granqvist, F. & Talwar, K. FLAIR: federated learning annotated image repository. In Proc. 35th Advances in Neural Information Processing Systems (eds Koyejo, S. et al.) 37792–37805 (Curran Associates, 2022).
Yan, R. et al. Label-efficient self-supervised federated learning for tackling data heterogeneity in medical imaging. IEEE Trans. Med. Imaging 42, 1932–1943 (2023).