Big Self-Supervised Models Advance Medical Image Classification

    January 2021 · arXiv (Cornell University)
    Shekoofeh Azizi, Basil Mustafa, Fiona Ryan, Zachary Beaver, Jan Freyberg, Jonathan Deaton, Aaron Loh, Alan Karthikesalingam, Simon Kornblith, Ting Chen, Vivek Natarajan, Mohammad Norouzi
    TLDR: Self-supervised learning improves medical image classification accuracy.
    The study explored self-supervised learning as a pretraining strategy for medical image classification, focusing on dermatology skin-condition classification and multi-label chest X-ray classification. By performing self-supervised pretraining first on ImageNet and then on unlabeled domain-specific medical images, the researchers significantly improved classifier accuracy. They introduced a novel Multi-Instance Contrastive Learning (MICLe) method, which uses multiple images of the same patient case to construct more informative positive pairs for contrastive learning. This approach yielded a 6.7% improvement in top-1 accuracy for dermatology and a 1.1% improvement in mean AUC for chest X-ray classification, surpassing strong supervised baselines. The study also found that large self-supervised models were robust to distribution shift and remained effective when only limited labeled medical images were available.
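    The core MICLe idea, forming contrastive positive pairs from two distinct images of the same patient case rather than from two augmentations of one image, can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation; the helper names (`micle_pairs`, `nt_xent_loss`) are hypothetical, and the loss shown is the standard NT-Xent contrastive objective that MICLe builds on.

```python
import numpy as np

def micle_pairs(patient_images, rng):
    """For each patient case, pick two distinct images to form a positive
    pair (the MICLe idea). With a single image, fall back to reusing it,
    standing in for two augmented views. Hypothetical helper."""
    pairs = []
    for imgs in patient_images:
        if len(imgs) >= 2:
            i, j = rng.choice(len(imgs), size=2, replace=False)
            pairs.append((imgs[i], imgs[j]))
        else:
            pairs.append((imgs[0], imgs[0]))
    return pairs

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent contrastive loss: z1[k] and z2[k] are embeddings of a
    positive pair; all other embeddings in the batch act as negatives."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    # Row k's positive sits at k+n (first half) or k-n (second half).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

In the paper's pipeline this objective would be applied on top of image embeddings from a network already pretrained with SimCLR-style self-supervision; the sketch above only illustrates how multi-instance positive pairs plug into a contrastive loss.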