@InProceedings{Hasan_2024_CVPR,
    author    = {Hasan, Yumnah and Khan, Talhat and De Bulnes, Darian Reyes Fernandez and Albarracin, Juan F H and Ryan, Conor},
    title     = {A Comparative Analysis of Implicit Augmentation Techniques for Breast Cancer Diagnosis Using Multiple Views},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {2345-2354}
}
A Comparative Analysis of Implicit Augmentation Techniques for Breast Cancer Diagnosis Using Multiple Views
Abstract
The design of effective deep-learning methods for medical image analysis is challenging given the scarcity of balanced datasets, which leads to biased results and overfitting. Data augmentation mitigates these limitations by increasing the diversity and quantity of training data, but the selection of an appropriate augmentation method depends strongly on the problem domain. In this study, we investigate the effects of various feature-level augmentation methods on the performance of deep-learning-based Breast Cancer (BC) diagnosis using mammographic images in the Craniocaudal (CC) and Mediolateral Oblique (MLO) views. Through quantitative performance evaluations, we systematically assess the impact of augmentation techniques on classification using two feature extraction techniques, namely Haralick features and deep GoogLeNet features. Our experiments, conducted on the Digital Database for Screening Mammography (DDSM) and the Wisconsin Breast Cancer (WBC) datasets, reveal that Mixup combined with STEM stands out as the most promising approach across a wide range of scenarios.
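The abstract does not define Mixup; in its standard formulation, Mixup draws a coefficient from a Beta distribution and convexly combines two samples and their labels. A minimal sketch of that standard formulation, applied to feature vectors as in feature-level augmentation (the paper's exact feature-level setup and its combination with STEM are not specified here; the function name and signature are illustrative):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard Mixup on a pair of feature vectors and one-hot labels.

    A hedged sketch of the generic technique, not the paper's exact
    pipeline: alpha and the Beta-distribution sampling follow the
    common Mixup formulation.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)       # mixing coefficient lambda ~ Beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2      # interpolated feature vector
    y = lam * y1 + (1 - lam) * y2      # interpolated soft label
    return x, y, lam
```

Applied at the feature level, the same interpolation would mix extracted descriptors (e.g. Haralick or deep-network features) rather than raw pixels.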