Automatic Detection of Breast Lesions in Automated 3D Breast Ultrasound with Cross-Organ Transfer Learning

08/20/2024
Source: eurekalert.org

Image caption: A comparative experiment was designed to validate our hypothesis, as shown in the figure. We used lung cancer and ABUS datasets to pre-train our previously proposed architecture, focusing on learning the common features of the two diseases and the empirical parameters obtained from the corresponding detection tasks. This yielded two pretrained models from the two different datasets: pretrained models A and B. We then switched the pretraining datasets for the two tasks to the same ABUS training set and continued to train the two pretrained models. After this training, we obtained Model D, which carries lung cancer detection experience, and Model C, which has only ABUS breast cancer detection experience. Finally, we tested the performance of models C and D on the same test set to validate our ideas.

Credit: Beijing Zhongke Journal Publishing Co. Ltd.
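
To make the protocol concrete, the following is a minimal, runnable sketch of the four-model comparison described in the caption. The detector, trainer, and dataset handles are trivial stand-ins of our own invention, not the authors' architecture or data.

```python
# A minimal sketch of the four-model comparison (models A, B, C, D).
# build_detector/train/evaluate are placeholder stand-ins, NOT the
# authors' actual code; they only record the training sequence.

def build_detector():
    return {"weights": "random-init", "history": []}

def train(model, dataset):
    model["history"].append(dataset)  # record what the model has seen
    return model

def evaluate(model, test_set):
    return f"tested on {test_set} after training on {model['history']}"

lung_train, abus_train, abus_test = "lung-CT", "ABUS-train", "ABUS-test"

# Stage 1: two pretrained models from two different source datasets.
model_a = train(build_detector(), lung_train)   # pretrained on lung nodules
model_b = train(build_detector(), abus_train)   # pretrained on ABUS only

# Stage 2: continue training both on the SAME ABUS training set.
model_d = train(model_a, abus_train)  # carries lung-nodule experience
model_c = train(model_b, abus_train)  # ABUS-only baseline

# Stage 3: compare both on the same held-out ABUS test set.
print(evaluate(model_c, abus_test))
print(evaluate(model_d, abus_test))
```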

Ultrasonography serves as a valuable supplement to mammography in breast cancer screening. According to Berg et al., adding ultrasound to mammography increased the diagnostic yield from 7.6 to 11.8 cancers per 1000 women screened. A recent study showed that ultrasound outperformed mammography in breast cancer screening for high-risk Chinese women, at a lower economic cost. Moreover, Kelly et al. demonstrated that combining ultrasound with mammography significantly increased breast cancer detection sensitivity when using an automated scanning ultrasound system. Conventional two-dimensional (2D) ultrasound has been widely used because it can dynamically display 2D images of a region of interest (ROI) in real time. However, when clinicians need a view of three-dimensional (3D) anatomic structures, they must mentally reconstruct the volume from planar 2D images, and the results depend heavily on their experience and knowledge.

To address this problem, 3D ultrasound has been proposed to help diagnosticians understand spatial anatomical relationships. An Automated Breast Ultrasound System (ABUS) is a typical 3D ultrasound system capable of covering breast tissue in all directions. It scans layer by layer and provides ultra-high-speed imaging to form a clear volume image. However, implementing ABUS in screening will substantially increase radiologists' workload because reading ABUS screening exams is more time-consuming than reading mammograms. In addition, lesions may be overlooked more easily on volumetric ABUS images than on mammography, as cancers can resemble other typical hypoechoic structures and benign lesions. Therefore, techniques must be developed to help radiologists read ABUS images more efficiently while avoiding overlooked lesions. Various studies have shown that computer-aided detection (CAD) can improve reader performance in breast cancer detection.

Currently, most CAD software programs are designed based on deep convolutional neural networks (DCNNs). The methods and strategies for cancer detection using DCNNs share several common characteristics. First, a region of interest is identified, which often requires accurate lesion segmentation or a bounding box surrounding the lesion. Second, features are extracted describing the boundary, shape, posterior shadowing, spiculation or retraction, and texture. These features are subsequently used to train classifiers such as linear discriminant analysis, support vector machines, or artificial neural networks. However, a common issue with these methods is the use of small datasets for machine or deep learning, which can lead to overfitting and data-imbalance problems. Outliers may also appear in the features or response variables, further reducing model stability. Therefore, it is essential to consider the size and quality of datasets when using these methods for cancer detection.
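
As an illustration of this classical two-stage pipeline, here is a small scikit-learn sketch that feeds hand-crafted ROI features to a support vector machine. The feature names and values are made-up placeholders, not measurements from any real lesion.

```python
# Classical two-stage CAD sketch: hand-crafted ROI features -> SVM classifier.
# All feature values below are fabricated placeholders for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: features from one ROI, e.g. [boundary irregularity, compactness,
# posterior shadowing, spiculation score, texture contrast].
X = np.array([
    [0.82, 0.31, 0.75, 0.90, 0.66],
    [0.75, 0.40, 0.68, 0.81, 0.59],
    [0.21, 0.88, 0.10, 0.05, 0.30],
    [0.18, 0.92, 0.15, 0.08, 0.25],
])
y = np.array([1, 1, 0, 0])  # 1 = suspicious lesion, 0 = benign-looking

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[0.70, 0.45, 0.60, 0.77, 0.55]]))  # classify a new ROI
```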

Several methods have been proposed to address the issues arising from small datasets, including data augmentation, small-model training, and data reuse. Because big data repositories have become increasingly popular, transfer learning has emerged as a leading approach: it utilizes existing datasets that are related to, but not identical to, the target area of interest. Moreover, self-supervised representation learning using contrastive learning has achieved state-of-the-art performance in training deep learning image models. Modern batch contrastive methods have replaced traditional contrastive losses such as the triplet, max-margin, and N-pair losses, and significantly outperform them. These losses are used to learn powerful representations, typically in a supervised setting in which labels guide the choice of positive and negative pairs.
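
For readers who want a concrete form of "labels guide the choice of positive and negative pairs," below is a minimal PyTorch sketch of a supervised contrastive (SupCon-style) loss. It is a generic formulation under our own assumptions, not the loss implementation from the study itself.

```python
# Supervised contrastive (SupCon-style) loss sketch: embeddings sharing a
# label are pulled together; all other samples act as negatives.
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    # features: (N, d) embeddings; labels: (N,) integer class ids.
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature            # pairwise similarity
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    # Positives: samples sharing a label, excluding each anchor itself.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Exclude self-similarity from the softmax denominator.
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Mean log-probability of positives per anchor, negated and averaged.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts)
    return loss.mean()

# Toy usage: eight embeddings with binary labels (e.g., normal vs. nodule).
feats = torch.randn(8, 16)
labels = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
print(supcon_loss(feats, labels))
```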

We observed that using other datasets from the same organ (i.e., the breast) for transfer learning was useful; however, medical datasets often involve patient privacy, so obtaining several publicly available breast datasets for pretraining was impossible. Meanwhile, owing to early popularization and a broad screening scope, some organs (e.g., the lungs) have a relatively large number of publicly available datasets. We also observed that although lung and breast cancers have different pathologies and are detected with different imaging modalities, they share similar characteristics at the image level, so experience in detecting and localizing pathological tissue in one organ could serve as a reference for the other.

Figure: Breast cancer cases from ABUS (first row) and pulmonary nodule cases from the LIDC/IDRI database (second row), shown in the transverse plane (first column), sagittal plane (second column), and coronal plane (third column).
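
One plausible mechanical form of such cross-organ reuse is copying compatible backbone weights from a lung-trained detector into a breast detector while re-initializing the task-specific head. The tiny model and module names below are assumptions for illustration, not the framework proposed in the paper.

```python
# Cross-organ weight-transfer sketch: reuse "backbone" weights learned on
# lung-nodule detection; the detection head is re-learned for ABUS.
# TinyDetector and its module names are illustrative assumptions.
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Conv3d(8, 2, 1)  # task-specific, re-learned per organ

def transfer_backbone(target, source_state):
    own = target.state_dict()
    # Copy only backbone weights whose names and shapes match; the head
    # keeps its fresh initialization for the new organ.
    kept = {k: v for k, v in source_state.items()
            if k.startswith("backbone.") and k in own and v.shape == own[k].shape}
    own.update(kept)
    target.load_state_dict(own)
    return target

lung_model = TinyDetector()  # pretend: already trained on lung CT
breast_model = transfer_backbone(TinyDetector(), lung_model.state_dict())
```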

Therefore, this study proposes a deep learning framework for breast cancer detection with cross-organ transfer learning, generalizing experience from pulmonary nodule detection to breast cancer detection. Moreover, we have applied a supervised contrastive learning method to separate the feature vectors of normal breast tissue and breast nodules in the feature space. We have also investigated how much this method improves the model's breast cancer detection performance. Our contributions can be summarized as follows:

  1. We have examined the application of transfer learning across organs, applied the experience of lung nodule detection to breast cancer detection, and demonstrated its effectiveness.
  2. We have also applied contrastive learning algorithms based on BI-RADS categories and examined how this method improves the model's performance.


Journal

Virtual Reality & Intelligent Hardware

Article Title

Automatic detection of breast lesions in automated 3D breast ultrasound with cross-organ transfer learning

Article Publication Date

27-Jun-2024

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.
