DSNet: Automatic dermoscopic skin lesion segmentation

Figure: Qualitative results of DSNet

Abstract

Automatic segmentation of skin lesions is considered a crucial step in Computer-aided Diagnosis (CAD) systems for melanoma detection. Despite its significance, skin lesion segmentation remains an unsolved challenge because lesions vary in color, texture, and shape and often have indistinguishable boundaries. In this study, we present a new automatic semantic segmentation network for robust skin lesion segmentation, named Dermoscopic Skin Network (DSNet). To reduce the number of parameters and keep the network lightweight, we use depth-wise separable convolution instead of standard convolution to project the learned discriminating features onto the pixel space at different stages of the encoder. We also implement a U-Net and a Fully Convolutional Network (FCN8s) as baselines for comparison with the proposed DSNet. We evaluate our model on two publicly available datasets, ISIC-2017 and PH2, obtaining a mean Intersection over Union (mIoU) of 77.5% and 87.0%, respectively, which outperforms the ISIC-2017 challenge winner by 1.0% in terms of mIoU. On the ISIC-2017 dataset, our network also outperforms U-Net and FCN8s by 3.6% and 6.8% mIoU, respectively. DSNet outperforms the other methods discussed in the article and produces better segmentation masks on both test datasets, which can lead to better performance in melanoma detection. Our trained model, source code, and predicted masks are made publicly available.
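The parameter saving from swapping standard convolution for depth-wise separable convolution can be sketched with simple counting: a standard k x k convolution needs k * k * C_in * C_out weights, while the separable version needs only k * k * C_in (depthwise step) plus C_in * C_out (1 x 1 pointwise step). A minimal sketch follows; the 3 x 3 kernel and 256-channel sizes are illustrative assumptions, not DSNet's actual layer configuration, and biases are omitted.

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def sep_conv_params(k, c_in, c_out):
    # Depth-wise separable convolution: one k x k filter per input channel
    # (depthwise step) followed by a 1 x 1 pointwise convolution.
    return k * k * c_in + c_in * c_out

# Illustrative example: a 3x3 convolution mapping 256 -> 256 channels.
standard = conv_params(3, 256, 256)       # 589,824 weights
separable = sep_conv_params(3, 256, 256)  # 67,840 weights
print(standard, separable, round(standard / separable, 1))
```

For this illustrative layer, the separable form uses roughly 8.7x fewer weights, which is the kind of reduction that makes the overall network lightweight.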

Publication
Computers in Biology and Medicine, Volume 120, 2020, 103738, ISSN 0010-4825
