In this section, we present the methodology adopted in our proposed approach for segmenting lesions from skin in the presence of artifacts such as skin lines, vessels, gel and hairs. The proposed algorithm consists of three stages: a pre-processing stage for image enhancement along with hair detection and inpainting for artifact removal; segmentation of the lesion area using a wavelet based approach; and finally a post-processing stage for refining the segmentation results. The flow diagram of the proposed methodology is presented in Fig. 4. The proposed system takes a dermoscopic image as input, and color enhancement along with thresholding is performed at the pre-processing stage. Hairs, the most troublesome artifact, are also removed to improve the segmentation results. This is accomplished by hair enhancement, followed by segmentation to detect hairs, and finally removal by inpainting the hair pixels. The system then detects the four dark corners to eliminate undesired details and extract the area of interest for segmentation. In the final step the lesion is segmented from the background skin by using the wavelet transform.
Input image to system
In this work images from the PH2 data-set are used as input to the proposed system. The images are well diversified in nature and contain a number of artifacts which make segmentation more challenging. The PH2 data-set contains a total of 200 dermoscopic images obtained from the Hospital Pedro Hispano database, covering different lesion types such as melanocytic nevi and melanomas. These are 8 bit RGB color images with dimensions 768 × 560 pixels. All experimentation, analysis and results are based on these dermoscopic images; some examples are shown in Fig. 5.
Image pre-processing
The first step is to prepare the image for segmentation. This is done by applying a number of pre-processing techniques to the dermoscopic images. The proposed pre-processing stage involves several steps, which are described below.
Active contour
In this section, we present our technique to define the active contour, i.e. the active area of interest to work with. This step is required because dermoscopic images contain a rounded dark background at each corner. An active contour is a model which describes the boundaries of a shape in an image. It is particularly suited to problems where the approximate shape of the boundary is already known. However, active contours also have a few drawbacks: they are sensitive to local minima, minute features are often ignored, and their accuracy depends on the convergence policy (Qian et al. 2013). An active contour, or simple elastic snake, can be represented by the energy function defined over n points \(v_{i}\), where \(i=0,1,2,3,\ldots ,n\), as
$$ E_{snake} = \int _{0}^{1}\big (E_{internal}(v(s))+ E_{image}(v(s))+ E_{con}(v(s)) \big )\, ds $$
(1)
The energy function of the snake is the sum of its internal and external energy. The internal energy \(E_{internal}\) accounts for the continuity and smoothness of the contour, while the external energy is the sum of the forces due to the image itself, \(E_{image}\), and a constraint force introduced by the user, \(E_{con}\). The basic purpose of the contour is to find the area of interest and thereby exclude any unwanted area. In the proposed research, the area inside the circular boundary is the area of interest, and thus the dark corners around the image can be eliminated by finding the active contour. Figure 8 shows the dark corners around the images that affect the segmentation process. Once the active contour is found, the image is converted into a binary image which is used as a binary mask to extract the area of interest.
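As an illustration of this step, the sketch below fits a snake to the circular field of view and converts the converged contour into a binary mask using scikit-image's active_contour. The helper name, parameter values and file path are illustrative assumptions, not the exact settings used in our implementation.

```python
import numpy as np
from skimage import io, color, filters
from skimage.segmentation import active_contour
from skimage.draw import polygon2mask

def corner_free_mask(rgb_image, n_points=200):
    """Fit a snake to the circular field of view and return a binary mask
    of the area of interest (hypothetical helper, illustrative parameters)."""
    gray = color.rgb2gray(rgb_image)
    smoothed = filters.gaussian(gray, sigma=3)

    # Initial snake: a circle slightly smaller than the image frame.
    rows, cols = gray.shape
    s = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([rows / 2 + 0.48 * rows * np.sin(s),
                            cols / 2 + 0.48 * cols * np.cos(s)])

    # The snake converges toward the boundary of the bright lens circle.
    snake = active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)

    # Binary mask of the region enclosed by the converged contour.
    return polygon2mask(gray.shape, snake)

# Usage sketch: mask out the dark corners before further processing.
# image = io.imread("IMD003.bmp")   # a PH2 image (path is illustrative)
# mask = corner_free_mask(image)
# roi = image * mask[..., None]
```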
Color enhancement
In this section, we present our approach for selecting a color enhancement technique which improves segmentation results for dermoscopic images. The simplest method is to compute the luminance by linearly combining the RGB values into a single value using the following formula.
$$ Luminance=R\times 0.2989+G\times 0.5870+B\times 0.1140 $$
(2)
The outputs of different color enhancement techniques are shown in Fig. 6; however, after experimentation it has been found that the blue channel of the RGB image produces better results as compared to the other techniques described below.
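A minimal sketch of the luminance conversion of Eq. (2), assuming the image is an H × W × 3 RGB array; the individual channels compared in Fig. 6 are obtained by simple slicing.

```python
import numpy as np

def luminance(rgb):
    """Eq. (2): weighted combination of the R, G and B channels."""
    rgb = rgb.astype(np.float64)
    return 0.2989 * rgb[..., 0] + 0.5870 * rgb[..., 1] + 0.1140 * rgb[..., 2]

# The blue channel finally retained is simply blue = rgb[..., 2].
```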
RGB with highest entropy
Entropy provides a measure of image smoothness: the higher the entropy, the larger the number of gray levels used. In this work the entropy of each color channel is computed as:
$$ E(c)= -\sum _{i=0}^{L-1}p(i)\cdot \log _{2}p(i) $$
(3)
where p(i) is the probability of occurrence of intensity i in the image and c denotes the respective color channel. After the entropy computation, the RGB color component with the highest entropy is selected as:
$$ c^{*} = arg\,max_{c}\, E(c) $$
(4)
where the \(arg\,max(\cdot )\) function returns the color channel with maximum entropy and \(c^{*}\) denotes the selected RGB color component.
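The entropy-based selection of Eqs. (3) and (4) can be sketched as follows, assuming an 8 bit RGB array; the helper names are illustrative.

```python
import numpy as np

def channel_entropy(channel, levels=256):
    """Eq. (3): Shannon entropy of one 8-bit color channel."""
    hist, _ = np.histogram(channel, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                      # avoid log2(0)
    return -np.sum(p * np.log2(p))

def highest_entropy_channel(rgb):
    """Eq. (4): index of the R, G or B channel with maximum entropy."""
    entropies = [channel_entropy(rgb[..., c]) for c in range(3)]
    return int(np.argmax(entropies))
```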
L*a*b color selection
The L*a*b* color space, decomposed into its L, a and b components, has also been examined during the analysis. L*a*b* refers to the CIE 1976 (L*, a*, b*) color space, where L represents lightness and a and b are the color-opponent dimensions.
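A short sketch of this decomposition using scikit-image, assuming rgb_image is an H × W × 3 array:

```python
from skimage import color

# Decompose an RGB image into the L*, a* and b* planes for comparison.
lab = color.rgb2lab(rgb_image)
L_plane, a_plane, b_plane = lab[..., 0], lab[..., 1], lab[..., 2]
```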
Blue color selection
This phase analyzes the RGB color space of dermoscopic images for the segmentation of the lesion from the skin. After detailed analysis, the color enhancement based on the blue component is selected for further processing, since it gives better segmentation results as compared to the other techniques. The blue component is used because it provides a clear separation between the lesion and the normal skin.
Gray thresholding
In this approach, the image is first converted into a gray scale image and then thresholding is applied. In this work Otsu's method is utilized, which exhaustively searches for the threshold that minimizes the intra-class variance:
$$ \sigma _{w}^{2}(t) = \omega _{1}(t)\sigma _{1}^{2}(t)+\omega _{2}(t)\sigma _{2}^{2}(t) $$
(5)
where the weights \(\omega _{1}\), \(\omega _{2}\) are the probabilities of the two classes of pixels separated by the threshold t, and \(\sigma _{1}^{2}\), \(\sigma _{2}^{2}\) are the variances of these two classes. The purpose of this step is to distribute the intensities more evenly across the image and to accommodate shadows and large intensity variations which can affect the segmentation process. It is observed during experimentation that thresholding improves the accuracy by 3–4 %.
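A minimal sketch of the gray thresholding step using scikit-image's Otsu implementation; the helper name is illustrative.

```python
from skimage import color, filters

def gray_threshold(rgb_image):
    """Convert to gray scale and binarize with Otsu's threshold (Eq. 5)."""
    gray = color.rgb2gray(rgb_image)
    t = filters.threshold_otsu(gray)   # minimizes the intra-class variance
    return gray > t, t
```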
Hair removal
Skin hairs frequently appear in dermoscopic images on the background skin. Hairs also partly cover the lesions, which interferes with reliable lesion segmentation. Therefore, hairs should be detected and excluded from the dermoscopic image before the skin lesion segmentation procedure begins. The hair removal process involves three steps, i.e. hair enhancement, hair segmentation and hair in-painting. A number of hair removal methods are discussed in the literature (Abbasi et al. 2004). During the experiments it has been found that simple morphological operations and directional filters are the simplest techniques for hair removal. However, for morphological processing there is a trade-off between edge blurring and the size of the structuring elements (Gonzalez and Woods 2002). Due to this trade-off, hair detection based on directional filters gives better results.
Hair enhancement
Hairs are enhanced before segmentation and removal. To accomplish this task, line directional filters are applied, built as the difference of the following two Gaussian filters:
$$ g(x,y)= G_{1}(x,y)-G_{2}(x,y)$$
(6)
$$\begin{aligned} g(x,y) &= k_{1}\,e^{ -\left( \frac{x^{2}}{2\sigma _{x1}^{2}}+\frac{y^{2}}{2\sigma _{y1}^{2}}\right) }-k_{2}\,e^{ -\left( \frac{x^{2}}{2\sigma _{x2}^{2}}+\frac{y^{2}}{2\sigma _{y2}^{2}}\right) } \end{aligned}$$
(7)
where \(k_{1}\) and \(k_{2}\) are constants. The rotation of g(x, y) by an angle \(\phi _{i}\) is given by
$$g_{\phi _{i}}(x',y') = g(x,y)$$
(8)
where \(x' = x\cos \phi _{i}+y\sin \phi _{i}\) and \(y' = y\cos \phi _{i}-x\sin \phi _{i}\).
The response of each filter for any input image is given by
$$R_{i}(x,y) = g_{\phi _{i}}(x,y)\otimes I(x,y)$$
(9)
where \(\otimes \) denotes spatial convolution. This step is depicted in Fig. 7b.
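A sketch of the directional difference-of-Gaussians filter bank of Eqs. (6)–(9); the kernel size, sigma values and number of orientations are illustrative assumptions, not the exact parameters used in our experiments.

```python
import numpy as np
from scipy.ndimage import convolve

def directional_dog_kernel(size, phi, s_x1=1.0, s_y1=4.0, s_x2=2.0, s_y2=8.0,
                           k1=1.0, k2=1.0):
    """Difference-of-Gaussians line kernel of Eq. (7), rotated by angle phi
    as in Eq. (8). Sigma values are illustrative assumptions."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    # Rotated coordinates x', y' of Eq. (8).
    xr = x * np.cos(phi) + y * np.sin(phi)
    yr = y * np.cos(phi) - x * np.sin(phi)
    g1 = k1 * np.exp(-(xr**2 / (2 * s_x1**2) + yr**2 / (2 * s_y1**2)))
    g2 = k2 * np.exp(-(xr**2 / (2 * s_x2**2) + yr**2 / (2 * s_y2**2)))
    kernel = g1 - g2
    return kernel - kernel.mean()      # zero mean: flat skin gives no response

def hair_response(gray, n_angles=8, size=15):
    """Eq. (9): maximum response over a bank of rotated line filters."""
    gray = gray.astype(np.float64)
    responses = [convolve(gray, directional_dog_kernel(size, phi))
                 for phi in np.linspace(0, np.pi, n_angles, endpoint=False)]
    return np.max(responses, axis=0)
```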
Hair segmentation
To exclude hairs from the lesion and the background skin, hair segmentation is performed by thresholding the dermoscopic image such that
$$ H(x,y) = \begin{cases} 1, & if\ I(x,y) \le T_{hair} \\ 0, & if\ I(x,y) > T_{hair} \end{cases} $$
(10)
where H(x, y) is the binary hair mask.
Hair in-painting
After hair segmentation, in-painting is performed to remove the hairs and fill the affected pixels with appropriate color information, as shown in Fig. 7d. This phase uses the binary hair mask for in-painting via a PDE based algorithm, which fills the hair pixels with the values along the level lines, called isophotes. We also employ image smoothing to remove any dark spots and slightly remaining unwanted hair pixels. To remove this noise we use a median filter, owing to its ability to remove noise without blurring the image. It replaces each gray level pixel with the median of its neighborhood pixels.
$$y(i,j) = median\{x(m,n) : (m,n)\in w(i,j)\}$$
(11)
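The three hair-removal steps can be sketched as follows with OpenCV, whose Navier–Stokes in-painting is a PDE-based method that propagates intensities along the isophotes; the threshold, in-painting radius and median window are illustrative assumptions.

```python
import cv2
import numpy as np

def remove_hairs(bgr_image, gray, t_hair):
    """Hair mask of Eq. (10), PDE-based in-painting along isophotes and
    median smoothing of Eq. (11). bgr_image is assumed to be 8-bit;
    t_hair and the window sizes are illustrative."""
    # Binary hair mask: 1 where the (hair-enhanced) gray value is dark enough.
    mask = np.where(gray <= t_hair, 255, 0).astype(np.uint8)

    # Slightly dilate so the full hair width is covered before filling.
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))

    # Navier-Stokes in-painting propagates intensities along the level lines.
    filled = cv2.inpaint(bgr_image, mask, inpaintRadius=5, flags=cv2.INPAINT_NS)

    # Median filter (Eq. 11) removes residual specks without blurring edges.
    return cv2.medianBlur(filled, 5)
```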
Detection of four dark corners
Most of the dermoscopic images contain black corners, mainly because the dermatoscope uses a round circular lens designed for a smaller sensor. The image circle therefore cannot illuminate a large enough area, which causes dark corners. These rounded dark corners typically have nearly the same intensity as the skin lesion, so they must be removed to improve the performance of the segmentation algorithm. In order to remove the dark corners from the image, we employ thresholding based on Otsu's method to create a binary mask for these corners. Otsu's method divides the image into two classes \(C_{0}\) and \(C_{1}\) by a threshold k such that
$$C_{0} = \{0,1,2,3,\ldots ,k\}\quad and\quad C_{1}=\{k+1,k+2,\ldots ,L-1\}$$
(12)
where L is the total number of gray levels in the image. The binary mask created by this method is used to remove the dark corners from the dermoscopic image. Figure 8 shows a dermoscopic image with dark round corners along with the binary mask generated using Otsu's method based thresholding.
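A sketch of the corner-mask construction; as an added safeguard (an assumption beyond the description above), only the dark components that touch the image corners are kept, so that a dark lesion is not accidentally masked out.

```python
import numpy as np
from skimage import color, filters, measure

def dark_corner_mask(rgb_image):
    """Binary mask of the four dark lens corners (Eq. 12): Otsu splits the
    gray levels into C0 (dark) and C1 (bright) and only dark components
    touching the image corners are retained."""
    gray = color.rgb2gray(rgb_image)
    dark = gray <= filters.threshold_otsu(gray)     # class C0

    labels = measure.label(dark)
    h, w = gray.shape
    corner_labels = {labels[0, 0], labels[0, w - 1],
                     labels[h - 1, 0], labels[h - 1, w - 1]} - {0}
    return np.isin(labels, list(corner_labels))
```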
Segmentation using wavelet transform
After hair detection and in-painting, the processed dermoscopic images may still contain artifacts such as water bubbles and gel. In this work, we employ the discrete wavelet transform (DWT) for segmenting the lesion from the skin in the presence of these artifacts. The wavelet transform follows an approach similar to the Fourier transform: it converts the signal into the frequency domain while also providing time resolution for the converted signal. Instead of decomposing the signal into a sum of sine and cosine functions, however, the wavelet transform decomposes it into wavelet coefficients. A variety of DWT flavors exist in the literature, known as mother wavelet families. The DWT acts like a sub-band system in which the signal is decomposed into detail bands H (horizontal), V (vertical) and D (diagonal) and an approximation band A.
The approximation is what remains of the image after removing the details. The details are further divided into horizontal, vertical and diagonal components. The approximation can be decomposed further into the next level of approximation and details, as shown in Fig. 9, where H1 gives the horizontal detail, D1 the diagonal detail, V1 the vertical detail and A1 the approximation, which can itself be decomposed to the next level. This property of wavelets can be used for image segmentation in medical imaging.
Moreover, wavelets are used in image processing for de-noising, edge detection, segmentation, compression, encoding and decoding. In this work we are interested in image segmentation, whereas most of the early work on wavelets concerns texture based analysis, so many texture based segmentation methods exist in the literature. In the case of dermoscopic images, however, most of the proposed approaches combine the wavelet transform with fuzzy algorithms (Castillejos et al. 2012). Recently, Sadri et al. proposed a new approach for the segmentation of skin lesions using wavelet networks (Sadri et al. 2013). Here, the wavelet transform is utilized for the segmentation as well as the denoising of the input image. It has been observed during the experiments that wavelets are very useful for removing certain artifacts in dermoscopic images, such as water bubbles and gel effects. Even hairs are broken into minute fragments which can be removed by applying morphological operations, as illustrated in Fig. 10.
In this work, a detailed qualitative analysis was performed for the selection of a suitable mother wavelet family. The Cohen–Daubechies–Feauveau biorthogonal wavelet is selected and applied to the blue channel of the pre-processed image because it demonstrates superiority as compared to the other mother wavelet families. During experimentation the second-level approximation component gives the best results on the input image. Different combinations of two components with different orientations were also tried, but the best results were obtained through the approximation.
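A minimal sketch of this step with PyWavelets, where 'bior3.3' stands in for the Cohen–Daubechies–Feauveau biorthogonal family and the Otsu step on the level-2 approximation is an illustrative choice:

```python
import numpy as np
import pywt
from skimage import filters, transform

def wavelet_lesion_mask(blue_channel, wavelet="bior3.3", level=2):
    """Rough lesion mask from the level-2 approximation of a biorthogonal
    decomposition; the wavelet name and the Otsu step are illustrative."""
    coeffs = pywt.wavedec2(blue_channel.astype(np.float64), wavelet, level=level)
    approx = coeffs[0]              # A2: small hairs/bubbles are suppressed

    # Lesion pixels are darker than the surrounding skin in the blue channel.
    mask_small = approx <= filters.threshold_otsu(approx)

    # Bring the low-resolution mask back to the original image size.
    mask = transform.resize(mask_small.astype(float), blue_channel.shape, order=0)
    return mask > 0.5
```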
Post processing
After the wavelet transformation, post-processing operations are performed to obtain the final segmented binary result by keeping large connected binary objects and joining adjacent binary regions. The processed image at this stage may contain holes due to intensity differences within the skin lesion. Therefore, morphological operations are performed to fill the holes and to remove any extra elements other than the lesion. The regions belonging to the dark corners around the image are removed using the binary mask from the pre-processing step. Small isolated islands are kept and joined together if they are very close to the skin lesion, whereas islands far away from the skin lesion are removed by morphological erosion and dilation operations.
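A sketch of these morphological clean-up operations with scikit-image and SciPy, assuming boolean masks; the structuring-element radius and minimum object size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.morphology import binary_closing, remove_small_objects, disk

def clean_lesion_mask(mask, corner_mask, min_size=500, radius=7):
    """Post-processing sketch: drop the dark-corner regions, close gaps
    between nearby islands, fill holes and remove far-away islands."""
    mask = mask & ~corner_mask                   # corners removed via the pre-processing mask
    mask = binary_closing(mask, disk(radius))    # join islands close to the lesion
    mask = binary_fill_holes(mask)               # fill holes caused by intensity variation
    return remove_small_objects(mask, min_size)  # discard isolated distant islands
```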
Finally, the segmented binary lesion images have ragged boundaries which require smoothing. This could be done with a convolution filter, but in this work the smoothing operation is performed using an averaging filter.
$$ I_{out}(i)=\frac{1}{W}\sum _{j=-(W-1)/2}^{(W-1)/2}I_{in}(i-j) $$
(13)
where \(I_{in}\) is the input boundary coordinate sequence, \(I_{out}\) is the smoothed boundary coordinate sequence and W is the filtering degree (window size). Figure 11 illustrates the overall process for the segmentation of dermoscopic images.
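The boundary smoothing of Eq. (13) can be sketched as a circular moving average over the lesion contour; the window size is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from skimage import measure
from skimage.draw import polygon2mask

def smooth_lesion_boundary(mask, window=9):
    """Eq. (13): moving-average smoothing of the closed lesion boundary."""
    # The longest contour is taken as the lesion boundary.
    contour = max(measure.find_contours(mask.astype(float), 0.5), key=len)

    # Average each coordinate over a window of W neighbours; wrap around,
    # since the boundary is closed.
    smoothed = uniform_filter1d(contour, size=window, axis=0, mode="wrap")

    # Rasterize the smoothed boundary back into a binary mask.
    return polygon2mask(mask.shape, smoothed)
```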