In this paper, we introduce a new technique for artifact cancellation in ICG signals for remote health care monitoring systems. During signal acquisition in a typical ICG remote health care monitoring system, physiological and non-physiological contaminants are superimposed on the actual heart activity, leading to ambiguous diagnoses and measurements. In addition to these contaminants, channel noise also masks the tiny features of the ICG signal. The major artifacts encountered with heart activity are the baseline wander artifact (BWA), the electro-muscle artifact (EMA) and the impedance mismatch artifact (IMA). The BWA is a baseline drift of the ICG signal caused by respiration. The EMA is caused by muscle activity, and the IMA is caused by an impedance mismatch between the electrodes and the skin, or between the electrodes themselves. At the receiving end, a clear high-resolution signal is required for presentation to the doctor for diagnosis. In this context, the adaptive artifact canceler (AAC) plays an important role. Figure 1 shows a block diagram of a wavelet-based AAC for remote health care monitoring systems.
The recorded ICG signal with artifact contaminants is expressed as follows:
$$ICG(n) = s(n) + n_{1} (n)$$
where ICG(n) is the recorded ICG signal; s(n) is the original ICG signal generated from heart activity; and \(n_{1}(n)\) is the artifact component (BWA, EMA, IMA or any combination of the three). In a remote system, \(n_{1}(n)\) also includes channel noise.
The basic working principle of the proposed AAC is the following. The raw signal ICG(n) is input to the DWT decomposition unit. Using decomposition, a reference signal is constructed for any type of contamination present in the raw input ICG signal. The constructed reference signal is used as the reference signal for the adaptive algorithm to update its weight coefficients. The proposed AAC thus plays a vital role in the implementation of an intelligent remote health care monitoring system that is reference-free by constructing the reference signal itself from the contaminated input signal.
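As a concrete illustration of this data flow, the following is a minimal Python sketch of the cancellation loop, assuming numpy; the names aac_filter, update_fn and n_taps are illustrative, and the reference construction and weight update rules are detailed in the following sections:

```python
import numpy as np

def aac_filter(icg_raw, reference, update_fn, n_taps=16):
    """Sketch of the reference-free AAC loop: an adaptive FIR filter
    estimates the artifact from the DWT-constructed reference and
    subtracts it from the raw ICG signal sample by sample."""
    w = np.zeros(n_taps)                        # FIR weight vector W(n)
    cleaned = np.zeros(len(icg_raw))
    for n in range(n_taps, len(icg_raw)):
        r_vec = reference[n - n_taps:n][::-1]   # latest reference samples
        y = w @ r_vec                           # artifact estimate
        e = icg_raw[n] - y                      # error = cleaned ICG sample
        w = update_fn(w, r_vec, e)              # algorithm-specific update
        cleaned[n] = e
    return cleaned
```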
Construction of the reference signal from the noisy input signal
The wavelet transform is used for signal decomposition in our model. It provides temporal information for signals whose frequency content changes with time. Wavelet decomposition separates a signal into spectrally non-overlapping components. There are two categories of wavelet decomposition: the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT). The CWT of a signal s(t) is as follows:
$$CWT\left({a,b} \right) = \int_{- \infty}^{\infty} s\left(t \right)\frac{1}{\sqrt a}\varphi \left({\frac{t - b}{a}} \right)dt$$
(1)
where a and b are the scaling and shifting parameters, respectively, and \(\varphi(\cdot)\) is the mother wavelet function. However, evaluating the scaling (a) and shifting (b) parameters for all possible scales is computationally infeasible. One possible way of solving this problem is to choose a and b as powers of two, i.e., \(a = 2^{l}\) and \(b = m2^{l}\), in which case the DWT is as follows:
$$DWT\left({l,m} \right) = \frac{1}{{\sqrt {2^{l}}}}\int_{- \infty}^{\infty} s\left(t \right)\varphi \left({\frac{{t - 2^{l} m}}{{2^{l}}}} \right)dt$$
(2)
where the scaling and shifting parameters are replaced by \(2^{l}\) and \(m2^{l}\), respectively. Figure 2 shows the L-level wavelet decomposition of a signal s(n). In this scheme, the signal ICG(n) first passes through the LP and HP filters, whose cut-off frequencies are one-fourth of the sampling frequency \(f_{s}\), and is down-sampled by 2, yielding the approximation coefficients \(a_{1}\) and detail coefficients \(d_{1}\) of the first level. The same procedure is applied to the first-level approximation coefficients \(a_{1}\), yielding the second level of approximation and detail coefficients. In this decomposition process, the down-sampling halves the time resolution, while the filtering doubles the frequency resolution. The frequency content of the approximation and detail coefficients at the ith decomposition level is \(0 - f_{s}/2^{i + 1}\) and \(f_{s}/2^{i + 1} - f_{s}/2^{i}\), respectively, for \(i = \left\{ {1,2, \ldots ,L} \right\}\) (Coifman and Donoho 1995; Percival and Walden 2000).
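For illustration, a reference signal of this kind can be built with the PyWavelets package. The following is a minimal sketch, assuming a db4 mother wavelet and a 5-level decomposition; which sub-bands to keep (here only the deepest approximation, suited to a low-frequency artifact such as BWA) is application-specific:

```python
import numpy as np
import pywt

def construct_reference(icg_raw, wavelet="db4", level=5):
    """Decompose the raw ICG signal with an L-level DWT and rebuild
    only the sub-bands assumed to carry the artifact."""
    # coeffs = [a_L, d_L, d_{L-1}, ..., d_1]
    coeffs = pywt.wavedec(icg_raw, wavelet, level=level)
    # Keep the deepest approximation (0 - fs/2^(L+1)); zero the details.
    artifact_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    reference = pywt.waverec(artifact_coeffs, wavelet)
    return reference[:len(icg_raw)]   # waverec may pad by one sample
```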
Non-negative LMS-based algorithms for AACs
In the proposed AAC, the input is the raw contaminated ICG signal and the reference is the signal constructed from the DWT decomposition of the raw ICG signal; this process is shown in Fig. 1. The AAC consists of an FIR filter with L taps, whose weight coefficients are updated according to the update mechanism of the chosen algorithm. The weight update mechanism for the basic LMS algorithm is as follows:
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta \varvec{r}(n)e(n),$$
(3)
where W(n + 1) is the updated weight vector; W(n) is the current weight vector; η is the step size; r(n) is the reference signal, constructed from the DWT decomposition, that is required for training to eliminate noise from the raw signal ICG(n); and e(n) is the error, which is fed back to the adaptive algorithm.
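A one-line sketch of this update in the same Python setting (with eta as an illustrative step-size parameter), which can be passed to the aac_filter loop above:

```python
def lms_update(w, r_vec, e, eta=0.01):
    """Basic LMS, Eq. (3): W(n+1) = W(n) + eta * r(n) * e(n)."""
    return w + eta * r_vec * e
```

It would be used as, e.g., `cleaned = aac_filter(icg_raw, reference, lms_update)`.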
Because of abnormalities in the ICG signal, i.e., drastic variations in the signal features, the weight coefficients may become negative. This leads to poor performance of the adaptive algorithm in terms of convergence, stability and filtering capability. To overcome this drawback, a non-negative LMS (N²LMS) algorithm was proposed (Chen et al. 2011). Its weight update mechanism is as follows:
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta \varvec{D}(n)\varvec{r}(n)e(n),$$
(4)
where D(n) is the diagonal matrix of the weight coefficients W(n). The elaborated theory and analysis of N²LMS is presented by Chen et al. (2011).
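Because D(n) is diagonal with the entries of W(n), the product D(n)r(n) reduces to an elementwise product. A minimal sketch under the same conventions as above:

```python
def n2lms_update(w, r_vec, e, eta=0.01):
    """N2LMS, Eq. (4): each component update is w_i * (1 + eta*r_i*e),
    so weights initialized to small positive values (not zeros, which
    would freeze this multiplicative update) remain non-negative for a
    sufficiently small step size."""
    return w + eta * w * r_vec * e
```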
In Eq. (4), each component of W(n + 1) is viewed as having a variable step size because of the combination ηD(n). In the N²LMS algorithm, when the weights tend to zero, the convergence becomes unbalanced and the algorithm may diverge, causing the AAC to be ineffective for noise removal. To avoid this convergence imbalance under abnormal conditions, the exponential-form N²LMS (eN²LMS) is proposed. Its weight update mechanism is as follows:
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta \varvec{r}(n)e(n)\varvec{W}^{\gamma} (n)$$
(5)
For 0 < γ < 1, the nth weight update in Eq. (5) is larger than that in Eq. (4), which accelerates convergence towards the steady-state error. Another direct way to accelerate the convergence of N²LMS is normalization with respect to the data. The normalized N²LMS (N³LMS) is mathematically expressed as follows:
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta (n)\varvec{D}(n)\varvec{r}(n)e(n)$$
(6)
where η(n) is a variable step size with respect to the reference input as follows:
$$\eta (n) = \frac{\eta}{{\alpha + r^{t} (n)r(n)}}$$
(7)
where α is a small constant used to avoid numerical difficulties. The elaborated theory and analysis of the eN²LMS and N³LMS algorithms are presented in the literature (Chen et al. 2014a, b).
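Minimal sketches of the two updates under the same conventions; gamma and alpha are the parameters of Eqs. (5) and (7), and the weights are assumed non-negative so that the elementwise power is real:

```python
def en2lms_update(w, r_vec, e, eta=0.01, gamma=0.5):
    """eN2LMS, Eq. (5): scaling by W^gamma (elementwise) enlarges the
    update of small weights, counteracting convergence imbalance."""
    return w + eta * r_vec * e * w**gamma

def n3lms_update(w, r_vec, e, eta=0.5, alpha=1e-6):
    """N3LMS, Eqs. (6)-(7): the step size is normalized by the
    reference energy r^t(n) r(n)."""
    eta_n = eta / (alpha + r_vec @ r_vec)
    return w + eta_n * w * r_vec * e
```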
To minimize the computational complexity of the above algorithms, and hence make them suitable for remote health care applications, we combine the eN²LMS and N³LMS algorithms with the simplified sign-based algorithms described by Farhang-Boroujeny (1998). The weight update equations for the eN²LMS algorithm variants (implemented in the code sketch following this list) then become the following:
1. The sign regressor version of the eN²LMS algorithm uses the following weight update equation:
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta \;sign\;\left({\varvec{r}(n)} \right)e(n)W^{\gamma} (n)$$
(8)
This algorithm is the sign regressor eN²LMS (SReN²LMS) algorithm. Its major advantage is low computational complexity: the number of multiplications is independent of the filter length, and computing Eq. (8) requires only one multiplication. Another important feature of the sign regressor (SR) algorithm is that its convergence characteristics are only slightly inferior to those of its unsigned version, because of the normalization inherent in the signum function (Farhang-Boroujeny 1998; Eweda 1990; Koike 1999).
2. The sign error version of the eN²LMS algorithm uses the following weight update equation:
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta \varvec{r}(n)sign\left({e(n)} \right)W^{\gamma} (n)$$
(9)
This algorithm is the sign error eN²LMS (SEeN²LMS) algorithm.
3. The sign sign version of the eN²LMS algorithm uses the following weight update equation:
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta sign\left({\varvec{r}(n)} \right)sign\left({e(n)} \right)W^{\gamma} (n)$$
(10)
This algorithm is the sign sign eN²LMS (SSeN²LMS) algorithm.
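A sketch of the three signed eN²LMS updates, Eqs. (8)-(10), under the same conventions (numpy assumed):

```python
import numpy as np

def sr_en2lms_update(w, r_vec, e, eta=0.01, gamma=0.5):
    """SReN2LMS, Eq. (8): the reference is replaced by its sign."""
    return w + eta * np.sign(r_vec) * e * w**gamma

def se_en2lms_update(w, r_vec, e, eta=0.01, gamma=0.5):
    """SEeN2LMS, Eq. (9): the error is replaced by its sign."""
    return w + eta * r_vec * np.sign(e) * w**gamma

def ss_en2lms_update(w, r_vec, e, eta=0.01, gamma=0.5):
    """SSeN2LMS, Eq. (10): both reference and error are clipped."""
    return w + eta * np.sign(r_vec) * np.sign(e) * w**gamma
```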
Similarly, the weight update equations for the N³LMS algorithm variants (implemented in the code sketch following this list) are written as follows:
1. The sign regressor version of the N³LMS algorithm uses the following weight update equation:
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta (n)\varvec{D}(n)sign\left({\varvec{r}(n)} \right)e(n)$$
(11)
This algorithm is the sign regressor N³LMS (SRN³LMS) algorithm.
2. The sign error version of the N³LMS algorithm uses the following weight update equation:
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta (n)\varvec{D}(n)\varvec{r}(n)sign\left({e(n)} \right)$$
(12)
This algorithm is the sign error N³LMS (SEN³LMS) algorithm.
3. The sign sign version of the N³LMS algorithm uses the following weight update equation:
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta (n)\varvec{D}(n)sign\left({\varvec{r}(n)} \right)sign\left({e(n)} \right)$$
(13)
This algorithm is the sign sign N³LMS (SSN³LMS) algorithm.
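The corresponding sketch for the three signed N³LMS updates, Eqs. (11)-(13); note that the signum is applied only inside the update term, while the step-size normalization of Eq. (7) still uses the full reference vector:

```python
import numpy as np

def sr_n3lms_update(w, r_vec, e, eta=0.5, alpha=1e-6):
    """SRN3LMS, Eq. (11)."""
    eta_n = eta / (alpha + r_vec @ r_vec)
    return w + eta_n * w * np.sign(r_vec) * e

def se_n3lms_update(w, r_vec, e, eta=0.5, alpha=1e-6):
    """SEN3LMS, Eq. (12)."""
    eta_n = eta / (alpha + r_vec @ r_vec)
    return w + eta_n * w * r_vec * np.sign(e)

def ss_n3lms_update(w, r_vec, e, eta=0.5, alpha=1e-6):
    """SSN3LMS, Eq. (13)."""
    eta_n = eta / (alpha + r_vec @ r_vec)
    return w + eta_n * w * np.sign(r_vec) * np.sign(e)
```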
In Eqs. (11)–(13), the normalization term \(r^{t}(n)r(n)\) in the denominator of η(n) requires L multiplications. To minimize the number of multiplications, we use only the maximum value of r(n) instead of all L values. The new step size \(\eta_{m}(n)\) is as follows:
$$\eta_{m} (n) = \frac{\eta}{{\alpha + r_{m}^{2} (n)}}$$
(14)
The new weight update mechanisms for N³LMS and its SR, SE and SS variants are then as follows:
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta_{m} (n)\varvec{D}(n)\varvec{r}(n)e(n)$$
(15)
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta_{m} (n)\varvec{D}(n)sign\left({\varvec{r}(n)} \right)e(n)$$
(16)
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta_{m} (n)\varvec{D}(n)\varvec{r}(n)sign\left({e(n)} \right)$$
(17)
$$\varvec{W}\left({n + 1} \right) = \varvec{W}(n) + \eta_{m} (n)\varvec{D}(n)sign\left({\varvec{r}(n)} \right)sign\left({e(n)} \right)$$
(18)
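A sketch of the maximum-based step size of Eq. (14) and, as one representative of Eqs. (15)-(18), the sign sign variant of Eq. (18):

```python
import numpy as np

def eta_max(r_vec, eta=0.5, alpha=1e-6):
    """Eq. (14): normalize by the largest reference magnitude only,
    replacing the L multiplications of r^t(n) r(n) with one."""
    r_m = np.max(np.abs(r_vec))
    return eta / (alpha + r_m * r_m)

def ssn3lms_max_update(w, r_vec, e, eta=0.5, alpha=1e-6):
    """Eq. (18): SSN3LMS with the maximum-based step size."""
    return w + eta_max(r_vec, eta, alpha) * w * np.sign(r_vec) * np.sign(e)
```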
Figures 3 and 4 show the convergence curves of the eN²LMS and N³LMS algorithms and their SR, SE and SS variants. The data in these figures show that the SReN²LMS-based AAC is only slightly inferior to the eN²LMS-based AAC, while requiring far fewer multiplications. Hence, in practical implementations, when choosing between the SReN²LMS and eN²LMS algorithms, the SR version is preferred. Similarly, SRN³LMS is slightly inferior to N³LMS but uses fewer multiplications; therefore, for real-time applications, the SRN³LMS-based AAC can be used. As shown in Figs. 3 and 4, N³LMS converges slightly faster than eN²LMS.