
Piecewise modelling and parameter estimation of repairable system failure rate

Abstract

Lifetime failure data usually present a bathtub-shaped failure rate, and the random failure phase dominates the life cycle. Relatively large errors arise when a single distribution model is used for analysis and modelling. Therefore, a bathtub-shaped reliability model based on a piecewise intensity function is proposed for repairable systems. Parameter estimation is studied using the maximum likelihood method in conjunction with two-stage Weibull process interval estimation and the artificial bee colony algorithm. The proposed model and estimation approach fit the bathtub-shaped failure rate well, adequately reflect the duration of the random failure phase, and guarantee the feasibility of quantitative analysis. Moreover, the critical points of the bathtub curve can be estimated. Finally, the proposed model is applied to the reliability analysis of a repairable Computer Numerical Control system, and the validity of the model and its parameter estimation method is verified.

Background

Reliability modelling is a process in which the distribution curve of the system failure rate is fitted to lifetime failure data generated in system life-cycle tests or system operation (Myers 2010). The failure rate distribution curve can reveal the system's failure mechanism or its current use phase, which is conducive to the prediction and control of failures, enables effective predictive maintenance, and reduces unexpected failures (Lin and Tseng 2005). Furthermore, quantitative fitting of the distribution curve is the premise on which reliability analysis, design, and testing can be carried out effectively. Therefore, it is important to analyze the variation trend of the failure rate quantitatively for the purpose of reliability modelling.

At present, common distribution models for fitting the failure rate include the exponential, normal, lognormal, and Weibull distributions. Among them, the exponential distribution handles the situation where the failure rate remains constant, and its function is relatively simple. The Weibull distribution is the most widely applied, since it can effectively fit a progressively increasing or decreasing failure rate by changing its parameters. Some scholars have improved the exponential distribution and proposed the exponential-geometric distribution (Adamidis and Loukas 1998), the exponentiated exponential distribution (Gupta and Kundu 1999), the exponential-Poisson distribution (Kus 2007), and the exponentiated exponential-Poisson distribution (Barreto-Souza and Cribari-Neto 2009); these improved models also describe progressively increasing or decreasing failure rates. However, all of them use monotonic functions to model the failure patterns of repairable systems.

However, lifetime failure data are often non-monotonic and exhibit a bathtub-shaped failure rate, which can be divided into three phases: early failure, random failure, and wear-out failure. For this reason, many scholars have studied this phenomenon. Chen (2000) put forward a new two-parameter Weibull distribution model whose failure rate function follows the trend of the bathtub curve when the shape parameter satisfies a certain condition. Later, Xie et al. (2002) and Zhang (2004) independently extended this two-parameter model by introducing a third parameter and proposed three-parameter Weibull distribution models; the results indicated that those models can fit the trend of the bathtub curve with high accuracy more easily than traditional models. Lai et al. (2003), based on the Weibull distribution and the Extreme Value Type I distribution, proposed a modified Weibull distribution model, which also describes the trend of the bathtub curve very well, and completed the model's parameter estimation by means of the Weibull probability plot. El-Gohary et al. (2013) introduced the generalized Gompertz distribution, a generalization of the exponential, Gompertz, and generalized exponential distributions, whose failure rate can be bathtub shaped depending on the shape parameter; the maximum likelihood estimators of its parameters were assessed through a simulation study.

Wang et al. (2015) proposed a new four-parameter finite-interval lifetime model to describe the bathtub curve. Cordeiro et al. (2014) extended the modified Weibull distribution and established a five-parameter Weibull distribution. For failure-censored data from industrial devices, they estimated the parameters with a Newton–Raphson type algorithm and found that the extended model fitted the bathtub curve very well.

The above models fit the overall trend of a bathtub-shaped failure rate well, but they do not describe the random failure phase in detail, and they include only one critical point between the early failure and random failure phases. In fact, the random failure phase is the main part of the life cycle for most repairable systems, so a single distribution model may be inadequate for analysis and modelling and may introduce relatively large errors. Moreover, for multi-parameter models the likelihood equations generally cannot be solved directly for the unknown parameters by maximum likelihood estimation.

Therefore, a bathtub-shaped reliability model based on a piecewise intensity function is proposed for repairable systems. According to the characteristics of the failure rate in the three phases of the bathtub curve, the non-homogeneous Poisson process (NHPP) and the homogeneous Poisson process (HPP) are used to fit the failure rate of each phase. In view of the complexity of parameter estimation, the relevant parameters are estimated by combining the maximum likelihood method with the artificial bee colony algorithm. Finally, the approach is applied to and verified by the reliability analysis of a repairable Computer Numerical Control (CNC) system. The remainder of the paper is organized as follows. "Set up piecewise model" section builds the piecewise model under the bathtub curve. "Parameter estimation" section studies the parameter estimation of the model. "Application" section gives an example application verifying the proposed method. Finally, the paper is concluded.

Set up piecewise model

Currently, there are two main methods for studying the reliability of repairable systems: the Counting Process and the Markov Process (Rausand and Høyland 2004). The Markov Process mainly targets the multi-state behaviour of repairable systems. The Counting Process focuses on two states, normal operation and failure, and records the system's operating time together with the occurrence times and number of failures, which matches the failure-rate trends studied in this research. Consequently, the Counting Process was selected for piecewise reliability modelling of the bathtub curve. The procedure is as follows:

  1. Early failure phase

    In the early failure phase, the intensity function shows a declining trend. Owing to defects introduced during the selection, manufacture and assembly of components, the system failure rate is high initially but decreases rapidly as run time accumulates and maintenance is performed. We therefore assume that repair in the early failure phase is minimal repair, model it with an NHPP whose intensity function follows the common Weibull process, and take b 1 < 1, namely:

    $$\omega_{1} (t) = a_{1} b_{1} t^{{b_{1} - 1}}$$
    (1)
  2. Random failure phase

    In the random failure phase, the early defects of the system have been corrected and the failure intensity tends to be stable. We therefore assume that repair in this phase is perfect repair, adopt an HPP (exponentially distributed times between failures) for modelling, and take the intensity function to be constant:

    $$\omega_{2} (t) = \omega_{1} (t_{0} ) = a_{1} b_{1} t_{0}^{{b_{1} - 1}}$$
    (2)

    where t 0 is the critical point between the early failure phase and the random failure phase.

  3. Wear-out failure phase

    As run time accumulates, the system gradually enters the wear-out failure phase because of component wear and mechanical fatigue, and its intensity function increases. As in the early failure phase, we assume that repair in the wear-out failure phase is minimal repair, use an NHPP for modelling, and let the intensity function follow a Weibull process. The system's intensity function is then:

    $$\omega_{3} (t) = a_{1} b_{1} t_{0}^{{b_{1} - 1}} + a_{2} b_{2} (t - t_{1} )^{{b_{2} - 1}}$$
    (3)

    where t 1 is the critical point between the random failure phase and the wear-out failure phase.

To sum up, the system’s intensity function throughout the life cycle is:

$$\omega (t) = \left\{ {\begin{array}{*{20}l} {a_{1} b_{1} t^{{b_{1} - 1}} } \hfill & \quad {t \le t_{0} } \hfill \\ {a_{1} b_{1} t_{0}^{{b_{1} - 1}} } \hfill & \quad {t_{0} < t \le t_{1} } \hfill \\ {a_{1} b_{1} t_{0}^{{b_{1} - 1}} + a_{2} b_{2} (t - t_{1} )^{{b_{2} - 1}} } \hfill & \quad {t > t_{1} } \hfill \\ \end{array} } \right.$$
(4)

Then, the system’s cumulative intensity function is:

$$W_{c} (t) = \int_{0}^{t} {\omega (u)du} = \left\{ {\begin{array}{*{20}l} {a_{1} t^{{b_{1} }} } \hfill & \quad {t \le t_{0} } \hfill \\ {a_{1} t_{0}^{{b_{1} }} + a_{1} b_{1} t_{0}^{{b_{1} - 1}} (t - t_{0} )} \hfill & \quad {t_{0} < t \le t_{1} } \hfill \\ {a_{1} t_{0}^{{b_{1} }} + a_{1} b_{1} t_{0}^{{b_{1} - 1}} (t - t_{0} ) + a_{2} (t - t_{1} )^{{b_{2} }} } \hfill & \quad {t > t_{1} } \hfill \\ \end{array} } \right.$$
(5)
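Equations (4) and (5) are straightforward to evaluate numerically. The sketch below is a minimal Python transcription of the two piecewise functions; the parameter names follow the text, and the code is illustrative rather than the authors' implementation.

```python
def piecewise_intensity(t, a1, b1, a2, b2, t0, t1):
    """Failure intensity w(t) of the piecewise bathtub model, Eq. (4)."""
    if t <= t0:                               # early failure phase (b1 < 1, decreasing)
        return a1 * b1 * t ** (b1 - 1)
    if t <= t1:                               # random failure phase (constant)
        return a1 * b1 * t0 ** (b1 - 1)
    # wear-out failure phase (b2 > 1, increasing)
    return a1 * b1 * t0 ** (b1 - 1) + a2 * b2 * (t - t1) ** (b2 - 1)


def cumulative_intensity(t, a1, b1, a2, b2, t0, t1):
    """Cumulative intensity W_c(t), Eq. (5)."""
    if t <= t0:
        return a1 * t ** b1
    if t <= t1:
        return a1 * t0 ** b1 + a1 * b1 * t0 ** (b1 - 1) * (t - t0)
    return (a1 * t0 ** b1
            + a1 * b1 * t0 ** (b1 - 1) * (t - t0)
            + a2 * (t - t1) ** b2)
```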

Parameter estimation

There are many methods for parameter estimation of reliability models, such as maximum likelihood estimation, moment estimation, the least squares method, and the Bayesian method. Among them, maximum likelihood estimation is the most widely used owing to its sound theoretical basis and high estimation accuracy. Therefore, maximum likelihood estimation was applied for parameter estimation of the proposed reliability model.

Maximum likelihood estimation

First, the likelihood function must be determined. Based on the reliability model above, the conditional cumulative distribution function of the ith failure time (equivalently, of the ith time between failures) is:

$$\begin{aligned} F(S_{i} \left| {S_{i - 1} } \right.) & = \frac{{F(S_{i} ) - F(S_{i - 1} )}}{{1 - F(S_{i - 1} )}} \\ & = \frac{{\exp [ - W(S_{i - 1} )] - \exp [ - W(S_{i} )]}}{{\exp [ - W(S_{i - 1} )]}} \\ & = 1 - \exp [ - W(S_{i} ) + W(S_{i - 1} )] \\ \end{aligned}$$
(6)

where S i is the time at which the ith failure occurs. Then the conditional probability density function of S i is:

$$f\left( {S_{i} \left| {S_{i - 1} } \right.} \right) = \omega (S_{i} )\exp \left[ { - W(S_{i} ) + W(S_{i - 1} )} \right]$$
(7)

Substituting Eqs. (4) and (5) into Eq. (7) yields the following cases. When S i  ≤ t 0,

$$f(S_{i} \left| {S_{i - 1} } \right.) = a_{1} b_{1} S_{i}^{{b_{1} - 1}} \exp \left( { - a_{1} S_{i}^{{b_{1} }} + a_{1} S_{i - 1}^{{b_{1} }} } \right)$$
(8)

When S i−1 < t 0 ≤ S i  < t 1,

$$f(S_{i} \left| {S_{i - 1} } \right.) = a_{1} b_{1} t_{0}^{{b_{1} - 1}} \exp \left[ { - a_{1} (1 - b_{1} )t_{0}^{{b_{1} }} - a_{1} b_{1} t_{0}^{{b_{1} - 1}} S_{i} + a_{1} S_{i - 1}^{{b_{1} }} } \right]$$
(9)

When t 0 < S i−1 < S i  ≤ t 1,

$$f(S_{i} \left| {S_{i - 1} } \right.) = a_{1} b_{1} t_{0}^{{b_{1} - 1}} \exp \left[ { - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (S_{i} - S_{i - 1} )} \right]$$
(10)

When t 0 < S i−1 ≤ t 1 < S i ,

$$f(S_{i} \left| {S_{i - 1} } \right.) = \left[ {a_{1} b_{1} t_{0}^{{b_{1} - 1}} + a_{2} b_{2} (S_{i} - t_{1} )^{{b_{2} - 1}} } \right]\exp \left[ { - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (S_{i} - S_{i - 1} ) - a_{2} (S_{i} - t_{1} )^{{b_{2} }} } \right]$$
(11)

When t 1 < S i-1 < S i ,

$$\begin{aligned} f(S_{i} \left| {S_{i - 1} } \right.) & = \left[ {a_{1} b_{1} t_{0}^{{b_{1} - 1}} + a_{2} b_{2} (S_{i} - t_{1} )^{{b_{2} - 1}} } \right] \\ & \quad \times \exp \left[ { - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (S_{i} - S_{i - 1} ) \, - a_{2} (S_{i} - t_{1} )^{{b_{2} }} + a_{2} (S_{i - 1} - t_{1} )^{{b_{2} }} } \right] \\ \end{aligned}$$
(12)

When the system’s failure time data is Type-II censored data, the likelihood function is:

$$\begin{aligned} L(S_{1} ,S_{2} , \ldots ,S_{n} \left| \theta \right.) & = \prod\limits_{i = 1}^{n} {f(S_{i} \left| {S_{i - 1} } \right.)} \\ & = \prod\limits_{i = 1}^{k} {a_{1} b_{1} S_{i}^{{b_{1} - 1}} \exp \left( { - a_{1} S_{i}^{{b_{1} }} + a_{1} S_{i - 1}^{{b_{1} }} } \right)} \times a_{1} b_{1} t_{0}^{{b_{1} - 1}} \\ & \quad \times \exp \left[ { - a_{1} (1 - b_{1} )t_{0}^{{b_{1} }} - a_{1} b_{1} t_{0}^{{b_{1} - 1}} S_{k + 1} + a_{1} S_{k}^{{b_{1} }} } \right] \\ & \quad \times \prod\limits_{i = k + 2}^{l} {a_{1} b_{1} t_{0}^{{b_{1} - 1}} \exp [ - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (S_{i} - S_{i - 1} )]} \\ & \quad \times \left[ {a_{1} b_{1} t_{0}^{{b_{1} - 1}} + a_{2} b_{2} (S_{l + 1} - t_{1} )^{{b_{2} - 1}} } \right] \\ & \quad \times \exp \left[ { - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (S_{l + 1} - S_{l} ) - a_{2} (S_{l + 1} - t_{1} )^{{b_{2} }} } \right] \\ & \quad \times \prod\limits_{i = l + 2}^{n} {\left\{ {\left[ {a_{1} b_{1} t_{0}^{{b_{1} - 1}} + a_{2} b_{2} (S_{i} - t_{1} )^{{b_{2} - 1}} } \right]} \right.} \\ & \quad \left. { \times \exp \left[ { - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (S_{i} - S_{i - 1} ) - a_{2} (S_{i} - t_{1} )^{{b_{2} }} + a_{2} (S_{i - 1} - t_{1} )^{{b_{2} }} } \right]} \right\} \\ \end{aligned}$$
(13)

where n is the total number of failures, θ = (a 1, b 1, t 0, a 2, b 2, t 1), and S k  < t 0 < S k+1 < ⋯ < S l  < t 1 < S l+1.

When the system’s failure time data is Type-I censored data, the likelihood function is:

$$L(S_{1} ,S_{2} , \ldots ,S_{n} \left| \theta \right.) = \prod\limits_{i = 1}^{n} {f(S_{i} \left| {S_{i - 1} } \right.)} R(t_{s} \left| {S_{n} } \right.)$$
(14)

where t s is the censoring time, and

$$\begin{aligned} R(t_{s} \left| {S_{n} } \right.) & = \exp [ - W_{c} (t_{s} ) + W_{c} (S_{n} )] \\ & = \exp [ - a_{1} b_{1} t_{0}^{{b_{1} - 1}} (t_{s} - S_{n} ) - a_{2} (t_{s} - t_{1} )^{{b_{2} }} + a_{2} (S_{n} - t_{1} )^{{b_{2} }} ] \\ \end{aligned}$$
(15)

Combining the two cases, the likelihood function can be written uniformly as:

$$L(S_{1} ,S_{2} , \ldots ,S_{n} \left| \theta \right.) = \prod\limits_{i = 1}^{n} {f(S_{i} \left| {S_{i - 1} } \right.)} \left[ {R(t_{s} \left| {S_{n} } \right.)} \right]^{\delta }$$
(16)

where δ is defined as:

$$\delta = \left\{ {\begin{array}{*{20}l} 0 \hfill & \quad {{\text{Type-II}}\,{\text{censored}}\,{\text{data}}} \hfill \\ 1 \hfill & \quad {{\text{Type I}}\,{\text{censored}}\,{\text{data}}} \hfill \\ \end{array} } \right.$$
(17)
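Because W_c(t) in Eq. (5) is continuous at t 0 and t 1, the product of the conditional densities in Eqs. (8)–(12) telescopes, so the logarithm of the likelihood in Eq. (16) reduces to Σ ln ω(S i) − W c(T), with T = S n for Type-II censoring and T = t s for Type-I censoring. The sketch below exploits this simplification and reuses the illustrative functions given after Eq. (5); it is not the authors' implementation.

```python
import math

def log_likelihood(S, t_s, censor_type, a1, b1, a2, b2, t0, t1):
    """Log of the likelihood in Eq. (16).

    S is the ordered failure-time sequence S_1..S_n; censor_type is 'I' or 'II'.
    The telescoping form is equivalent to Eqs. (13)/(14) because W_c(t) is
    continuous at t0 and t1.
    """
    T = t_s if censor_type == 'I' else S[-1]
    ll = sum(math.log(piecewise_intensity(s, a1, b1, a2, b2, t0, t1)) for s in S)
    return ll - cumulative_intensity(T, a1, b1, a2, b2, t0, t1)
```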

The specific form of the likelihood function depends on which intervals of the failure time data contain t 0 and t 1, and the resulting likelihood equations are too complicated to solve by setting the logarithmic derivatives to zero. Hence, the artificial bee colony algorithm was used for parameter optimization to maximize the likelihood, with t 0 and t 1 handled piecewise. The optimization model is as follows:

$$\begin{aligned} & \mathop{\hbox{max} }\limits_{\theta } \quad L(S_{1} ,S_{2} , \ldots ,S_{n} \left| \theta \right.) \\ & {\text{subject}}\,{\text{to}}\;\left\{ {\begin{array}{*{20}l} {a_{1} > 0,0 < b_{1} \le 1} \hfill \\ {a_{2} > 0,b_{2} \ge 1} \hfill \\ {0 < t_{0} < t_{1} < S_{n} } \hfill \\ \end{array} } \right. \\ \end{aligned}$$
(18)

In addition, the range of each parameter should be narrowed to improve the efficiency and accuracy of the model's parameter optimization. First, the value ranges of t 0 and t 1, i.e. the indices k and l, are determined from the system's failure time data and the trend estimation. Second, the first k failure times at the early failure phase and the last n − l failure times at the wear-out failure phase are used for interval estimation of the Weibull process, and the endpoints of the estimated confidence intervals are taken as the optimization ranges of the parameters a 1, b 1, a 2, and b 2.

The flowchart of parameter estimation of reliability model is shown in Fig. 1.

Fig. 1 Flowchart of parameter estimation of reliability model

Weibull process interval estimation

Interval estimation consists of two parts: point estimation and the margin of error that describes the estimation accuracy. Therefore, to complete the interval estimation of the Weibull process, the point estimates of the parameters are obtained first, followed by the calculation of their margin of error; an illustrative code sketch covering both steps is given after Eq. (27).

  1. Point estimation

    Following Ebeling (2004), the maximum likelihood point estimates of the Weibull process parameters a and b are as follows:

    When the failure time data is Type-II censored data:

    $$\left\{ {\begin{array}{*{20}l} {\hat{b} = \frac{n}{{\sum\nolimits_{i = 1}^{n} {\ln \frac{{S_{n} }}{{S_{i} }}} }}} \hfill \\ {\hat{a} = \frac{n}{{S_{n}^{{\hat{b}}} }}} \hfill \\ \end{array} } \right.$$
    (19)

    When the failure time data is Type-I censored data:

    $$\left\{ {\begin{array}{*{20}l} {\hat{b} = \frac{n}{{\sum\nolimits_{i = 1}^{n} {\ln \frac{{t_{s} }}{{S_{i} }}} }}} \hfill \\ {\hat{a} = \frac{n}{{t_{s}^{{\hat{b}}} }}} \hfill \\ \end{array} } \right.$$
    (20)
  2. Margin of error

    Generally, for a large sample (n ≥ 30), the maximum likelihood point estimators are consistent and asymptotically normally distributed; for a small sample, the logarithm of the point estimator is closer to a normal distribution. The specific relations are as follows:

    In the case of large sample:

    $$\frac{{a - \hat{a}}}{{\sqrt {D(\hat{a})} }} \sim N(0,1);\quad \frac{{b - \hat{b}}}{{\sqrt {D(\hat{b})} }} \sim N(0,1)$$
    (21)

    In the case of small sample:

    $$\frac{{\ln a - \ln \hat{a}}}{{\sqrt {D(\hat{a})} }} \sim N(0,1);\quad \frac{{\ln b - \ln \hat{b}}}{{\sqrt {D(\hat{b})} }} \sim N(0,1)$$
    (22)

    where \(D(\hat{a})\) and \(D(\hat{b})\) are the variances of the estimators of a and b, respectively. Their values can be obtained from the Fisher information matrix below (Kijima 1989; Ye 2003):

    $$\left( {\begin{array}{*{20}l} {D(\hat{a})} \hfill & \quad {\text{cov} (\hat{a},\hat{b})} \hfill \\ {\text{cov} (\hat{b},\hat{a})} \hfill & \quad {D(\hat{b})} \hfill \\ \end{array} } \right) = \left( {\begin{array}{*{20}l} { - \frac{{\partial^{2} \ln L}}{{\partial a^{2} }}} \hfill & \quad { - \frac{{\partial^{2} \ln L}}{\partial a\partial b}} \hfill \\ { - \frac{{\partial^{2} \ln L}}{\partial b\partial a}} \hfill & \quad { - \frac{{\partial^{2} \ln L}}{{\partial b^{2} }}} \hfill \\ \end{array} } \right)^{ - 1}$$
    (23)

Based on the normal distribution, the confidence statements at confidence level 1 − α are:

In the case of large sample:

$$ P\left( {\frac{{\left| {a - \hat{a}} \right|}}{{\sqrt {D(\hat{a})} }} \le Z_{{\alpha /2}} } \right) = 1 - \alpha ;\quad P\left( {\frac{{\left| {b - \hat{b}} \right|}}{{\sqrt {D(\hat{b})} }} \le Z_{{\alpha /2}} } \right) = 1 - \alpha $$
(24)

In the case of small sample:

$$ P\left( {\frac{{\left| {\ln a - \ln \hat{a}} \right|}}{{\sqrt {D(\hat{a})} }} \le Z_{{\alpha /2}} } \right) = 1 - \alpha ;\quad P\left( {\frac{{\left| {\ln b - \ln \hat{b}} \right|}}{{\sqrt {D(\hat{b})} }} \le Z_{{\alpha /2}} } \right) = 1 - \alpha $$
(25)

Solving Eqs. (24) and (25) yields the confidence intervals for a and b:

In the case of large sample:

$$ \left[ {\hat{a} - Z_{{\alpha /2}} \sqrt {D(\hat{a})} , \, \hat{a} + Z_{{\alpha /2}} \sqrt {D(\hat{a})} } \right];\quad \left[ {\hat{b} - Z_{{\alpha /2}} \sqrt {D(\hat{b})} , \, \hat{b} + Z_{{\alpha /2}} \sqrt {D(\hat{b})} } \right] $$
(26)

In the case of small sample:

$$ \left[ {\hat{a}/\exp \left( {Z_{{\alpha /2}} \sqrt {D(\hat{a})} } \right),\,\hat{a} \cdot \exp \left( {Z_{{\alpha /2}} \sqrt {D(\hat{a})} } \right)} \right];\quad \left[ {\hat{b}/\exp \left( {Z_{{\alpha /2}} \sqrt {D(\hat{b})} } \right),\,\hat{b} \cdot \exp \left( {Z_{{\alpha /2}} \sqrt {D(\hat{b})} } \right)} \right] $$
(27)
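As a concrete illustration of Eqs. (19)/(20), (23) and the small-sample interval (27), the sketch below computes the point estimates for a single-stage Weibull process and obtains D(â) and D(b̂) by numerically differentiating the standard power-law-process log-likelihood, ln L = n ln a + n ln b + (b − 1)Σ ln S i − aT^b, at the maximum. The exact likelihood behind Eq. (23) is not written out in the text, so this form, the function names, and the numerical details are assumptions of the sketch rather than the authors' code.

```python
import math
import numpy as np
from scipy.stats import norm

def weibull_process_mle(times, T):
    """Point estimates of Eqs. (19)/(20); T = S_n (Type-II) or t_s (Type-I)."""
    n = len(times)
    b_hat = n / sum(math.log(T / s) for s in times)
    a_hat = n / T ** b_hat
    return a_hat, b_hat

def weibull_process_ci(times, T, conf=0.9973):
    """Small-sample confidence intervals of Eq. (27)."""
    n = len(times)
    sum_log = sum(math.log(s) for s in times)

    def loglik(a, b):  # power-law-process log-likelihood (assumed form)
        return n * math.log(a) + n * math.log(b) + (b - 1) * sum_log - a * T ** b

    a_hat, b_hat = weibull_process_mle(times, T)
    x = np.array([a_hat, b_hat])
    h = 1e-4 * x                                   # relative finite-difference steps
    H = np.zeros((2, 2))                           # Hessian of loglik at the MLE
    for i in range(2):
        for j in range(2):
            ei, ej = np.eye(2)[i] * h[i], np.eye(2)[j] * h[j]
            H[i, j] = (loglik(*(x + ei + ej)) - loglik(*(x + ei - ej))
                       - loglik(*(x - ei + ej)) + loglik(*(x - ei - ej))) / (4 * h[i] * h[j])
    cov = np.linalg.inv(-H)                        # Eq. (23): inverse observed information
    z = norm.ppf(1 - (1 - conf) / 2)               # Z_{alpha/2}; conf = 0.9973 gives z ~ 3
    intervals = []
    for est, var in zip((a_hat, b_hat), (cov[0, 0], cov[1, 1])):
        half = z * math.sqrt(var)
        intervals.append((est / math.exp(half), est * math.exp(half)))   # Eq. (27)
    return (a_hat, b_hat), intervals
```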

Parameter optimization based on artificial bee colony algorithm

For the parameter optimization model above, with the search range narrowed by Weibull process interval estimation, the artificial bee colony algorithm was used to solve the optimization problem. Compared with other swarm intelligence algorithms such as evolutionary algorithms, artificial immune algorithms, particle swarm optimization, and ant colony algorithms, the artificial bee colony algorithm offers strong global optimization capability, a fast convergence rate, and suitability for a wide range of problems (Chen 2015).

The correspondence between bee’s search for nectar source and optimization in artificial bee colony algorithm is shown in Table 1.

Table 1 Correspondence between bee’s behavior and optimization

The algorithm proceeds as follows (an illustrative code sketch is given after the list):

  1. Initialization of parameters

    First, the two basic parameters of the artificial bee colony algorithm are initialized. One is the number of nectar sources (S n ), which equals the number of candidate solutions; S n is also the number of leader bees and of follower bees. The other is the exploitation limit (limit): when a nectar source has been exploited more than limit times without improvement, it is abandoned.

  2. Initialization of solution space

    Random initialization produces the initial S n solutions. Let θ i  = (a 1i , b 1i , t 0i , a 2i , b 2i , t 1i ) = (θ i1 , θ i2 , θ i3 , θ i4 , θ i5 , θ i6 ), which represents the location of the ith nectar source. The initialization formula for each component is as follows:

    $$\theta_{ij} = LB_{j} + (UB_{j} - LB_{j} ) \times r\quad j = 1,2, \ldots ,6\quad i = 1,2, \ldots ,Sn$$
    (28)

    where r is a random number in [0, 1], and LB j and UB j are the lower and upper bounds of the jth component θ j .

    The leader bees then begin to exploit these nectar sources at random, and the corresponding fitness values are calculated.

  3. Stage of leaders

    When a leader bee exploits a nectar source, it searches at random for a new nectar source in the neighbourhood of that source, according to the equation below:

    $$\theta_{new(j)} = \theta_{ij} + (\theta_{kj} - \theta_{ij} ) \times r\quad k \in \{ 1,2, \ldots ,Sn\} ,\;k \ne i\quad j = 1,2, \ldots ,6$$
    (29)

    The fitness value of the new nectar source θ new is then calculated and compared with that of the previous nectar source, and the solution with the higher fitness is kept.

  4. Stage of followers

    For the follower bees, the probability of following each nectar source is obtained by normalizing the fitness values of all nectar sources:

    $$P_{i} = \frac{{fit_{i} }}{{\sum\nolimits_{i = 1}^{Sn} {fit_{i} } }}$$
    (30)

    where fit i represents the fitness value of the nectar source at θ i . A larger fit i gives a greater probability that a follower selects that nectar source. When a follower selects a nectar source, it also searches at random for a new nectar source around it according to Eq. (29), just as the leaders do, and keeps the better of the two by comparison with the source it follows.

  5. Stage of scouters

    At this stage, if a nectar source θ i has not improved after being exploited limit times, it is abandoned. Its leader bee becomes a scouter and searches for a new nectar source at random according to Eq. (28).

  6. Iterations

    The nectar source with the maximum fitness is recorded. If the number of iterations has reached the preset maximum, the algorithm terminates and the best nectar source, i.e. the optimal solution, is output; otherwise, the procedure returns to step (3).
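A minimal Python sketch of steps (1)–(6) is given below. It makes a few common implementation choices that the text leaves open and which should be read as assumptions: each neighbourhood move perturbs one randomly chosen component, the perturbation coefficient is drawn from [−1, 1], and the fitness values used in Eq. (30) are the objective values shifted to be positive (here the objective would be the log-likelihood to be maximized). Names such as abc_maximize are illustrative.

```python
import random

def abc_maximize(objective, lower, upper, colony_size=30, limit=50, max_iter=200):
    """Maximize `objective` over box bounds [lower[j], upper[j]] with a
    minimal artificial-bee-colony loop (steps 1-6 above)."""
    dim = len(lower)

    def random_source():                                   # Eq. (28)
        return [lower[j] + (upper[j] - lower[j]) * random.random() for j in range(dim)]

    def neighbour(i):                                      # Eq. (29)
        k = random.choice([s for s in range(colony_size) if s != i])
        j = random.randrange(dim)                          # one component per move (assumption)
        new = list(sources[i])
        new[j] += (sources[k][j] - sources[i][j]) * random.uniform(-1.0, 1.0)
        new[j] = min(max(new[j], lower[j]), upper[j])      # keep inside the bounds
        return new

    def greedy_update(i, cand):                            # keep the better of old and new
        v = objective(cand)
        if v > values[i]:
            sources[i], values[i], trials[i] = cand, v, 0
        else:
            trials[i] += 1

    sources = [random_source() for _ in range(colony_size)]
    values = [objective(s) for s in sources]
    trials = [0] * colony_size
    best_v = max(values)
    best_s = list(sources[values.index(best_v)])

    for _ in range(max_iter):
        for i in range(colony_size):                       # stage of leaders
            greedy_update(i, neighbour(i))
        shift = min(values)                                # stage of followers, Eq. (30)
        fits = [v - shift + 1e-12 for v in values]         # shifted so probabilities are positive
        total = sum(fits)
        for _ in range(colony_size):
            r, acc, chosen = random.random() * total, 0.0, colony_size - 1
            for idx, f in enumerate(fits):                 # roulette-wheel selection
                acc += f
                if acc >= r:
                    chosen = idx
                    break
            greedy_update(chosen, neighbour(chosen))
        for i in range(colony_size):                       # stage of scouters
            if trials[i] > limit:
                sources[i] = random_source()
                values[i] = objective(sources[i])
                trials[i] = 0
        for s, v in zip(sources, values):                  # record the best source
            if v > best_v:
                best_s, best_v = list(s), v

    return best_s, best_v
```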

Application

The proposed model was applied in the reliability analysis of a repairable CNC system containing a servo drive unit.

To test the reliability of the CNC system, a long-term multi-sample test (Type-I censored) was carried out under the same environment; the test laboratory is shown in Fig. 2. The test recorded the failures of the CNC systems over the past two years. The failure time data of 18 sets of CNC systems, comprising 50 failures, were recorded and processed by the Total Time on Test method (Barlow and Campo 1975), as shown in Table 2.

Fig. 2 The environment of the CNC system reliability test laboratory

Table 2 Data of total failure time

Based on the data above, trend estimation and testing of the system failures were carried out, and the TTT diagram (Bergman 1979) was drawn, as shown in Fig. 3, where the total number of failures is n = 50.

Fig. 3 TTT diagram of system failure time data

As can be seen in Fig. 3, the TTT scatter plot is at first concave with respect to the diagonal of the unit square, indicating that the failure rate declines in the early phase. It then fluctuates slightly along a straight line: the failure rate remains essentially constant and the system gradually stabilizes. After a period of time the plot becomes convex with respect to the diagonal, indicating that the failure rate increases.
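For reference, the coordinates of such a TTT-type plot can be computed as in the short sketch below, which assumes the common construction for pooled repairable-system data in which the scaled failure times S i/t s are plotted against i/n; the exact variant used by the authors is not stated, so this is only an illustration.

```python
def ttt_coordinates(S, t_end):
    """Scaled plot coordinates (i/n, S_i/t_end) for an ordered failure-time
    sequence S observed up to t_end (here the censoring time t_s).  Points
    near the diagonal of the unit square indicate a roughly constant failure
    rate; systematic curvature indicates a decreasing or increasing rate."""
    n = len(S)
    return [(i / n, s / t_end) for i, s in enumerate(S, start=1)]
```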

Meanwhile, the Laplace test (Louit et al. 2009) and the Anderson–Darling test (Pulcini 2001) were used to test the failure data in Table 2 for trend, with the significance level set to α = 0.05. The results of the statistical tests are shown in Table 3.

Table 3 Results of statistical tests at the significance level α = 0.05

The results of the trend tests are consistent with the assessment from the TTT diagram: the system failure time data follow the trend of a bathtub curve and pass through three phases, namely early failure, random failure, and wear-out failure.
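As one concrete illustration of the Laplace test mentioned above (the exact variant used by the authors is not reported), the statistic for a time-truncated observation window (0, t s] can be computed as below; under the homogeneous-Poisson-process null hypothesis it is approximately standard normal, with significantly negative values indicating a decreasing failure rate and significantly positive values an increasing one.

```python
import math

def laplace_trend_statistic(S, t_s):
    """Laplace trend-test statistic for failure times S observed on (0, t_s]."""
    n = len(S)
    mean_time = sum(S) / n
    return (mean_time - t_s / 2.0) / (t_s * math.sqrt(1.0 / (12.0 * n)))
```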

Then, the system reliability model was set up using the piecewise function above, and parameter estimation was performed.

First, the candidate ranges of t 0 and t 1 were handled segment by segment: in principle, the values of k and l are selected in turn within (0, n). To improve computational efficiency, the range of k and l was narrowed by a preliminary judgment based on the total failure time data and the inflection points of the TTT diagram. Accordingly, k was taken as 6, 7, 8, or 9, and l as 33, 34, 35, or 36, and the corresponding ranges of t 0 and t 1 are S k  < t 0 < S k+1 and S l  < t 1 < S l+1.

Second, parameter optimization was conducted for each combination of k and l. Based on the failure time sequence S 1 , S 2 ,…, S k (treated as Type-II censored data), interval estimation of the minimal repair model at the early failure phase was carried out; at a confidence level of 99.73 %, the confidence intervals were a 1 ∈ [a 1min , a 1max ] and b 1 ∈ [b 1min , b 1max ]. Similarly, based on the failure time sequence S l+1 ,…, S 50 , t s (treated as Type-I censored data), interval estimation of the minimal repair model at the wear-out failure phase was carried out; at a confidence level of 99.73 %, the confidence intervals were a 2 ∈ [a 2min , a 2max ] and b 2 ∈ [b 2min , b 2max ]. The refined optimization model was:

$$\begin{aligned} & \mathop {\text{max} }\limits_{\theta } L(S_{1} ,S_{2} , \ldots ,S_{n} \left| \theta \right.) \\ & {\text{subject}}\,{\text{to}}\,\left\{ {\begin{array}{*{20}l} {a_{1{\text{min}} } \le a_{1} \le a_{1{\text{max}} } ,b_{1{\text{min}} } \le b_{1} \le b_{1{\text{max}} } } \hfill \\ {a_{2{\text{min}} } \le a_{2} \le a_{2{\text{max}} } ,b_{2{\text{min}} } \le b_{2} \le b_{2{\text{max}} } } \hfill \\ {S_{k} < t_{0} < S_{k + 1} ,S_{l} < t_{1} < S_{l + 1} } \hfill \\ \end{array} } \right. \\ \end{aligned}$$
(31)

The artificial bee colony algorithm was then applied to the optimization model above, with the number of iterations, the colony size, and the limit specified. For each pair (k, l), the optimal parameter vector θ = (a 1 , b 1 , t 0 , a 2 , b 2 , t 1 ) was obtained; a hypothetical driver for one such pair is sketched below.
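The sketch reuses the illustrative weibull_process_ci, log_likelihood and abc_maximize functions defined earlier; the shift of the wear-out data by S l, the censoring adjustment, and all names are assumptions of this sketch rather than the authors' procedure.

```python
def optimize_for_segment(S, t_s, k, l, conf=0.9973):
    """Estimate theta = (a1, b1, t0, a2, b2, t1) for fixed k and l, Eq. (31)."""
    # Bounds for (a1, b1) from the first k failures, treated as Type-II censored.
    _, ((a1_lo, a1_hi), (b1_lo, b1_hi)) = weibull_process_ci(S[:k], S[k - 1], conf)
    # Bounds for (a2, b2) from S_{l+1}..S_n, shifted by S_l and censored at t_s
    # (a modelling assumption of this sketch).
    tail = [s - S[l - 1] for s in S[l:]]
    _, ((a2_lo, a2_hi), (b2_lo, b2_hi)) = weibull_process_ci(tail, t_s - S[l - 1], conf)

    # Box bounds in the order (a1, b1, t0, a2, b2, t1), with b1 <= 1 and
    # b2 >= 1 enforced as in Eq. (18).
    lower = [a1_lo, b1_lo,           S[k - 1], a2_lo, max(b2_lo, 1.0), S[l - 1]]
    upper = [a1_hi, min(b1_hi, 1.0), S[k],     a2_hi, b2_hi,           S[l]]

    def objective(theta):
        a1, b1, t0, a2, b2, t1 = theta
        return log_likelihood(S, t_s, 'I', a1, b1, a2, b2, t0, t1)

    return abc_maximize(objective, lower, upper, colony_size=30, limit=50, max_iter=500)
```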

Finally, the optimal parameters for the different (k, l) pairs were compared, and the set that maximizes the likelihood was selected as the overall optimum: k = 7, l = 33, a 1 = 0.0427, b 1 = 0.5066, t 0 = 16,567, a 2 = 5.0503 × 10^−11, b 2 = 2.4984, t 1 = 168,242.

The goodness of fit of the two-stage Weibull process above was tested using the Cramér–von Mises test (Ebeling 2004) at a significance level of α = 0.05. The test results are shown in Table 4.

Table 4 Results of test of goodness of fit at the significance level α = 0.05

In conclusion, the intensity function of the 18 CNC systems combined (in total test time) was obtained:

$$\omega (t) = \left\{ {\begin{array}{*{20}l} {0.0216 \times t^{ - 0.4934} } \hfill & \quad {t \le 16567} \hfill \\ {1.7893 \times 10^{ - 4} } \hfill & \quad {16567 < t \le 168242} \hfill \\ {1.7893 \times 10^{ - 4} + 1.2618 \times 10^{ - 10} \times (t - 168242)^{1.4984} } \hfill & \quad {t > 168242} \hfill \\ \end{array} } \right.$$
(32)

Meanwhile, the results suggest that the critical point between early failure phase and random failure phase of a single system was located at about t 0/18 = 920 h, while the critical point between random failure phase and wear-out failure phase was located at about t 1/18 = 9347 h.

The intensity function of a single system was:

$$\omega_{1} (t) = \left\{ {\begin{array}{*{20}l} {5.1882 \times 10^{ - 3} \times t^{ - 0.4934} } \hfill & \quad {t \le 920} \hfill \\ {1.7893 \times 10^{ - 4} } \hfill & \quad {920 < t \le 9347} \hfill \\ {1.7893 \times 10^{ - 4} + 9.5916 \times 10^{ - 9} \times (t - 9347)^{1.4984} } \hfill & \quad {t > 9347} \hfill \\ \end{array} } \right.$$
(33)

The curve of intensity function is shown in Fig. 4.

Fig. 4 Intensity function of a single system

Based on the instantaneous MTBF, MTBF(t) = 1/ω(t), the MTBF of the system in the random failure phase is MTBF = 1/(1.7893 × 10^−4) = 5589 h.
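This value can be reproduced with a one-line arithmetic check (illustrative only):

```python
w_random = 1.7893e-4           # constant intensity in the random failure phase, Eq. (33)
mtbf_random = 1.0 / w_random   # instantaneous MTBF = 1 / w(t)
print(round(mtbf_random))      # 5589 h, matching the value reported above
```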

Therefore, the example above demonstrates that the proposed piecewise reliability model fits the bathtub-shaped failure rate very well and yields the two critical points of the bathtub curve. It provides an important basis for reliability analysis, design, and testing.

Conclusions

A modelling study of the bathtub-shaped failure rate exhibited by the lifetime failure data of repairable systems was carried out. The main results are as follows:

  1. Considering the characteristics of the failure rate in the three phases of the bathtub curve, and combining the homogeneous and non-homogeneous Poisson processes within a counting-process framework, a bathtub-shaped reliability model based on a piecewise intensity function was proposed. Moreover, to address the difficulty that common parameter estimation methods have in solving the likelihood equations, maximum likelihood estimation and the artificial bee colony algorithm were combined to estimate the model parameters and guarantee the feasibility of the model in application.

  2. Given the wide search range and long computation time of parameter optimization during parameter estimation, interval estimation of the two-stage Weibull process was studied to narrow the optimization range. As a result, the efficiency of parameter optimization was substantially increased.

  3. The proposed model and estimation approach fit the bathtub-shaped failure rate well, adequately reflect the duration of the random failure phase, and yield the two critical points of the bathtub curve.

  4. Finally, the practical use of the proposed model was demonstrated with failure data from a repairable CNC system, verifying the validity of the model and its parameter estimation method. The study is useful for reliability analysis of repairable systems and provides an important basis for related reliability research.

References

  • Adamidis K, Loukas S (1998) A lifetime distribution with decreasing failure rate. Stat Prob Lett 39(98):35–42


  • Barlow RE, Campo RA (1975) Total time on test processes and applications to failure data analysis. California Univ Berkeley Operations Research Center

  • Barreto-Souza W, Cribari-Neto F (2009) A generalization of the exponential-poisson distribution. Stat Prob Lett 79(24):2493–2500


  • Bergman B (1979) On age replacement and the total time on test concept. Scand J Stat 6(4):161–168


  • Chen Z (2000) A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function. Stat Prob Lett 49(2):155–161


  • Chen W (2015) Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem. Phys A 429:125–139


  • Cordeiro GM, Ortega EMM, Silva GO (2014) The Kumaraswamy modified Weibull distribution: theory and applications. J Stat Comput Simul 84(84):1387–1411


  • Ebeling CE (2004) An introduction to reliability and maintainability engineering. Tata McGraw-Hill Education, New York, pp 294–307

  • El-Gohary A, Alshamrani A, Al-Otaibi AN (2013) The generalized Gompertz distribution. Appl Math Model 37(1–2):13–24


  • Gupta RD, Kundu D (1999) Generalized exponential distributions. Aust N Z J Stat 41(2):173–188

  • Kijima M (1989) Some results for repairable systems with general repair. J Appl Prob 26(1):89–102


  • Kus C (2007) A new lifetime distribution. Comput Stat Data Anal 51(9):4497–4509


  • Lai CD, Xie M, Murthy DNP (2003) A modified Weibull distribution. IEEE Trans Reliab 52(1):33–37


  • Lin CC, Tseng HY (2005) A neural network application for reliability modelling and condition-based predictive maintenance. Int J Adv Manuf Technol 25:174–179


  • Louit DM, Pascual R, Jardine AKS (2009) A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data. Reliab Eng Syst Safety 94(10):1618–1628


  • Myers A (2010) Complex system reliability: multichannel systems with imperfect fault coverage. Springer, London, pp 7–9


  • Pulcini G (2001) Modeling the failure data of a repairable equipment with bathtub type failure intensity. Reliab Eng Syst Safety 71(2):209–218


  • Rausand M, Høyland A (2004) System reliability theory: models, statistical methods, and applications. Wiley, New York, pp 127–171


  • Wang X, Yu C, Li Y (2015) A new finite interval lifetime distribution model for fitting bathtub-shaped failure rate curve. Math Probl Eng 2015:1–6


  • Xie M, Tang Y, Goh TN (2002) A modified Weibull extension with bathtub-shaped failure rate function. Reliab Eng Syst Safety 76(3):279–285


  • Ye CN (2003) Fisher information matrix of LBVW distribution. Acta Math Appl Sinica 26(4):715–725


  • Zhang R (2004) A new three-parameter lifetime distribution with bathtub shape or increasing failure rate function and a flood data application. Concordia University, Canada


Authors’ contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Acknowledgements

The research was supported by the National Science and Technology Major Project “High-Grade CNC Machine Tools and Basic Manufacturing Equipments” (Grant No. 2016ZX04004-006).

Competing interests

The authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to Chong Peng.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Peng, C., Liu, G. & Wang, L. Piecewise modelling and parameter estimation of repairable system failure rate. SpringerPlus 5, 1477 (2016). https://doi.org/10.1186/s40064-016-3122-4

