Open Access

Extreme learning machine: a new alternative for measuring heat collection rate and heat loss coefficient of water-in-glass evacuated tube solar water heaters

  • Zhijian Liu1,
  • Hao Li2,
  • Xindong Tang3,
  • Xinyu Zhang4,
  • Fan Lin5 (corresponding author) and
  • Kewei Cheng6
Contributed equally
SpringerPlus 2016, 5:626

https://doi.org/10.1186/s40064-016-2242-1

Received: 5 January 2016

Accepted: 27 April 2016

Published: 14 May 2016

Abstract

Background

Heat collection rate and heat loss coefficient are crucial indicators for evaluating in-service water-in-glass evacuated tube solar water heaters. However, their direct determination requires complex detection devices and a series of standard experiments, consuming considerable time and manpower.

Findings

To address this problem, we previously used artificial neural networks and support vector machine to develop precise knowledge-based models for predicting the heat collection rates and heat loss coefficients of water-in-glass evacuated tube solar water heaters, setting the properties measured by “portable test instruments” as the independent variables. Robust determination software was also developed. However, in those results, the prediction accuracy for heat loss coefficients still lagged behind that for heat collection rates. Moreover, in practical applications, even a small reduction in root mean square error (RMSE) can significantly improve the evaluation and business processes.

Conclusions

As a further study, this short report shows that a novel and fast machine learning algorithm, the extreme learning machine (ELM), generates better predictions for the heat loss coefficient, reducing the average RMSE in testing to 0.67.

Keywords

Water-in-glass evacuated tube solar water heaters · Portable test instruments · Heat collection rate · Heat loss coefficient · Extreme learning machine

Background

Solar water heaters (SWHs) are a powerful and popular means of exploiting solar energy; they typically use solar collectors and concentrators to gather, store, and use solar radiation to heat air or water in domestic, commercial, or industrial plants (Mekhile et al. 2011). As one of the most important types of stationary collector, evacuated tube solar collectors have a substantially lower heat loss coefficient and cost than standard flat plate collectors (Kalogirou 2003; Morrison et al. 1984). In China, all-glass evacuated tubular solar water heaters are widely used due to their excellent thermal performance, convenient installation, and easy transportability (Shah and Furbo 2007; Liu et al. 2013). In recent years, many research groups have focused on theoretical and experimental studies of the thermal performance of water-in-glass evacuated tube solar water heaters (Pei et al. 2012; Lin et al. 2012; Çomaklı et al. 2012; Govind et al. 2009).

However, even though a testing standard exists in China (GB/T 4271-2007, Test methods for the thermal performance of solar collectors), there are still few references on improved measurement of the heat collection rate and heat loss coefficient of solar water heaters, which is a crucial problem for scientists and technicians evaluating in-service units. To solve this problem, we first used “portable test instruments” (Liu et al. 2015a, b), which include a digital thermoelectric thermometer, an electric platform scale and a taper ZSH-3, to measure the basic properties of water-in-glass evacuated tube solar water heaters. Based on 915 data groups, artificial neural networks (ANNs) and support vector machine (SVM) proved efficient and precise for predicting the heat collection rates and heat loss coefficients in the testing set (Liu et al. 2015a). Compared to conventional measurements, knowledge-based machine learning methods are much faster and more convenient, saving time, resources and manpower (Liu et al. 2015a). The flow chart of the novel measurement is shown in Fig. 1. To provide a more user-friendly method, WaterHeater, a software package based on the back-propagation (BP) algorithm, was developed for both personal computer (PC) and Android platforms (Liu et al. 2015b). Despite this progress, one question remained: given that the lowest average RMSEs for predicting the heat loss coefficient (0.73 with SVM, 0.71 with ANN) are still relatively higher than those for heat collection rates (0.29 with SVM, 0.14 with ANN), can the RMSEs for heat loss coefficient prediction be further reduced?
Although the RMSEs for predicting heat loss coefficients are relatively low, and acceptable for further applications, the results show that the precision can still be improved, because the testing RMSEs remain higher than those of the predicted heat collection rates (Liu et al. 2015a, b). Moreover, in practical applications, even a slight reduction in RMSE counts as a significant improvement. To address this, this short report shows that the ELM achieves a lower RMSE for predicting the heat loss coefficients of water-in-glass evacuated tube solar water heaters. ELMs were trained by setting the properties measured by “portable test instruments” as the independent variables; 915 data groups acquired experimentally were used for model training and testing, and comparisons were made between the ELMs and our previous models.
Fig. 1

Flow chart of the novel method using “portable test instruments” combined with machine learning models for determining heat collection rate and heat loss coefficient (Liu et al. 2015a)

Experimental

Measurement

In this research, 915 water-in-glass evacuated tube solar water heaters (in service for 1 year) were measured with the “portable test instruments” and with the PDT2013-1 (China Academy of Building Research, Beijing, China) detection device developed by the national center for quality supervision and testing of solar heating systems (Liu et al. 2015a, b). Forty-eight PDT2013-1 detection devices were employed to measure the heat collection rate and heat loss coefficient simultaneously. The extrinsic properties measured with the “portable test instruments” include tube length, number of tubes, tube center distance, hot water mass in tank, collector area, final temperature and angle between tubes and ground. Table 1 shows the statistical summary of the experimental data, which has been reported in our previous work (Liu et al. 2015a, b).
Table 1

Statistics of the variables for 915 samples of in-service water-in-glass evacuated tube solar water heaters (Liu et al. 2015a, b)

| Item | Tube length (mm) | Number of tubes | TCD (mm) | Hot water mass in tank (kg) | Collector area (m2) | Angle (°) | Final temp. (°C) | HCR | HLC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Maximum | 2200 | 64 | 151 | 403 | 8.24 | 85 | 62 | 11.3 | 13 |
| Minimum | 1600 | 5 | 60 | 70 | 1.27 | 30 | 46 | 6.7 | 8 |
| Range | 600 | 59 | 91 | 333 | 6.97 | 55 | 16 | 4.6 | 5 |
| Average | 1811 | 21 | 76.2 | 172 | 2.69 | 46 | 53 | 8.9 | 10 |
| SD | 87.8 | 5.8 | 5.11 | 47.0 | 0.73 | 3.89 | 2.0 | 0.48 | 0.77 |

TCD tube center distance, temp. temperature, HCR heat collection rate (MJ/m2), HLC heat loss coefficient [W/(m3K)]

Extreme learning machine (ELM)

ELM is a single hidden layer feed-forward learning algorithm invented by Huang et al. (2004, 2006), which has been proved to outperform ANNs and SVM in some scientific cases (Huang 2014). Similar to a single-layer feed-forward neural network, the ELM network is a linear system in which the input weights and hidden node parameters are selected randomly; the output weights are then obtained by calculating the pseudo-inverse of the hidden layer output matrix. Unlike traditional neural networks, ELM does not require iterative learning. Its basic advantages include a simple structure, fast learning speed, good global search ability and good generalization performance.

For an extreme learning machine with n input neurons, L hidden layer neurons and N training cases trained on a data set \((x_{i} ,t_{i} )\), the mathematical model can be described as:
$$\sum\limits_{i = 1}^{L} {\beta_{i} g\left( {a_{i} x_{j} + b_{i} } \right)} = o_{j} ,\quad j = 1, \ldots ,N\quad L \le N$$
(1)
where \(x_{i} = [x_{i1} ,x_{i2} , \ldots ,x_{in} ]^{T} \in R^{n}\); \(t_{i} = [t_{i1} ,t_{i2} , \ldots ,t_{im} ]^{T} \in R^{m}\); \(\beta_{i} = \left[ {\beta_{i1} ,\beta_{i2} , \ldots ,\beta_{im} } \right]^{T}\) is the weight vector connecting the ith hidden neuron to the output layer; \(g\left( . \right)\) is the activation function of the hidden layer neurons; \(a_{i} = \left[ {a_{i1} ,a_{i2} , \ldots ,a_{in} } \right]^{T}\) is the weight vector connecting the input layer to the ith hidden neuron; \(b_{i}\) is the bias of the ith hidden neuron; and \(o_{j}\) is the model output for the jth training sample.
If L = N, then for randomly chosen a and b the above model can approximate all the training samples with zero error, namely:
$$\sum\limits_{j = 1}^{N} {\left\| {o_{j} - y_{j} } \right\|} = 0$$
(2)
thus we have:
$$\sum\limits_{i = 1}^{L} {\beta_{i} g\left( {a_{i} x_{j} + b_{i} } \right)} = y_{j} ,\quad \, j = 1, \ldots ,N$$
(3)
which can also be written in matrix form as:
$$H\beta = Y$$
(4)
where \(Y = \left[ {y_{1}^{T} ,y_{2}^{T} , \ldots ,y_{N}^{T} } \right]_{N \times m}^{T}\); \(\beta = \left[ {\beta_{1}^{T} ,\beta_{2}^{T} , \ldots ,\beta_{L}^{T} } \right]_{L \times m}^{T}\); and H is the \(N \times L\) output matrix of the hidden layer:
$$H = \left[ {\begin{array}{*{20}c} {g\left( {a_{1} x_{1} + b_{1} } \right)} & \quad{g\left( {a_{2} x_{1} + b_{2} } \right)} & \quad\cdots & \quad{g\left( {a_{L} x_{1} + b_{L} } \right)} \\ {g\left( {a_{1} x_{2} + b_{1} } \right)} & \quad{g\left( {a_{2} x_{2} + b_{2} } \right)} &\quad \cdots &\quad {g\left( {a_{L} x_{2} + b_{L} } \right)} \\ \vdots & \quad\vdots & \quad\ddots & \quad\vdots \\ {g\left( {a_{1} x_{N} + b_{1} } \right)} & \quad{g\left( {a_{2} x_{N} + b_{2} } \right)} & \quad\cdots &\quad {g\left( {a_{L} x_{N} + b_{L} } \right)} \\ \end{array} } \right]$$
(5)
However, when the training sample is large, L is usually chosen smaller than N to reduce the amount of computation. In this case, the above model can approximate the training samples within an arbitrarily small error \(\varepsilon > 0\), namely:
$$\sum\limits_{j = 1}^{N} {\left\| {o_{j} - y_{j} } \right\|} < \varepsilon$$
(6)
Therefore, not all parameters of the above model need to be adjusted: if \(g\left( . \right)\) is infinitely differentiable, the parameters a and b can be selected randomly before training and remain unchanged during training. The weight vector β from the hidden layer to the output layer is then acquired by solving the following least-squares problem:
$$\left\| {H\hat{\beta } - Y} \right\| = \mathop {\hbox{min} }\limits_{\beta } \left\| {H\beta - Y} \right\|$$
(7)
The solution is \(\hat{\beta } = H^{ * } \cdot Y\), where \(H^{ * }\) is the Moore–Penrose pseudo-inverse of H. Thus there are only three steps in training an ELM:
  1. Select the number of hidden layer neurons L.

  2. Select an infinitely differentiable function \(g\left( . \right)\) as the activation of each hidden layer neuron, and calculate the hidden layer output matrix H (Eq. 5).

  3. Calculate the output layer weight vector \(\hat{\beta }\) (\(\hat{\beta } = H^{ * } \cdot Y\)).

ELM does not need to adjust many parameters during training; it only needs to determine the hidden-to-output weights \(\hat{\beta }\) after selecting the number of hidden layer neurons L. Compared to other existing machine learning techniques, it can acquire the global solution in a very short time (Huang 2014). In this study, the data set comprises 915 data groups, which satisfies the precondition of Eq. 6. Therefore, the ELM is considered applicable.
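The three training steps above can be sketched compactly. The study itself used Huang's Matlab package; the following is only an illustrative NumPy re-implementation of a basic ELM with a sigmoid activation, where the function names and the choice of uniform random initialization are assumptions made for illustration:

```python
import numpy as np

def elm_train(X, Y, L, seed=0):
    """Train a basic ELM: random hidden layer, least-squares output weights.

    X: (N, n) inputs; Y: (N, m) targets; L: number of hidden neurons.
    """
    rng = np.random.default_rng(seed)
    A = rng.uniform(-1.0, 1.0, size=(X.shape[1], L))  # input weights a_i (random, then fixed)
    b = rng.uniform(-1.0, 1.0, size=L)                # hidden biases b_i (random, then fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))            # hidden layer output matrix H (Eq. 5)
    beta = np.linalg.pinv(H) @ Y                      # beta-hat = H* . Y (Moore-Penrose)
    return A, b, beta

def elm_predict(X, A, b, beta):
    """Forward pass: H(X) @ beta."""
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))
    return H @ beta
```

Because only β is fitted, by a single pseudo-inverse rather than iterative updates, training amounts to one matrix factorization; that is the source of the speed advantage noted above.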

Results and discussion

Model development

To develop the ELM models, we used Matlab with the basic ELM package (with randomly generated hidden nodes and random neurons) developed by Huang’s research group (ELM source codes). The number of hidden nodes was varied from 2 to 50 in order to find the model with the lowest RMSE in testing. Prediction models for heat collection rates and heat loss coefficients were developed separately. Of the data groups, 85 % were set as the training set; to validate the model, the remaining 15 % were set as the testing set, to test whether the model is effective in field measurements. Models were trained and tested 20 times with different compositions of the training and testing sets, under the same proportions (85 and 15 %, respectively). The RMSEs were obtained from Eq. 8:
$${\text{RMSE}} = \sqrt {\frac{{\sum\nolimits_{i = 1}^{n} {(Z_{i} - O_{i} )^{2} } }}{{n_{tot} }}}$$
(8)
where \(Z_{i}\) is the predicted value, \(O_{i}\) is the actual value and \(n_{tot}\) is the number of tested samples.
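The evaluation procedure above (repeated random 85/15 splits, averaging the testing RMSE of Eq. 8 over 20 runs) can be sketched as follows. This is a self-contained illustrative Python version, not the Matlab code actually used; the helper names and the inlined basic-ELM fit are assumptions for illustration:

```python
import numpy as np

def rmse(z, o):
    """Eq. 8: root mean square error between predicted z and actual o."""
    z, o = np.asarray(z, dtype=float), np.asarray(o, dtype=float)
    return float(np.sqrt(np.mean((z - o) ** 2)))

def average_test_rmse(X, y, L, n_repeats=20, train_frac=0.85, seed=0):
    """Average testing RMSE of a basic ELM over repeated random splits."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_repeats):
        idx = rng.permutation(len(X))                  # new random split each run
        n_train = int(train_frac * len(X))
        tr, te = idx[:n_train], idx[n_train:]
        A = rng.uniform(-1, 1, (X.shape[1], L))        # random hidden layer
        b = rng.uniform(-1, 1, L)
        H_tr = 1.0 / (1.0 + np.exp(-(X[tr] @ A + b)))  # sigmoid activations
        beta = np.linalg.pinv(H_tr) @ y[tr]            # least-squares output weights
        H_te = 1.0 / (1.0 + np.exp(-(X[te] @ A + b)))
        scores.append(rmse(H_te @ beta, y[te]))
    return float(np.mean(scores))
```

Sweeping L over range(2, 51) and keeping the value with the lowest average testing RMSE mirrors the hidden-node search from 2 to 50 described above.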
Table 2 shows the selected results of the ELMs and the previous comparable models (Liu et al. 2015a); the RMSEs shown are averages over the 20 rounds of training and testing. The best ELM for heat collection rates has 31 hidden nodes, with an average testing RMSE of 0.30; the best ELM for heat loss coefficients has 5 hidden nodes, with an average testing RMSE of 0.67. The ELM thus gives very good predictions for both heat collection rates and heat loss coefficients. More importantly, although its average RMSE in predicting heat collection rates is slightly higher than those of the SVM and the MLFN with 12 nodes (MLFN-12), its average RMSE in predicting heat loss coefficients is lower than those of the previous machine learning methods, which indicates that the ELM can reduce the prediction errors of heat loss coefficients. In practical applications, even this small reduction in RMSE can significantly influence the evaluation and business of solar water heaters. Therefore, the ELM can rationally be considered a good alternative machine learning model for predicting the heat collection rates and heat loss coefficients of water-in-glass evacuated tube solar water heaters. For further discussion, two representative testing results are shown in Fig. 2: the predicted heat collection rates from the ELM are in good agreement with the actual values (Fig. 2a). Although there is a deviation in the prediction of heat loss coefficients (Fig. 2b) when the actual values are lower or higher than 10 W/(m3K), this deviation reflects the natural distribution of the data, because most heat loss coefficients of water-in-glass evacuated tube solar water heaters cluster around 10 W/(m3K).
Table 2

Prediction results of ELMs and previous machine learning models for heat collection rates and heat loss coefficients

| Model | Property predicted | Average RMSE in testing |
| --- | --- | --- |
| ELM (31 nodes) | Heat collection rate | 0.30 |
| SVMa | Heat collection rate | 0.29 |
| GRNNa | Heat collection rate | 0.33 |
| MLFN (12 nodes)a | Heat collection rate | 0.14 |
| ELM (5 nodes) | Heat loss coefficient | 0.67 |
| SVMa | Heat loss coefficient | 0.73 |
| GRNNa | Heat loss coefficient | 0.71 |
| MLFN (6 nodes)a | Heat loss coefficient | 0.73 |

aThese results were extracted from Liu et al. (2015a)

Fig. 2

Prediction results for a heat collection rates and b heat loss coefficients using ELMs

Conclusions

In practical applications of water-in-glass evacuated tube solar water heaters, even a slight reduction of the RMSE for predicting heat loss coefficients is a crucial improvement, because in practical production and measurement, technicians usually deal with thousands of water heaters in a production period, so a slight increase in prediction accuracy may avoid a considerable number of measurement mistakes. To improve the prediction of heat loss coefficients, this short report presents an alternative method for measuring the heat collection rates and heat loss coefficients of water-in-glass evacuated tube solar water heaters using the ELM. Results show that the ELMs give good predictions for both heat collection rates and heat loss coefficients compared to our previous study using SVM, GRNN and MLFNs. This study shows that Matlab with the basic ELM package can give precise predictions of heat collection rates and heat loss coefficients with a very convenient model development process. Also, owing to its algorithm, the ELM can dramatically reduce the required training time (Huang et al. 2004, 2006; Huang 2014), which is usually seen as one of its important advantages for practical applications. In future studies, we will focus on developing a complete system combining different machine learning models, taking both RMSEs and required training times into further consideration.

Notes

Abbreviations

ELM: 

extreme learning machine

ANNs: 

artificial neural networks

SWH: 

solar water heaters

RMSE: 

root mean square error

SVM: 

support vector machine

TCD: 

tube center distance

Temp.: 

temperature (°C)

HCR: 

heat collection rate (MJ/m2)

HLC: 

heat loss coefficient [W/(m3K)]

Z i

the predicted value [MJ/m2 or W/(m3K)]

O i

the actual value [MJ/m2 or W/(m3K)]

n tot

the number of tested samples (no unit)

MLFN: 

multilayer feed-forward neural network

GRNN: 

general regression neural network

Declarations

Authors’ contributions

ZL, XZ and GJ measured the data. HL, XT and KC developed the machine learning models. All the authors participated in the manuscript writing. This work was finished before Mr. HL entered The University of Texas at Austin. All authors read and approved the final manuscript.

Acknowledgements

This work was supported by the Fundamental Research Funds for the Central Universities No. 2015MS108.

Competing interests

The authors declare that they have no competing interests.

Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Power Engineering, School of Energy, Power and Mechanical Engineering, North China Electric Power University
(2)
College of Chemistry, Sichuan University
(3)
College of Mathematics, Sichuan University
(4)
National Center for Quality Supervision and Testing of Solar Heating Systems (Beijing), China Academy of Building Research
(5)
School of Software, Xiamen University
(6)
School of Computing, Informatics, Decision Systems Engineering (CIDSE), Ira A. Fulton Schools of Engineering, Arizona State University

References

  1. Çomaklı K, Çakır U, Kaya M, Bakirci K (2012) The relation of collector and storage tank size in solar heating systems. Energy Convers Manag 63:112–117View ArticleGoogle Scholar
  2. GB/T 4271-2007 (2011) Test methods for the thermal performance of solar collectors. Standard Press, Beijing (in Chinese) Google Scholar
  3. Govind NK, Shireesh BK, Satanu B (2009) Optimization of solar water heating system through water replenishment. Energy Convers Manag 50:837–846View ArticleGoogle Scholar
  4. Huang GB (2014) An insight into extreme learning machines: random neurons, random features and kernels. Cognit Comput 6:376–390View ArticleGoogle Scholar
  5. Huang GB, Zhu QY, Siew CK (2004) Extreme learning machine: a new learning scheme of feedforward neural networks. In: Proceedings of IEEE international joint conference on neural networks, 2004, vol 2Google Scholar
  6. Huang GB, Zhu QY, Siew CK (2006) Extreme learning machine: theory and applications. Neurocomputing 70:489–501View ArticleGoogle Scholar
  7. Kalogirou S (2003) The potential of solar industrial process heat applications. Appl Energy 76:337–361View ArticleGoogle Scholar
  8. Lin WM, Chang KC, Liu YM, Chung KM (2012) Field surveys of non-residential solar water heating systems in Taiwan. Energies 5:258–269View ArticleGoogle Scholar
  9. Liu ZH, Hu RL, Lu L, Zhao F, Xiao HS (2013) Thermal performance of an open thermosyphon using nanofluid for evacuated tubular high temperature air solar collector. Energy Convers Manag 73:135–143View ArticleGoogle Scholar
  10. Liu Z, Li H, Zhang X, Jin G, Cheng K (2015a) Novel method for measuring the heat collection rate and heat loss coefficient of water-in-glass evacuated tube solar water heaters based on artificial neural networks and support vector machine. Energies 8:8814–8834View ArticleGoogle Scholar
  11. Liu Z, Liu K, Li H, Zhang X, Jin G, Cheng K (2015b) Artificial neural networks-based software for measuring heat collection rate and heat loss coefficient of water-in-glass evacuated tube solar water heaters. PLoS ONE 10(12):e0143624. doi:https://doi.org/10.1371/journal.pone.0143624 View ArticleGoogle Scholar
  12. Mekhile S, Saidur R, Safari A (2011) A review on solar energy use in industries. Renew Sustain Energy Rev 15:1777–1790View ArticleGoogle Scholar
  13. Morrison GL, Tran NH, McKenzie DR, Onley IC, Harding GL, Collins RE (1984) Long term performance of evacuated tubular solar water heaters in Sydney, Australia. Sol Energy 32:785–791View ArticleGoogle Scholar
  14. Pei G, Li G, Zhou X, Ji J, Su Y (2012) Comparative experimental analysis of the thermal performance of evacuated tube solar water heater systems with and without a mini-compound parabolic concentrating (CPC) reflector (C < 1). Energies 5:911–924View ArticleGoogle Scholar
  15. Shah LJ, Furbo S (2007) Theoretical flow investigations of an all glass evacuated tubular collector. Sol Energy 81:822–888View ArticleGoogle Scholar
  16. ELM source codes: http://www.ntu.edu.sg/home/egbhuang/elm_codes.html

Copyright

© The Author(s). 2016