Quantification of the effects of fur, fur color, and velocity on Time-of-Flight technology in dairy production
SpringerPlus volume 4, Article number: 144 (2015)
Abstract
With increasing herd sizes, camera-based monitoring solutions rise in importance. 3D cameras, for example Time-of-Flight (TOF) cameras, measure depth information. This additional information (3D data) could be beneficial for monitoring in dairy production. In previous studies regarding TOF technology, only standing cows were recorded to avoid motion artifacts. Therefore, necessary conditions for a TOF camera application in dairy cows are examined in this study. For this purpose, two cow models with plaster and fur surface, respectively, were recorded at four controlled velocities to quantify the effects of movement, fur color, and fur. Comparison criteria concerning image usability, pixelwise deviation, and precision in coordinate determination were defined. Fur and fur color showed large effects (η²=0.235 and η²=0.472, respectively), which became even more considerable when the models were moving. The velocity of recorded animals must therefore be controlled when using TOF cameras. As another main result, body parts that lie in the middle of the cow model’s back can be determined irrespective of velocity or fur. With this in mind, further studies may obtain sound results using TOF technology in dairy production.
Background
Multidisciplinary approaches and technological solutions will be characterizing concepts in the agricultural science of the next decade. Especially solutions to monitor animal health in terms of body condition changes and lameness gain more and more importance, as these are meaningful issues in herd health management and herd productivity (Collard et al. 2000; Booth et al. 2004). Several camera-based studies in recent years have reached high rates of correct classification in lameness detection. In (Song et al. 2008; Pluk et al. 2012), and (Poursaberi et al. 2011), walking cows were recorded using digital 2D cameras in side view position, and methods concerning hooves, legs’ angles, and back posture, respectively, were used to evaluate the cows’ gait. Moreover, various 2D-camera-based studies on automated body condition scoring have been presented. In (Azzaro et al. 2011), cow shapes were reconstructed using linear and polynomial kernel principal component analysis and the body condition score (BCS) was estimated. BCS prediction models based on five anatomical points were presented in (Bercovich et al. 2012).
Segmentation is always a difficult part of preprocessing when 2D digital images are used (Hertem et al. 2013), because changes in light conditions and scenery affect segmentation algorithms and complicate the definition of a common image background for all pictures. For this reason, thermal images were considered for BCS determination in (Halachmi et al. 2013): BCS was assessed by fitting a parabola to the cow shape, and full automation was reached. 3D cameras are another approach to overcome segmentation problems. As the pixels’ relative distances from the camera are known, the separation between foreground and background is easier. Furthermore, the usage of 2D data forces the projection of a three-dimensional scenery onto a plane. Objects and their movement through three-dimensional space can only be described accurately when spatial properties like distances diagonal or parallel to the camera’s line of sight are considered. Consequently, in (Krukowski 2009) images from a Time-of-Flight (TOF) 3D camera were analyzed with regard to BCS determination: the rear view of dairy cows in standstill was recorded with a manually guided camera. In (Salau et al. 2014) and (Weber et al. 2014), a TOF-based system with automated calibration, animal identification, and body trait gathering was introduced; the study was able to estimate backfat thickness (BFT) using the characteristics extracted from the depth images. A different type of 3D camera was used in (Viazzi et al. 2014) and (Hertem et al. 2014). The Microsoft Kinect sensor (Microsoft 2010) works with the 3D measurement principle “Structured Light” (Fofi et al. 2004). The Kinect was used for lameness detection via extraction of the back’s posture in (Viazzi et al. 2014), and algorithms and results were compared to those obtained from 2D video recordings as presented in (Poursaberi et al. 2011). As the Kinect camera’s usage turned out to be promising, the algorithm was improved and the classification performance optimized in (Hertem et al. 2014).
Digital cameras are prone to error when used outdoors or in a barn environment because of sunlight conditions, dirt, fur-covered surfaces, and the animals’ movement. For a successful application of 3D cameras in monitoring solutions, their sensitivity towards fur, different fur colors, and animal movement should be analyzed. During data collection for (Weber et al. 2014), it was found that fur and fur color changes cause imprecise TOF depth measurements. In addition, evaluations needed to be restricted to recordings of standing cows, as motion artifacts occurred. The dependence on the projected infrared pattern causes some limitations of “Structured Light” depth measurement for its part. Depth values can only be calculated from constellations of light dots, not from a single dot, which causes difficulties in measuring thin objects (Lau 2013). Additionally, no depth value can be calculated between the light dots, which leads to a coarser depth resolution with increasing distance from the camera (Andersen et al. 2012). Furthermore, (Hansard et al. 2012) stated that material properties strongly correlate with depth accuracy, and that both measurement principles had difficulties with various surfaces. This study did not compare the capabilities of Kinect and TOF depth sensors, because there have been detailed publications on this (e.g., (Langmann et al. 2012), where a TOF camera with a sensor similar to that used in the SR4K was studied). The next generation (Microsoft 2014) of the Microsoft Kinect depth sensor is indeed a TOF camera, but it was not available for data collection during this study. Therefore, the present study quantified the quality loss due to fur (color) and movement in TOF camera recordings. Indoor recordings of cow models were used to eliminate the effect of sunlight, and the software described in (Salau et al. 2014) was applied to them. The aim was to create a basis for a TOF camera application in moving dairy cows.
Results
All the criteria showed the same differences and significant effects, independently of whether they were extracted from the original or the mirrored images (for explanations on the mirroring see section ‘Material and methods’, ‘Comparison criteria and statistical methods’). Only the data extracted from the original images is presented.
Proportion of high quality images
As the ratio of high quality images (N _{velocity}) to recorded images (C _{velocity}), the HQI-ratio (section ‘Material and methods’, ‘Comparison criteria and statistical methods’, Proportion of high quality images) served as a measure for the usability of the recorded images:

HQI-ratio_{velocity} = N _{velocity} / C _{velocity}
Table 1 presents the numbers of recordings, the numbers of images that passed the quality tests, and the HQI-ratios for both models and all velocities.
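The relative decreases described below follow directly from the stated HQI-ratios. A minimal sketch (the fur-covered model’s standstill HQI-ratio is assumed to be 1.00 here, consistent with the reported 66% drop to 0.34; the exact Table 1 value may differ):

```python
import numpy as np

# HQI-ratios per velocity (0, 10, 20, 30 cm/s); plaster values from the text,
# the fur-covered model's standstill value of 1.00 is an assumption.
hqi_plaster = np.array([0.87, 0.68, 0.66, 0.62])
hqi_fur = np.array([1.00, 0.34, 0.06, 0.015])

def relative_drops(hqi):
    """Relative decrease of the HQI-ratio between consecutive velocity steps."""
    return (hqi[:-1] - hqi[1:]) / hqi[:-1]

print(np.round(relative_drops(hqi_plaster), 2))  # [0.22 0.03 0.06]
print(np.round(relative_drops(hqi_fur), 2))      # [0.66 0.82 0.75]
```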
For the plaster cast, the most significant decrease happened during the transition from standstill to movement, where the HQI-ratio dropped by ≈22% from 0.87 to 0.68. With the acceleration from 10 cm/s to 20 cm/s, the HQI-ratio dropped from 0.68 to 0.66, a decrease of ≈3%. At the final velocity of 30 cm/s, the HQI-ratio fell by a further ≈6% to 0.62. The HQI-ratio of the fur-covered model dropped by 66% when the model started to move. The subsequent acceleration caused additional decreases in the HQI-ratio of ≈82% (from 0.34 to 0.06) when speeding up from 10 cm/s to 20 cm/s, and of 75% (from 0.06 to 0.015) when the final velocity of 30 cm/s was set. The polynomials of degree 2 (P _{plaster}, P _{fur}) and the Gaussian exponential functions (g _{plaster}, g _{fur}) that fit the vectors (HQI-ratio_{0}, HQI-ratio_{10}, HQI-ratio_{20}, HQI-ratio_{30}) best in a least squares sense are given by
and
for the plaster cast and the fur-covered model, respectively. All fits had a single degree of freedom; the other goodness-of-fit statistics are stated in brackets behind the approximating functions. The Gaussian exponential approximation of the plaster cast’s HQI-ratio (Equation 4) shows considerably inferior goodness-of-fit statistics compared to all other approximations: its R ^{2} value of 0.83 and RMSD =0.078 stand against R ^{2} values of 0.99 and RMSD ≤0.0432. Both approximations of the fur-covered model’s HQI-ratios were suitable according to the goodness-of-fit statistics, but the Gaussian exponential fit had a three times smaller root-mean-square deviation. The polynomial fit (dotted purple line in Figure 1) shows a local minimum between the original values belonging to 20 cm/s and 30 cm/s. All approximations are displayed in Figure 1; for both models, the inferior one is drawn as a dotted line.
Pixelwise differences in standstill
For both cow models, the fluctuation criterion SumDiff and the pixelwise calculated standard deviation pwStd (section ‘Material and methods’, ‘Comparison criteria and statistical methods’, Pixelwise differences in standstill) showed significant differences in medians between pixels belonging to “Interior” and “Boundary” (Table 2). The effect sizes for the criterion pwStd exceeded those for SumDiff. According to (Cohen 1988), effect sizes in SumDiff were small (η²=0.013 for the plaster cast and η²=0.017 for the fur-covered model), while effect sizes for pwStd were medium (η²=0.111 for the plaster cast and η²=0.096 for the fur-covered model). The cow model significantly affected both criteria within both regions. The effect could be considered large within “Interior” (η²=0.232 with SumDiff, η²=0.235 with pwStd) but very small within “Boundary” (η²=0.006). Additionally, the fur color had a significant effect on both criteria (Table 3), with large effect sizes (η²=0.472).
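The two fluctuation criteria and the η² effect sizes reported throughout this section can be sketched as follows. The depth stack is synthetic, and the exact form of SumDiff (Equation 6) is not reproduced in this excerpt, so a sum of absolute frame-to-frame differences is assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stack of 50 standstill depth images (144 x 176), in meters.
stack = 1.28 + 0.005 * rng.standard_normal((50, 144, 176))

# Assumed form of SumDiff: per-pixel sum of absolute differences
# between consecutive frames.
sum_diff = np.abs(np.diff(stack, axis=0)).sum(axis=0)

# pwStd: pixelwise standard deviation over the stack.
pw_std = stack.std(axis=0)

def eta_squared(groups):
    """Effect size eta^2 = SS_between / SS_total for a one-way grouping,
    e.g. pixel values grouped into "Interior" and "Boundary"."""
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_total = ((all_vals - grand) ** 2).sum()
    return ss_between / ss_total
```

Fully separated groups give η² close to 1, identical group means give 0, matching the small/medium/large interpretation after (Cohen 1988) used above.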
Precision of coordinate determination
Table 4 shows the development of the precision criterion RpV with increasing velocity for every considered body part and both models (section ‘Material and methods’, ‘Comparison criteria and statistical methods’, Precision of coordinate determination). Additionally, the goodness-of-fit statistics of the polynomial approximations of the vectors (RpV _{0}, RpV _{10}, RpV _{20}, RpV _{30}) are given. Except for the fit to the RpV values belonging to BB30 measured with the fur-covered model, the coefficients of determination range from R ^{2}=0.92 to R ^{2}=0.98. The RMSD varies between 0.001 and 0.018 for the plaster cast and between 0.092 and 0.998 for the fur-covered model, again excepting the point BB30. The goodness-of-fit statistics for BB30 measured with the fur-covered model are noticeable, as RMSD =0.002 and R ^{2}=0.57 are considerably lower than the values of the other fits belonging to the fur-covered model; the R ^{2} value is also considerably lower than the corresponding values of all other approximations. The quadratic coefficients of the RpV approximations belonging to the plaster cast were close to zero (median_{plaster}= −0.005); for the fur-covered model they ranged from 0.05 to 1.15 (median_{fur}=0.41). The imprecision criterion RpV grew significantly (p=0.05) faster with increasing velocity for the fur-covered model. The size of the model’s effect was very large (η²=0.846).
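The linear-versus-quadratic distinction drawn from the RpV fits can be illustrated numerically. Equation 7 itself is not reproduced in this excerpt; the RpV vectors below are invented solely to mimic the two reported growth patterns:

```python
import numpy as np

velocities = np.array([0.0, 10.0, 20.0, 30.0])   # cm/s

# Invented RpV values mimicking the reported behavior:
rpv_plaster = np.array([0.30, 0.35, 0.41, 0.46])  # roughly linear growth
rpv_fur = np.array([0.10, 0.60, 2.00, 4.30])      # roughly quadratic growth

# The leading coefficient of a degree-2 fit separates the two cases:
quad_plaster = np.polyfit(velocities, rpv_plaster, deg=2)[0]
quad_fur = np.polyfit(velocities, rpv_fur, deg=2)[0]

print(abs(quad_plaster) < 1e-3)   # True: quadratic coefficient close to zero
print(quad_fur > 1e-3)            # True: clearly quadratic growth
```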
Discussion
This study provided four measures to quantify the effects of fur (in contrast to a homogeneous plaster surface), fur color, and velocity.
Proportion of high quality images (HQI-ratio)
Both surface materials showed a loss in image quality, measured via the HQI-ratio, due to movement. This was to be expected, as TOF cameras are prone to motion artifacts. As explained in section ‘Material and methods’, ‘Time-of-Flight Technology’, the depth values were calculated using four signals S _{1},…,S _{4}. Motion artifacts occur when objects move significantly during the acquisition of S _{1},…,S _{4} (Hansard et al. 2012).
However, the decrease in image quality differed significantly between the models. In this study, no velocities greater than 30 cm/s were considered; at such speeds, no images of the fur-covered model would have passed the quality tests implemented in the software (see section ‘Material and methods’, ‘Software’). Even at 30 cm/s, only four usable images remained for that model, as can be seen from Table 1. It was not expected that usable images of the model at a cow’s assumed normal walking pace of about 111 cm/s (≈4 km/h) could be acquired. To quantify the differences, approximating functions for the vectors (HQI-ratio_{0}, HQI-ratio_{10}, HQI-ratio_{20}, HQI-ratio_{30}) were determined. As the velocity was constantly increased by 10 cm/s per step, a quadratic behavior was to be expected, and approximations of the form α·x ^{2}+β·x+γ were therefore calculated. The HQI-ratio values related to the fur-covered model (Table 1, rightmost column) indicated a faster decrease. Therefore, the behavior of the HQI-ratio_{velocity} vectors was additionally approximated by a Gaussian exponential function \(K\cdot\exp\left(-\left(\frac{x-L}{M}\right)^{2}\right)\), and the goodness-of-fit statistics of the approximating functions were compared. The minimal value of the polynomial fit to the fur-covered model’s HQI-ratios (Equation 3) was lower than the value for 30 cm/s. The polynomial approximation’s goodness-of-fit statistics were, nevertheless, quite as suitable as those belonging to the exponential approximation (Equation 5). This fact and the position of the polynomial’s minimum between the third and last original value could be explained by the limited number of HQI-ratio values that had been considered: the approximation model had no original value beyond 30 cm/s to predict the decay, but the course could be described very well within the smaller velocities. Looking at the plaster cast’s approximations, the polynomial (Equation 2) is superior to the exponential fit (Equation 4).
This was mainly caused by the HQI-ratio belonging to 10 cm/s, as it is considerably smaller than the HQI-ratio in standstill but nearly equal to the HQI-ratio calculated for 20 cm/s. On the one hand, an unobserved effect during the recording of the plaster cast at velocity 10 cm/s might have caused this outlier. On the other hand, the image quality might show a polynomial instead of an exponential decrease with increasing velocity due to the homogeneity of the plaster cast’s surface. The fact that the model was moving at all seemed more meaningful than the actual velocity; a considerable number of high quality images could be gathered even at the highest considered speed. With the fur-covered model, on the contrary, every step in acceleration caused a substantial additional loss in image quality. This indicated that the velocity would have to be kept as low as possible when moving cows are to be recorded with the SR4K.
To quantify the effect of the surface material in motion, coefficients of approximations of the same type had to be compared between models. The Gaussian exponential approximation for the plaster cast should not be used in this comparison, because its goodness-of-fit statistics were clearly inferior. Concerning the first acceleration steps, the degree 2 polynomials were good fits for both models. As the coefficient of the x ^{2} term has the most impact on a polynomial’s growth, the much faster decrease caused by the fur-covered surface in contrast to the plaster surface could be quantified by the quotient of the quadratic coefficients of P _{fur} and P _{plaster}: \(\frac {0.1532}{0.0368}\approx 4.16\).
Pixelwise differences in standstill (SumDiff, pwStd)
SumDiff (Equation 6) and pwStd were measures of pixelwise deviation in depth values. As only recordings in standstill had been used for their calculation, these comparison criteria were independent of velocity and allowed analysis of the differences between models caused strictly by the surface material.
Mixed phases are produced when infrared light with different phase shifts is observed by one pixel. As an implication, the depth values are calculated from a superposition of multiple reflected signals. Such multipath errors had been expected to be a problem at the cow models’ boundaries, as this error generally increases as the object surface’s normal deviates from the optical axis of the camera (Hansard et al. 2012). Therefore, the cow area was split into the regions “Boundary” and “Interior” to reach better comparability. The grouping by region within models and the grouping by model within regions affected both criteria significantly. Especially the size of the model effect within “Interior” was large (η²=0.23). This could be interpreted as a quantification of the effect of fur surface on TOF depth measurement precision: due to the structure of fur, an augmented refraction of light occurs, and less accurate TOF depth measurement was to be expected. Pixelwise deviation increased at the edge of the cow model’s area. Within “Boundary”, only a small model effect could be observed. A plausible explanation was that for both models the depth measurement within “Boundary” was already less accurate due to mixed phases, and therefore the surface structure had hardly any impact, whereas the accurate depth measurement within “Interior” was strongly affected by the fur.
Additionally, black and white fur were distinguished within “Interior” of the fur-covered model. The fur color also had a significant effect on both criteria, and the effect sizes were very large (η²=0.472), probably caused by the different absorption coefficients of black and white fur. The absorption coefficient is the quotient of the electromagnetic radiation a body absorbs and the electromagnetic radiation it is exposed to; it ranges between 0 and 1. The exact absorption coefficients for white and black fur were not determined in this study, but assuming a higher absorption coefficient for black fur was reasonable: for example, surfaces of carbon black and white marble have absorption coefficients of ≈0.96 and ≈0.46, respectively (Baehr and Stephan 2004). The infrared signal reflected from the black fur had lost more intensity when it returned to the sensor inside the TOF camera than the signal reflected from the white fur (MESAImaging 2013a); therefore, the depth measurement varied in quality. As the fur needed to be glued to the model to avoid measurements becoming unrepeatable due to displacements of the coat, only one coat was tested. The effect sizes might depend on the specific coat texture of this real cow fur. Then again, using different coats might have caused difficulties in distinguishing between the effects of the animal and the fur color.
As pwStd is based on quadratic differences, in contrast to the absolute differences used to calculate SumDiff, larger differences in depth value gain more weight in pwStd than in SumDiff. That might explain the smaller differences in medians and the weaker effects for SumDiff compared to pwStd when comparing the models or the regions.
Smoothing the images was not considered in this study, as the effects on the original recordings had been of interest. Specific smoothing could be a possibility in an image processing application to handle the differences between black and white fur, at the risk of losing information about the surface shape.
Precision of coordinate determination (RpV)
RpV (Equation 7) was a measure of imprecision concerning the determination of X-coordinates. The original software, applied to cows in an electronic feeding dispenser, had shown a 1.5% error rate (Salau et al. 2014) in the detection of ischial tuberosities, dishes of the rump, and tail. RpV had been calculated for all velocities with regard to the automatically determined X-coordinates of these five body parts and additionally for BB30. In both cases RpV rose while the models accelerated. However, for the plaster cast only a linear growth could be observed, as the quadratic coefficients were all close to zero, whereas RpV increased quadratically for the fur-covered model. The fur therefore affected the loss of precision due to velocity very strongly (η²=0.846). Indeed, the imprecision in coordinate determination in standstill was higher for the plaster cast than for the fur-covered model, but differences in the geometrical shape of the models could not be excluded as reasons for the more erroneous coordinate determination concerning the plaster cast. Notably, BB30 showed not only the smallest RpV values for all velocities in both models, but also a large difference in quadratic coefficients compared to the other body parts for the fur-covered model. This indicated that this body part could be determined most accurately and exhibited the least loss of precision due to velocity. A reason might be that, of all considered body parts, only BB30 lay in “Interior” instead of “Boundary”, where the TOF depth measurement was more reliable. It has to be taken into account that the coefficient of determination for the quadratic polynomial approximation corresponding to BB30 on the fur-covered model was inferior compared to the approximations belonging to the other body parts; it could be questioned whether the calculated quadratic coefficient was meaningful.
Discussing TOF usage
In dairy production, fur surfaces have to be dealt with, and the fur color cannot be influenced. It had to be analyzed how the application of TOF technology could lead to dependable results. Dairy cows’ BFT was successfully estimated from TOF recordings in (Weber et al. 2014), with the limitation that cows were only recorded in standstill. Furthermore, traits were only extracted from one-dimensional sections through the recorded cow surfaces and not from two-dimensional areas on the surfaces, because differences in depth measurement between black and white fur could be corrected in a more controlled way when only one dimension was considered. Principal descriptors for, e.g., body condition scoring are located in the tail head area (Ferguson et al. 1994), where the effects of fur and velocity turned out to be strong. Thus, it had to be expected that assessing BCS or BFT from TOF recordings of moving animals would be erroneous. The restriction of analyzing traits from “Boundary” only in recordings collected during feeding or milking is a serious limitation for a monitoring system. However, the effects of fur and velocity were noticeably smaller in “Interior”, hence the TOF camera might be applicable for the determination of the backbone in moving cows and for lameness detection via back posture analysis as in (Hertem et al. 2014). Yet it was questionable whether a TOF camera would be a superior choice for dairy farming applications, as the Kinect was cheaper, did not show differences in depth measurement between black and white fur, and produced few motion artifacts. With regard to the latter, it should be mentioned that real-time preprocessing methods to compensate motion artifacts in TOF recordings have been introduced (Hoegg et al. 2013). Considering the effect of fur again, the Kinect’s and SR4K’s performance in measuring stuffed animals, small fur-covered animal models, and other test objects had been examined in (Hansard et al. 2012). It has to be mentioned that synthetic fur’s structure differs from that of real fur, but with both fur-covered test objects the RMSD of depth accuracies between Kinect and SR4K was comparable. The Kinect’s performance over all test objects turned out to be worse than that of the SR4K.
The next generation of the Kinect is a TOF camera, but it is equipped with a novel image sensor (Lau 2013). Every pixel is divided in half, the pixel halves absorb reflected light alternately, and the absorbing time of the first half is aligned with the pulsing of the laser. During the time the first half is rejecting incoming light, the second half is absorbing, and the laser is off. Consequently, the distribution of received photons among both pixel halves changes with the distance between camera and object and is used to calculate depth values. Dispensing with the control signals S _{1},…,S _{4} could limit motion artifacts, because there are fewer possibilities for the object to move during the measurement. If proportions of light are absorbed by the object’s surface and do not return to the sensor, both pixel halves are affected equally, and the distribution is not altered. This would reduce black-white differences in depth measurement significantly, as they are a consequence of the absorption coefficients of black and white fur. The next Kinect was not available while data collection for this study was carried out, but it promises to be an affordable alternative to both the current Kinect and the current generation of TOF depth sensors.
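The pulsed, two-tap principle described above can be sketched numerically. All numbers are illustrative, not Kinect specifications; the sketch shows why the charge split encodes distance and why uniform surface absorption cancels out of the ratio:

```python
C = 299_792_458.0   # speed of light (m/s)

def depth_from_taps(q_a, q_b, pulse_len_s):
    """Depth from the charge split between the two pixel halves.

    q_a: charge collected while the laser pulse is being emitted,
    q_b: charge collected while the laser is off.
    The returning pulse is delayed by 2d/c, so q_b/(q_a+q_b) grows
    with distance; scaling both charges equally (surface absorption)
    leaves the ratio, and hence the depth, unchanged.
    """
    return 0.5 * C * pulse_len_s * q_b / (q_a + q_b)

pulse = 20e-9                     # 20 ns pulse length (illustrative)
delay = 2 * 1.5 / C               # round-trip delay for a target at 1.5 m
q_a, q_b = pulse - delay, delay   # ideal charge split for that delay

print(depth_from_taps(q_a, q_b, pulse))              # ≈ 1.5
print(depth_from_taps(0.4 * q_a, 0.4 * q_b, pulse))  # ≈ 1.5 despite 60% absorption
```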
Conclusion
This study introduced criteria to quantify the effects of fur and animal movement. The experimental indoor test scenario included two cow models with fur and plaster surface, respectively. According to the criteria concerning pixelwise deviation (SumDiff, pwStd), the effect of the fur surface on TOF measurement, in contrast to a more homogeneous surface, was large. Additionally, crucial differences related to fur color were observed, as criteria medians were two to three times higher with black than with white fur. In any application of TOF cameras, the velocity of the recorded animals needs to be controlled, because in the analysis of moving models the impact of the fur surface became even more decisive: with increasing velocity, the proportion of high quality images (HQI-ratio) dropped four times faster due to fur, and furthermore, the fur caused a quadratic loss of precision in coordinate determination (RpV) in contrast to a linear behavior without fur. The latter was a problem especially at the edge of the cow model’s area, i.e. the tail head region. However, coordinate determination was sound in the middle of the cow’s back and hardly affected by velocity or fur. It was briefly discussed whether TOF depth sensors could compete with the Microsoft Kinect 3D camera in studies dealing with traits from the cow area’s interior, and an outlook on the next Kinect camera generation was given. Its new type of TOF sensor seems to be a noticeable improvement over both current TOF sensors and the Kinect.
Material and methods
Time-of-Flight Technology
The SR4K (Mesa Imaging AG) emits infrared light (modulated signal frequency f = 30 MHz), which is reflected by the object. Four phase control signals S _{1},…,S _{4} with 90 degree phase delays from each other determine the collection of electrons from the detected reflected infrared signal. Let Q _{1},…,Q _{4} represent the amounts of electric charge for S _{1},…,S _{4}, respectively. Using the four phase algorithm, the phase difference t _{ d } is estimated as \(t_{d} = \arctan\frac {Q_{3} - Q_{4}}{Q_{1} - Q_{2}}\). The distance d between object and camera is calculated from the phase shift with the following formula: \(d = \frac {c}{2 f}\cdot\frac {t_{d}}{2\pi }\), where c and f denote the speed of light and the signal frequency, respectively (Hansard et al. 2012). The camera’s range is 0.8 to 5 m (MESAImaging 2013a). Its accuracy of measurement over this calibrated range is 1 cm (for the central 11×11 pixels). It records up to 54 images per second, depending on the exposure time, with a resolution of 176×144 pixels, and has a 43.6° horizontal and 34.6° vertical field of view. The SR4K provides distance and (x,y,z) coordinate data, amplitudes, confidence maps as an estimate of reliability, and SwissRanger streams (srs) consisting of sequences of images as output, according to the user’s choice. The camera was used with default settings.
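The four-phase calculation above can be checked with a small numeric sketch. The assignment of the charges Q_1,…,Q_4 to control signals at 0°, 180°, 90°, and 270° below is one convention consistent with the quoted formula (an assumption, as the paper does not state the ordering), and atan2 replaces arctan to keep the full phase range:

```python
import math

C = 299_792_458.0   # speed of light (m/s)
F = 30e6            # SR4K modulation frequency (Hz)

def distance_from_charges(q1, q2, q3, q4):
    """Four-phase algorithm: t_d = arctan((Q3-Q4)/(Q1-Q2)),
    d = c/(2f) * t_d/(2*pi)."""
    t_d = math.atan2(q3 - q4, q1 - q2) % (2 * math.pi)
    return C / (2 * F) * t_d / (2 * math.pi)

def charges_for_distance(d, amplitude=1.0):
    """Ideal charges for a target at distance d: the reflection is phase
    shifted by phi = 4*pi*d*f/c, sampled by control signals offset by
    0, 180, 90, and 270 degrees (assumed assignment)."""
    phi = 4 * math.pi * d * F / C
    offsets = [0.0, math.pi, 0.5 * math.pi, 1.5 * math.pi]
    return [amplitude * math.cos(phi - o) for o in offsets]

# Recover the camera-to-plate distance used in this study (1.28 m):
print(round(distance_from_charges(*charges_for_distance(1.28)), 3))  # 1.28
# The unambiguous range c/(2f) is about 5 m, matching the camera's 5 m limit:
print(round(C / (2 * F), 2))  # 5.0
```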
Recorded cow models
Two cow models were recorded with an SR4K TOF camera in September 2012 at the Institute for Agricultural Engineering and Animal Husbandry of the Bavarian State Research Center for Agriculture (BSRCfA) in Grub (Germany). Recordings (details in ‘Installation and recording’) of both models were taken from top view in standstill and in motion.
A model of a cow’s lower back was built at BSRCfA from solid, synthetic material using CNC (computer numerical control) carving (width: 0.5 m, length: 0.5 m, height: 0.15–0.22 m). It had a tail, ischial tuberosities, and a lower backbone, but no hipbones, and was not modeled after a real cow (Figure 2, left). A real black and white Holstein-Friesian fur was permanently glued to the model. The fur-covered model was firmly mounted on a board (0.6×0.6 m ^{2}) with wooden beams (0.05×0.05 m ^{2}, height: 0.25 m) at the corners. Since the fur could not be removed from the model without destroying it, no other furs were used for testing. Originally, this model was intended for another purpose within a study related to body condition determination (Weber et al. 2014; Salau et al. 2014); hipbones were not included in the model because they were irrelevant then. The model nevertheless shows most of the points of interest (ischial tuberosities, dishes of the rump, tail, backbone) that could be determined by the software described in (Salau et al. 2014). Therefore, the model was reused in this context.
Later, since no furless version of the model was available, a plaster cast was taken from a Holstein-Friesian cow’s lower back to obtain a portable model of a real cow’s shape as a negative control for fur (Figure 2, right). This was done at the research farm Karkendamm of Christian-Albrechts-University (CAU) in Kiel (Germany). The lower back was greased with petrolatum to protect fur and skin of the animal. Afterwards, this area was uniformly covered with several layers of wet plaster bandages. The covered area included the base of the tail, ischial tuberosities, lower backbone, and lower back; the hipbones were not included. The bandages reached approximately 15 cm down the animal’s side. A blow-dryer was used to speed up the drying before the plaster cast (length: 0.56 m, width: 0.55 m, height: 0.16–0.22 m) was lifted off the cow.
Installation and recording
A metallic frame with two horizontal running rails was built by BSRCfA (length: 3 m, width: 1.04 m, height: 0.8 m). A wooden plate (1.14×1.14 m ^{2}) was placed on the running rails; it could be towed by a rope stretched up to an impeller wheel. The motor could be regulated with a control panel that also displayed the velocity. At the end of the running rails the plate was stopped automatically, and the moving direction could be changed with a switch. The construction was supported by a vertical frame (width: 1.27 m, height: 2.13 m), above whose center line the TOF camera was attached in top view (Figure 3).
Recording took place indoors at BSRCfA to exclude the influences of direct sunlight and insects. In Grub, a 2-core system with 3.43 GB RAM running the recording software (see section ‘Software’) developed at the Institute of Animal Breeding and Husbandry of CAU was used for recording. SwissRanger streams of both models were recorded in standstill and at the controlled velocities 10 cm/s, 20 cm/s, and 30 cm/s. The camera stayed in a fixed position with 1.28 m distance between the sensor and the wooden plate. Additionally, streams of the wooden plate without any model on it were recorded to capture the completely empty scenery.
Software
Originally developed software
At CAU, software was developed to record cows in an electronic feeding dispenser and automatically extract body traits (Salau et al. 2014). The software first calculated scenery information from a number of images of the completely empty scenery. It could then decide automatically whether an image showed a cow’s lower back. These images were segmented and stored for further processing; all others were deleted. Subsequently, the body parts ischial tuberosities, base of the tail, dishes of the rump, hipbones, and backbone were determined automatically. The software tested the segmentation results and the coordinates of body parts directly after their calculation (for details see (Salau et al. 2014)). Images failing any test were deleted.
Necessary software modifications made in this study
The streams recorded from the cow models were used as a virtual camera and analyzed with this software. As the models differed from real cows, some slight modifications to the software were necessary:

1.
The basis for the automated decision that an image showed a cow’s lower back was that the area covered by the cow’s body extended beyond the lower image border. As the models did not reach the image border, a rectangle (Figure 4, middle) was temporarily added to every image to close the gap and prevent all images from being deleted. After successful segmentation the rectangle was removed again (Figure 4, right).

2.
The cow was separated from the background by subtracting the averaged empty scenery and using the height differences between the cow and the floor. Both models reached a maximum height of 0.22 m above the wooden plate (which served as the floor in the present scenario), so the tolerances had to be adapted. In the case of the fur-covered model, the position of the wooden beams (section ‘Material and methods’, ‘Recorded cow models’, Figure 4, left) relative to the model had to be specified manually once, and their removal had to be added to the automatic segmentation.

3.
As hipbones were not included in either model, their automatic determination had to be removed from the software. Instead, the point on the backbone in a 30 pixel radius from the tail (BB30) was determined, in order to include a measurement point in the comparison that was not positioned at the edge of the cow area (Figure 4, right).
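The study's processing was implemented in MATLAB; purely as an illustration, the first two modifications can be sketched in Python/NumPy. All function names, array shapes, and threshold values below are assumptions for the sketch, not the authors' code.

```python
import numpy as np

def segment_foreground(depth, empty_mean, min_height=0.02, max_height=0.25):
    """Sketch of modification 2: subtract the averaged empty scenery and
    keep pixels whose height above the wooden plate lies within tolerance.
    Objects closer to the camera have *smaller* depth values, so the
    height of a pixel is empty_mean - depth.  The tolerances here are
    illustrative; the models reached at most 0.22 m above the plate."""
    height = np.asarray(empty_mean, float) - np.asarray(depth, float)
    return (height >= min_height) & (height <= max_height)

def add_bridge_rectangle(mask):
    """Sketch of modification 1: temporarily extend the foreground from
    its lowest row down to the lower image border, so the test that the
    cow area touches the border passes; removed again after
    segmentation."""
    rows, cols = np.nonzero(mask)
    out = mask.copy()
    out[rows.max():, cols.min():cols.max() + 1] = True
    return out
```

Both operations work on a boolean foreground mask, so the bridge rectangle can be dropped afterwards simply by keeping the original mask.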
Comparison criteria and statistical methods
MATLAB presents the images as matrices with 176 rows and 144 columns. Counting of rows and columns starts at the upper left corner. The vertical midline runs between columns 72 and 73. The algorithms of the software developed in (Salau et al. 2014) process the images row-wise and column-wise from the upper left corner. To exclude this running direction as a reason for left–right differences in the extracted comparison criteria, the analysis was repeated with all images mirrored on the vertical line between columns 72 and 73.
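As a minimal sketch (in Python/NumPy rather than the MATLAB used in the study), mirroring a 176×144 image on the vertical line between columns 72 and 73 amounts to reversing the column order:

```python
import numpy as np

def mirror(img):
    """Mirror on the vertical midline: 1-based column c is swapped with
    column 145 - c, i.e. the column order is simply reversed."""
    return img[:, ::-1]

# synthetic 176x144 'image' with distinct pixel values
img = np.arange(176 * 144).reshape(176, 144)
mirrored = mirror(img)
```

Rerunning the full extraction on the mirrored stream and comparing the criteria then tests whether the row-wise/column-wise running direction biases the results.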
Proportion of high quality images
As the camera stayed in a fixed position during recording, the number of images showing the cow model decreased with increasing velocity (Table 1). These numbers will be called C _{0} for the recording in standstill and C _{10}, C _{20}, and C _{30} for the recordings at 10 cm/s, 20 cm/s, and 30 cm/s, respectively. Both models were recorded for four minutes in standstill. For all velocities, both cow models were recorded passing the camera five times. As explained in section ‘Software’, ‘Originally developed software’, various quality tests had been integrated in the software, and all images failing any of these tests were deleted. The numbers of output images after applying the quality tests will be called N _{0}, N _{10}, N _{20}, and N _{30}. The quotient
\(\text {HQIratio}_{\text {velocity}}=\frac {N_{\text {velocity}}}{C_{\text {velocity}}}\)
(compare Equation 1) is the ratio of High Quality Images in relation to recorded images. For both models the behavior of HQIratio_{velocity} with increasing velocity was analyzed by approximating the vector (HQIratio_{0}, HQIratio_{10}, HQIratio_{20}, HQIratio_{30}) with a quadratic polynomial α∗x ^{2}+β∗x+γ on the one hand and with a Gaussian exponential function \(K*exp\left (-\left (\frac {x-L}{M}\right)^{2}\right)\) on the other hand. For every approximation the root-mean-square deviation RMSD, coefficient of determination R ^{2}, and degrees of freedom were calculated as goodness-of-fit statistics. The RMSD in general is the sample standard deviation between the actually observed values y _{ t } and the values \(\widehat {y}_{t}\) calculated by the approximation, \(\text {RMSD}=\sqrt {\frac {1}{n}*\sum _{t=1}^{n} \left (y_{t}-\widehat {y}_{t}\right)^{2}}\), where n denotes the sample size. For all approximations the MATLAB Curve Fitting Toolbox (The MathWorks Inc 2014a) was used.
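The study used the MATLAB Curve Fitting Toolbox; a rough NumPy-only equivalent is sketched below. The HQI-ratio values are hypothetical, and the Gaussian K·exp(−((x−L)/M)²) is fitted here by a log-transform trick (log y is quadratic in x) rather than by nonlinear least squares — an implementation choice of this sketch, not of the paper.

```python
import numpy as np

def rmsd(y, y_hat):
    """Root-mean-square deviation between observed and fitted values."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def r_squared(y, y_hat):
    """Coefficient of determination R^2."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def fit_gaussian(x, y):
    """Fit K*exp(-((x-L)/M)^2) via log(y) = a*x^2 + b*x + c
    (valid only for y > 0 and a negative leading coefficient a)."""
    a, b, c = np.polyfit(x, np.log(y), 2)
    M = np.sqrt(-1.0 / a)
    L = -b / (2.0 * a)
    K = np.exp(c - a * L ** 2)
    return K, L, M

velocities = np.array([0.0, 10.0, 20.0, 30.0])   # cm/s
hqi = np.array([0.95, 0.80, 0.55, 0.25])         # hypothetical HQI-ratios

coeffs = np.polyfit(velocities, hqi, 2)          # alpha, beta, gamma
quad_hat = np.polyval(coeffs, velocities)
K, L, M = fit_gaussian(velocities, hqi)
gauss_hat = K * np.exp(-((velocities - L) / M) ** 2)
```

With four data points and three free parameters, both models leave one degree of freedom, so the goodness-of-fit statistics remain informative but should be read with caution.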
Pixelwise differences in standstill
Considering only the images in standstill, the pixelwise deviation in depth values was analyzed for both models. For this purpose the pixelwise absolute differences of every two consecutive images were taken, summed up, and divided by the number of summands (N _{0} as defined in section ‘Material and methods’, ‘Comparison criteria and statistical methods’, Proportion of high quality images):
\(\text {SumDiff}=\frac {1}{N_{0}}*\sum _{t} \left |d_{t+1}-d_{t}\right |\), with d _{ t } denoting the pixel’s depth value in image t
(compare Equation 6). This resulted in a matrix containing the value of the criterion SumDiff for every pixel. Additionally, for every pixel the standard deviation (pwStd) in depth values was calculated. While pwStd used the quadratic aberration around the mean depth value, SumDiff captured the variation from image to image, neglecting the mean of the pixelwise depth values.
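The two pixelwise criteria can be sketched compactly in NumPy (the study's implementation was in MATLAB). The stack of standstill depth images is assumed as an array of shape (N, rows, cols), and the sum of absolute consecutive differences is divided by the number of difference terms:

```python
import numpy as np

def sum_diff(frames):
    """SumDiff: pixelwise absolute differences of every two consecutive
    images, summed up and divided by the number of summands."""
    frames = np.asarray(frames, float)        # shape (N, rows, cols)
    diffs = np.abs(np.diff(frames, axis=0))   # shape (N-1, rows, cols)
    return diffs.sum(axis=0) / diffs.shape[0]

def pw_std(frames):
    """pwStd: pixelwise standard deviation of the depth values, i.e. the
    quadratic aberration around each pixel's mean depth value."""
    return np.asarray(frames, float).std(axis=0)
```

Both functions return a matrix of the image size, matching the per-pixel criterion matrices described above.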
Each image could be split into foreground (covered by the cow model) and background (set to zero). Abrupt changes in the distance between the recorded object and the camera led to more possible ways for the infrared light to be reflected and return to the sensor, and thus to less accurate depth measurement. The problem of erratic depth values along steep edges is well known for TOF cameras; compare (Langmann et al. 2012). The pixelwise deviation in depth value was thus expected to be larger for pixels at the edge of the cow area. Visual inspection of depth maps revealed that the main reflections or peaks in depth measurement occurred in an only one to two pixel wide area between background and cow area. Therefore, the foreground was split into the disjoint areas “Boundary” and “Interior” (Figure 5). If a pixel’s neighborhood of radius one intersected with both the foreground and the background, the pixel was considered “Boundary” (425 pixels fur-covered model, 505 pixels plaster cast). If the neighborhood was fully included in the foreground, the pixel was considered “Interior” (7920 pixels fur-covered model, 10903 pixels plaster cast). All images were tested once using this definition of “Boundary”. The effect of different boundaries was not tested. In the analysis of the fur-covered model, “Interior” was additionally partitioned into the disjoint areas “Interior Black” (6362 pixels) and “Interior White” (1558 pixels). A gray scale image of the amplitudes’ map was used to distinguish between black and white fur (Figure 6). All pixels with a gray scale value ≥25 were considered to belong to the white spot.
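The Boundary/Interior split corresponds to a morphological erosion of the foreground mask. In this illustrative NumPy version the "neighborhood of radius one" is taken to be the full 3×3 window — an assumption of the sketch; a strictly Euclidean radius would use only the four edge-neighbors:

```python
import numpy as np

def split_boundary_interior(mask):
    """Split a boolean foreground mask into 'Boundary' (neighborhood
    intersects the background) and 'Interior' (neighborhood fully
    contained in the foreground)."""
    padded = np.pad(mask, 1, mode='constant', constant_values=False)
    interior = mask.copy()
    # erode: a pixel stays interior only if its whole 3x3 window is foreground
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            interior &= padded[1 + dr:padded.shape[0] - 1 + dr,
                               1 + dc:padded.shape[1] - 1 + dc]
    boundary = mask & ~interior
    return boundary, interior
```

By construction the two areas are disjoint and together cover the foreground, matching the partition described above.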
The Wilcoxon rank-sum test is a nonparametric counterpart of the classical t-test. It compares the medians of the sample groups by examining the ranks of the data’s scores within both groups’ observations. The values of SumDiff and pwStd were considerably skewed and thus not normally distributed. Additionally, the grouping into “Boundary” or “Interior” (likewise “Interior Black” and “Interior White”) naturally led to unequal group sizes. Therefore, for both models Wilcoxon rank-sum tests instead of t-tests were performed to examine whether the pixel’s position in “Boundary” or “Interior” had a significant effect. Furthermore, all SumDiff and pwStd data collected from the regions “Interior” and “Boundary” were grouped by model, and Wilcoxon rank-sum tests were performed to examine the effect of the cow model on pixelwise deviation. Concerning the fur-covered model, additional Wilcoxon rank-sum tests (level of significance p = 0.02) were performed to analyze the effect of the fur color. All group medians were calculated. Where significance was found, the ranked data was used to calculate the effect size \(\eta ^{2} = \frac {SS\,due\,to\,grouping\,variable}{total\,sum\,of\,squares\,(SS)}\), which is the proportion of variance in the data explained by the grouping. For all statistical calculations the MATLAB Statistics Toolbox (The MathWorks 2014b) was used.
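The study used MATLAB's Statistics Toolbox; a self-contained sketch of the same two computations — the rank-sum test via the normal approximation (no tie correction, an oversimplification for real depth data) and the rank-based effect size η² — might look like:

```python
import math
import numpy as np

def ranksum_test(x, y):
    """Wilcoxon rank-sum test, normal approximation, ties not corrected."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    data = np.concatenate([x, y])
    ranks = data.argsort().argsort() + 1.0      # ranks 1..n1+n2
    r1 = ranks[:n1].sum()                       # rank sum of group 1
    mu = n1 * (n1 + n2 + 1) / 2.0               # expected rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

def eta_squared(x, y):
    """Effect size on the ranked data: SS due to grouping / total SS."""
    data = np.concatenate([np.asarray(x, float), np.asarray(y, float)])
    ranks = data.argsort().argsort() + 1.0
    groups = [ranks[:len(x)], ranks[len(x):]]
    grand = ranks.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_total = ((ranks - grand) ** 2).sum()
    return ss_between / ss_total
```

Because η² is computed on the ranks rather than on the raw values, it stays meaningful for the skewed SumDiff and pwStd distributions and for unequal group sizes.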
Precision of coordinate determination
The software automatically detected six points: ischial tuberosities, dishes of the rump, tail, and the point on the backbone in a 30 pixel radius from the tail (BB30, Figure 4, right). It was analyzed how strongly velocity affects the precision of coordinate determination. The X-coordinates were detected automatically in standstill and in motion. Within each velocity their deviation was measured using the criterion RpV defined in Equation 7, and these RpV values were compared between velocities. Due to the deviation in depth values (analyzed above with the criteria SumDiff and pwStd), the X-coordinates of body parts were naturally subject to a 1 to 2 pixel fluctuation. As solely the fluctuation due to errors in the automatic determination of body parts was to be considered here, the standard deviation in X-coordinates was not used as criterion in this comparison. Instead, the quotient of the X-coordinates’ range divided by the number of values (RpV: Range per number of Values) was taken as a measure of imprecision:
\(\text {RpV}_{\text {velocity}}=\frac {\max _{t} X_{t}-\min _{t} X_{t}}{N_{\text {velocity}}}\), with X _{ t } denoting the X-coordinate determined in image t
(compare Equation 7) with velocities 0, 10, 20, 30 cm/s. For each of the six considered body parts a vector (RpV _{0}, RpV _{10}, RpV _{20}, RpV _{30}) was calculated using the X-coordinates extracted from both models, respectively. These vectors were approximated with quadratic polynomials α∗x ^{2}+β∗x+γ using the MATLAB Curve Fitting Toolbox (The MathWorks Inc 2014a), and root-mean-square deviations RMSD, degrees of freedom, and coefficients of determination R ^{2} were calculated as goodness-of-fit statistics. The quadratic coefficients α describe the polynomials’ growth behavior. To examine the cow model’s effect on the growth of imprecision, a Wilcoxon rank-sum test was performed on the quadratic coefficients. The medians belonging to each cow model were calculated, and the effect size η ^{2} was determined.
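As a minimal sketch (hypothetical X-coordinate samples, NumPy in place of the Curve Fitting Toolbox), RpV and the quadratic coefficient α describing its growth can be computed as:

```python
import numpy as np

def rpv(x_coords):
    """Range per number of Values: the X-coordinates' range divided by
    the number of automatically determined values at one velocity."""
    x = np.asarray(x_coords, float)
    return (x.max() - x.min()) / x.size

velocities = np.array([0.0, 10.0, 20.0, 30.0])   # cm/s
# hypothetical X-coordinates of one body part at the four velocities
samples = [
    [88, 88, 89, 88, 88, 89, 88, 88],   # standstill: small range
    [88, 89, 90, 88, 89, 90],
    [87, 90, 91, 88],
    [86, 92, 90],
]
rpv_vec = np.array([rpv(s) for s in samples])
alpha, beta, gamma = np.polyfit(velocities, rpv_vec, 2)  # growth behavior
```

Dividing the range by the number of values penalizes outliers in the detected coordinates while discounting velocities at which many images passed the quality tests.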
Except for the recordings in standstill, the models moved vertically through the camera’s field of view, which implies that the Y-coordinates changed from image to image. This introduced several sources of imprecision into the analysis of Y-coordinates, which is therefore excluded from the main article and presented in Additional file 1.
Declaration of adherence to ethical guidelines
The authors declare that the plaster cast was taken strictly following international animal welfare guidelines. The institutions the authors are affiliated with do not have research ethics committees or review boards. The cast was taken in a completely noninvasive manner. The cow was not forced into an unnatural body posture and was fastened for no longer than one hour. Feed was provided during the procedure. No corrosive, burning, unpleasant, extremely hot or cold substances were used.
Abbreviations
 TOF:

Time-Of-Flight, a principle of depth measurement (see (Hansard et al. 2012))
 η ^{2} :

Measure for the size of the effect of grouping data by a certain criterion, calculated from the analysis of variance’s sums of squares (SS): \(\eta ^{2} = \frac {\mathrm {SS\,\,due\,\,to\,\,grouping}}{\mathrm {total\,\,SS}}\)
 SS :

Sum of squares, \(SS=\sum _{t=1}^{n} \left (y_{t}-\widehat {y}_{t}\right)^{2}\), y _{ t } observed values, \(\widehat {y}_{t}\) estimated values, n sample size
 BCS:

body condition score
 BFT:

Back fat thickness
 SR4K:

Swiss Ranger 4000, TOF camera produced by Mesa Imaging AG (MESAImaging 2013b)
 HQIratio:

The quotient of the number of images that passed the quality test by the number of images showing the cow model: HQIratio\(_{\text {velocity}}=\frac {N_{\text {velocity}}}{C_{\text {velocity}}}\)
 P _{plaster}, P _{fur}, g _{plaster}, g _{fur} :

Approximating polynomials and Gaussian exponential functions for HQIratio (Equations 2, 3, 4, and 5)
 RMSD:

Root-mean-square deviation, \(\text {RMSD}=\sqrt {\frac {1}{n}*\sum _{t=1}^{n} \left (y_{t}-\widehat {y}_{t}\right)^{2}}\), y _{ t } observed values, \(\widehat {y}_{t}\) estimated values, n sample size
 R ^{2} :

Coefficient of determination
 SumDiff:

Sum of pixelwise absolute differences of every two consecutive images, divided by the number of summands (Equation 6)
 pwStd:

pixelwise standard deviation
 RpV:

Quotient of the Xcoordinates’ range divided by the number of values (Equation 7)
 BB30:

Point on the cow models’ backbones in a radius of 30 pixels from the tail
 t _{ d } :

Phase delay
 S _{1},…,S _{4} :

Phase control infrared signals emitted by SR4K to estimate phase delay t _{ d }
 MHz:

Megahertz
 Q _{1},…,Q _{4} :

Amount of electrical charge for S _{1},…,S _{4}, respectively
 d :

Distance between object and TOF camera
 c :

Speed of light
 f :

Modulated signal frequency used by SR4K
 srs:

Swiss Ranger stream, data format generated by the SR4K
 BSRCfA:

Bavarian State Research Center for Agriculture in Grub (Bayerische Landesanstalt für Landwirtschaft 2015) (Germany)
 CNC:

Computerized numerical control
 CAU:

ChristianAlbrechtsUniversity Kiel (ChristianAlbrechtsUniversität zu Kiel 2015) (Germany)
 C _{0}, C _{10}, C _{20}, C _{30} :

Number of images showing the cow model recorded at 0,10,20,30 cm/s
 N _{0}, N _{10}, N _{20}, N _{30} :

Numbers of images recorded at 0,10,20,30 cm/s that passed the quality tests
 α∗x ^{2}+β∗x+γ :

(approximating) quadratic polynomial
 \(K*exp\left (-\left (\frac {x-L}{M}\right)^{2}\right)\) :

(approximating) Gaussian exponential function
References
Andersen, M, Jensen T, Lisouski P, Mortensen A, Hansen M, Gregersen T, Ahrent P (2012) Kinect depth sensor evaluation for computer vision applications. Tech. Rep. Technical Report ECETR6, Department of Engineering, Aarhus University, Denmark.
Azzaro, G, Caccamo M, Ferguson J, Battiato S, Farinella G, Guarnera G, Puglisi G, Petriglieri R, Licitra G (2011) Objective estimation of body condition score by modeling cow body shape from digital images. J Dairy Sci 94(4): 2126–2137. http://www.sciencedirect.com/science/article/pii/S0022030211001846.
Baehr, HD, Stephan K (2004) Wärme und Stoffübertragung. 4th edn. Springer, Berlin.
Bayerische Landesanstalt für Landwirtschaft (2015). www.lfl.bayern.de.
Bercovich, A, Edan Y, Alcahantis V, Moallem U, Parmet Y, Honig H, Maltz E, Antler A, Halachmi I (2012) Automatic cow’s body condition scoring. http://www2.atbpotsdam.de/cigrimageanalysis/images/images12/tabla_137_C0565.pdf.
Booth, C, Warnick L, Gröhn Y, Maizon D, Guard C, Janssen D (2004) Effect of Lameness on Culling in Dairy Cows. J Dairy Sci 87(12): 4115–4122. http://www.sciencedirect.com/science/article/pii/S0022030204735547.
ChristianAlbrechtsUniversität zu Kiel (2015). http://www.unikiel.de.
Cohen, J (1988) Statistical power analysis for the behavioral sciences. 2nd ed. Lawrence Erlbaum Associates, Hillsdale.
Collard, B, Boettcher P, Dekkers J, Petitclerc D, Schaeffer L (2000) Relationships between energy balance and health traits of dairy cattle in early lactation. J Dairy Sci 83(11): 2683–2690. http://www.sciencedirect.com/science/article/pii/S0022030200751629.
Ferguson, JD, Galligan DT, Thomsen N (1994) Principal descriptors of body condition score in Holstein Cows. J Dairy Sci 77: 2695–2703. http://www.sciencedirect.com/science/article/pii/S002203029477212X.
Fofi, D, Sliwa T, Voisin Y (2004) A comparative survey of invisible structured light In: SPIE electronic imaging–machine vision applications in industrial inspection XII, 90–97, San Jose, USA. http://fofi.pagespersoorange.fr/Downloads/Fofi_EI2004.pdf.
Halachmi, I, Klopcic M, Polak P, Roberts D, Bewley J (2013) Automatic assessment of dairy cattle body condition score using thermal imaging. Comput Electron Agr 99(0): 35–40. http://www.sciencedirect.com/science/article/pii/S0168169913001907.
Hansard, M, Lee S, Choi O, Horaud R (2012) TimeofFlight Cameras– Principles, Methods and Applications. Springer, London.
Hertem, TV, Alchanatis V, Antler A, Maltz E, Halachmi I, SchlageterTello A, Lokhorst C, Viazzi S, Romanini C, Pluk A, Bahr C, Berckmans D (2013) Comparison of segmentation algorithms for cow contour extraction from natural barn background in side view images. Comput Electron Agr 91(0): 65–74. http://www.sciencedirect.com/science/article/pii/S016816991200275X.
Hertem, TV, Viazzi S, Steensels M, Maltz E, Antler A, Alchanatis V, SchlageterTello AA, Lokhorst K, Romanini EC, Bahr C, Berckmans D, Halachmi I (2014) Automatic lameness detection based on consecutive 3Dvideo recordings. Biosyst Eng 119(0): 108–116. http://www.sciencedirect.com/science/article/pii/S1537511014000142.
Hoegg, T, Lefloch D, Kolb A (2013) RealTime motion artifact compensation for PMDToF images In: TimeofFlight and depth imaging. Sensors, algorithms, and applications, Dagstuhl 2012 seminar on TimeofFlight imaging and GCPR 2013 workshop on imaging new modalities, 273–288.. Springer, Berlin, Heidelberg.
Krukowski, M (2009) Automatic determination of body condition score of dairy cows from 3D images. Master’s thesis, KTH Computer Science and Communication, Stockholm.
Langmann, B, Hartmann K, Loffeld O (2012) Depth camera technology comparison and performance evaluation In: Proceedings of the 1st international conference on pattern recognition applications and methods, 438–444.. SciTePress. doi:10.5220/0003778304380444.
Lau, D (2013) The Science Behind Kinects or Kinect 1.0 versus 2.0. http://www.gamasutra.com/blogs/DanielLau/20131127/205820/The_Science_Behind_Kinects_or_Kinect_10_versus_20.php. Accessed 22 Aug 2014.
MESAImaging (2013a) SR4000 User Manual, version 2.0. http://www.mesaimaging.ch/prodview4k.php. [Download: 18th of March].
MESAImaging (2013b). www.mesaimaging.ch.
Microsoft (2010) PrimeSense Supplies 3DSensing Technology to “Project Natal” for Xbox 360. www.microsoft.com/enus/news/press/2010/mar10/0331primesensepr.aspx. accessed: 2nd of June 2014.
Microsoft (2014) Kinect for Windows. http://www.microsoft.com/enus/kinectforwindows.
Pluk, A, Bahr C, Poursaberi A, Maertens W, van Nuffel A, Berckmans D (2012) Automatic measurement of touch and release angles of the fetlock joint for lameness detection in dairy cattle using vision techniques. J Dairy Sci 95(4): 1738–1748. http://www.sciencedirect.com/science/article/pii/S0022030212001397.
Poursaberi, A, Pluk A, Bahr C, Berckmanns D, Veermaeand I, Kokin E, Pokalainen V (2011) Online lameness detection in dairy cattle using Body Movement Pattern (BMP) In: Intelligent Systems Design and Applications (ISDA) 2011 11th International Conference.. IEEE. doi:10.1109/ISDA.2011.6121743.
Salau, J, Haas J, Junge W, Bauer U, Harms J, Bieletzki S (2014) Feasibility of automated body trait determination using the SR4K TimeOfFlight Camera in Cow Barns. SpringerPlus 3(225). http://www.springerplus.com/content/3/1/225.
Song, X, Leroy T, Vranken E, Maertens W, Sonck B, Berckmans D (2008) Automatic detection of lameness in dairy cattle – Visionbased trackway analysis in cow’s locomotion. Comput Electron Agr 64: 39–44. http://www.sciencedirect.com/science/article/pii/S0168169908001440. [Smart Sensors in precision livestock farming].
The MathWorks Inc (2014a) Curve Fitting Toolbox Users Guide, MATLAB. http://www.mathworks.com/help/pdf_doc/curvefit/curvefit.pdf. http://www.mathworks.de/de/help/curvefit/functionlist.html.
The MathWorks, Inc (2014b) Statistics Toolbox User’s Guide, MATLAB. http://www.mathworks.com/help/pdf_doc/stats/stats.pdf. http://www.mathworks.de/de/help/stats/index.html.
Viazzi, S, Bahr C, Hertem TV, SchlageterTello A, Romanini C, Halachmi I, Lokhorst C, Berckmans D (2014) Comparison of a threedimensional and twodimensional camera system for automated measurement of back posture in dairy cows. Comput Electron Agr 100(0): 139–147. http://www.sciencedirect.com/science/article/pii/S0168169913002755.
Weber, A, Salau J, Haas J, Junge W, Bauer U, Harms J, Suhr O, Schönrock K, Rothfuß H, Bieletzki S, Thaller G (2014) Estimation of backfat thickness using extracted traits from an automatic 3D optical system in lactating Holstein-Friesian Cows. Livest Sci 165(0): 129–137. http://www.sciencedirect.com/science/article/pii/S1871141314001747.
Acknowledgements
We gratefully acknowledge the Federal Office for Agriculture and Food and the Federal Ministry of Food, Agriculture and Consumer Protection for financial support of the project “Entwicklung und Bewertung eines automatischen optischen Sensorsystems zur Körperkonditionsüberwachung bei Milchkühen”. Furthermore, gratitude is expressed to our cooperation partners the Institute for Agricultural Engineering and Animal Husbandry and the Institute for Animal Nutrition and Feed Management of the Bavarian State Research Center for Agriculture and GEA Farm Technologies.
Author information
Authors and Affiliations
Corresponding author
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
JS developed and implemented all image processing procedures and performed the statistical analysis. UB recorded the cow models and delivered feedback during software development in terms of test recordings, statistical quality tests and interim analyses. JHH wrote all parts of the software handling the camera setting, acquisition and saving of data, and automated recording. WJ calculated pixelwise coefficients of determination concerning the camera’s depth values in advance and provided HF cows, and supported the software development with statistical quality tests. JH developed the recording setting in Grub, initiated and supervised the construction of the furcovered model and the framework. GT revised the manuscript and provided valuable food for thoughts. All authors read and approved the final manuscript.
Additional file
Additional file 1
Precision of automatically determined Ycoordinates at different velocities. Supplementary Material to “Quantification of the effects of fur, fur color, and velocity on TimeOfFlight technology in dairy production”, provided as pdf.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Salau, J., Bauer, U., Haas, J.H. et al. Quantification of the effects of fur, fur color, and velocity on Time-Of-Flight technology in dairy production. SpringerPlus 4, 144 (2015). https://doi.org/10.1186/s40064-015-0903-0
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s40064-015-0903-0
Keywords
 Dairy cow
 Automated monitoring
 Timeofflight
 Image processing
 Fur color