Robust watermark technique using masking and Hermite transform

The following paper evaluates a watermarking algorithm designed for digital images that uses a perceptive mask and a normalization process, thus preventing detection by the human eye while ensuring robustness against common processing and geometric attacks. The Hermite transform is employed because it allows perfect reconstruction of the image while incorporating properties of the human visual system; moreover, it is based on derivatives of Gaussian functions. The embedded watermark carries information identifying the owner of the digital image. The extraction process is blind, because it does not require the original image. The following metrics were used to evaluate the algorithm: peak signal-to-noise ratio, the average structural similarity index, normalized cross-correlation, and bit error rate. Several watermark extraction tests were performed against geometric and common processing attacks, which allowed us to determine how many bits of the watermark can be modified while still permitting adequate extraction.

Watermarking has several application areas, among them the identification of the author of multimedia material, as well as content authentication. Hence, watermarks must contain the information needed to help determine digital image integrity. In this case, the watermark should be fragile and invisible, since any modification to the watermarked image should alter the mark. Another application area concerns copy control, thus preventing the illegal distribution of copyrighted material. Finally, watermarking can be used to verify radio and/or TV broadcasts by inserting watermarks in commercial advertisements.
For over a decade, different watermarking techniques have been proposed with the purpose of providing both robustness and reliability. These fall into two main classes: those operating in the spatial domain, and those working in a transform domain. The first directly modify the marked pixels to insert the watermark. They are simple and have low computational complexity compared to the second type, but they are largely unsafe methods, because the image suffers visible alterations that emphasize the modified pixels and degrade the original image quality. Moreover, they are not robust against geometric transformations, only against specific types of filtering or JPEG compression (Lee and Chen 2000; Van Schyndel et al. 1994; Nikolaidis and Pitas 1996; Kimpan et al. 2004; Voyatzis and Pitas 1996; Chang and Hsiao 2002).
To avoid these issues, transform-domain techniques were developed. The most frequently used are the discrete cosine transform (DCT) (Cox and Kilian 1996; Lin and Chen 2000; Kung et al. 2002; Zhou et al. 2006), the discrete wavelet transform (DWT) (Dugad et al. 1998; Dawei and Wenbo 2004; Dehghan and Safavi 2010; Chang et al. 2010), and the contourlet transform (Candès et al. 2005; Jayalakshmi et al. 2006). These make it harder to eliminate or modify the watermark, since it is inserted in specific elements that guarantee more robustness. There are also techniques that exploit features of the human visual system (HVS) to conceal the watermark (Wolfgang et al. 1999).
The use of this type of technique has increased because of the good results shown against intentional and unintentional attacks. For instance, Barni et al. (2001) recommend masking the watermark by taking into consideration the reduced sensitivity of the human eye to noise on edges, in regions of high and low luminance and brightness, and in the textured regions of the image. That study reported satisfactory results against JPEG compression and cropping attacks. The proposal of Baaziz (2005), based on the ideas of Barni et al. (2001), uses the contourlet transform instead of the DCT. Aiming to withstand more geometric attacks, some techniques employ a normalization method for marking the image (Dong et al. 2005; Baaziz et al. 2008; Cedillo et al. 2008), hence making it invariant to affine transformations. The method described in Dong et al. (2005) disperses a watermark using the DCT while applying the normalization process, so as to gain robustness against various attacks. The concealment of the watermark is achieved with a binary mask, which is normalized according to the normalization parameters of the original image. The published findings show robustness against different attacks, such as scaling, rotation, shearing (in both the x and y directions), median filtering, and JPEG compression; the length of the watermark is 50 bits (a pseudo-random sequence). Another work that uses the normalization process, described in Cedillo et al. (2008), follows ideas very similar to those of Dong et al. (2005); the difference resides in the classification of blocks in the DCT domain: it employs texture features to obtain the value of the strength control parameter used to insert the watermark. Despite the claim that it constitutes a robust method, the result is a BER of 0.04 without any attack applied.
Finally, we can mention more recently developed techniques (Tian et al. 2010; Sridevi and Kumar 2011). In the proposal of Tian et al. (2010), the Radon transform is used to correct the image's orientation, and the ideas of Dong et al. (2005) are applied to disperse the watermark. The great disadvantage of this method is that the obtained PSNR values are very low (30 dB) for a 50-bit watermark, even though it shows good results against attacks such as rotation, scaling, JPEG compression, and median filtering (an average BER of 0 for each attack on four different images). Sridevi et al. (2011) proposed a watermarking method based on normalization that utilizes the DCT and the DWT, so as to obtain a more robust method that can hold an even larger watermark; however, the PSNR results are too low, which shows that the quality of the embedded image suffered. The proposal contained in this paper uses the Hermite transform (HT), the spread spectrum method to insert the watermark, and a brightness model, a feature that distinguishes it from the process described in Dong et al. (2005), for the masking of the watermark. The extraction method is blind, because it does not require the original image. We also indicate how many bits may be modified during watermark extraction while the mark remains readable. This paper is organized as follows: the "Hermite transform (HT)" section presents the theory of the Hermite transform; the "Watermarking algorithm description" section details the proposed algorithm using a binary mask (values [0, 1]) as well as a perceptive mask, and describes the watermark extraction process; the "Test and results" section describes the tests and results obtained after applying common processing and geometric attacks to various images, evaluates the advantage of applying a perceptive mask, and establishes a maximum BER value below which the extracted watermark remains readable. The last section holds the conclusions.

Hermite transform (HT)
The Hermite transform (Martens 1990a, b) is a special case of the polynomial transform, a signal decomposition technique. The original signal X(x, y), where (x, y) are the pixel coordinates, is localized by multiplying it by the window function V(x − p, y − q) at the positions (p, q) that conform the sampling lattice S (Eq. 1). The periodic weighting function is then defined as in Eq. 2. The only condition for the polynomial transform to exist is that the weighting function must be different from zero for all coordinates (x, y).
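This existence condition can be checked numerically. The following is a hedged 1-D sketch: the Gaussian-like window, the lattice step, and the array length are illustrative choices of ours, not the paper's exact parameters.

```python
import numpy as np

def weighting_function(v, T, length):
    """Periodic weighting W(x) = sum over lattice positions p of v(x - p)^2
    (a 1-D analogue of Eq. 2). The polynomial transform exists only if
    W(x) != 0 for every x."""
    W = np.zeros(length)
    for p in range(-len(v), length + len(v), T):
        for k, val in enumerate(v):
            x = p + k
            if 0 <= x < length:
                W[x] += val ** 2
    return W

# A Gaussian-like window of 9 samples on a lattice with step T = 4:
# the windows overlap, so the weighting function is strictly positive.
v = np.exp(-np.linspace(-2.0, 2.0, 9) ** 2)
W = weighting_function(v, T=4, length=40)
```

With a lattice step larger than the window support, some positions would receive no weight and W would vanish there, violating the condition.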
The local information within every analysis window is then expanded in terms of an orthogonal polynomial set. The polynomials G_{m,n−m}(x, y) used to approximate the windowed information are determined by the analysis window function and satisfy the orthogonality condition of Eq. 3, for n, i = 0, 1, …, ∞; m = 0, …, n and j = 0, …, i.
The polynomial coefficients X_{m,n−m}(p, q) are calculated by convolving the original image X(x, y) with the filter function D_{m,n−m}(x, y) = G_{m,n−m}(−x, −y) V²(−x, −y), followed by sub-sampling at the positions (p, q) of the sampling lattice S (Eq. 4). The orthogonal polynomials associated with V²(x) are the Hermite polynomials (Eq. 5), where H_n(x) denotes the Hermite polynomial of order n.
In the case of the Hermite transform, it can be shown that the filter functions D_{m,n−m}(x, y) correspond to Gaussian derivatives of order m in x and n − m in y, in agreement with the Gaussian derivative model of early vision (Young 1985). Moreover, the window function resembles the receptive field profiles of human vision (Eq. 6). Besides constituting a good model for the overlapping receptive fields found in physiological experiments, the choice of a Gaussian window can be justified because it minimizes the uncertainty product in the spatial and frequency domains. The recovery of the original image consists in interpolating the transform coefficients with the proper synthesis filters. This process is known as the inverse polynomial transform and is defined by Eq. 7. The synthesis filters P_{m,n−m}(x, y), of order m in x and n − m in y, are defined by Eq. 8, for m = 0, …, n and n = 0, …, ∞.
In a discrete implementation, the Gaussian window function may be approximated by the binomial window function (Eq. 9). Analysing the case where M is even, the filter functions and pattern functions can be centered at the origin by shifting the window M/2 points. Thus the filter functions are given by Eq. 12, with x = −(M/2), …, (M/2). These functions can be expressed as in Eq. 13. Calculating the Z-transform of this filter function gives Eq. 14, with n = 0, …, M. These filters have the advantage that they can be implemented by successively applying a number of simpler filters. The Hermite coefficients are arranged as a set of N × N equal-sized subbands: one coarse subband X_{0,0}, representing a Gaussian-weighted image average, and detail subbands X_{n,m} corresponding to the higher-order Hermite coefficients, as shown in Fig. 1.
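The discrete construction above can be sketched as follows. This is a simplified illustration, assuming binomial smoothing/differencing factors and a unit-energy normalization of our own choosing; the paper's exact normalization and the subsampling step are omitted.

```python
import numpy as np

def binomial_filters(M):
    """Discrete analysis filters of length M + 1 built from a binomial
    window: n differencing factors [1, -1] approximate an n-th order
    Gaussian derivative, and M - n smoothing factors [1, 1] supply the
    window. The unit-energy normalization is an illustrative choice."""
    filters = []
    for n in range(M + 1):
        f = np.array([1.0])
        for _ in range(M - n):
            f = np.convolve(f, [1.0, 1.0])    # smoothing (window) factor
        for _ in range(n):
            f = np.convolve(f, [1.0, -1.0])   # differencing (derivative) factor
        filters.append(f / np.sqrt(np.sum(f ** 2)))
    return filters

def hermite_coefficients(img, M=2):
    """Separable filtering of order m in x and k in y, producing the
    subband layout X_{m,k} described in the text (no subsampling here)."""
    filt = binomial_filters(M)
    coeffs = {}
    for m in range(M + 1):
        for k in range(M + 1):
            rows = np.apply_along_axis(lambda r: np.convolve(r, filt[m], 'same'), 1, img)
            coeffs[(m, k)] = np.apply_along_axis(lambda c: np.convolve(c, filt[k], 'same'), 0, rows)
    return coeffs
```

For example, `hermite_coefficients(img, M=2)` yields the coarse subband `coeffs[(0, 0)]` (a Gaussian-weighted average) and eight detail subbands, mirroring the layout of Fig. 1.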

Watermarking algorithm description
The proposed algorithm uses a normalization method based on invariant moments (Hu 1962) in order to withstand alterations of the marked image. It also employs a perceptive mask founded on a brightness model. The watermark is dispersed through a spread spectrum method (Cox et al. 2000). Each of these features is described in the following subsections.

Image normalization
The normalization process of an image X(x, y) with M × N dimensions comprises: 1. Translation; 2. Shearing (in both x and y); 3. Scaling.
These transformations are performed to give the watermarking scheme robustness against geometric transformations. The normalization stage is applied to the mask, whether perceptive or binary, which allows the watermark to be concealed, thus preventing visible changes in the embedded image.
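A minimal sketch of the first normalization steps (intensity rescaling and centroid translation via first-order geometric moments) might look like this; the shearing and scaling steps of the full Hu-moment normalization are omitted, and the nearest-pixel shift is a simplification of ours.

```python
import numpy as np

def normalize_translation(img):
    """Illustrative first steps of moment-based normalization: rescale
    intensities to [0, 1] and center the image on its intensity centroid.
    Shearing and scaling (Hu 1962) are omitted; the np.roll shift is a
    nearest-pixel simplification."""
    img = img.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cy = (img * ys).sum() / m00          # centroid row    (m01 / m00)
    cx = (img * xs).sum() / m00          # centroid column (m10 / m00)
    shift_y = int(round(img.shape[0] / 2 - cy))
    shift_x = int(round(img.shape[1] / 2 - cx))
    return np.roll(np.roll(img, shift_y, axis=0), shift_x, axis=1)
```

After this step the intensity centroid sits at the geometric center of the image, which is what makes the subsequent embedding invariant to translation.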

Perceptive mask
It is almost impossible to detect a watermark inserted in the frequency domain whose energy is sufficiently low. However, it is possible to increase the energy of particular frequencies by taking advantage of the masking phenomenon of the human visual system (HVS). Perceptual masking refers to the fact that information in certain areas of an image is obstructed by more prominent perceptual information in another part of the scene. To improve the robustness of the proposed watermarking scheme, we suggest using a perceptive mask during the insertion process instead of the original binary mask (Dong et al. 2005). This perceptive mask is based on the model put forward by Watson (1993). In constructing it, we took into account a luminance-to-brightness mapping founded on the model presented by Schouten (1993), which states that the brightness representation remains invariant to the properties of the light source and to the observation conditions. Schouten divides the algorithm for the luminance-to-brightness mapping into three stages: 1. Multi-scale representation: this operation is carried out on the luminance distribution L(x). A scaled signal h_A(x, s) represents the variations of the luminance with respect to an average level, which is in fact a contrast measure. This operation is performed at different resolution scales.
To obtain a scaled signal h_A(x, s) from a luminance distribution L(x), a set of receptive fields of different sizes is employed, where s is the scale and x represents the position (a two-dimensional vector (x, y)). The scaled signal is the result of the interaction between the central and peripheral mechanisms of the receptive field (Eq. 15), and its weighting function is given by Eq. 16, where α can be determined by Eq. 17, in which β and δ are constants.

2. Assembly of scale signals: this consists of transforming the signal h_A(x, s) into an assembled map A(x) by linearly summing over all spatial scales (Eq. 18). Since the integral must be finite, a lower limit s− is defined, corresponding to the photoreceptor size, and an upper limit s+, corresponding to the size of the visual field. Substituting s = exp σ yields Eq. 19.
3. Local adjustment of the brightness scale: this adjustment results in the brightness indentation. It can be described as a deflection of the assembled map that leads to a limited dynamic range of the brightness map, without seriously affecting the local contrast information.

Discrete algorithm of the luminance-brightness mapping for images
As a pre-processing step, the images X(x, y) to be employed must be surrounded by a uniform region with a constant luminance L_0, which is the average value of the image. To avoid unwanted variations, the images are normalized so that the pixel intensities remain in the interval [0, 1]. To carry out the first stage of the multi-scale representation, sampling must take place at distances that increase exponentially; i.e., Eq. 19 becomes a Riemann sum of terms h_A(x, y, σ_i), taken at equidistant positions of the scale parameter s. Since the displayed luminance variations only occur in an area bounded by a homogeneous region, those variations can be captured using a limited number of scales. In the discrete expressions, the index i indicates the scale and takes the values i ∈ {1, 2, 3, …, 9}. Thus the scaled signal with index 1, h(1)_A(x, y), is the signal with the finest scale, while the one with index 9 is the signal with the coarsest scale. The central and peripheral responses, V_c(x, y; s_i) and V_s(x, y; s_i) respectively, are obtained by convolving the image with the filters modelling the receptive fields. The ensemble map A(x, y), Eq. 20, is calculated as a sum of the scaled signals h(i)_A(x, y) and an offset term A_G, as expressed in Eq. 21, with β = 0.1, δ = −5.0 and Ã_G = 1.22 according to Schouten (1993). The minimal and maximal values Â_min(x, y) and Â_max(x, y) are calculated through Eqs. 22 and 23, respectively. Finally, the brightness map is obtained with Eq. 24.
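The multi-scale stage can be illustrated with a center-surround (difference-of-Gaussians) model. The σ values, the surround ratio, and the separable blur below are illustrative assumptions of ours, not Schouten's exact receptive-field profiles.

```python
import numpy as np

def _blur(img, sigma):
    """Separable Gaussian blur (kernel truncated at 3 sigma)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, 'same'), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, k, 'same'), 0, tmp)

def scaled_signal(L, sigma_c, ratio=1.6):
    """Center-minus-surround response at one scale: a local contrast
    measure relative to an average luminance level (Eq. 15 analogue)."""
    return _blur(L, sigma_c) - _blur(L, ratio * sigma_c)

def ensemble_map(L, sigmas=(1.0, 2.0), A_G=1.22):
    """Assembled map: sum of scaled signals over the spatial scales plus
    the offset term A_G (Eq. 21 analogue)."""
    return sum(scaled_signal(L, s) for s in sigmas) + A_G
```

On a uniform luminance field the scaled signals vanish in the interior (no contrast), so the ensemble map reduces to the offset term, consistent with the contrast interpretation above.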

Perceptive mask algorithm
The steps for generating the perceptive mask are given by the following expressions, where k_0 is a constant, C_min represents the minimal contrast present at a luminance level L_min, at which the eye has maximal contrast sensitivity (Escalante-Ramirez et al. 2003), α is a constant that takes values in the interval [0, 1], and k_1 is a constant. Figure 2 shows the normalized masks of the Barbara and Pirate images.

Watermark insertion algorithm
The watermark insertion process is now described, and can be observed in Fig. 3.
1. Normalize the original image X(x, y) to obtain the normalized image X_normalized.
2. Create the 2D watermark, with the same size as the normalized image X_normalized, according to the following procedure:
(a) Generate p_i binary pseudo-random sequences using a private key k, where i = 1, …, l and l is the number of bits in the message used as the watermark; for example, we use l = 64 and l = 104. Each sequence takes the values −1 and 1. The p_i form a set of two-dimensional patterns: if l = 64, we have p_1, p_2, p_3, …, p_64, and each p_i is a 256 × 256 array.
(b) Create the mark W_1 by modulating (DS-CDMA) the message with the previously generated p_i sequences, Eq. 28, where m_i is the i-th bit of the watermark. In this case the size is 256 × 256.
(c) Generate the null Hermite coefficients Y_{k,l}.
(d) Create the perceptive mask M and normalize it. (If the binary mask is employed, only one template of white pixels must be generated.)
(e) Insert the watermark in the Hermite coefficients Y_{k,l} according to Eq. 29, where α is a strength control parameter for inserting the watermark, W_1 is the modulated watermark, Ỹ_{k,l} is the modified coefficient, and (i, j) are the pixel coordinates.
(f) Calculate the inverse transform of the coefficients to obtain Y_HT.
(g) Multiply Y_HT by the perceptive mask M to obtain the final watermark, Eq. 30.
3. Apply the inverse normalization process to W_f so as to obtain W.
4. The final watermark is additively inserted in the original image, Eq. 31.
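Steps 2(a), 2(b), and the additive insertion can be sketched as below. This is a pixel-domain simplification (the paper embeds in Hermite coefficients); the pattern generator, seed handling, and α value are assumptions of ours.

```python
import numpy as np

def make_patterns(key, l, shape):
    """Step 2(a): l pseudo-random +/-1 patterns derived from a private key."""
    rng = np.random.default_rng(key)
    return [rng.choice([-1.0, 1.0], size=shape) for _ in range(l)]

def spread_watermark(bits, patterns):
    """Step 2(b), DS-CDMA modulation: each message bit b_i in {0, 1} is
    mapped to +/-1 and multiplied by its pattern; the sum is the 2-D mark W1."""
    W1 = np.zeros_like(patterns[0])
    for b, p in zip(bits, patterns):
        W1 += (2 * b - 1) * p
    return W1

def embed(img, W1, alpha=0.05, mask=None):
    """Additive insertion with strength parameter alpha, weighted by the
    (perceptive or binary) mask."""
    if mask is None:
        mask = np.ones_like(img)
    return img + alpha * mask * W1
```

Because the patterns are nearly orthogonal, correlating the marked image with each p_i recovers the sign of the corresponding bit, which is what the blind extraction below exploits.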

Watermark extraction algorithm
The watermark extraction method is blind, because it is based on correlation. It consists of:
1. Applying the normalization process to the embedded image X_m to obtain the normalized X_m.
2. Decoding the message of X_m as follows:
(a) Generate the patterns p_i, using the same key k and the same procedure stated in step 2 of the watermark insertion process.
(b) Calculate the HT of X_m to get the coefficients Z_{k,l}.
(c) Decode the message (watermark) bit by bit, using a correlation detector between the patterns p_i and the coefficients Z_{k,l} (Eq. 32): m_i = 1 if corr ≥ 0, and m_i = 0 otherwise, where corr represents the correlation between Z_{k,l} and p_i.
(d) Convert the message obtained in the previous step to its ASCII equivalent, and compare it to the original message.
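The correlation detector of step 2(c) can be sketched as follows; again this is a pixel-domain simplification of ours that skips the normalization and Hermite transform steps.

```python
import numpy as np

def extract_bits(marked, patterns):
    """Blind correlation detector (Eq. 32): m_i = 1 when the correlation
    with pattern p_i is non-negative, else 0. The original image is not used."""
    return [1 if float(np.sum(marked * p)) >= 0 else 0 for p in patterns]

# Round-trip demo: spread 8 bits over +/-1 patterns, add the mark to a
# random host image, and recover the bits blindly.
rng = np.random.default_rng(7)
pats = [rng.choice([-1.0, 1.0], size=(64, 64)) for _ in range(8)]
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = rng.random((64, 64)) + 0.05 * sum((2 * b - 1) * p for b, p in zip(bits, pats))
```

The host image acts as noise in the correlation, but since each pattern has thousands of samples, the spread signal dominates and the bits come back intact.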

Test and results
In order to evaluate the performance of the proposal, various tests of insertion, extraction, and robustness against common processing and geometric transformation attacks were performed. The perceptive and binary masks (Dong et al. 2005) were applied to 26 different images, each with 512 × 512 dimensions, using a watermark length of 64 bits. The metrics employed to evaluate the quality of the embedded images and of the watermark extraction are: the peak signal-to-noise ratio (PSNR), the average structural similarity index (SSIM), and the normalized cross-correlation; the bit error rate (BER) was used to determine the efficiency of the watermark extraction.
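Two of these metrics, PSNR and BER, are straightforward to compute; a minimal sketch follows (SSIM and the normalized cross-correlation are omitted).

```python
import numpy as np

def psnr(orig, marked, peak=255.0):
    """Peak signal-to-noise ratio in dB between the original and marked image."""
    mse = np.mean((np.asarray(orig, float) - np.asarray(marked, float)) ** 2)
    return float('inf') if mse == 0 else float(10 * np.log10(peak ** 2 / mse))

def ber(sent_bits, recovered_bits):
    """Bit error rate: fraction of watermark bits that were flipped."""
    s, r = np.asarray(sent_bits), np.asarray(recovered_bits)
    return float(np.mean(s != r))
```

For instance, a uniform pixel error of 1 gray level on an 8-bit image gives a PSNR of 20·log10(255) ≈ 48.13 dB, while one flipped bit out of four gives a BER of 0.25.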

Watermark insertion and extraction
A watermark ("watermar") was inserted in each of the 26 images so as to obtain the averages for each metric, as well as the average number of modified bits. Both the perceptive and the binary mask were applied, to determine which one contributes more to improving the algorithm's performance. The results are shown in Table 1 (the value of the strength control parameter α was defined experimentally). From Table 1, it is possible to conclude that the binary mask yields better PSNR values (over 45 dB on average) compared with the perceptive mask (ca. 40 dB). Regarding the MSSIM and the normalized cross-correlation, we obtained an average of 0.99 in both cases, which indicates that the watermark is not perceptible to the human eye. Nevertheless, concerning the watermark extraction, we obtained better results with the perceptive mask, since the average number of modified bits is below one, while with the binary mask the average modification was over 3 bits. Also, analyzing each image independently, the worst extraction case with the binary mask showed a modification of 14 bits, whereas with the perceptive mask the maximal modification was 7 bits. Figures 4 and 5 show the embedded Barbara and Pirate images using a binary and a perceptive mask, respectively, while Table 2 presents the metric values for six images. From Table 2, it is possible to conclude that a better BER is obtained by applying the perceptive mask, given that in at least five of these images not even one bit of the original mark was modified. Some images do exhibit changes, but the performed tests allow us to determine that up to 2 bits of the recovered mark may be altered while the mark remains valid.
This means that, independently of the mask used, if the watermark extraction shows up to 2 modified bits (equivalent to a BER of 0.03125), the watermark is still readable and the extraction can consequently be considered successful.
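This acceptance criterion is simple to encode; a small sketch (the function name is ours).

```python
def extraction_valid(n_flipped, length=64, max_ber=0.03125):
    """Acceptance criterion from the text: an extraction counts as
    successful when BER <= 0.03125, i.e., at most 2 flipped bits in a
    64-bit watermark."""
    return n_flipped / length <= max_ber
```

Note that 2/64 is exactly 0.03125, so the 2-bit tolerance and the BER threshold are the same criterion expressed in different units.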

Robustness
To verify the robustness of the present proposal, each image was subjected to different common processing and geometric transformation attacks. The attacks executed were: Gaussian filter, with window size N × N from 1 to 9; median filter, with window size N × N from 2 to 9; addition of Gaussian noise, with variance from 0 to 0.005; addition of salt-and-pepper noise, with noise density from 0 to 0.1 in increments of 0.01; JPEG compression, with quality factor from 0 to 100 in increments of 5; scaling, with scale factor from 20 to 200 % in increments of 10 %; rotation, from 0° to 180° in increments of 5°; and finally shearing, with factor from −1 to 1 in increments of 0.04. As a sample, Figs. 6 and 7 show the attacks that the Barbara and Pirate images overcame. Both masks were used, and the frame of reference is the maximum BER admissible in the watermark extraction.
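As one example of these attacks, salt-and-pepper noise at a given density can be simulated as follows (a sketch; the RNG seeding is an assumption, and intensities are taken to lie in [0, 1]).

```python
import numpy as np

def salt_and_pepper(img, density, key=0):
    """Salt-and-pepper attack: a fraction `density` of the pixels is
    replaced by 0 (pepper) or 1 (salt), chosen at random."""
    rng = np.random.default_rng(key)
    out = np.array(img, dtype=float, copy=True)
    hit = rng.random(img.shape) < density
    out[hit] = rng.choice([0.0, 1.0], size=int(hit.sum()))
    return out
```

Sweeping `density` from 0 to 0.1 in increments of 0.01, as in the tests above, then re-running the extraction and computing the BER at each step reproduces one curve of the robustness plots.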
According to the graphs in Figs. 6 and 7, we can conclude that using a perceptive mask constitutes a more robust technique against various attacks. For both masks, the most difficult attack to overcome was the median filter. A BER of 0.03125 was taken as the limit for considering the watermark extraction successful, since even with a modification of up to 2 bits the watermark remains readable when converted to its ASCII equivalent. In the case of the Pirate image, the performance of the algorithm with the perceptive mask is superior to that of the binary mask for every attack. In the case of the Barbara image, we can see that the same pattern recurs for every geometric attack.

Watermark length modification
We performed tests by increasing the length of the watermark up to 104 bits (GOC-S800116MDF). This was done to establish the bit-length limit that the perceptive mask proposal can support. Table 3 shows the metric values obtained for the images Lena, Pirate, Blondie and Swan, as well as the recovered mark in each case.
The results shown in Table 3 demonstrate that the PSNR values hold an average of 40 dB, although for the Swan image the value decreases to 38 dB. Also, the images do not exhibit any changes visible to the human eye. Regarding the watermark extraction, even though not all images obtained a BER of zero (namely the Lena and Pirate images), they can still be considered successful extractions since only 1 bit was modified. Taking into account the same parameters used for the attacks in the previous section, Table 4 indicates the type and total number of attacks applied, as well as the number overcome by each image. Concerning robustness, according to Table 4 the common processing attacks are the most affected, unlike the geometric attacks. The Swan image maintains a robustness similar to that obtained with a 64-bit watermark, except under shearing. We can state that we possess a robust technique against common processing and geometric transformation attacks for watermark lengths up to about 100 bits, taking into consideration the impact observed against common processing attacks when the length increased to 104 bits. Other watermarking studies served as references to determine the effectiveness of the proposed method. For example, the method described in Cedillo et al. (2008) also uses the normalization approach; it reports a BER of 0.04 without any attack applied, with a 64-bit watermark. In comparison, in the case of the Lena image our BER is 0 with a watermark of the same length; when the length is increased to 104 bits, the BER we obtained was 0.019237, confirming that our method allows watermark extraction with a lower rate of erroneous bits and a longer watermark. Tian et al. (2010), in turn, employ a watermarking method based on spread spectrum, the Radon transform, and the DCT.
It indeed proves to be robust when comparing their results to the normalization approach: they report BER = 0 against rotation attacks from −6° to 6° (in increments of 6°), against scaling from 0.5 to 2.0 (in increments of 0.1), and against some cases of JPEG compression and median filtering. Nevertheless, they obtained average PSNR values of only 31.5 and 30.0 dB when utilizing watermarks 50 and 100 bits long, respectively.
The procedures described in this paper show that we achieved robustness against different types of attacks while keeping the PSNR values close to 40 dB. The work undertaken by Sridevi et al. (2011) uses a logo as the watermark, which entails a greater length (4096 bits). Aiming for a robust method, they normalize the image to be embedded and then decompose it into DWT coefficients; they work with the mid-frequency coefficients, to which they apply the DCT, and these are then modified with the watermark. Various attacks are applied: Gaussian noise, rotation, scaling, histogram equalization, and contrast modification. Their findings show that it is neither a robust nor a safe technique, since the quality of each image degrades severely. They also tested different wavelet coefficients, and the PSNR values obtained were very low (between roughly 13 and 33 dB), even without applying any attack. The work developed by Nah et al. (2012) seeks to preserve the visual quality of the embedded image by means of an adaptive insertion method. That proposal is based on the calculation of RHFMs in order to improve the invariance properties of the moments best preserved during the watermark insertion procedure, thus achieving greater robustness against both geometric and common processing attacks. However, its disadvantage is that it becomes vulnerable to falsification attacks. The employed images have 256 × 256 dimensions, and the resulting PSNR values oscillate between 40 and 55 dB for different watermark lengths. The robustness is measured with the BER: for example, for a 128-bit mark, under rotations of 5°, 10°, 15° and 20°, the BER = 0. In our case, even though we use base images with larger dimensions, we obtain BER = 0 for a wider range of rotations.
The same happens with JPEG compression attacks: in Singh and Ranade (2013), only when the compression quality factor goes down to 30 does the BER remain 0; below that, the BER starts to rise. In our approach, the watermark is successfully extracted at lower quality factors.

Conclusions
This paper presented a watermarking technique that combines the Hermite transform, a normalization process to achieve robustness against geometric transformations, and a perceptive mask, thus demonstrating a robust method that improves on the performance obtained with the binary mask (Baaziz et al. 2008). We showed that it is possible to use different watermark lengths (approximately 64 to 100 bits) and that, even though robustness against common processing attacks may be compromised as the watermark length increases, robustness against geometric attacks can still hold. Moreover, a particularly large watermark is not necessary to carry a message or short code that identifies the owner of the digital image. Although there are applications with rather large watermarks (Lai 2011; Maity and Kundu 2011) of over 1000 bits, these are usually pseudo-random sequences that are not extracted as such, but only detected; or, as in the work of Sridevi et al. (2011), which uses a logo as the watermark, they did not overcome any of the attacks applied to evaluate the technique. Our application entails greater complexity precisely because it extracts the message embedded in the image. The value of the parameter α (the strength control parameter for inserting the watermark) is what keeps the changes in the image imperceptible to the human eye. In addition, the perceptive mask helps us to detect those zones that can be changed without the changes becoming visible. Our findings show PSNR values around 40 dB, which indicates that the image has not suffered significant visual alterations. Furthermore, greater robustness was achieved for every attack, both common processing and geometric, compared to when the binary mask was used.
The binary mask certainly reaches higher PSNR values, but it does not guarantee success against attacks. We used different images to demonstrate that this technique can be applied to every type of grayscale image, not only those commonly utilized in this kind of application (Lena, Baboon, Barbara, Blondie, Peppers, etc.). Having taken various watermarking studies as references, we can confidently claim that the described method achieves both watermark robustness and invisibility, unlike Tian et al. (2010): that work does report BER = 0 against rotation, scaling, filtering, and noise attacks, but its PSNR values are low (30 dB) for watermark lengths between 50 and 100 bits. Something similar happens with the approach explained in Cedillo et al. (2008), which has a BER of 0.04 without any attack applied and with a 64-bit watermark. It is evident that improvements in watermarking algorithms concern both robustness and the quality of the embedded image. We have, for instance, the algorithm described in Amiri and Jamzad (2014): it studies the degradation that watermarked images suffer when printed or scanned, using a model that replicates the distortions produced by the printer and the scanner. They use the DWT, the DCT, and a genetic algorithm, with watermark lengths of 72, 96, and 128 bits, and evaluate robustness with PSNR, SSIM, and BER. Their findings show a robust algorithm, but because their image-complexity classification is based on the Quad-tree concept, some images may be wrongly categorized, which results in unsatisfactory performance of the algorithm. A significant aspect to take into account is that, when dealing with relatively small watermarks, the uncertainty of their recognition is high whenever less than 60 % is properly extracted.
Nevertheless, according to our findings, it is clear that when an alphanumeric code is used, such as a personal identification number, this tolerance cannot be applied: a modification of more than 2 bits would alter the code, and it could be mistaken for someone else's. Such a consideration does apply to algorithms whose watermarks are logos. For example, works that use a logo as the embedded mark and consider HVS features (Lai 2011; Maity and Kundu 2011) yield robust algorithms; but, to use them as references, we must acknowledge that their watermark extraction need not reach BER = 0 (because of the quantity of information used as a watermark), since recognition of the extracted logo suffices. We have therefore determined that our proposal is robust with respect to the number of bits employed as a watermark, given that the number of modified bits in the embedded images, with or without attacks, is minimal or zero. It should also be noted that the length of the watermark can be increased while still maintaining high robustness.