For empirical testing, we use daily values of the Sensex and the Nifty, the two major indices traded in India, which together constitute 99.9 per cent of total market capitalization. The Sensex data run from January 1991 to March 2013, while the Nifty data span January 1994 to March 2013. To capture the changing efficiency and evolving nature of the market, we divide the whole sample into two-year subsamples^{d}. The present study employs both linear and nonlinear tests for the empirical testing of the AMH. The sample characteristics and the battery of tests make the results of the present study robust and reduce the risk of overstating the generality of the findings. The following subsections offer a brief description of these tests.

### 2.1 Linear Tests

#### 2.1.1 Autocorrelation Test

Autocorrelation estimates are used to test the hypothesis that the process generating the observed returns is a sequence of independently and identically distributed (*iid*) random variables. The test evaluates whether the serial correlations at successive lags differ from zero. To test the joint hypothesis that all autocorrelation coefficients *ρ*_{k} are simultaneously equal to zero, we use Ljung and Box’s (1978) portmanteau Q statistic. The test statistic is

\mathrm{LB}=n\left(n+2\right)\sum_{k=1}^{m}\frac{{\widehat{\rho}}_{k}^{2}}{n-k}

(1)

where *n* is the number of observations and *m* is the lag length. The test statistic follows a chi-square (χ^{2}) distribution with *m* degrees of freedom.
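Equation (1) is simple to compute directly. The following sketch (plain NumPy; the function name `ljung_box` is ours) illustrates the statistic; in practice a library routine such as statsmodels' `acorr_ljungbox` would typically be used:

```python
import numpy as np

def ljung_box(x, m):
    """Ljung-Box Q statistic over lags 1..m, as in Equation (1)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    q = 0.0
    for k in range(1, m + 1):
        rho_k = np.sum(xc[k:] * xc[:-k]) / denom  # sample autocorrelation at lag k
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q
```

The resulting value is compared with the critical value of a χ² distribution with *m* degrees of freedom.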

#### 2.1.2 Runs Test

The runs test is one of the prominent nonparametric tests of the random walk hypothesis (RWH). A run is defined as a sequence of consecutive return changes of the same sign. If the changes are positive (negative), it is a positive (negative) run, and if there is no change in the series, it is a run of zeros. The expected number of runs is the number of runs that would occur if the data were generated by a random process. If the actual number of runs is close to the expected number, it indicates that the returns are generated by a random process. The expected number of runs (ER) is computed as

\mathrm{ER}=\frac{X\left(X+1\right)-\sum_{i=1}^{3}{c}_{i}^{2}}{X}

(2)

where X is the total number of return observations and c_{i} is the number of return changes in each sign category (*i* = 1, 2, 3). The number of runs is approximately normally distributed for large X; hence, to test the null hypothesis, we use the standard Z statistic.
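The test can be sketched as follows. The expected-runs formula follows Equation (2); the variance used in the Z statistic is the standard multi-category runs-test variance, which the text does not state, so treat it (and the function name) as our assumption:

```python
import numpy as np

def runs_z(returns):
    """Z statistic of the runs test; ER follows Equation (2).
    The variance formula is the standard multi-category runs-test
    variance (an assumption; it is not given in the text)."""
    s = np.sign(np.asarray(returns, dtype=float))
    N = len(s)
    runs = 1 + int(np.sum(s[1:] != s[:-1]))                    # actual number of runs
    c = np.array([np.sum(s == v) for v in (-1.0, 0.0, 1.0)])   # sign-category counts
    er = (N * (N + 1) - np.sum(c ** 2)) / N                    # expected runs, Eq. (2)
    s2, s3 = np.sum(c ** 2), np.sum(c ** 3)
    var = (s2 * (s2 + N * (N + 1)) - 2 * N * s3 - N ** 3) / (N ** 2 * (N - 1))
    return (runs - er) / np.sqrt(var)
```

A large positive Z indicates more runs than expected (negative dependence); a large negative Z indicates fewer runs (positive dependence).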

#### 2.1.3 Variance Ratio Test

Lo and MacKinlay (1988) proposed the variance ratio test, which is capable of distinguishing among several interesting alternative stochastic processes. Under the RWH, for stock returns r_{t}, the variance of r_{t} + r_{t-1} must be twice the variance of r_{t}. Let VR (2) denote the ratio of the variance of the two-period return, r_{t}(2) ≡ r_{t} + r_{t-1}, to twice the variance of a one-period return r_{t}. Then the variance ratio VR (2) is

\begin{array}{l}\mathrm{VR}\left(2\right)=\frac{\mathrm{Var}\left[{r}_{t}\left(2\right)\right]}{2\,\mathrm{Var}\left[{r}_{t}\right]}=\frac{\mathrm{Var}\left[{r}_{t}+{r}_{t-1}\right]}{2\,\mathrm{Var}\left[{r}_{t}\right]}\\ \phantom{\rule{3em}{0ex}}=\frac{2\,\mathrm{Var}\left[{r}_{t}\right]+2\,\mathrm{Cov}\left[{r}_{t},{r}_{t-1}\right]}{2\,\mathrm{Var}\left[{r}_{t}\right]}\\ \mathrm{VR}\left(2\right)=1+\rho\left(1\right)\end{array}

(3)

where *ρ* (1) is the first-order autocorrelation coefficient of returns {r_{t}}. The RWH, which requires zero autocorrelations, holds true when VR (2) = 1. VR (2) can be extended to any number of period returns, *q*. Lo and MacKinlay (1988) showed that the *q*-period variance ratio satisfies the following relation:

\mathrm{VR}\left(q\right)=\frac{\mathrm{Var}\left[{r}_{t}\left(q\right)\right]}{q\,\mathrm{Var}\left[{r}_{t}\right]}=1+2\sum_{k=1}^{q-1}\left(1-\frac{k}{q}\right)\rho\left(k\right)

(4)

where r_{t}(*q*) ≡ r_{t} + r_{t-1} + … + r_{t-q+1} and *ρ* (*k*) is the *k*^{th} order autocorrelation coefficient of {r_{t}}. Equation (4) shows that under the RWH, where ρ(*k*) = 0 at all lags, VR (*q*) = 1 for all *q*; for the random walk to hold, the variance ratio is expected to equal unity. The test is based on standard asymptotic approximations. Lo-MacKinlay proposed the Z (*q*) standard normal test statistic under the null hypothesis of homoscedastic increments and VR (*q*) = 1. However, rejection of the RWH because of heteroscedasticity, a common feature of financial returns, is not useful for any practical purpose. Hence, Lo-MacKinlay constructed a heteroscedasticity-robust test statistic Z* (*q*), which can be defined as

{\mathrm{Z}}^{*}\left(q\right)=\frac{\mathrm{VR}\left(q\right)-1}{{\left[{\phi}^{*}\left(q\right)\right]}^{1/2}}

(5)

which follows a standard normal distribution asymptotically. Thus, according to the variance ratio test, the returns process is a random walk when the variance ratio at holding period *q* equals unity. A variance ratio less than unity implies negative autocorrelation, while a ratio greater than one indicates positive autocorrelation.
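A minimal sketch of VR (*q*) via Equation (4) and the corresponding Z (*q*) statistic, assuming the standard Lo-MacKinlay homoscedastic asymptotic variance φ(*q*) = 2(2*q* − 1)(*q* − 1)/(3*qn*); the heteroscedasticity-robust φ*(*q*) of Equation (5) is omitted for brevity, and the function names are ours:

```python
import numpy as np

def variance_ratio(r, q):
    """VR(q) from Equation (4): 1 + 2 * sum_{k<q} (1 - k/q) * rho(k)."""
    r = np.asarray(r, dtype=float)
    rc = r - r.mean()
    denom = np.sum(rc ** 2)
    vr = 1.0
    for k in range(1, q):
        rho_k = np.sum(rc[k:] * rc[:-k]) / denom  # sample autocorrelation rho(k)
        vr += 2.0 * (1.0 - k / q) * rho_k
    return vr

def z_homoscedastic(r, q):
    """Z(q) under homoscedastic increments, using the standard
    asymptotic variance phi(q) = 2(2q-1)(q-1) / (3q n) (an assumption;
    the robust phi*(q) of Equation (5) is not implemented here)."""
    n = len(r)
    phi = 2.0 * (2 * q - 1) * (q - 1) / (3.0 * q * n)
    return (variance_ratio(r, q) - 1.0) / np.sqrt(phi)
```

For an *iid* series the ratio hovers near one; positively autocorrelated returns push it above one.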

#### 2.1.4 Multiple Variance Ratio Test

The variance ratio test of Lo and MacKinlay (1988) tests whether the variance ratio equals one for a particular holding period, whereas the RWH requires that the variance ratios for all holding periods equal one, so the test should be conducted jointly over a number of holding periods. Conducting separate tests sequentially leads to size distortions and ignores the joint nature of the random walk hypothesis. To overcome this problem, Chow and Denning (1993) proposed the multiple variance ratio test, wherein a set of variance ratios over a number of holding periods is tested to determine whether they are jointly equal to one. In the Lo-MacKinlay test, under the null, VR (*q*) = 1; in the multiple variance ratio test, *M*_{r}(*q*_{i}) = *VR* (*q*_{i}) – 1 = 0, which is generalized to a set of *m* variance ratio tests as

\left\{{\mathrm{M}}_{r}\left({q}_{i}\right)\mid i=1,2,\dots,m\right\}

(6)

Under the RWH, the null and alternative hypotheses are as follows:

{\mathrm{H}}_{0i}:{\mathrm{M}}_{r}\left({q}_{i}\right)=0\phantom{\rule{0.25em}{0ex}}\mathrm{for}\phantom{\rule{0.25em}{0ex}}i=1,2,\dots,m

(7a)

{\mathrm{H}}_{1i}:{\mathrm{M}}_{r}\left({q}_{i}\right)\ne 0\phantom{\rule{0.25em}{0ex}}\mathrm{for\ any}\phantom{\rule{0.25em}{0ex}}i=1,2,\dots,m

(7b)

The null of random walk is rejected when any one or more of H_{0i} is rejected. The heteroscedastic test statistic in Chow-Denning is:

\mathrm{CD}=\underset{1\le i\le m}{\max}\left|{\mathrm{Z}}^{*}\left({q}_{i}\right)\right|

(8)

where Z*(*q*_{i}) is defined as in Equation (5). The Chow-Denning test statistic follows the studentized maximum modulus, SMM(α, *m*, T), distribution with *m* parameters and T degrees of freedom. The RWH is rejected if the value of the test statistic CD is greater than the SMM critical value at the chosen significance level.
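The joint test reduces to taking the largest absolute standardized variance-ratio statistic over the chosen holding periods. A sketch, using the homoscedastic Z (*q*) in place of the robust Z* (*q*) for brevity (function names are ours; the SMM critical value itself is not computed here):

```python
import numpy as np

def vr(r, q):
    # VR(q) as in Equation (4)
    rc = np.asarray(r, dtype=float) - np.mean(r)
    d = np.sum(rc ** 2)
    return 1.0 + 2.0 * sum((1 - k / q) * np.sum(rc[k:] * rc[:-k]) / d
                           for k in range(1, q))

def chow_denning(r, qs):
    """Joint statistic in the spirit of Equation (8): the largest
    absolute standardized variance-ratio statistic over holding
    periods qs. The homoscedastic Z(q) is used for illustration;
    the heteroscedasticity-robust Z*(q) would replace it in practice."""
    n = len(r)
    zs = [(vr(r, q) - 1.0) /
          np.sqrt(2.0 * (2 * q - 1) * (q - 1) / (3.0 * q * n))
          for q in qs]
    return max(abs(z) for z in zs)
```

The returned value would then be compared with the SMM(α, *m*, T) critical value rather than the standard normal one.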

### 2.2 Nonlinear Tests

To test for the presence of nonlinear dependence, we carry out a set of nonlinear tests to avoid sensitivity of the empirical results to the particular test employed. Before performing these tests, linear dependence is removed from the data by fitting an AR (*p*) model. The optimal lag is selected so that no Ljung-Box (LB) Q statistic for the residuals extracted from the AR (*p*) model is significant at the 1 per cent level. In addition, we correct the returns for heteroscedasticity. Therefore, rejection of the null for the residuals implies the presence of nonlinear dependence in returns and hence market inefficiency.

#### 2.2.1 McLeod-Li Test

McLeod and Li’s (1983) portmanteau test of nonlinearity examines whether the autocorrelation function of the squared returns is non-zero. The test statistic is

\begin{array}{l}{Q}_{\left(m\right)}=n\left(n+2\right)\sum_{k=1}^{m}\frac{{r}_{a}^{2}\left(k\right)}{n-k}\\ {r}_{a}\left(k\right)=\frac{\sum_{t=k+1}^{n}\left({e}_{t}^{2}-{\widehat{\sigma}}^{2}\right)\left({e}_{t-k}^{2}-{\widehat{\sigma}}^{2}\right)}{\sum_{t=1}^{n}{\left({e}_{t}^{2}-{\widehat{\sigma}}^{2}\right)}^{2}},\phantom{\rule{0.5em}{0ex}}k=1,\dots,n-1\end{array}

(9)

where {r}_{a}\left(k\right) is the lag-*k* autocorrelation of the squared residuals, e_{t} are the residuals obtained after fitting an appropriate AR (*p*) model, and {\widehat{\sigma}}^{2} is their sample variance. The McLeod-Li test detects second-order nonlinear dependence.
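The statistic is a Ljung-Box-type statistic applied to the squared residuals, as the following sketch shows (the function name is ours):

```python
import numpy as np

def mcleod_li(e, m):
    """McLeod-Li Q statistic in the spirit of Equation (9): a
    Ljung-Box-type statistic on the mean-adjusted squared residuals."""
    e2 = np.asarray(e, dtype=float) ** 2
    n = len(e2)
    c = e2 - e2.mean()                # e_t^2 - sigma_hat^2
    denom = np.sum(c ** 2)
    q = 0.0
    for k in range(1, m + 1):
        r_a = np.sum(c[k:] * c[:-k]) / denom  # autocorrelation of e_t^2 at lag k
        q += r_a ** 2 / (n - k)
    return n * (n + 2) * q
```

Under the null the statistic is approximately χ² with *m* degrees of freedom, so conditional heteroscedasticity inflates it sharply.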

#### 2.2.2 Tsay Test

Tsay (1986) proposed a test to detect quadratic serial dependence in the data. Let *K =* k (k+1)/2 and construct a column vector containing all possible cross products of the form r_{t-i} r_{t-j}, where *i, j ϵ* [1, k] and *i ≤ j*. Thus, {v}_{t,1}={r}_{t-1}^{2};\phantom{\rule{0.25em}{0ex}}{v}_{t,2}={r}_{t-1}{r}_{t-2};{v}_{t,3}={r}_{t-1}{r}_{t-3};\dots;{v}_{t,k+1}={r}_{t-2}{r}_{t-3};{v}_{t,k+2}={r}_{t-2}{r}_{t-4};\phantom{\rule{0.25em}{0ex}}\dots; and {v}_{t,K}={r}_{t-k}^{2}. Further, let {\widehat{v}}_{t,i} denote the projection of *v*_{t,i} on the subspace orthogonal to *r*_{t - 1}, …, *r*_{t - k} (i.e., the residuals from a regression of *v*_{t,i} on *r*_{t - 1}, …, *r*_{t - k}). Using the following regression, the parameters *γ*_{1}, …, *γ*_{K} are estimated:

{r}_{t}={\gamma}_{0}+\sum_{i=1}^{K}{\gamma}_{i}{\widehat{v}}_{t,i}+{\epsilon}_{t}

(10)

The Tsay F statistic tests the null hypothesis that *γ*_{1}, …, *γ*_{K} are all zero.
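A rough sketch of the two-stage procedure (orthogonalize the cross products against the lags, then regress the AR residuals on them); the function name and the exact degrees of freedom in the F statistic are our simplifications, so a production test should follow the original paper:

```python
import numpy as np

def tsay_f(r, k):
    """Sketch of the Tsay (1986) F statistic for quadratic serial
    dependence (degrees of freedom are our simplification)."""
    r = np.asarray(r, dtype=float)
    n = len(r)
    X = np.column_stack([r[k - i:n - i] for i in range(1, k + 1)])  # r_{t-1}..r_{t-k}
    y = r[k:]

    def resid(A, b):
        # residuals from an OLS regression of b on A (with intercept)
        Z = np.column_stack([np.ones(len(A)), A])
        coef, *_ = np.linalg.lstsq(Z, b, rcond=None)
        return b - Z @ coef

    # cross products v_{t} = r_{t-i} r_{t-j}, i <= j (K = k(k+1)/2 of them)
    V = np.column_stack([X[:, i] * X[:, j]
                         for i in range(k) for j in range(i, k)])
    a = resid(X, y)                                   # AR(k) residuals
    Vhat = np.column_stack([resid(X, V[:, j]) for j in range(V.shape[1])])
    e = resid(Vhat, a)                                # Equation (10) residuals
    K, m = V.shape[1], len(y)
    rss0, rss1 = a @ a, e @ e
    return ((rss0 - rss1) / K) / (rss1 / (m - K - 1))
```

Large values of the statistic relative to an F(K, m − K − 1) critical value indicate quadratic dependence.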

#### 2.2.3 ARCH-LM test

Engle (1982) proposed a Lagrange Multiplier test to detect ARCH (autoregressive conditional heteroscedasticity) effects. The test statistic is based on the R^{2} of the auxiliary regression

{e}_{t}^{2}={\alpha}_{0}+\sum_{i=1}^{M}{\alpha}_{i}{e}_{t-i}^{2}+{\epsilon}_{t}

(11)

When the sample size is *n*, under the null hypothesis of a linear generating mechanism for {e_{t}}, the test statistic *n*R^{2} for this regression is asymptotically distributed as {\chi}_{M}^{2}.
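The auxiliary regression and the *n*R² statistic can be sketched as follows (the function name is ours):

```python
import numpy as np

def arch_lm(e, M):
    """Engle's ARCH-LM statistic n*R^2 from the auxiliary regression
    of e_t^2 on its first M lags (Equation 11)."""
    e2 = np.asarray(e, dtype=float) ** 2
    y = e2[M:]
    X = np.column_stack([np.ones(len(y))] +
                        [e2[M - i:-i] for i in range(1, M + 1)])  # lagged e^2
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    return len(y) * r2  # compare with chi-square(M) critical values
```

A production alternative is statsmodels' `het_arch` diagnostic, which returns the same LM statistic with its p-value.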

#### 2.2.4 Hinich bicorrelation test

The portmanteau bicorrelation test of Hinich (1996) is a third-order extension of the standard correlation tests for white noise. The null hypothesis is that the transformed data {r_{t}} are realizations of a stationary pure noise process with zero bicorrelations (H). Thus, under the null, the bicorrelations are expected to equal zero. The alternative hypothesis is that the process has some non-zero bicorrelations (third-order nonlinear dependence).

H=\sum_{s=2}^{L}\sum_{r=1}^{s-1}\frac{{G}^{2}\left(r,s\right)}{T-s}\sim\phantom{\rule{0.25em}{0ex}}{\chi}^{2}\left(\left(L-1\right)\frac{L}{2}\right)

(12)

where G\left(r,s\right)=\sum_{k=1}^{T-s}Z\left({t}_{k}\right)Z\left({t}_{k}+r\right)Z\left({t}_{k}+s\right), Z (t_{k}) are the standardized observations at time t_{k}, and L = T^{c} with 0 < c < 0.5^{e}.
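A direct implementation of Equation (12) and the G (r, s) sum on standardized observations; the function name and the choice c = 0.4 are ours:

```python
import numpy as np

def hinich_h(x, c=0.4):
    """Hinich bicorrelation H statistic of Equation (12), computed on
    standardized observations; window length L = T**c with 0 < c < 0.5
    (the value c = 0.4 here is an illustrative choice)."""
    z = (np.asarray(x, dtype=float) - np.mean(x)) / np.std(x)
    T = len(z)
    L = int(T ** c)
    H = 0.0
    for s in range(2, L + 1):
        for r in range(1, s):
            # G(r, s) = sum_k Z(t_k) Z(t_k + r) Z(t_k + s)
            G = np.sum(z[:T - s] * z[r:T - s + r] * z[s:])
            H += G ** 2 / (T - s)
    return H  # approx. chi-square with (L - 1)(L / 2) df under the null
```

The statistic is compared with a χ² critical value with (L − 1)(L/2) degrees of freedom.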

#### 2.2.5 BDS test

Brock *et al.* (1996) developed a portmanteau test for serial dependence in a series, popularly known as the BDS test (named after its authors). The BDS test uses the correlation dimension of Grassberger and Procaccia (1983). To perform the test for a sample of *n* observations {x_{1},…,x_{n}}, an embedding dimension *m*, and a distance *ϵ*, the correlation integral C_{m} (n, ϵ) is estimated by

{C}_{m}\left(n,\epsilon\right)=\frac{2}{\left(n-m\right)\left(n-m+1\right)}\sum_{s=1}^{n-m}\sum_{t=s+1}^{n-m+1}{I}_{m}\left({x}_{s},{x}_{t},\epsilon\right)

(13)

where *n* is sample size, *m* is embedding dimension and *ϵ* is the maximum difference between pairs of observations counted in estimating the correlation integral. The test statistic is:

{\mathrm{W}}_{m}\left(\epsilon\right)=\sqrt{\frac{n}{{\widehat{\mathrm{V}}}_{m}}}\left({C}_{m}\left(n,\epsilon\right)-{C}_{1}{\left(n,\epsilon\right)}^{m}\right)

(14)

The BDS test considers the random variable √n (C_{m}(n, ϵ) – C_{1}(n, ϵ)^{m}), which, for an *iid* process, converges to the normal distribution as *n* increases. It has power against a variety of alternative specifications, such as nonlinear dependence and chaos. We estimate the BDS statistic for different values of *m* and *ϵ*.
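The correlation integral of Equation (13) can be estimated by counting close pairs of *m*-histories; the variance estimator V̂_{m} needed for the full W_{m} statistic of Equation (14) is omitted here, and the function name is ours:

```python
import numpy as np

def correlation_integral(x, m, eps):
    """C_m(n, eps) of Equation (13): the fraction of pairs of
    m-histories lying within distance eps of each other (max-norm).
    The variance estimator for the full BDS statistic is omitted."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # build the m-histories (x_s, ..., x_{s+m-1})
    H = np.column_stack([x[i:n - m + 1 + i] for i in range(m)])
    npts = len(H)
    count = 0
    for s in range(npts - 1):
        d = np.max(np.abs(H[s + 1:] - H[s]), axis=1)  # max-norm distances
        count += int(np.sum(d < eps))
    return 2.0 * count / (npts * (npts - 1))
```

For an *iid* series, C_{m}(n, ϵ) should be close to C_{1}(n, ϵ)^{m}, which is the deviation the BDS statistic standardizes.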