
# A methodology for stochastic analysis of share prices as Markov chains with finite states

*SpringerPlus*
**volume 3**, Article number: 657 (2014)

## Abstract

Price volatility makes stock investment risky, leaving investors exposed when decisions are made under uncertainty. To improve investors' confidence in evaluating exchange markets without resorting to time series methodology, we specify equity price changes as a stochastic process assumed to possess Markov dependency, with state transition probability matrices following the identified state space (i.e. decrease, stable or increase). We establish that the identified states communicate and that the chains are aperiodic and ergodic, thus possessing limiting distributions. We develop a methodology for determining the expected mean return time for stock price increases and establish criteria for improving investment decisions based on the highest transition probabilities, lowest mean return times and highest limiting distributions. We further develop an R algorithm implementing the methodology. The established methodology is applied to selected equities using weekly trading data from the Ghana Stock Exchange.

## Background

Stock market performance and operation have gained recognition as a significantly viable investment field within financial markets. Investors commonly seek to know the background and historical behavior of listed equities to assist investment decision making. Although stock trading is noted for its likelihood of yielding high returns, the earnings of market players depend in part on the degree of equity price fluctuation and other market interactions. This makes earnings very volatile, associated with high risks and sometimes significant losses.

In stochastic analysis, a Markov chain specifies a system of transitions of an entity from one state to another. Treating the transition as a random process, Markov dependency theory emphasizes the "memoryless" property: the future state (next step or position) of a process depends only on its current state, not on the sequence of states observed in the past. Aguilera et al. (1999) noted that daily stock price records do not conform to the constant-variance assumption of conventional statistical time series. Indeed, there may be unusual volatilities that go unaccounted for under the assumption of stationary variance in stock prices given past trends. To surmount this problem, model classes specified under the Autoregressive Conditional Heteroskedastic (ARCH) family and its generalized forms (GARCH) make provision for smoothing unusual volatilities.

Given the randomness of price fluctuations, which challenges the application of some statistical time series models to stock price forecasting, stock price changes over time can naturally be viewed as a stochastic process. Aguilera et al. (1999) and Hassan and Nath (2005) respectively employed Functional Principal Component Analysis (FPCA) and the Hidden Markov Model (HMM) to forecast stock price trends based on the non-stationary nature of the stochastic processes that generate financial prices. Zhang and Zhang (2009) also developed a stochastic stock price forecasting model using Markov chains.

Several studies (Xi et al. 2012; Bulla et al. 2010; Ammann and Verhofen 2006; Duffie and Singleton 1993) have researched the application of stochastic probability to portfolio allocation. Building on existing literature, we assume that stock price fluctuations exhibit Markov dependency and time-homogeneity. We specify a three-state Markov process (price decrease, no change and price increase) and advance a methodology for determining the mean return time for equity price increases and the respective limiting distributions using the generated state-transition matrices. We further replicate the case for a two-state space, i.e. price decrease and price increase. Based on the methodology, we hypothesize that:

Equity with the highest state transition probability and least mean return time will remain the best choice for an investor.

We explore model performance using weekly historical data from the Ghana Stock Exchange (GSE), setting up the transition probability matrix for each selected stock to test the model's efficiency and practical use.

## Review of theoretical framework

### Definition of the Markov process

The stochastic process {*X*(*t*), *t* ∈ *T*} is said to exhibit Markov dependence if, for a finite (or countably infinite) set of points (*t*_{0}, *t*_{1}, …, *t*_{n}, *t*) with *t*_{0} < *t*_{1} < *t*_{2} < … < *t*_{n} < *t*, where *t*, *t*_{r} ∈ *T* (*r* = 0, 1, 2, …, *n*),

$$ \Pr\left\{X(t) \le x \mid X(t_n) = x_n, \ldots, X(t_0) = x_0\right\} = \Pr\left\{X(t) \le x \mid X(t_n) = x_n\right\} \tag{1} $$

From the property given by equation (1), the following relation follows:

$$ \Pr\left\{X(t) = j \mid X(t_n) = i\right\} = \sum_{k \in S} \Pr\left\{X(\tau) = k \mid X(t_n) = i\right\} \Pr\left\{X(t) = j \mid X(\tau) = k\right\} \tag{2} $$

where *t*_{n} < *τ* < *t* and *S* is the state space of the process {*X*(*t*)}.

When the stochastic process has discrete state and parameter spaces, the Markov property takes the following form: for *n* > *n*_{1} > *n*_{2} > … > *n*_{k} and *n*, *n*_{r} ∈ *T* (*r* = 1, 2, …, *k*),

$$ \Pr\left\{X_n = x \mid X_{n_1} = x_1, X_{n_2} = x_2, \ldots, X_{n_k} = x_k\right\} = \Pr\left\{X_n = x \mid X_{n_1} = x_1\right\} \tag{3} $$

A stochastic process with discrete state and parameter spaces which exhibits Markov dependency as in (3) is known as a Markov Process.

From the Markov property, for *n*_{k} < *r* < *n* we get

$$ \Pr\left\{X_n = j \mid X_{n_k} = i\right\} = \sum_{l \in S} \Pr\left\{X_r = l \mid X_{n_k} = i\right\} \Pr\left\{X_n = j \mid X_r = l\right\} \tag{4} $$

Equations (2) and (4) are known as the Chapman-Kolmogorov equations for the process.

### n-step transition probability matrix and n-step transition probabilities

If *P* is the transition probability matrix of a Markov chain {*X*_{n}, *n* = 0, 1, 2, …} with state space *S*, then the elements of *P*^{n} (*P* raised to the power *n*), *P*_{ij}^{(n)}, *i*, *j* ∈ *S*, are the n-step transition probabilities, where *P*_{ij}^{(n)} is the probability that the process will be in state *j* at the *n*^{th} step starting from state *i*.

The above statement can be shown from the Chapman-Kolmogorov equation (4) as follows: for given *r* and *s*, write

$$ P_{ij}^{(r+s)} = \sum_{k \in S} P_{ik}^{(r)} P_{kj}^{(s)} $$

Set *r* = 1, *s* = 1 in the above equation to get

$$ P_{ij}^{(2)} = \sum_{k \in S} P_{ik} P_{kj} $$

Clearly, *P*_{ij}^{(2)} is the (*i*, *j*)^{th} element of the matrix product *P* × *P* = *P*^{2}. Now suppose *P*_{ij}^{(r)} (*r* = 3, 4, …, *n*) is the (*i*, *j*)^{th} element of *P*^{r}; then by the Chapman-Kolmogorov equation,

$$ P_{ij}^{(r+1)} = \sum_{k \in S} P_{ik}^{(r)} P_{kj} $$

which again is the (*i*, *j*)^{th} element of the matrix product *P*^{r}*P* = *P*^{r+1}. Hence by induction, *P*_{ij}^{(n)} is the (*i*, *j*)^{th} element of *P*^{n} for *n* = 2, 3, ….
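
The induction above says that the n-step probabilities are simply entries of the matrix power *P*^{n}. A minimal sketch in Python (the paper's own implementation is in R; the matrix values here are hypothetical, for illustration only):

```python
import numpy as np

# Illustrative 3x3 transition matrix (hypothetical values, not the paper's estimates).
P = np.array([
    [0.6, 0.3, 0.1],   # from state 0: price decrease
    [0.2, 0.5, 0.3],   # from state 1: no change
    [0.1, 0.4, 0.5],   # from state 2: price increase
])

def n_step_matrix(P: np.ndarray, n: int) -> np.ndarray:
    """Return P^n, whose (i, j) entry is the n-step transition probability P_ij^(n)."""
    return np.linalg.matrix_power(P, n)

# The 2-step matrix agrees with the Chapman-Kolmogorov product P @ P.
P2 = n_step_matrix(P, 2)
```

Each row of `P^n` remains a probability distribution over the states, which is a quick sanity check on any estimated matrix.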

To specify the model, we state (without proof) the underlying assumptions about the identified n-step transition probabilities. All states are assumed to be accessible from one another, so the states communicate and the chain is irreducible; states may further be classified as recurrent or transient. Since the states belong to a single class, they share the same period, which we take to be 1; thus the states are aperiodic.

### Limiting distribution of a Markov chain

If *P* is the transition probability matrix of an aperiodic, irreducible, finite-state Markov chain, then

$$ \lim_{n \to \infty} P^{n} = \begin{bmatrix} \boldsymbol{\alpha} \\ \boldsymbol{\alpha} \\ \vdots \\ \boldsymbol{\alpha} \end{bmatrix} \tag{5} $$

where **α** = [*α*_{1}, *α*_{2}, …, *α*_{m}] with 0 < *α*_{j} < 1 and $\sum_{j=1}^{m} \alpha_j = 1$; see Bhat (1984). A chain with this property is said to be ergodic and has limiting distribution **π** = **α**. The transition probability matrix *P* of such a chain is primitive.

### Recurrence and transience of state

Let *X*_{t} be a Markov chain with state space *S*. The probability of the first transition to state *j* at the *t*^{th} step starting from state *i* is

$$ f_{ij}^{(t)} = \Pr\left\{X_t = j, X_{t-1} \ne j, \ldots, X_1 \ne j \mid X_0 = i\right\} \tag{6} $$

Thus the probability that the chain ever reaches state *j* from state *i* is

$$ f_{ij}^{*} = \sum_{t=1}^{\infty} f_{ij}^{(t)} \tag{7} $$

and $\mu_{ij} = \sum_{t=1}^{\infty} t f_{ij}^{(t)}$ is the expected first passage time. Further, if *i* = *j*, then

$$ f_{ii}^{*} = \sum_{t=1}^{\infty} f_{ii}^{(t)} \tag{8} $$

and $\mu_{ii} = \mu_i = \sum_{t=1}^{\infty} t f_{ii}^{(t)}$ is the mean recurrence time of state *i* if state *i* is recurrent.

A state *i* is said to be recurrent (persistent) if and only if, starting from state *i*, eventual return to this state is certain; that is, if and only if *f*_{ii}^{*} = 1. A state *i* is said to be transient if and only if, starting from state *i*, there is a positive probability that the process never returns to this state, i.e. *f*_{ii}^{*} < 1.
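
The first-passage quantities above can be computed numerically from the recursion *f*_{ij}^{(t)} = Σ_{l≠j} *P*_{il} *f*_{lj}^{(t−1)}. A Python sketch (the paper works in R; the two-state matrix and its parameter values here are hypothetical):

```python
import numpy as np

def first_passage_probs(P, i, j, t_max=500):
    """Compute f_ij^(t) for t = 1..t_max via the recursion
    f_kj^(t) = sum_{l != j} P_kl * f_lj^(t-1), with f_kj^(1) = P_kj."""
    m = P.shape[0]
    f = np.zeros((t_max + 1, m))   # f[t, k] holds f_kj^(t)
    f[1, :] = P[:, j]
    for t in range(2, t_max + 1):
        for k in range(m):
            f[t, k] = sum(P[k, l] * f[t - 1, l] for l in range(m) if l != j)
    return f[1:, i]

# Hypothetical two-state chain with P = [[0.6, 0.4], [0.2, 0.8]].
P = np.array([[0.6, 0.4],
              [0.2, 0.8]])
f00 = first_passage_probs(P, 0, 0)
f_star = f00.sum()                                  # f_00^*, equation (7)
mu0 = (np.arange(1, f00.size + 1) * f00).sum()      # mean recurrence time of state 0
```

For this chain the return to state 0 is certain (`f_star` ≈ 1, so state 0 is recurrent) and the mean recurrence time works out to 3 steps.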

## Model specification

### Defining the problem (Equity price changes as a three-state Markov process)

Let *Y*_{t} be the equity price at time *t*, where *t* = 0, 1, 2, …, *n* (*t* is measured in weekly time intervals). Further, define *d*_{t} = *Y*_{t} − *Y*_{t−1}, which measures the change in equity price at time *t*. Treating each week's closing price as a discrete time unit, we define a random variable *X*_{t} indicating the state of the equity closing price at time *t*:

$$ X_t = \begin{cases} 0 & \text{if } d_t < 0 \quad (\text{price decrease}) \\ 1 & \text{if } d_t = 0 \quad (\text{no change}) \\ 2 & \text{if } d_t > 0 \quad (\text{price increase}) \end{cases} \tag{9} $$

Next, we define an indicator vector

$$ Z_{ti} = \begin{cases} 1 & \text{if } X_t = i \\ 0 & \text{otherwise} \end{cases}, \quad i = 0, 1, 2 \tag{10} $$

Then, for the outcomes of *X*_{t}, the number of weeks spent in state *i* is

$$ n_i = \sum_{t=1}^{n} Z_{ti}, \quad i = 0, 1, 2 \tag{11} $$

where $n = \sum_{i=0}^{2} n_i$. Hence estimates of the probabilities that the equity price decreased, did not change, and increased can be obtained respectively by $\hat{p}_i = n_i / n$, *i* = 0, 1, 2.

For the stochastic process *X*_{t} obtained above for *t* = 1, 2, …, *n*, we can obtain estimates of the transition probabilities *P*_{ij} = Pr(*X*_{t} = *j* | *X*_{t−1} = *i*) for *i*, *j* = 0, 1, 2 by defining

$$ \hat{P}_{ij} = \frac{n_{ij}}{n_i}, \quad i, j = 0, 1, \ldots, k \tag{12a} $$

where *n*_{ij} is the number of observed one-step transitions from state *i* to state *j* and *k* + 1 is the number of states of the chain.

Therefore, an estimate of the transition matrix for *k* = 2 is

$$ \hat{P} = \begin{bmatrix} \hat{P}_{00} & \hat{P}_{01} & \hat{P}_{02} \\ \hat{P}_{10} & \hat{P}_{11} & \hat{P}_{12} \\ \hat{P}_{20} & \hat{P}_{21} & \hat{P}_{22} \end{bmatrix} \tag{12b} $$

Suppose the data in Additional file 1 is uploaded as .csv; the *R* code for computing the estimates in (12b) can be found in Additional file 2 (three-state Markov chain function column).
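
The counting scheme behind (12a) and (12b) can be sketched as follows (Python here rather than the paper's R; the price series is hypothetical, not the GSE data of Additional file 1):

```python
import numpy as np

def price_states(prices):
    """Map weekly closing prices to states: 0 = decrease, 1 = no change, 2 = increase."""
    d = np.diff(prices)                        # d_t = Y_t - Y_{t-1}
    return np.where(d < 0, 0, np.where(d == 0, 1, 2))

def estimate_transition_matrix(states, m=3):
    """P_ij = n_ij / n_i: count one-step transitions and normalise each row."""
    counts = np.zeros((m, m))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1, row_sums)   # guard empty rows

# Hypothetical weekly closing prices (illustration only).
prices = [1.00, 1.02, 1.02, 0.99, 0.99, 1.01, 1.03, 1.03, 1.00]
P_hat = estimate_transition_matrix(price_states(prices))
```

Each row of `P_hat` is the empirical distribution of next-week states given this week's state, which is exactly the maximum-likelihood estimate n_ij / n_i.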

### For a two-state Markov process

We maintain the terms defined above and set

$$ X_t = \begin{cases} 0 & \text{if } d_t \le 0 \\ 1 & \text{if } d_t > 0 \end{cases} \tag{13} $$

Further setting *i*, *j* = 0, 1 (for *k* = 1) and applying (9), (10), (11), (12a), and (12b) sequentially, we obtain

$$ \hat{P} = \begin{bmatrix} \hat{P}_{00} & \hat{P}_{01} \\ \hat{P}_{10} & \hat{P}_{11} \end{bmatrix} \tag{14} $$

Without loss of generality, suppose *X*_{t} has state space *S* = {0, 1} and transition probability matrix

$$ P = \begin{bmatrix} 1-\theta & \theta \\ \lambda & 1-\lambda \end{bmatrix}, \quad 0 < \theta, \lambda < 1 $$

Then *f*_{00}^{(1)} = 1 − *θ* and, by the Markov property and the definition of conditional probability, for *t* ≥ 2 we have

$$ f_{00}^{(t)} = \theta (1-\lambda)^{t-2} \lambda $$

Solving $\mu_0 = \mu_{00} = \sum_{t=1}^{\infty} t f_{00}^{(t)}$ yields the mean recurrence time. Thus,

$$ \mu_0 = (1-\theta) + \sum_{t=2}^{\infty} t\,\theta\lambda(1-\lambda)^{t-2} = \frac{\theta + \lambda}{\lambda} \tag{15} $$

Similarly, we have

$$ \mu_1 = \frac{\theta + \lambda}{\theta} \tag{16} $$

The corresponding R algorithm is shown in Additional file 2 (two-state Markov chain function column).
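
For a two-state chain with transition matrix [[1 − θ, θ], [λ, 1 − λ]], the standard closed forms for the mean recurrence times are μ₀ = (θ + λ)/λ and μ₁ = (θ + λ)/θ. A sketch checking the closed form against the first-passage series, in Python rather than the paper's R (the θ and λ values are hypothetical):

```python
def mean_return_times(theta: float, lam: float) -> tuple:
    """Closed-form mean recurrence times for the two-state chain
    P = [[1 - theta, theta], [lam, 1 - lam]] (standard result)."""
    return (theta + lam) / lam, (theta + lam) / theta

def mean_return_time_series(theta: float, lam: float, t_max: int = 2000) -> float:
    """mu_0 = sum_t t * f_00^(t), with f_00^(1) = 1 - theta and
    f_00^(t) = theta * (1 - lam)**(t - 2) * lam for t >= 2 (truncated at t_max)."""
    mu = 1 * (1 - theta)
    for t in range(2, t_max + 1):
        mu += t * theta * (1 - lam) ** (t - 2) * lam
    return mu

mu0, mu1 = mean_return_times(0.3, 0.2)   # hypothetical theta, lam
```

The geometric series converges quickly, so the truncated sum matches the closed form to machine precision.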

## Generating eigen vectors for computation of limiting distributions

After the transition probabilities are obtained for both the two-state and three-state chains, the R code in the lower portions of columns one and two of Additional file 2 generates the respective eigenvectors used to compute the limiting distributions.
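
The limiting distribution can equivalently be computed as the normalised left eigenvector of *P* for eigenvalue 1. A Python sketch of this eigenvector computation (the paper's version is the R code in Additional file 2; the matrix here is hypothetical):

```python
import numpy as np

def limiting_distribution(P: np.ndarray) -> np.ndarray:
    """Stationary (limiting) distribution of an ergodic chain: the left
    eigenvector of P for eigenvalue 1, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)        # eigenvectors of P^T are left eigenvectors of P
    k = np.argmin(np.abs(vals - 1.0))      # pick the eigenvalue closest to 1
    pi = np.real(vecs[:, k])
    return pi / pi.sum()                   # normalise (also fixes an overall sign)

# Hypothetical 3x3 matrix (decrease / no change / increase), not the paper's estimates.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])
pi = limiting_distribution(P)
```

For an ergodic chain, `pi` also matches any row of `P^n` for large `n`, mirroring equation (5).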

## Findings and discussions

### Data structure and summary statistics

Data used for this paper are weekly trading price changes for five randomly selected equities on the Ghana Stock Exchange (GSE), each covering the period January 2012 to December 2013. We obtain the weekly price changes using the relation *d*_{t} = *Y*_{t} − *Y*_{t−1}, where *Y*_{t} represents the equity closing price in week *t* and *Y*_{t−1} is the closing price of the immediately preceding week. The equities selected are Aluworks (ALW), Cal Bank (CAL), Ecobank Ghana (EBG), Ecobank Transnational Incorporated (ETI), and Fan Milk Ghana Limited (FML).

In all, 104 weekly observations (52 weeks in each of the two years) were obtained. Summary statistics for the respective equities on the GSE are shown in Table 1. We present counts of weekly price decreases, no change in price, and price increases, along with descriptive statistics for each equity's weekly price change.

Overall, "no price change" was the most frequent outcome over the study period. The lowest and highest price changes for the trading period are −4.19 and 9.54, respectively. The estimated kurtosis and skewness values are also shown. Figure 1 plots the average weekly price change of each equity over the study period against the standard deviation of its weekly price changes.

### Empirical results on model application (three-state Markov chain)

For the five randomly selected equities, the transition probabilities are presented as follows. These were obtained from equation (12a), defining $P_{ij} = n_{ij}/n_i$, with respect to the three-state Markov process. A 3 × 3 transition matrix is obtained for each equity as defined by (12b).

From the results of the algorithm, we obtain the estimated transition matrices of the five equities with which we test the hypothesis.

Clearly, $\hat{P}_{ij} > 0$ for all *i*, *j* = 0, 1, 2, indicating irreducibility of the chains for all equities. Hence state 0 of each chain is aperiodic, and since periodicity is a class property, the chains are aperiodic. It follows that the chains are ergodic and have limiting distributions.

Figure 2 presents the *t*-step transition probabilities for share price increases based on the assumption of time-homogeneity. It shows line plots of the transition probabilities *P*_{22}^{(t)} for each selected stock as computed above, i.e. the probability that a share initially in state 2 returns to state 2 after *t* weeks. On this plot, the logical choice is the equity with the highest *P*_{22}.

From the plot, the FML share is the best choice for the investor, since its probability of moving from a high price to a higher price exceeds that of the other selected stocks. ALW recorded the lowest transition probability within the period. Comparing CAL to EBG, the methodology shows that CAL shares maintain a higher probability of moving to higher prices than EBG shares, although the latter started with higher prices at inception.

Using equation (5), the limiting distributions of the respective equities were computed. These probabilities measure the proportion of time the equity spends in each state in the long run. From Table 2, ALW equity has a 14% chance of decreasing and an 11% chance of increasing in the long run, but a 75% chance of no change in price. Similarly, in the long run, FML equity has a 20% chance of decreasing, a 39% chance of no change in price, and a 42% chance of increasing in price. FML equity thus has the highest long-run probability of price increase.

### Empirical model application (the two-state Markov process)

Defining a two-state Markov process following equation (13), we derive the state transition probabilities. The two-state transition probability matrix entries are shown in Table 3.

Applying equations (15) and (16) to the transition probabilities, we obtain the respective mean return times of the selected equities, shown in Table 4.

Mean return time is measured in weeks, with *μ*_{ij} as defined in (15) and (16). It measures the expected time until the equity price next returns to the state it occupied at time 0. Figure 3 plots the expected return time *μ*_{11} of the selected stocks, i.e. the expected time until the next increase in share price. We expect the chosen share not only to have the highest transition probability but also a relatively low mean return time: the smallest *μ*_{11} signifies the shortest expected time to the next price increase.

## Conclusion

The Markov process provides a credible approach for analyzing and predicting time series data which reflect Markov dependency. The study finds that all identified states communicate and are aperiodic and ergodic, hence possessing limiting distributions. Figures 2 and 3 (the t-step transition probabilities *P*_{22}^{(t)} of equity price increases, i.e. transitions from state 2 to state 2, and the expected return times) show that the investor gains good knowledge of the characteristics of the respective equities, improving decision making in the light of return maximization. Among the selected stocks, FML equity recorded the highest state transition probabilities and the highest limiting distribution, but the second-lowest mean return time to price increases (3.224 weeks).

Our suggested use of Markov chains as a tool for stock trading decisions improves investor knowledge and the chances of higher returns while minimizing risk through better-informed choices. We showed that the proposed stochastic analysis of equity prices by Markov chains improves equity portfolio decisions on a sound statistical foundation. In future work, we shall explore an infinite state space for the Markov chain model in stock investment decision making.

## References

Aguilera MA, Ocaña AF, Valderrama JM: Stochastic modeling for evolution of stock prices by means of functional principal component analysis. In *Applied Stochastic Models in Business and Industry*. John Wiley & Sons, Ltd., New York; 1999.

Ammann M, Verhofen M: The effect of market regimes on style allocation. Working Paper Series in Finance, No. 20; 2006. http://www.finance.unisg.ch

Bhat UN: *Elements of Applied Stochastic Processes*. 2nd edition. Wiley Series in Probability & Mathematical Statistics; 1984.

Bulla J, Mergner S, Bulla I, Sesboüé A, Chesneau C: Markov-switching asset allocation: do profitable strategies exist? Munich Personal RePEc Archive, MPRA Paper No. 21154; 2010. http://mpra.ub.uni-muenchen.de/21154/

Duffie D, Singleton JK: Simulated moments estimation of Markov models of asset prices. *Econometrica* 1993, 61(4):929-952. doi:10.2307/2951768

Hassan RM, Nath B: Stock market forecasting using hidden Markov model: a new approach. In *Proceedings of the 2005 5th International Conference on Intelligent Systems Design and Applications (ISDA'05)*. IEEE Computer Society, Washington, DC, USA; 2005.

Xi X, Mamon R, Davison M: A higher-order hidden Markov chain-modulated model for asset allocation. *J Math Model Algorithm Oper Res* 2012, 13(1):59-85.

Zhang D, Zhang X: Study on forecasting the stock market trend based on stochastic analysis method. *Int J Bus Manage* 2009, 4(6):163-170.

## Author information

### Authors and Affiliations

### Corresponding author

## Additional information

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

FOM introduced the idea, undertook the theoretical and methodology development. ENBQ developed the codes and helped in the analysis and typesetting of mathematical equations. RAL also helped in the analysis and the general typesetting of the manuscript. All authors read and approved the final manuscript.

## Electronic supplementary material


## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Mettle, F.O., Quaye, E.N.B. & Laryea, R.A. A methodology for stochastic analysis of share prices as Markov chains with finite states.
*SpringerPlus* **3, **657 (2014). https://doi.org/10.1186/2193-1801-3-657


### Keywords

- Markov process
- Transition probability matrix
- Limiting distribution
- Expected mean return time
- Markov chain