
A methodology for stochastic analysis of share prices as Markov chains with finite states

Abstract

Price volatility makes stock investment risky, leaving investors in a critical position when decisions must be made under uncertainty. To improve investors' confidence in evaluating exchange-listed equities without resorting to time series methodology, we specify equity price changes as a stochastic process assumed to possess Markov dependency, with state transition probability matrices defined over the identified state space (i.e. decrease, stable or increase). We establish that the identified states communicate and that the chains are aperiodic and ergodic, and hence possess limiting distributions. We develop a methodology for determining the expected mean return time for stock price increases and establish criteria for improving investment decisions based on the highest transition probabilities, the lowest mean return time and the highest limiting distributions. We further develop an R algorithm for running the proposed methodology. The established methodology is applied to selected equities using weekly trading data from the Ghana Stock Exchange.

Background

The stock market has gained recognition as a viable investment field within financial markets. Investors commonly seek to understand the background and historical behavior of listed equities to support investment decision making. Although stock trading is noted for its likelihood of yielding high returns, the earnings of market players depend in part on the degree of equity price fluctuation and other market interactions. This makes earnings very volatile and associated with high risk and, at times, significant losses.

In stochastic analysis, a Markov chain specifies a system of transitions of an entity from one state to another. Identifying the transition as a random process, the Markov dependency theory emphasizes the "memoryless property": the future state (next step or position) of the process depends strictly on its current state and not on its past sequence of experiences observed over time. Aguilera et al. (1999) noted that daily stock price records do not conform to the usual constant-variance assumption of conventional statistical time series. Indeed, there may be unusual volatilities that remain unaccounted for when stationary variance in stock prices is assumed given past trends. To surmount this problem, model classes specified under the Autoregressive Conditional Heteroskedasticity (ARCH) framework and its generalized forms (GARCH) make provision for smoothing unusual volatilities.

Given the price fluctuations and randomness that challenge the application of some statistical time series models to stock price forecasting, it is natural to view stock price changes over time as a stochastic process. Aguilera et al. (1999) and Hassan and Nath (2005) respectively employed Functional Principal Component Analysis (FPCA) and a Hidden Markov Model (HMM) to forecast stock price trends, motivated by the non-stationary nature of the stochastic processes that generate financial prices. Zhang and Zhang (2009) also developed a stochastic stock price forecasting model using Markov chains.

Several studies (Xi et al. 2012; Bulla et al. 2010; Ammann and Verhofen 2006; Duffie and Singleton 1993) have investigated the application of stochastic probability to portfolio allocation. Building on the existing literature, we assume that stock price fluctuations exhibit Markov dependency and time-homogeneity, specify a three-state Markov process (i.e. price decrease, no change and price increase), and advance a methodology for determining the mean return time for equity price increases and the respective limiting distributions using the generated state-transition matrices. We further replicate the case for a two-state space, i.e. decrease in price and increase in price. Based on the methodology, we hypothesize that:

The equity with the highest state transition probability and the least mean return time remains the best choice for an investor.

We explore model performance using weekly historical data from the Ghana Stock Exchange (GSE); we set up the transition probability matrices for selected stocks to test the model's efficiency and use.

Review of theoretical framework

Definition of the Markov process

The stochastic process $\{X(t),\ t \in T\}$ is said to exhibit Markov dependence if, for a finite (or countably infinite) set of points $(t_0, t_1, \ldots, t_n, t)$ with $t_0 < t_1 < t_2 < \cdots < t_n < t$, where $t, t_r \in T$ $(r = 0, 1, 2, \ldots, n)$,

$$P\left(X(t) \le x \mid X(t_n) = x_n,\ X(t_{n-1}) = x_{n-1}, \ldots, X(t_0) = x_0\right) = P\left(X(t) \le x \mid X(t_n) = x_n\right) = F(x_n, x;\, t_n, t)$$
(1)

From the property given by Equation (1), the following relation holds:

$$F(x_n, x;\, t_n, t) = \int_{y \in S} F(y, x;\, \tau, t)\, dF(x_n, y;\, t_n, \tau)$$
(2)

where $t_n < \tau < t$ and $S$ is the state space of the process $\{X(t)\}$.

When the stochastic process has discrete state and parameter spaces, (2) takes the following form: for $n > n_1 > n_2 > \cdots > n_k$ and $n, n_r \in T$ $(r = 1, 2, \ldots, k)$,

$$P\left(X_n = j \mid X_{n_1} = i_1,\ X_{n_2} = i_2, \ldots, X_{n_k} = i_k\right) = P\left(X_n = j \mid X_{n_1} = i_1\right) = P_{i_1 j}(n_1, n)$$
(3)

A stochastic process with discrete state and parameter spaces which exhibits Markov dependency as in (3) is known as a Markov Process.

From the Markov property, for $n_k < r < n$ we get

$$P_{ij}(n_k, n) = P\left(X_n = j \mid X_{n_k} = i\right) = \sum_{m \in S} P\left(X_n = j \mid X_r = m\right) P\left(X_r = m \mid X_{n_k} = i\right) = \sum_{m \in S} P_{im}(n_k, r)\, P_{mj}(r, n)$$
(4)

Equations (2) and (4) are known as the Chapman-Kolmogorov equations for the process.

n-step transition probability matrix and n-step transition probabilities

If $P$ is the transition probability matrix of a Markov chain $\{X_n,\ n = 0, 1, 2, \ldots\}$ with state space $S$, then the elements of $P^n$ ($P$ raised to the power $n$), $P_{ij}(n)$, $i, j \in S$, are the $n$-step transition probabilities, where $P_{ij}(n)$ is the probability that the process will be in state $j$ at the $n$th step, starting from state $i$.

The above statement follows from the Chapman-Kolmogorov equation (4): for given $r$ and $s$, write

$$P_{ij}(s + r) = \sum_{k \in S} P_{ik}(r)\, P_{kj}(s)$$

Set r = 1, s = 1 in the above equation to get

$$P_{ij}(2) = \sum_{k \in S} P_{ik}\, P_{kj}$$

Clearly, $P_{ij}(2)$ is the $(i,j)$th element of the matrix product $P \times P = P^2$. Now suppose $P_{ij}(r)$ is the $(i,j)$th element of $P^r$ for some $r \ge 2$; then by the Chapman-Kolmogorov equation,

$$P_{ij}(r + 1) = \sum_{k \in S} P_{ik}(r)\, P_{kj}$$

which again is the $(i,j)$th element of the matrix product $P^r P = P^{r+1}$. Hence, by induction, $P_{ij}(n)$ is the $(i,j)$th element of $P^n$ for $n = 2, 3, \ldots$.
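As an illustration of this result, the following minimal R sketch (ours, not the paper's Additional file 2 code) computes $P^n$ by repeated multiplication and reads off an $n$-step transition probability; the transition matrix shown is purely illustrative.

```r
# Illustrative sketch: n-step transition probabilities as elements of P^n.
# The matrix P below is made up; any row-stochastic matrix will do.
P <- matrix(c(0.3, 0.5, 0.2,
              0.2, 0.6, 0.2,
              0.1, 0.4, 0.5),
            nrow = 3, byrow = TRUE)

# P^(r+1) = P^r %*% P, exactly the induction step above
mat_power <- function(P, n) {
  out <- diag(nrow(P))                  # P^0 = identity
  for (r in seq_len(n)) out <- out %*% P
  out
}

mat_power(P, 4)[3, 3]   # P_22(4) in the paper's 0-based labels (R is 1-based)
```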

To specify the model, we state (without proof) the underlying assumptions about the identified $n$-step transition probabilities.

All states of the chain are assumed to communicate, so the transition probability matrix is irreducible and the states belong to a single class. States may be recurrent or transient, and all states in a class share the same period, which we take to be 1; thus the states are aperiodic.

Limiting distribution of a Markov chain

If P is the transition probability matrix of an aperiodic, irreducible, finite state Markov chain, then

$$\lim_{t \to \infty} P^t = \pi = \begin{bmatrix} \alpha \\ \alpha \\ \vdots \\ \alpha \end{bmatrix}$$
(5)

where $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_m]$ with $0 < \alpha_j < 1$ and $\sum_{j=1}^{m} \alpha_j = 1$; see Bhat (1984). A chain with this property is said to be ergodic and has limiting distribution $\pi$. The transition probability matrix $P$ of such a chain is primitive.
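As an illustration (a worked example we add here, anticipating the two-state matrix used in (13) below), consider a chain on $S = \{0, 1\}$ with

$$P = \begin{bmatrix} 1-\theta & \theta \\ \beta & 1-\beta \end{bmatrix}, \qquad 0 < \theta, \beta < 1.$$

Solving $\alpha P = \alpha$ subject to $\alpha_0 + \alpha_1 = 1$ gives

$$\alpha = \left[ \frac{\beta}{\theta+\beta},\ \frac{\theta}{\theta+\beta} \right],$$

so the chain spends a long-run fraction $\beta/(\theta+\beta)$ of its time in state 0 and $\theta/(\theta+\beta)$ in state 1. These limiting probabilities are the reciprocals of the mean recurrence times obtained in (15) and (16) below.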

Recurrence and transience of state

Let $\{X_t\}$ be a Markov chain with state space $S$; then the probability of the first transition to state $j$ at the $t$th step, starting from state $i$, is

$$f_{ij}(t) = P\left(X_t = j,\ X_r \ne j;\ r = 1, 2, 3, \ldots, t-1 \mid X_0 = i\right)$$
(6)

Thus the probability that the chain, starting from state $i$, ever reaches state $j$ is

$$f_{ij}^{*} = \sum_{t=1}^{\infty} f_{ij}(t)$$

and $\mu_{ij} = \sum_{t=1}^{\infty} t\, f_{ij}(t)$ is the expected first passage time. Further, if $i = j$, then

$$f_{ii}(t) = P\left(X_t = i,\ X_r \ne i;\ r = 1, 2, 3, \ldots, t-1 \mid X_0 = i\right)$$
(7)

and $\mu_{ii} = \mu_i = \sum_{t=1}^{\infty} t\, f_{ii}(t)$ is the mean recurrence time of state $i$ if state $i$ is recurrent.

A state i is said to be recurrent (persistent) if and only if, starting from state i, eventual return to this state is certain. Thus state i is recurrent if and only if

$$f_{ii}^{*} = \sum_{t=1}^{\infty} f_{ii}(t) = 1$$
(8)

A state $i$ is said to be transient if and only if, starting from state $i$, there is a positive probability that the process may not eventually return to this state. This means $f_{ii}^{*} < 1$.
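To make these quantities concrete, the following R sketch (ours, not part of the paper) computes the first-passage probabilities $f_{ij}(t)$ from the recursion $f_{ij}(1) = P_{ij}$, $f_{ij}(t) = \sum_{k \ne j} P_{ik}\, f_{kj}(t-1)$, and approximates $\mu_{ij}$ by a truncated sum; the two-state matrix used in the example is illustrative only.

```r
# Sketch (not from the paper): first-passage probabilities and mean
# first-passage / recurrence times for a finite Markov chain.
first_passage <- function(P, i, j, t_max = 1000) {
  f <- numeric(t_max)
  f[1] <- P[i, j]
  g <- P[, j]                                         # g[k] = f_kj(1) for every start k
  for (t in 2:t_max) {
    g <- as.vector(P[, -j, drop = FALSE] %*% g[-j])   # f_kj(t) from f_kj(t-1)
    f[t] <- g[i]
  }
  list(f = f, mu = sum(seq_len(t_max) * f))           # mu_ij, truncated at t_max
}

# Illustrative two-state chain
P <- matrix(c(0.6, 0.4,
              0.3, 0.7), nrow = 2, byrow = TRUE)
first_passage(P, i = 1, j = 1)$mu   # mean recurrence time of the first state (about 2.33)
```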

Model specification

Defining the problem (Equity price changes as a three-state Markov process)

Let $Y_t$ be the equity price at time $t$, where $t = 0, 1, 2, \ldots, n$ ($t$ is measured in weekly intervals). Further, we define $d_t = Y_t - Y_{t-1}$, which measures the change in equity price at time $t$. Treating each week's closing price as a discrete time unit, we define a random variable $X_t$ that indicates the state of the equity closing price at time $t$ and takes values in $\{0, 1, 2\}$:

$$X_t = \begin{cases} 0 & \text{if } d_t < 0 \ \text{(decrease in equity price from } t-1 \text{ to } t\text{)} \\ 1 & \text{if } d_t = 0 \ \text{(no change in equity price from } t-1 \text{ to } t\text{)} \\ 2 & \text{if } d_t > 0 \ \text{(increase in equity price from } t-1 \text{ to } t\text{)} \end{cases}$$

Next, we define an indicator variable

$$I_{i,t} = \begin{cases} 1 & \text{if } X_t = i \\ 0 & \text{if } X_t \ne i \end{cases} \qquad \text{for } i = 0, 1, 2 \text{ and } t = 1, 2, \ldots, n$$
(9)

Then clearly, for the outcomes of $X_t$, we have

$$n_i = \sum_{t=1}^{n} I_{i,t} \qquad \text{for } i = 0, 1, 2$$
(10)

where $n = \sum_{i=0}^{2} n_i$. Hence estimates of the probabilities that the equity price decreased, did not change, and increased can be obtained respectively as

$$\hat{P}_0 = \frac{n_0}{n}, \qquad \hat{P}_1 = \frac{n_1}{n}, \qquad \hat{P}_2 = \frac{n_2}{n}$$
(11)

For the stochastic process $X_t$ obtained above for $t = 1, 2, \ldots, n$, we can obtain estimates of the transition probabilities $P_{ij} = \Pr(X_t = j \mid X_{t-1} = i)$ for $i, j = 0, 1, 2$ by defining

$$\delta_t(i, j) = \begin{cases} 1 & \text{if } X_t = i \text{ and } X_{t+1} = j \\ 0 & \text{otherwise} \end{cases} \qquad \text{for } t = 1, 2, \ldots, n-1 \text{ and } i, j = 0, 1, \ldots, k$$

where k + 1 is the number of states of the chain.

$$n_{ij} = \sum_{t=1}^{n-1} \delta_t(i, j) \quad \text{for } i, j = 0, 1, 2. \qquad \text{Then } \hat{P}_{ij} = \frac{n_{ij}}{n_i} \quad \text{for } i, j = 0, 1, \ldots, k$$
(12a)

Therefore, an estimate for the transition matrix for k = 2 is

$$\hat{P} = \begin{bmatrix} \hat{P}_{00} & \hat{P}_{01} & \hat{P}_{02} \\ \hat{P}_{10} & \hat{P}_{11} & \hat{P}_{12} \\ \hat{P}_{20} & \hat{P}_{21} & \hat{P}_{22} \end{bmatrix}$$
(12b)

Suppose the data in Additional file 1 are uploaded as a .csv file; the R code for computing the estimates in (12b) is given in Additional file 2 (three-state Markov chain function column).
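As an indication of what such code involves, here is a minimal sketch of the estimator in (12a)-(12b) written by us (it is not the authors' Additional file 2 function); `prices` is assumed to be a numeric vector of weekly closing prices such as those in Additional file 1.

```r
# Sketch (not the Additional file 2 code): estimate the transition matrix (12b)
# from a numeric vector of weekly closing prices.
estimate_tpm <- function(prices, states = 3) {
  d <- diff(prices)                               # d_t = Y_t - Y_{t-1}
  x <- if (states == 3) {
    ifelse(d < 0, 0, ifelse(d == 0, 1, 2))        # states 0, 1, 2 as defined above
  } else {
    ifelse(d > 0, 1, 0)                           # two-state version (next section)
  }
  n_ij <- matrix(0, states, states,
                 dimnames = list(as.character(0:(states - 1)),
                                 as.character(0:(states - 1))))
  for (t in seq_len(length(x) - 1)) {
    n_ij[x[t] + 1, x[t + 1] + 1] <- n_ij[x[t] + 1, x[t + 1] + 1] + 1
  }
  n_ij / rowSums(n_ij)                            # P-hat_ij = n_ij / n_i, cf. (12a)
}

# Usage with made-up prices (the real GSE series sit in Additional file 1):
estimate_tpm(c(2.10, 2.10, 2.15, 2.12, 2.12, 2.20, 2.25))
```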

For a two-state Markov process

We maintain the terms defined above and set

$$X_t = \begin{cases} 0 & \text{if } d_t \le 0 \ \text{(no increase in equity price from } t-1 \text{ to } t\text{)} \\ 1 & \text{if } d_t > 0 \ \text{(increase in equity price from } t-1 \text{ to } t\text{)} \end{cases}$$

Further, setting $i, j = 0, 1$ (for $k = 1$) and applying (9), (10), (11), (12a) and (12b) sequentially, we obtain

$$\hat{P} = \begin{bmatrix} \hat{P}_{00} & \hat{P}_{01} \\ \hat{P}_{10} & \hat{P}_{11} \end{bmatrix}$$

Without loss of generality, suppose $X_t$ has state space $S = \{0, 1\}$ and transition probability matrix

$$P = \begin{bmatrix} 1 - \theta & \theta \\ \beta & 1 - \beta \end{bmatrix}, \qquad 0 < \theta, \beta < 1$$
(13)

Then $f_{00}(1) = 1 - \theta$, and for $t \ge 2$ we have

$$f_{00}(t) = P\left(X_t = 0,\ X_r \ne 0;\ r = 1, 2, 3, \ldots, t-1 \mid X_0 = 0\right) = P\left(X_t = 0,\ X_r = 1;\ r = 1, 2, 3, \ldots, t-1 \mid X_0 = 0\right)$$

By the Markov property and the definition of conditional probability, we have

$$f_{00}(t) = P\left(X_t = 0 \mid X_{t-1} = 1\right) \prod_{r=2}^{t-1} P\left(X_r = 1 \mid X_{r-1} = 1\right) P\left(X_1 = 1 \mid X_0 = 0\right) = \beta (1-\beta)^{t-2}\, \theta = \theta \beta (1-\beta)^{t-2}, \qquad t \ge 2$$
(14)

We solve $\mu_0 = \mu_{00} = \sum_{t=1}^{\infty} t\, f_{00}(t)$ to obtain the corresponding mean recurrence time. Thus,

$$\mu_0 = \mu_{00} = \sum_{t=1}^{\infty} t\, f_{00}(t) = (1 - \theta) + \sum_{t=2}^{\infty} t\, \theta \beta (1-\beta)^{t-2} = \frac{\theta + \beta}{\beta}$$
(15)
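For completeness, the last equality uses a standard geometric-series identity (a step we spell out here):

$$\sum_{t=2}^{\infty} t(1-\beta)^{t-2} = \sum_{s=0}^{\infty} (s+2)(1-\beta)^{s} = \frac{1-\beta}{\beta^{2}} + \frac{2}{\beta} = \frac{1+\beta}{\beta^{2}},$$

so that

$$\mu_{00} = (1-\theta) + \theta\beta \cdot \frac{1+\beta}{\beta^{2}} = \frac{\beta(1-\theta) + \theta(1+\beta)}{\beta} = \frac{\theta+\beta}{\beta}.$$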

Similarly, we have

$$f_{01}(t) = \theta(1-\theta)^{t-1},\ t \ge 1, \qquad \mu_{01} = \frac{1}{\theta}$$
$$f_{10}(t) = \beta(1-\beta)^{t-1},\ t \ge 1, \qquad \mu_{10} = \frac{1}{\beta}$$
$$f_{11}(1) = 1 - \beta, \qquad f_{11}(t) = \theta\beta(1-\theta)^{t-2},\ t \ge 2, \qquad \mu_{11} = \mu_1 = \frac{\theta + \beta}{\theta}$$
(16)

The corresponding R algorithm is shown in Additional file 2 (two-state Markov chain function column).
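For reference, the following small R function (our sketch, not the Additional file 2 code) applies (15) and (16) to a two-state transition matrix parameterized as in (13), with $\theta = \hat{P}_{01}$ and $\beta = \hat{P}_{10}$; the matrix in the usage line is hypothetical, not a Table 3 entry.

```r
# Sketch: mean return / first-passage times from a two-state matrix as in (13).
mean_return_times <- function(P) {
  theta <- P[1, 2]                       # P_01
  beta  <- P[2, 1]                       # P_10
  c(mu_00 = (theta + beta) / beta,       # (15): return to "no increase"
    mu_01 = 1 / theta,                   # (16)
    mu_10 = 1 / beta,                    # (16)
    mu_11 = (theta + beta) / theta)      # (16): return to "increase"
}

# Hypothetical two-state estimate (not from Table 3):
mean_return_times(matrix(c(0.55, 0.45,
                           0.30, 0.70), nrow = 2, byrow = TRUE))
```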

Generating eigenvectors for computation of limiting distributions

After the transition probabilities are obtained for both the two-state and three-state chains, the R code in the lower portions of columns one and two of Additional file 2 is used to generate the respective eigenvectors for computing the limiting distributions.
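One common way to do this, sketched here by us under the assumption that the estimated matrix is primitive (the Additional file 2 code may differ in detail), is to take the eigenvector of $P^{\mathsf T}$ associated with eigenvalue 1 and normalize it to sum to one.

```r
# Sketch: limiting distribution as the normalized left eigenvector of P
# for eigenvalue 1 (computed as an eigenvector of t(P)).
limiting_distribution <- function(P) {
  e <- eigen(t(P))
  v <- Re(e$vectors[, which.max(Re(e$values))])  # eigenvalue closest to 1
  v / sum(v)                                     # normalize to a probability vector
}
```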

Findings and discussions

Data structure and summary statistics

The data used in this paper are weekly trading price changes for five randomly selected equities on the Ghana Stock Exchange (GSE), each covering the period January 2012 to December 2013. We obtain the weekly price changes using the relation $d_t = Y_t - Y_{t-1}$, where $Y_t$ represents the equity closing price in week $t$ and $Y_{t-1}$ is the opening price for the immediately preceding week. The equities selected are Aluworks (ALW), Cal Bank (CAL), Ecobank Ghana (EBG), Ecobank Transnational Incorporated (ETI), and Fan Milk Ghana Limited (FML).

In all, 104 weekly observations (52 weeks in each of the two years) were obtained. Summary statistics for the respective equities on the GSE are shown in Table 1. We present summaries of the respective numbers of weekly price decreases, no changes in price and price increases, together with descriptive statistics for each equity's weekly price change.

Overall, "no price change" was the most frequently observed state over the study period. The lowest and highest price changes for the trading period are -4.19 and 9.54 respectively. The estimated values of kurtosis and skewness are also shown. Figure 1 plots the average weekly price change of each equity listed on the GSE over the study period against the standard deviation of its weekly price changes.

Table 1 Summary statistics on the weekly trading price change over the study period
Figure 1

A plot of the mean and standard deviation of weekly price changes of equities. The plot indicates highly volatile weekly market price fluctuations for any participating investor, and hence a high level of risk associated with equity purchase decisions. We consider that a rational investor would seek to make the best purchasing decision in the face of this risk.

Empirical results on model application (three-state Markov chain)

For the five randomly selected equities, the transition probabilities are presented as follows. These were obtained from equation (12a), $\hat{P}_{ij} = n_{ij}/n_i$, with respect to the three-state Markov process. A $3 \times 3$ transition matrix is obtained for each equity as defined by (12b).

From the results of the algorithm, we select five equities with which to test the hypothesis. They are:

ALW transition probability matrix: $\hat{P} = \begin{bmatrix} 0.133333 & 0.666667 & 0.200000 \\ 0.139241 & 0.759494 & 0.101266 \\ 0.166667 & 0.750000 & 0.083333 \end{bmatrix}$

CAL transition probability matrix: $\hat{P} = \begin{bmatrix} 0.296296 & 0.407407 & 0.296296 \\ 0.261905 & 0.476190 & 0.261905 \\ 0.189189 & 0.324324 & 0.486486 \end{bmatrix}$

EBG transition probability matrix: $\hat{P} = \begin{bmatrix} 0.433333 & 0.366667 & 0.200000 \\ 0.255319 & 0.553191 & 0.191489 \\ 0.137931 & 0.344828 & 0.517241 \end{bmatrix}$

ETI transition probability matrix: $\hat{P} = \begin{bmatrix} 0.166667 & 0.611111 & 0.222222 \\ 0.131148 & 0.639344 & 0.229508 \\ 0.259259 & 0.444444 & 0.296296 \end{bmatrix}$

FML transition probability matrix: $\hat{P} = \begin{bmatrix} 0.380952 & 0.523810 & 0.095238 \\ 0.170732 & 0.487805 & 0.341463 \\ 0.136364 & 0.227273 & 0.636364 \end{bmatrix}$

Clearly $\hat{P}_{ij} > 0$ for all $i, j = 0, 1, 2$, indicating that the chains are irreducible for all equities. Hence state 0 is aperiodic for every equity, and since periodicity is a class property, the chains are aperiodic. This implies that the chains are ergodic and possess limiting distributions.

Figure 2 presents the $t$-step transition probabilities for share price increases under the assumption of time-homogeneity. It shows a line plot of the transition probabilities $P_{22}(t)$ for each selected stock, computed from the matrices above. $P_{22}(t)$ measures the probability that a share initially in state 2 (a price increase) is again in state 2 after $t$ weeks. In view of this plot, the logical choice is the equity with the highest $P_{22}$.
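Curves of this kind can be reproduced by raising an estimated matrix to successive powers; the sketch below (ours) does this for the FML matrix reported above, with state 2 occupying the third row and column.

```r
# Sketch: t-step probabilities P_22(t) of the kind plotted in Figure 2,
# computed for the FML transition matrix reported above.
P_fml <- matrix(c(0.380952, 0.523810, 0.095238,
                  0.170732, 0.487805, 0.341463,
                  0.136364, 0.227273, 0.636364), nrow = 3, byrow = TRUE)

p22 <- sapply(1:12, function(t) {
  Pt <- diag(3)
  for (r in seq_len(t)) Pt <- Pt %*% P_fml   # P^t by repeated multiplication
  Pt[3, 3]                                   # P_22(t): state 2 back to state 2
})
plot(1:12, p22, type = "b", xlab = "t (weeks)", ylab = "P22(t)")
```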

Figure 2

t-step transition probabilities for share price increases.

From the plot, the FML share is the best choice for the investor, since the probability that it moves from a price increase to another price increase is higher than for the other selected stocks. ALW recorded the lowest transition probability over the period. Comparing CAL with EBG, the methodology shows that CAL shares maintain a higher probability of moving to higher prices than EBG shares, although the latter started with higher prices at inception.

Using equation (5), the limiting distributions of the respective equities were computed. These probabilities measure the proportion of time the equity spends in a particular state in the long run. From Table 2, ALW equity has a 14% chance of decreasing and an 11% chance of increasing in the long run, but a 75% chance of no change in price. Similarly, in the long run FML equity has a 20% chance of decreasing, a 39% chance of no change in price and a 42% chance of increasing in price. For this instance, FML equity clearly has the highest probability of price increase in the long run.
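As a quick check, applying the eigenvector sketch given earlier to the ALW matrix reported above reproduces these proportions approximately.

```r
# Quick check (uses the limiting_distribution() sketch from the methods section)
P_alw <- matrix(c(0.133333, 0.666667, 0.200000,
                  0.139241, 0.759494, 0.101266,
                  0.166667, 0.750000, 0.083333), nrow = 3, byrow = TRUE)
round(limiting_distribution(P_alw), 2)   # approximately 0.14, 0.75, 0.11 (Table 2)
```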

Table 2 Entries of the limiting distributions for the respective equities

Empirical model application (the two-state Markov process)

Defining a two-state Markov process as in equation (13), we derive the state transition probabilities. The entries of the two-state transition probability matrices are given in Table 3 below.

Table 3 Entries of two-state transition matrices for selected equities

Applying equations (15) and (16) to the transition probabilities, we obtain the respective mean return times of the selected equities. These are shown in Table 4 below.

Table 4 Expected mean return time for respective stocks

Mean return time is measured in weeks, with $\mu_{ij}$ as defined in (15) and (16). The mean return time measures the expected time until the equity price next returns to the state it occupied at time 0. Figure 3 presents a plot of the expected return time $\mu_{11}$ of the selected stocks; this is the expected time until the next increase in share price. We expect that the chosen share should not only have the highest transition probability but should also possess a relatively low mean return time: the least value of $\mu_{11}$ signifies the shortest return time to a price increase.

Figure 3

Mean recurrence time of selected shares.

Conclusion

The Markov process provides a credible approach for analyzing and predicting time series data that exhibit Markov dependency. The study finds that all identified states communicate and are aperiodic and ergodic, hence possessing limiting distributions. It is evident from Figures 2 and 3 (the t-step transition probabilities for equity price increases, i.e. transitions from state 2 to state 2, and the expected return times) that the investor gains good knowledge about the characteristics of the respective equities, improving decision making with a view to return maximization. With regard to the selected stocks, FML equity recorded the highest state transition probabilities and the highest limiting distribution, but the second lowest mean return time to price increases (3.224 weeks).

Our suggested use of Markov chains as a tool for improving stock trading decisions helps improve investor knowledge and the chances of higher returns, given risk minimization through better-informed choices. We have shown that the proposed use of Markov chains as a stochastic analysis method in equity price studies can improve equity portfolio decisions on a sound statistical foundation. In future work, we shall explore the case of an infinite state space for Markov chain models in stock investment decision making.

References

  • Aguilera MA, Ocaña AF, Valderrama JM: Stochastic modeling for evolution of stock prices by means of functional principal component analysis. Applied Stochastic Models in Business and Industry. John Wiley & Sons, Ltd., New York; 1999.

  • Ammann M, Verhofen M: The effect of market regimes on style allocation. Working Paper Series in Finance, No. 20; 2006. http://www.finance.unisg.ch

  • Bhat UN: Elements of Applied Stochastic Processes. 2nd edition. Wiley Series in Probability & Mathematical Statistics; 1984.

  • Bulla J, Mergner S, Bulla I, Sesboüé A, Chesneau C: Markov-switching asset allocation: do profitable strategies exist? Munich Personal RePEc Archive, MPRA Paper No. 21154; 2010. http://mpra.ub.uni-muenchen.de/21154/

  • Duffie D, Singleton JK: Simulated moments estimation of Markov models of asset prices. Econometrica 1993, 61(4):929-952. doi:10.2307/2951768

  • Hassan RM, Nath B: Stock market forecasting using hidden Markov model: a new approach. In Proceedings of the 2005 5th International Conference on Intelligent Systems Design and Applications (ISDA'05). IEEE Computer Society, Washington, DC, USA; 2005.

  • Xi X, Mamon R, Davison M: A higher-order hidden Markov chain-modulated model for asset allocation. J Math Model Algorithm Oper Res 2012, 13(1):59-85.

  • Zhang D, Zhang X: Study on forecasting the stock market trend based on stochastic analysis method. Int J Bus Manage 2009, 4(6):163-170.


Author information


Corresponding author

Correspondence to Enoch Nii Boi Quaye.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

FOM introduced the idea and undertook the theoretical and methodological development. ENBQ developed the code and helped with the analysis and the typesetting of the mathematical equations. RAL also helped with the analysis and the general typesetting of the manuscript. All authors read and approved the final manuscript.

Electronic supplementary material


Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Mettle, F.O., Quaye, E.N.B. & Laryea, R.A. A methodology for stochastic analysis of share prices as Markov chains with finite states. SpringerPlus 3, 657 (2014). https://doi.org/10.1186/2193-1801-3-657

