The shortest path problem in stochastic networks with unstable topology

The stochastic shortest path length is defined as the arrival probability from a given source node to a given destination node in a stochastic network. We consider topological changes and their effects on the arrival probability in directed acyclic networks. There is a stable topology that shows the physical connections of nodes; however, the communication between nodes is not stable, and this is defined as the unstable topology, where arcs may be congested. A discrete time Markov chain with an absorbing state is established in the network according to the unstable topological changes. Then, the arrival probability to the destination node from the source node is computed as the multi-step transition probability of absorption in the final state of the established Markov chain. Some wait states are assumed, for situations where a physical connection exists but immediate communication between nodes is not possible. The proposed method is illustrated by different numerical examples, and the results can be used to anticipate probable congestion along critical arcs in delay-sensitive networks.

could be congested or uncongested. Wu et al. (2004) modeled a stochastic and time-dependent network with discrete probability distributed arc weights. Peer and Sharma (2007) assumed two kinds of nodes: possibly failing and always working. Ji (2005) solved three models of the shortest path by integrating stochastic simulation and a genetic algorithm. The model considered in this paper is a directed acyclic stochastic network with known discrete distribution probabilities of leaving or waiting in nodes.
Our criterion for evaluating the connections from the source node toward the destination node is the arrival probability, which is obtained by the discrete time Markov chain (DTMC) established in the network (Shirdel and Abdolhosseinzadeh 2016); the best possible connection is then determined as the one with the largest arrival probability. Liu (2010) converted his models into deterministic programming problems. Hutson and Shier (2009) and Rasteiro and Anjo (2004) obtained the maximum expected value of a utility function. Fan et al. (2005) applied a procedure for dynamic routing policies. Nie and Fan (2006) formulated the stochastic on-time arrival problem with dynamic programming, and Fan et al. (2005) minimized the expected travel time.
In this paper, the maximum arrival probability from a given source node to a given destination node is computed according to known discrete distribution probabilities of leaving or waiting in nodes, and a DTMC stochastic process is used to model the problem rather than dynamic programming or stochastic programming. Kulkarni (1986) developed a method based on a continuous time Markov chain (CTMC) to compute the distribution function of the shortest path length. Azaron and Modarres (2005) applied Kulkarni's method to queuing networks. Thomas and White (2007) modeled the problem of constructing a minimum expected total cost route as a Markov decision process; they aimed to respond to congestion that dissipates over time according to some known probability distribution.
The arrival probability gives overall information about the network's ability to transmit flow from the source node toward the destination node. Two conditions are assumed at any state of the established DTMC: departing from the current state to a new state, or waiting in the current state while expecting better conditions. There are several unstable connections between nodes. The leaving distribution probability from one node toward another is the probability that their connecting arc is uncongested. A DTMC with an absorbing state is established and the transition matrix is obtained. Then, the arrival probability from the source node toward the destination node is computed as the multi-step transition probability from the initial state to the absorbing state of the DTMC. The arrival probability introduced by Shirdel and Abdolhosseinzadeh (2016) is reviewed in this paper, then extended, and the concepts and definitions are organized to find the stochastic shortest path. This paper is organized as follows. In "The unstable topology of the network" section, some definitions and assumptions of networks with unstable topology are introduced. The concept of the stochastic process and the DTMC established in the network are described in "The established discrete time Markov chain" section; the computations of the arrival probability and the stochastic shortest path are also presented in that section. "Numerical results" section contains numerical results of implementing the proposed method on networks with various topologies.

The unstable topology of the network
In this section, we introduce some definitions and assumptions for networks with unstable topology. Let network G = (N, A), with node set N and arc set A, be a directed acyclic network. Then we can label its nodes in a topological order such that for any (i, j) ∈ A, i < j (Ahuja et al. 1993). The physical topology, for any (i, j) ∈ A, shows the possibility of communication between nodes i, j ∈ N. In transportation networks there are physical connections between nodes, but it may not be possible to traverse further toward the destination node because of probable congestion. If there are facilities in the network G, but it is not possible to use them continuously, then G has unstable topology. So, an arc (i, j) ∈ A does not mean there is stable communication between nodes i, j ∈ N at all times (the arc could be congested). For any node i, the uniform distribution probabilities of its leaving arcs (i, j) being uncongested are supposed known (Shirdel and Abdolhosseinzadeh 2016). Now, consider the situation where some arcs are congested and flow cannot leave because of the unstable topology. There are two kinds of wait situations: first, waiting in a particular node expecting some facility to release it from the current condition, called option 1; second, traversing arcs that do not lead to visiting a new node, called option 2. For example, if it is decided to be in node 3 of the example network (Fig. 1), arc (1, 3) does not cause a new node to be visited, whereas arc (3, 4) leads to the new node 4. The resulting wait situations are more general than the queuing networks considered by Azaron and Modarres (2005) and Thomas and White (2007).
The stochastic variable of arc (i, j) under the unstable topology is denoted x_ij. If x_ij = 1, it is possible to traverse arc (i, j); otherwise x_ij = 0. The probability that arc (i, j) is uncongested is q_ij = Pr[x_ij = 1], and it represents the uniform probability that node i is left toward node j (an adjacent node). Then, the wait probability in node i is q_ii = 1 − Σ_{j:(i,j)∈A} q_ij, the probability that all arcs leaving node i are congested. Figure 1 shows the example network, with 4 nodes and 5 arcs, and its topologically ordered nodes; it is the initial physical topology of the network. The numbers on the arcs are the leaving probabilities q_ij. Node 1 is the source node and node 4 is the destination node. It is not possible to traverse arc (2, 4) because it does not exist in the physical topology of the example network. However, the arcs in the physical topology could be congested according to the known distribution probabilities.
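As a minimal sketch of the relation q_ii = 1 − Σ_{j:(i,j)∈A} q_ij, the wait probabilities of the example network can be computed as follows; the arc probabilities in ARCS are hypothetical placeholders, since the values on Fig. 1's arcs are not reproduced in the text:

```python
# Hypothetical leaving probabilities q_ij for the 5 arcs of the example
# network (Fig. 1); the actual figure values are not given in the text.
ARCS = {(1, 2): 0.3, (1, 3): 0.3, (1, 4): 0.2, (2, 3): 0.6, (3, 4): 0.5}
NODES = {1, 2, 3, 4}

def wait_probability(i, arcs):
    """q_ii = 1 - sum of q_ij over all arcs (i, j) leaving node i."""
    return 1.0 - sum(p for (u, _), p in arcs.items() if u == i)

for i in sorted(NODES):
    print(f"q_{i}{i} =", wait_probability(i, ARCS))
```

Note that the destination node 4 has no leaving arcs, so its wait probability is 1.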

The established discrete time Markov chain
In this section, the DTMC proposed by Shirdel and Abdolhosseinzadeh (2016) is reviewed. The discrete time stochastic process {X_r, r = 1, 2, 3, ...} is called a Markov chain (X_r shows the process position) if it satisfies the Markov property (see Ross 2006 and Thomas and White 2007): Pr[X_{r+1} = S_l | X_r = S_k, X_{r−1} = S_{k_{r−1}}, ..., X_1 = S_{k_1}] = Pr[X_{r+1} = S_l | X_r = S_k]. Any state S_l of the established DTMC determines the traversed nodes of the original network. For the example network (Fig. 1) the created states S_i are shown in Table 1. The conditional probability of the next state depends on the current state and is independent of the previous states. Let S = {S_i, i = 1, 2, 3, ...}; the initial state S_1 = {1} of the DTMC contains only the source node, and the absorbing state S_|S| = {1, 2, ..., |N|} contains all nodes of the network. It is not possible to depart from S_|S|, so S is a finite state space.
For the example network, the absorbing state S_5 = {1, 2, 3, 4} contains all nodes of the network; and, for instance, state S_4 of the state space S (Table 1) contains nodes {1, 2, 3} and all connected components of the example network constructed by nodes 1, 2 and 3, as seen in Fig. 2.
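The construction of the state space (Table 1) can be sketched as an enumeration over node sets. This is an illustrative reconstruction, not the authors' code: each state is the set of nodes traversed so far, and any transition that reaches the destination is folded directly into the absorbing state.

```python
NODES = {1, 2, 3, 4}
ARCS = {(1, 2), (1, 3), (1, 4), (2, 3), (3, 4)}   # physical topology of Fig. 1

def state_space(nodes, arcs, source, dest):
    """Each state is a frozenset of traversed nodes; a transition adds the
    single new node reached by an arc leaving the current set (assumption
    iii).  Reaching the destination leads to the absorbing state."""
    absorbing = frozenset(nodes)
    start = frozenset({source})
    states, frontier = {start, absorbing}, [start]
    while frontier:
        s = frontier.pop()
        for (u, v) in arcs:
            if u in s and v not in s:
                t = absorbing if v == dest else s | {v}
                if t not in states:
                    states.add(t)
                    frontier.append(t)
    return sorted(states, key=lambda x: (len(x), sorted(x)))

states = state_space(NODES, ARCS, source=1, dest=4)
```

For the example network this yields the five states S_1 = {1}, S_2 = {1, 2}, S_3 = {1, 3}, S_4 = {1, 2, 3} and S_5 = {1, 2, 3, 4} of Table 1.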
The final state contains the destination node of the network, where the DTMC does not progress anymore; this is called assumption i. The states of the established DTMC contain the traversed nodes of the network, which are reached from some nodes of a previous state; this is called assumption ii. It is not allowed to return from the last traversed node; however, it is possible to wait in the current state. Clearly, a new state is revealed if a leaving arc (i, j) ∈ A is traversed such that the current node i is contained in the current state and the new node j is contained in the new state; this is called assumption iii. As said previously, the wait states follow one of option 1 or option 2.
The state space diagram of the established DTMC for the example network is constructed as in Fig. 3; the values on the arcs show the wait and the transition probabilities. Table 1 lists the state space of the established DTMC for the example network.

The transition and the wait probabilities
The transition probabilities p_kl satisfy the following conditions: p_kl ≥ 0 and Σ_l p_kl = 1 for every k. The transition probabilities are the elements of the matrix P of size |S| × |S|, where p_kl is the entry in the kth row and lth column of P, called the transition matrix or Markov matrix (Ibe 2009).
The following theorems, from Shirdel and Abdolhosseinzadeh (2016), are used to obtain the transition matrix of the established DTMC in the network. The off-diagonal transition probabilities, except those to the absorbing state, are obtained by Theorem 1.
Theorem 1 If p_kl is the klth element of matrix P, with k ≠ l and l < |S|, and S_k = {v_0 = 1, v_1, ..., v_m} is the current state, then the transition probability from state S_k to state S_l is

p_kl = 0, for l < k;
p_kl = q_{v_m v_m} · Pr[∪_{(v,w)∈Λ} E_vw] + q_{v_m w}, for l > k,

where w is the single node of S_l \ S_k, Λ = {(v, w) : v ∈ S_k \ {v_m}, (v, w) ∈ A}, and E_vw is the event that arc (v, w) is traversed.

Proof Since it is not allowed to traverse from one state to the previous states (assumption ii), necessarily p_kl = 0 for l < k. Otherwise, suppose l > k; during the transition from the current state S_k to the new state S_l, exactly one node other than the nodes of the current state should be reached, so |S_l \ S_k| = 1, and v ∈ S_k, w ∈ S_l \ S_k are held by assumptions ii and iii. Two components of the p_kl formula should be computed.
In the last node v_m of the current state S_k, it is possible to wait in v_m with probability q_{v_m v_m}. Notice that it is not possible to wait in the other nodes v ∈ S_k \ {v_m}, because each of them was left in order to construct the current state; this is not necessary for node v_m, the node with the largest label (leaving v_m leads to a new node, and therefore results in a new state). If w ∈ S_l \ S_k, then one or all of the events E_vw (i.e. traversing a connecting arc between a node of the current state and the new node of the new state) can happen for (v, w) ∈ Λ, and the arrival probability of node w ∈ S_l from the current state S_k is equal to Pr[∪_{(v,w)∈Λ} E_vw]. The probability of the union should be computed because of the different representations of the new state (for example, see Fig. 2). Meanwhile, the nodes of the current state v ∈ S_k \ {v_m} (while waiting in v_m) should be prevented from reaching other nodes u ∉ S_k with u ≠ w (assumption iii), so the arcs (v, u) are not allowed to be traversed and are excluded simultaneously. The other possibility in node v_m is leaving it toward the new node w ∈ S_l \ S_k with probability q_{v_m w}.
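Under one reading of Theorem 1's proof (wait in v_m while some earlier node of the state reaches w, or leave v_m toward w directly, with the events E_vw treated as independent), the transition probability can be sketched as follows; the arc probabilities Q are hypothetical placeholders:

```python
# Hypothetical leaving probabilities q_ij (Fig. 1's values are not given).
Q = {(1, 2): 0.3, (1, 3): 0.3, (1, 4): 0.2, (2, 3): 0.6, (3, 4): 0.5}

def union_prob(probs):
    """Pr of a union of independent events, via the complement product."""
    miss = 1.0
    for p in probs:
        miss *= 1.0 - p
    return 1.0 - miss

def transition_prob(S_k, S_l, q):
    """Sketch of Theorem 1 as read from the proof:
    p_kl = q_{vm,vm} * Pr[union of E_vw over v in S_k minus {vm}] + q_{vm,w},
    where w is the single node of S_l \\ S_k and vm = max(S_k)."""
    new = set(S_l) - set(S_k)
    if len(new) != 1 or not set(S_k) < set(S_l):
        return 0.0          # assumption ii: no return to previous states
    w, vm = new.pop(), max(S_k)
    wait_vm = 1.0 - sum(p for (u, _), p in q.items() if u == vm)
    others = [q[(v, w)] for v in S_k if v != vm and (v, w) in q]
    return wait_vm * union_prob(others) + q.get((vm, w), 0.0)
```

With these placeholder values, transition_prob((1, 2), (1, 2, 3), Q) evaluates to q_22 · q_13 + q_23 = 0.4 · 0.3 + 0.6 = 0.72.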
Theorem 2 describes the transition probabilities to the absorbing state S_|S|; they form the last column of the transition matrix P.
Theorem 2 To compute the transition probability from state S_k = {v_0 = 1, v_1, ..., v_m} to the absorbing state S_|S|, for k = 1, 2, ..., |S| − 1, which is the k|S|th element of matrix P, suppose v_n ∈ S_|S| is the given destination node of the network, and let E_{v v_n} denote the event that arc (v, v_n) ∈ A is traversed during the transition from S_k to S_|S|. Then p_{k|S|} = Pr[∪_{v∈S_k, (v,v_n)∈A} E_{v v_n}].
Proof To compute the transition probabilities p_{k|S|}, for k = 1, 2, ..., |S| − 1, it should be noticed that the final state is the absorbing state S_|S| = {1, 2, 3, ..., |N|} containing all nodes of the network, and the stochastic process does not progress any more (assumption i). So, it is sufficient to consider the leaving arcs (v, v_n) from the nodes of the current state, v ∈ S_k, toward the destination node v_n ∈ S_|S|. Then, one or all of the events E_{v v_n} (i.e. traversing a connecting arc between a node of the current state and the destination node of the absorbing state) can happen, and the transition probability from the current state S_k to the absorbing state S_|S| is equal to Pr[∪_{v∈S_k, (v,v_n)∈A} E_{v v_n}]. The probability of the union should be computed because of the different representations of the states (for example, see Fig. 2). (Fig. 4 shows the constructed states during the transition from S_2 to S_4.)
For state S_4, the transition probability p_45 is obtained as Pr[E_14 ∪ E_34 ∪ E_24]; however, q_24 = 0 as seen in Fig. 1, so p_45 = q_14 + q_34 − q_14 · q_34. The wait probabilities, which are the diagonal elements of the transition matrix P, are obtained by Theorem 3.
Theorem 3 Suppose S_k = {v_0 = 1, v_1, ..., v_m} is the current state; then the wait probability p_kk, the kkth element of matrix P, is p_kk = 1 − Σ_{j=k+1}^{|S|} p_kj, for k = 1, 2, ..., |S| − 1, and p_{|S||S|} = 1.

Proof The wait probabilities p_kk are the complements of the transition probabilities from the current state S_k, for k = 1, 2, ..., |S| − 1, toward all departure states S_j, for j = k + 1, k + 2, ..., |S|. Then we have p_kk = 1 − Σ_{j=k+1}^{|S|} p_kj, for k = 1, 2, ..., |S| − 1; in other words, these diagonal elements of matrix P are computed from each row k = 1, 2, ..., |S| − 1 of the transition matrix (see Ibe 2009). The absorbing state S_|S| does not have any departure state, so p_{|S||S|} = 1 in the transition matrix P.
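Theorem 3's row completion (and the unit row of the absorbing state) can be sketched as below; the off-diagonal entries of the matrix are hypothetical placeholders, not the example network's actual values:

```python
import numpy as np

def fill_diagonal_waits(P):
    """Set p_kk = 1 - sum_{j>k} p_kj for each non-absorbing row k (entries
    with j < k are zero in the acyclic state space), and p_|S||S| = 1."""
    P = np.array(P, dtype=float)
    n = P.shape[0]
    for k in range(n - 1):
        P[k, k] = 1.0 - P[k, k + 1:].sum()
    P[n - 1, n - 1] = 1.0   # absorbing state has no departure
    return P

# Hypothetical off-diagonal transition probabilities for a 5-state DTMC.
upper = np.zeros((5, 5))
upper[0, [1, 2, 4]] = [0.3, 0.3, 0.2]
upper[1, [3, 4]] = [0.72, 0.18]
upper[2, [3, 4]] = [0.3, 0.5]
upper[3, 4] = 0.65
P = fill_diagonal_waits(upper)
```

After the completion, every row of P sums to 1, as a transition matrix requires.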

The arrival probability
The arrival probability determines the overall reliability of connections in the network: it is the probability that the connections are not congested during the transmission of flow from the source node to the destination node. The arrival probability is defined as the multi-step transition probability from the initial state S_1 to the absorbing state S_|S| in the established DTMC. According to assumptions i, ii and iii, the state space of the DTMC is directed and acyclic (otherwise a return to previous states would be allowed, a contradiction). The out-degree of any state except the absorbing state S_|S| is at least one (not counting the wait self-loops), so for any state S_k there is a one- or multi-step transition from the initial state to the absorbing state that traverses S_k. Consequently, the absorbing state is accessible from the initial state after finitely many transitions. Let p_kl(r) = Pr[X_{m+r} = S_l | X_m = S_k] denote the conditional probability that the process will be in state S_l after exactly r transitions, given that it is presently in state S_k. So, if the matrix P(r) is the transition matrix after exactly r transitions, it can be shown that P(r) = P^r, and p_kl(r) is the klth element of the matrix P^r (see Ibe 2009). Thus, the arrival probability after exactly r transitions is p_{1|S|}(r) = Pr[X_r = S_|S| | X_0 = S_1], the (1, |S|) element of the matrix P^r.
For the example network, we want to obtain the probability of arrival at node 4 from node 1. The arrival probability p_15(r), obtained after six transitions, is shown in Fig. 5. For sufficiently large r, the probabilistic behavior of the DTMC becomes independent of the starting state, i.e. Pr[X_r = S_|S| | X_0 = S_1] = Pr[X_r = S_|S|], the multi-step transition probability (Ibe 2009).
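Reading the arrival probability off the powers of the transition matrix can be sketched as follows; the 5-state matrix is a hypothetical absorbing DTMC, not the example network's actual matrix:

```python
import numpy as np

# Hypothetical 5-state absorbing transition matrix: rows sum to 1, the
# lower triangle is zero (no return to previous states), and the last
# state is absorbing.
P = np.array([
    [0.2, 0.3, 0.3, 0.0, 0.2],
    [0.0, 0.1, 0.0, 0.72, 0.18],
    [0.0, 0.0, 0.2, 0.3, 0.5],
    [0.0, 0.0, 0.0, 0.35, 0.65],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])

def arrival_probability(P, r):
    """p_1|S|(r): the (1, |S|)th element of the r-step matrix P^r."""
    return np.linalg.matrix_power(P, r)[0, -1]

probs = [arrival_probability(P, r) for r in range(1, 7)]
```

Because the final state is absorbing, the sequence probs is nondecreasing in r and approaches 1, which is the behavior plotted in Fig. 5.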

The stochastic shortest path
Now, we extend the method of Shirdel and Abdolhosseinzadeh (2016) to compute the arrival probability for a specific path, considered as the probable shortest path. It is enough to put conditions on the leaving probabilities q_ij that force the nodes of the considered path to be reached sooner than the other nodes of the network. The stochastic shortest path is then determined as the path with the largest arrival probability. For a path with node set N′ and arc set A′, the following changes in the network imply that this path is the stochastic shortest path: for all i ∈ N′,
a. if j ∉ N′ and (j, i) ∈ A, then q_ji := 0 and q_jj := q_jj + q_ji;
b. if j ∈ N and (i, j) ∉ A′, then q_ij := 0.
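These adjustments can be sketched as the following transformation of the leaving probabilities; the probabilities and the chosen path are hypothetical, and the treatment of condition (b) as zeroing the off-path leaving arcs q_ij is my reading of the garbled original statement:

```python
# Hypothetical leaving probabilities q_ij for the example network.
Q = {(1, 2): 0.3, (1, 3): 0.3, (1, 4): 0.2, (2, 3): 0.6, (3, 4): 0.5}

def restrict_to_path(q, path_nodes, path_arcs):
    """(a) zero an arc (j, i) entering a path node i from a non-path node j,
        moving its mass to j's wait probability;
    (b) zero an arc (i, j) leaving a path node i off the path (my reading;
        the original condition is garbled in the source)."""
    q2, wait_bonus = dict(q), {}
    for (u, v), p in q.items():
        if v in path_nodes and u not in path_nodes:        # condition (a)
            q2[(u, v)] = 0.0
            wait_bonus[u] = wait_bonus.get(u, 0.0) + p
        elif u in path_nodes and (u, v) not in path_arcs:  # condition (b)
            q2[(u, v)] = 0.0
    return q2, wait_bonus

# Evaluate the path 1 -> 3 -> 4 in the example network.
q2, wait_bonus = restrict_to_path(Q, {1, 3, 4}, {(1, 3), (3, 4)})
```

After the adjustment, only the path's own arcs keep positive leaving probabilities, so the arrival probability of the modified DTMC is the arrival probability of that specific path.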

Numerical results
Some implementations of the proposed method on networks with different topologies are presented in this section. The instances are directed acyclic networks in which there is a path from each node to the destination node. The leaving probabilities of the nodes are random numbers produced by the uniform probability distribution. Then, the arrival probability is computed for the established DTMC. All of the experiments are coded in MATLAB R2008a and performed on a Dell Latitude E5500 (Intel(R) Core(TM) 2 Duo CPU 2.53 GHz, 1 GB memory). For clarity, only the stochastic shortest path and its arrival probability computation results are shown, by square and circle markers in the figures, respectively; dashed lines are the results for the other paths.
We use two propositions inductively to make sure there is a path from the source node to the destination node in the initial topology, and that the created network is acyclic.

Proposition 1 If node k is the first node with a larger index than source node 1 such that in-degree(k) = 0, and 1 ≤ l < k is an arbitrary node, then by adding arc (l, k) there exists a path from source node 1 to node k.
Proposition 2 If node k is the first node with a smaller index than destination node n such that out-degree(k) = 0, and k < l ≤ n is an arbitrary node, then by adding arc (k, l) there exists a path from node k to destination node n.
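Applied inductively, the two propositions repair a labelled DAG so that every node lies on some source-to-destination path. A sketch, with a hypothetical starting arc set:

```python
import random

def ensure_paths(n, arcs, seed=0):
    """Proposition 1: give every node k > 1 with in-degree 0 an incoming arc
    (l, k), 1 <= l < k.  Proposition 2: give every node k < n with out-degree
    0 an outgoing arc (k, l), k < l <= n.  Added arcs respect the topological
    node labelling, so the network stays acyclic."""
    rng = random.Random(seed)
    arcs = set(arcs)
    for k in range(2, n + 1):                       # Proposition 1
        if not any(v == k for (_, v) in arcs):
            arcs.add((rng.randrange(1, k), k))
    for k in range(1, n):                           # Proposition 2
        if not any(u == k for (u, _) in arcs):
            arcs.add((k, rng.randrange(k + 1, n + 1)))
    return arcs

repaired = ensure_paths(5, {(1, 3), (3, 5)})
```

Every node of the repaired network then has an incoming arc (except the source) and an outgoing arc (except the destination), which together with acyclicity gives the required source-to-destination paths.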
Network 1 has an arbitrary topology with 8 nodes and 18 arcs; the leaving probabilities of its arcs are shown in Table 2. For the established DTMC on network 1, the size of the state space is 47. The absorbing state, containing the destination node, is accessible after at least two transitions.
As shown in Fig. 6, path 4: 1 → 6 → 8 is the stochastic shortest path of network 1 with arrival probability 0.6523 among 27 possible paths.
Network 2 and network 3 are grid networks and the leaving probabilities of their arcs are shown in Table 3. The size of the state space for the established DTMC on network 2 is 76 and for network 3 is 49.
The destination node of network 2 is accessible after at least four transitions, and it is done for network 3 after at least three transitions.
As shown in Fig. 7, path 11: 1 → 2 → 4 → 8 → 9 is the stochastic shortest path of network 2 with arrival probability 0.6535 among 33 paths. For network 3, path 3: 1 → 2 → 5 → 6 → 9 is the stochastic shortest path with arrival probability 0.3996 among 6 paths (see Fig. 8). Network 4 is a complete graph with 9 nodes and 36 arcs and the leaving probabilities are shown in Table 4. The size of the state space for the established DTMC on network 4 is 129.
The obtained arrival probabilities of network 4 are shown in Fig. 9; path 16: 1 → 3 → 5 → 9 is the stochastic shortest path with arrival probability 0.4882 among 128 possible paths. The obtained arrival probability in a network determines the general ability of the network to transmit flow from a source node toward a destination node (Shirdel and Abdolhosseinzadeh 2016); moreover, the presented method precisely determines the path with the largest probability among all paths.

Conclusions
The arrival probability from a given source node to a given destination node was computed as the multi-step transition probability from the initial state to the absorbing state of the discrete time Markov chain established in the original network. The proposed method for obtaining the arrival probability determines when the destination node is accessible for the first time. The stochastic shortest path was determined separately as the path with the largest arrival probability value. So, this method can be applied to rank the paths of a network by their obtained arrival probabilities. The proposed method also evaluates the reliability of connections in the network, so it can be used in the shortest path problem with recourse, where it should be decided locally which path to traverse. The discrete nature of the proposed model allows meta-heuristic methods to be applied to reduce the computations. The proposed method can also be used as a policy evaluation index for stochastic problems.