$X_N = \{X_n : n \in N\}$, where $N = \{0, 1, 2, \ldots\}$

We view an observation of the system as a composite trial. Each $\omega$ yields a sequence of states $\{X_0(\omega), X_1(\omega), \ldots\}$, which is referred to as a realization of the sequence, or a trajectory. We suppose the system is evolving in time. At discrete instants of time $t_1, t_2, \ldots$ the system makes a transition from one state to the succeeding one (which may be the same).

Initial period: $n = 0$, $t \in [0, t_1)$, state is $X_0(\omega)$; at $t_1$ the transition is to $X_1(\omega)$
Period one: $n = 1$, $t \in [t_1, t_2)$, state is $X_1(\omega)$; at $t_2$ the transition is to $X_2(\omega)$
.....
Period $k$: $n = k$, $t \in [t_k, t_{k+1})$, state is $X_k(\omega)$; at $t_{k+1}$ move to $X_{k+1}(\omega)$
.....

The parameter $n$ indicates the period $t \in [t_n, t_{n+1})$. If the periods are of unit length, then $t_n = n$. At $t_{n+1}$, there is a transition from the state $X_n(\omega)$ to the state $X_{n+1}(\omega)$ for the next period. To simplify writing, we adopt the following convention:

$U_n = (X_0, X_1, \ldots, X_n)$, \quad $U_{m,n} = (X_m, \ldots, X_n)$, \quad and \quad $U^n = (X_n, X_{n+1}, \ldots)$

The random vector $U_n$ is called the past at $n$ of the sequence $X_N$ and $U^n$ is the future at $n$. In order to capture the notion that the system is without memory, so that the future is affected by the present, but not by how the present is reached, we utilize the notion of conditional independence, given a random vector, in the following

Definition. The sequence $X_N$ is Markov iff

(M)  $\{X_{n+1}, U_n\}$ ci $| X_n$  for all $n \ge 0$

Several conditions equivalent to the Markov condition (M) may be obtained with the aid of properties of conditional independence. We note first that (M) is equivalent to

$P(X_{n+1} = k \mid X_n = j,\ U_{n-1} \in Q) = P(X_{n+1} = k \mid X_n = j)$ for each $n \ge 0$, $j, k \in E$, and $Q \subset E^{n}$
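
In particular, taking $Q$ to be the single past trajectory $(j_0, \ldots, j_{n-1})$ gives the familiar pointwise form of the condition (an illustrative special case, not an additional assumption):

$P(X_{n+1} = k \mid X_n = j,\ X_{n-1} = j_{n-1},\ \ldots,\ X_0 = j_0) = P(X_{n+1} = k \mid X_n = j)$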

The state in the next period is conditioned by the past only through the present state, and not by the manner in which the present state is reached. The statistics of the process are determined by the initial state probabilities and the transition probabilities

$P(X_{n+1} = k \mid X_n = j)$, \quad $j, k \in E$, \quad $n \ge 0$
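
To see why these quantities determine the statistics of the process, note that the product rule together with the Markov condition factors any finite-dimensional probability into the initial state probabilities and the transition probabilities; a sketch of the standard identity (assuming the trajectory $j_0, \ldots, j_n$ has positive probability):

$P(X_0 = j_0, X_1 = j_1, \ldots, X_n = j_n) = P(X_0 = j_0) \prod_{k=1}^{n} P(X_k = j_k \mid X_{k-1} = j_{k-1})$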

The following examples exhibit a pattern which implies the Markov condition and which can be exploited to obtain the transition probabilities.

One-dimensional random walk

An object starts at a given initial position. At discrete instants $t_1, t_2, \ldots$ the object moves a random distance along a line. The various moves are independent of each other. Let

  • $Y_0$ be the initial position
  • $Y_k$ be the amount the object moves at time $t = t_k$, with $\{Y_k : 1 \le k\}$ iid
  • $X_n = \sum_{k=0}^{n} Y_k$ be the position after $n$ moves.

We note that $X_{n+1} = g(X_n, Y_{n+1})$. Since the position after the transition at $t_{n+1}$ is affected by the past only through the value of the position $X_n$, and not by the sequence of positions which led to this position, it is reasonable to suppose that the process $X_N$ is Markov. We verify this below.
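
A minimal simulation sketch of the random walk in Python (the normal step distribution, the seed, and the starting point are illustrative assumptions, not part of the text):

    import numpy as np

    rng = np.random.default_rng(seed=1)
    n_steps = 10

    Y0 = 0.0                                            # initial position Y_0 (assumed)
    Y = rng.normal(loc=0.0, scale=1.0, size=n_steps)    # iid moves Y_1, ..., Y_n

    # X_{n+1} = g(X_n, Y_{n+1}) = X_n + Y_{n+1}: the next position depends on
    # the past only through the current position X_n.
    X = [Y0]
    for y in Y:
        X.append(X[-1] + y)
    print(X)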


A class of branching processes

Each member of a population is able to reproduce. For simplicity, we suppose that at certain discrete instants the entire next generation is produced. Some mechanism limits each generation to a maximum population of $M$ members. Let

  • $Z_{in}$ be the number propagated by the $i$th member of the $n$th generation.
  • $Z_{in} = 0$ indicates death and no offspring; $Z_{in} = k$ indicates a net of $k$ propagated by the $i$th member (either death and $k$ offspring or survival and $k - 1$ offspring).

The population in generation $n + 1$ is given by

$X_{n+1} = \min\left\{M,\ \sum_{i=1}^{X_n} Z_{in}\right\}$

We suppose the class $\{Z_{in} : 1 \le i \le M,\ 0 \le n\}$ is iid. Let $Y_{n+1} = (Z_{1n}, Z_{2n}, \ldots, Z_{Mn})$. Then $\{Y_{n+1}, U_n\}$ is independent. It seems reasonable to suppose the sequence $X_N$ is Markov.
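
A short simulation sketch of one realization of this branching process (the Poisson offspring law, the cap $M = 100$, and the initial population are illustrative assumptions only; the text allows any common distribution for the $Z_{in}$):

    import numpy as np

    rng = np.random.default_rng(seed=2)
    M = 100                          # maximum population per generation (assumed)
    n_generations = 10

    # Illustrative offspring law for the iid Z_{in}.
    def offspring(size):
        return rng.poisson(lam=1.1, size=size)

    X = [10]                         # assumed initial population X_0
    for n in range(n_generations):
        Z = offspring(X[-1])         # Z_{1n}, ..., Z_{X_n n}
        X.append(min(M, int(Z.sum())))   # X_{n+1} = min{M, sum_i Z_{in}}
    print(X)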


Source:  OpenStax, Applied probability. OpenStax CNX. Aug 31, 2009 Download for free at http://cnx.org/content/col10708/1.6
