The rest of the proof is structured as follows. First, we show that the sequence of stable state distributions for the Markov chains (MCs) used by the MCMC algorithm converges, as the iteration count increases, to a uniform distribution over the set of sequences that minimize the energy function. Then, using Theorem 11 and Theorem 12, we show that the non-homogeneous MC used in the MCMC algorithm is strongly ergodic, which by the definition of strong ergodicity implies that the MCMC algorithm always converges to the stable distribution found above. This implies that the outcome of the MCMC algorithm converges to a minimum-energy solution as the number of iterations grows, completing the proof of Theorem 10.
We therefore begin by finding the stable state distribution for the non-homogeneous MC used by the MCMC algorithm. At each super-iteration, the distribution defined as
satisfies the stationarity condition with respect to the corresponding transition matrix. We can show that this distribution converges to a uniform distribution over the set of sequences that minimize the energy function, i.e.,
where the minimum is taken over all sequences. To show [link], we will show that the stationary probability of each minimum-energy sequence is increasing, while that of every other sequence is eventually decreasing. Since all minimum-energy sequences have the same (smallest) energy, for such a sequence we have
which together with [link] implies that the stationary probability of a minimum-energy sequence is increasing. On the other hand, if a sequence does not minimize the energy, then
For sufficiently small temperatures, the right hand side (more precisely, the second and third lines) of [link] is dominated by the second term (third line), which increases as the temperature decreases; therefore, the stationary probability of a non-minimizing sequence eventually decreases as the super-iteration count increases. Finally, since all minimum-energy sequences have the same energy, it follows that the limiting distribution is uniform over these sequences.
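As a quick numerical sanity check of this limiting behavior, the sketch below assumes the stable state distribution has the usual Boltzmann form, with probability proportional to exp(-energy/T); the toy energy values and temperature schedule are invented for illustration and are not part of the proof.

```python
import math

def boltzmann(energies, T):
    """Boltzmann distribution at temperature T: pi_T(x) ~ exp(-energy(x) / T)."""
    weights = [math.exp(-e / T) for e in energies]
    total = sum(weights)
    return [w / total for w in weights]

# Toy energy function over five sequences; two of them attain the minimum.
energies = [1.0, 3.0, 1.0, 2.5, 4.0]
minimizers = [i for i, e in enumerate(energies) if e == min(energies)]

for T in [2.0, 0.5, 0.1, 0.02]:
    pi = boltzmann(energies, T)
    mass_on_min = sum(pi[i] for i in minimizers)
    print(f"T={T:5.2f}  mass on minimizers = {mass_on_min:.4f}")

# As T decreases, the probability of each minimizer increases and that of every
# other sequence decreases; in the limit the distribution is uniform over the
# two minimum-energy sequences (probability ~1/2 each).
```

The printed masses illustrate the two monotonicity claims used in the proof: probability concentrates on the minimizers as the temperature is lowered, and splits evenly among them because they share the same energy.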
Having shown convergence of the non-homogeneous Markov chain's stable state distributions, we now show that the non-homogeneous MC is strongly ergodic. The transition matrix of the MC at each iteration depends on the temperature used within the MCMC algorithm. We first show that the MC used in the MCMC algorithm is weakly ergodic via Theorem 11; the proof of the following lemma is given at the end of this section.
Lemma 2 The ergodic coefficient of the transition matrix at any super-iteration is upper bounded by
where the quantity in the bound is defined in [link].
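To make the ergodic coefficient concrete, the sketch below computes Dobrushin's ergodic coefficient of a small stochastic matrix in two equivalent ways — as half the largest total-variation distance between two rows, and as one minus the smallest row-overlap sum — and checks the standard bound in terms of the smallest matrix entry. The matrix entries are invented for illustration and do not come from the MCMC algorithm itself.

```python
def dobrushin_delta(P):
    """Dobrushin's ergodic coefficient: half the largest total-variation
    distance between any two rows of the stochastic matrix P."""
    n = len(P)
    return max(
        0.5 * sum(abs(P[i][k] - P[j][k]) for k in range(n))
        for i in range(n) for j in range(n)
    )

def dobrushin_delta_alt(P):
    """Alternative definition: one minus the smallest row-overlap sum."""
    n = len(P)
    return 1.0 - min(
        sum(min(P[i][k], P[j][k]) for k in range(n))
        for i in range(n) for j in range(n)
    )

# A toy stochastic matrix standing in for one super-iteration's transitions.
P = [
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
]

d1 = dobrushin_delta(P)
d2 = dobrushin_delta_alt(P)
print(d1, d2)  # the two definitions agree

# A Lemma 2-style bound: a uniform lower bound p_min on the transition
# probabilities yields delta(P) <= 1 - n * p_min.
p_min = min(min(row) for row in P)
print(d1 <= 1 - len(P) * p_min)
```

The second print mirrors the role of the smallest transition probability in the lemma: the larger the guaranteed overlap between rows, the smaller the ergodic coefficient, which is what feeds the weak-ergodicity sum in Theorem 11.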
Consider two arbitrary sequences. The probability of transitioning from a given state to a neighboring state in an iteration within a super-iteration of the MCMC algorithm is given by [link], and can be rewritten as
Therefore, the smallest probability of transition between any two neighboring states within a super-iteration is bounded by
Using the alternative definition of the ergodic coefficient given in [link],
Using Lemma 2, we can evaluate the sum given in Theorem 11 as
and therefore the non-homogeneous MC is weakly ergodic. Now we can use Theorem 12 to show that the MC is strongly ergodic by proving that
Since we know from earlier in the proof that the stationary probability of each minimum-energy sequence is increasing while that of every other sequence is eventually decreasing, there exists a super-iteration beyond which, for any sequence,
Since the right hand side does not depend on the number of super-iterations, we have that
This implies that the non-homogeneous MC used by the MCMC algorithm is strongly ergodic, which completes the proof of Theorem 10.
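The summability condition behind Theorem 12 can also be checked numerically. The sketch below assumes stationary distributions of Boltzmann form (probability proportional to exp(-energy/T)) and an illustrative decreasing temperature schedule; because each sequence's stationary probability changes monotonically along the schedule, the sum of successive total-variation gaps telescopes and stays bounded. The energies and the schedule are assumptions made for the demonstration.

```python
import math

def boltzmann(energies, T):
    """Boltzmann distribution at temperature T: pi_T(x) ~ exp(-energy(x) / T)."""
    w = [math.exp(-e / T) for e in energies]
    s = sum(w)
    return [x / s for x in w]

# Toy energies with two minimizers, and a slowly decreasing temperature schedule.
energies = [1.0, 3.0, 1.0, 2.5]
temps = [2.0 / math.log(t + 2) for t in range(200)]

# Accumulate sum_t || pi_{t+1} - pi_t ||_1 along the schedule.
total_variation_sum = 0.0
prev = boltzmann(energies, temps[0])
for T in temps[1:]:
    cur = boltzmann(energies, T)
    total_variation_sum += sum(abs(a - b) for a, b in zip(cur, prev))
    prev = cur

# Monotone per-sequence changes make the sum telescope, so it is bounded
# (by 2) no matter how many super-iterations the schedule contains.
print(total_variation_sum)
```

Because the bound does not grow with the length of the schedule, the infinite sum of successive distribution differences is finite, which is exactly the condition combined with weak ergodicity to conclude strong ergodicity.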