Continuous-Time Markov Chain


1. Definitions

Definition 1. We say the process \(\{X(t),t\ge0\}\) is a continuous-time Markov chain if for all \(s,t\ge0\) and nonnegative integers \(i,j,x(u),0\le u<s\)

\[\begin{align} P\{X(t+s)&=j\ |\ X(s)=i,X(u)=x(u),0\le u<s \}\\ &= P\{X(t+s)=j|X(s)=i \} \end{align} \]

If, in addition,

\[P\{X(t+s)=j|X(s)=i \} \]

is independent of \(s\), the process is said to have stationary or homogeneous transition probabilities.

​ The amount of time the process spends in a state \(i\), from the time it enters state \(i\) to the time it transitions into a different state, is exponentially distributed with parameter \(v_i\).

2. Birth and Death Process

Definition 2. A birth and death process is a continuous-time Markov chain with states \(\{0,1,...\}\) for which transitions from state \(n\) may go only to either state \(n-1\) or state \(n+1\).

​ Let's say the transition probabilities are

\[P_{i,i+1}=\frac{\lambda_i}{\lambda_i +\mu_i}\\ P_{i,i-1}=\frac{\mu_i}{\lambda_i+\mu_i} \]

The next state will be \(i+1\) if a birth occurs before a death, and the probability that an exponential random variable with rate \(\lambda_i\) will occur earlier than an independent exponential random variable with rate \(\mu_i\) is \(\lambda_i/(\lambda_i+\mu_i)\).
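
A minimal Python sketch of this racing-exponentials description of a single transition (the rate functions, the constant rates, and the random seed below are hypothetical choices, not part of any particular model):

```python
import numpy as np

rng = np.random.default_rng(0)

def birth_death_step(i, lam, mu, rng):
    """Simulate one transition of a birth-death process from state i.

    lam(i) and mu(i) are the birth and death rates in state i.  The holding
    time is the minimum of two independent exponentials, which is exponential
    with rate lam(i) + mu(i), and the birth wins with probability
    lam(i) / (lam(i) + mu(i)).
    """
    birth = lam(i)
    death = mu(i) if i > 0 else 0.0          # no deaths from state 0
    total = birth + death
    hold = rng.exponential(1.0 / total)      # exponential holding time
    up = rng.random() < birth / total        # did the birth occur first?
    return hold, (i + 1 if up else i - 1)

# Hypothetical constant rates, just for illustration
lam = lambda i: 1.0
mu = lambda i: 1.5

state, t = 0, 0.0
for _ in range(10):
    h, state = birth_death_step(state, lam, mu, rng)
    t += h
    print(f"t = {t:.3f}, state = {state}")
```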


Example 1. (A Linear Growth Model with Immigration) A model in which

\[\begin{align} &\mu_n =n\mu, &&n\ge1\\ &\lambda_n = n\lambda+\theta,&&n\ge0 \end{align} \]

is called a linear growth model with immigration. Let \(X(t)\) denote the population size at time \(t\).

​ Suppose \(X(0)=i\) and let \(M(t)=E[X(t)]\). We can determine \(M(t)\) by deriving and solving a differential equation. Conditioning on \(X(t)\),

\[\begin{align} M(t+h) &= E[X(t+h)]\\ &= E[E[X(t+h)|X(t)]] \end{align} \]

For a sufficiently small period of time \(h\), the properties of the continuous-time Markov chain give

\[\begin{align} &P\{X(t+h) = X(t)+1 |X(t) \} = (X(t)\lambda+\theta)h+o(h)\\ &P\{X(t+h) = X(t)-1 |X(t) \} = X(t)\mu h+o(h)\\ &P\{X(t+h) = X(t) |X(t) \} = 1-(X(t)(\lambda+\mu)+\theta)h+o(h)\\ \end{align} \]

Therefore,

\[E[X(t+h)|X(t)] = X(t) + (\theta+X(t)\lambda - X(t)\mu)h+o(h)\\ M(t+h)=E[E[X(t+h)|X(t)]] = M(t) + (\lambda-\mu)M(t)h+\theta h+o(h) \]

or, equivalently,

\[\frac{M(t+h)-M(t)}h = (\lambda-\mu)M(t) + \theta +\frac{o(h)}h \]

so, letting \(h\rightarrow0\), we get the derivative of \(M(t)\):

\[M'(t) = (\lambda-\mu)M(t) +\theta \]

By solving the differential equation we have

\[M(t) = \frac\theta{\lambda-\mu}\left[e^{(\lambda-\mu)t}-1 \right]+ie^{(\lambda-\mu)t} \]

Note that we have implicitly assumed that \(\lambda\ne\mu\).
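
As a sanity check, the closed form for \(M(t)\) can be compared with a Monte Carlo estimate of \(E[X(t)]\) obtained by simulating the chain directly from its birth and death rates. The sketch below assumes arbitrary values of \(\lambda,\mu,\theta\) with \(\lambda\ne\mu\), and an arbitrary initial state and horizon:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu, theta = 0.3, 0.5, 2.0        # arbitrary rates with lam != mu
i0, t_end, n_runs = 5, 4.0, 20_000    # arbitrary initial state, horizon, runs

def simulate_X(t_end):
    """Simulate X(t_end) for the linear growth model with immigration."""
    x, t = i0, 0.0
    while True:
        birth, death = x * lam + theta, x * mu
        total = birth + death             # theta > 0, so total > 0 always
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return x
        x += 1 if rng.random() < birth / total else -1

mc = np.mean([simulate_X(t_end) for _ in range(n_runs)])
exact = (theta / (lam - mu) * (np.exp((lam - mu) * t_end) - 1)
         + i0 * np.exp((lam - mu) * t_end))
print(f"Monte Carlo estimate: {mc:.3f}   formula M(t): {exact:.3f}")
```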


Example 2. (An M/M/s Queueing Model) Consider a service station with \(s\) servers, where the times between successive arrivals of customers are independent exponential random variables with mean \(1/\lambda\). Upon arrival, a customer joins the queue if no server is available. The service times are assumed to be independent exponential random variables with mean \(1/\mu\).

​ This is actually a birth and death process with parameters

\[\mu_n =\left\{\begin{align}&n\mu,&&1\le n\le s\\&s\mu,&&n>s \end{align} \right.\\ \lambda_n = \lambda, n\ge0 \]

Let \(T_i\), \(i\ge0\), denote the time it takes for the process, starting from state \(i\), to enter state \(i+1\). Obviously, \(E[T_0] = 1/\lambda\). Let

\[I_i =\left\{\begin{align}&1,&&\mathrm{if\ the\ first\ transition\ from\ }i\mathrm{\ is\ to}\ i+1\\ &0,&&\mathrm{if\ the\ first\ transition\ from\ }i\mathrm{\ is\ to}\ i-1 \end{align} \right. \]

and note that

\[\begin{align} &E[T_i|I_i=1] = \frac1{\lambda_i+\mu_i}\\ &E[T_i|I_i=0] = \frac1{\lambda_i+\mu_i} +E[T_{i-1}] + E[T_i] \end{align} \]

That is, if the first transition from \(i\) is to \(i+1\), then no additional time is needed; the time until that transition is exponential with rate \(\lambda_i+\mu_i\). If the first transition from \(i\) is to \(i-1\), that transition also takes an expected time of \(1/(\lambda_i+\mu_i)\), but the process then needs additional time to get back to \(i\), namely \(E[T_{i-1}]\), and additional time to reach \(i+1\), which is \(E[T_i]\).

​ Hence, since the probability that the first transition is a birth is \(\lambda_i/(\lambda_i+\mu_i)\), we see that

\[E[T_i] =\frac1{\lambda_i+\mu_i}+\frac{\mu_i}{\lambda_i+\mu_i}(E[T_{i-1}]+E[T_i]) \]

or, equivalently,

\[E[T_i] = \frac1{\lambda_i} +\frac{\mu_i}{\lambda_i}E[T_{i-1}],i\ge1 \]

Starting with \(E[T_0] =1/\lambda\), we can recursively calculate all \(E[T_i]\).
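
A minimal sketch of this recursion for the M/M/s rates (the values of \(\lambda\), \(\mu\), and \(s\) are arbitrary choices):

```python
lam, mu, s = 2.0, 1.0, 3                  # hypothetical M/M/s parameters

def mu_n(n):
    """Death rate of the M/M/s queue in state n."""
    return n * mu if n <= s else s * mu

def expected_T(n_max):
    """E[T_i] for i = 0..n_max via E[T_i] = 1/lam + mu_i/lam * E[T_{i-1}]."""
    ET = [1.0 / lam]                      # E[T_0] = 1/lambda
    for i in range(1, n_max + 1):
        ET.append(1.0 / lam + mu_n(i) / lam * ET[i - 1])
    return ET

ET = expected_T(6)
print(ET)
# The expected time to go from state 0 to state k is sum(ET[:k]).
```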


Example 3. Consider the birth and death process with constant parameters \(\lambda_i\equiv\lambda,\mu_i\equiv\mu\).

​ Using the conclusion from Example 2, we have

\[E[T_0] = \frac1\lambda\\ E[T_i] = \frac1\lambda+\frac\mu\lambda E[T_{i-1}] \]

So

\[\begin{align} E[T_i] = \frac{1-(\mu/\lambda)^{i+1}}{\lambda-\mu} \end{align} \]
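
The closed form can be checked numerically against the recursion from Example 2 (arbitrary rates with \(\lambda\ne\mu\)):

```python
lam, mu = 1.0, 0.6                         # arbitrary rates, lam != mu

ET = [1.0 / lam]                           # recursion from Example 2
for i in range(1, 8):
    ET.append(1.0 / lam + (mu / lam) * ET[i - 1])

closed = [(1 - (mu / lam) ** (i + 1)) / (lam - mu) for i in range(8)]
print(all(abs(a - b) < 1e-12 for a, b in zip(ET, closed)))   # True
```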


3. The Transition Probability Function \(P_{ij}(t)\)

​ Let

\[P_{ij}(t) = P\{X(t+s)=j|X(s)=i \} \]

denote the probability that a process presently in state \(i\) will be in state \(j\) a time \(t\) later.

​ Let

\[q_{ij} = v_iP_{ij} \]

where \(v_i\) is the rate at which the process makes a transition when in state \(i\). We have

\[\begin{align} &\lim_{h\rightarrow0}\frac{1-P_{ii}(h)}h=v_i\\ &\lim_{h\rightarrow0}\frac{P_{ij}(h)}h=q_{ij},\quad \mathrm{when}\ i\ne j \end{align} \]

Chapman-Kolmogorov Equation For all \(s\ge0,t\ge0\),

\[P_{ij}(t+s) = \sum_{k=0}^\infty P_{ik}(t)P_{kj}(s) \]

From the equation, we obtain

\[P_{ij}(h+t)-P_{ij}(t) = \sum_{k\ne i}P_{ik}(h)P_{kj}(t) -P_{ij}(t)(1-P_{ii}(h)) \]

and thus

\[\lim_{h\rightarrow 0}\frac{P_{ij}(h+t)-P_{ij}(t)}h=\sum_{k\ne i}q_{ik}P_{kj}(t) - v_iP_{ij}(t) \]

Hence, we have the following theorem.

Kolmogorov's Backward Equations For all states \(i,j\), and times \(t\ge0\)

\[P_{ij}'(t)=\sum_{k\ne i}q_{ik}P_{kj}(t) - v_iP_{ij}(t) \]


Example 4. (A Continuous-Time Markov Chain Consisting of Two States) Consider a machine that works for an exponential amount of time having mean \(1/\lambda\) before breaking down, and suppose it takes an exponential amount of time having mean \(1/\mu\) to repair it. If the machine is working at time 0, what is the probability that it will be working at time \(t\)?

​ Using Kolmogorov's backward equations, we have

\[\begin{align} P_{00}'(t) = \lambda P_{10}(t) - \lambda P_{00}(t)\\ P_{10}'(t) = \mu P_{00}(t) - \mu P_{10}(t) \end{align}\tag{3.1} \]

Multiplying the first equation by \(\mu\) and the second by \(\lambda\), then adding, we obtain

\[\mu P'_{00}(t) + \lambda P'_{10}(t) = 0 \]

Integrating on both sides, we have

\[\mu P_{00}(t) +\lambda P_{10}(t) = c \]

As \(P_{00}(0)=1,P_{10}(0)=0\), we get \(c = \mu\). Substituting \(P_{10}(t)=\left(\mu-\mu P_{00}(t)\right)/\lambda\) into the first equation of (3.1), we obtain the differential equation

\[P'_{00}(t) + (\lambda+\mu)P_{00}(t) = \mu \]

By solving the differential equation with condition \(P_{00}(0)=1\), we have

\[P_{00}(t) = \frac\lambda{\lambda+\mu}e^{-(\lambda+\mu)t}+\frac\mu{\lambda+\mu} \]
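
As a quick check, the backward equations (3.1) can be integrated numerically and compared with this closed form; the sketch below uses a plain Euler scheme with hypothetical rates and step size:

```python
import numpy as np

lam, mu = 0.5, 1.5            # hypothetical breakdown and repair rates
t_end, h = 2.0, 1e-4          # horizon and Euler step size

p00, p10 = 1.0, 0.0           # P_00(0) = 1, P_10(0) = 0
for _ in range(int(t_end / h)):
    d00 = lam * p10 - lam * p00
    d10 = mu * p00 - mu * p10
    p00 += h * d00
    p10 += h * d10

exact = lam / (lam + mu) * np.exp(-(lam + mu) * t_end) + mu / (lam + mu)
print(f"Euler integration: {p00:.5f}   closed form: {exact:.5f}")
```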


​ Using the Chapman-Kolmogorov equations, we have

\[\begin{align} P_{ij}(t+h) -P_{ij}(t) &= \sum_{k = 0}^\infty P_{ik}(t)P_{kj}(h)-P_{ij}(t)\\ &= \sum_{k\ne j}P_{ik}(t)P_{kj}(h) - [1-P_{jj}(h)]P_{ij}(t) \end{align} \]

and thus we obtain the

Kolmogorov's Forward Equations For all states \(i,j\), and times \(t\ge0\)

\[P'_{ij}(t) = \lim_{h\rightarrow 0}\frac{P_{ij}(t+h)-P_{ij}(t)}h = \sum_{k\ne j}q_{kj}P_{ik}(t) - v_jP_{ij}(t) \]

Proposition 1. For a pure birth process

\[\begin{align} &P_{ii}(t) = e^{-\lambda_i t},&&i\ge0\\ &P_{ij}(t) = \lambda_{j-1}e^{-\lambda_jt}\int_0^te^{\lambda_js}P_{i,j-1}(s)\ \mathrm ds,&&j\ge i+1 \end{align} \]

Proof For a pure birth process, Kolmogorov's forward equations give

\[\begin{align} &P'_{ii}(t) = -\lambda_iP_{ii}(t),&&i\ge0 \\ &P'_{ij}(t)=\lambda_{j-1}P_{i,j-1}(t)-\lambda_jP_{ij}(t),&&j\ge i+1 \end{align} \]

The first equation, together with \(P_{ii}(0)=1\), immediately gives \(P_{ii}(t)=e^{-\lambda_it}\). For the second equation we have

\[P'_{ij}(t) + \lambda_jP_{ij}(t) = \lambda_{j-1}P_{i,j-1}(t) \]

or, equivalently,

\[\frac{\mathrm{d}}{\mathrm{d}t}\left[e^{\lambda_j t}P_{ij}(t) \right] = \lambda_{j-1}e^{\lambda_jt}P_{i,j-1}(t) \]

Hence, since \(P_{ij}(0)=0\), we obtain the desired results.
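
The recursion of Proposition 1 can be evaluated numerically on a time grid. When \(\lambda_j\equiv\lambda\) the pure birth process is just a Poisson process, so \(P_{ij}(t)=e^{-\lambda t}(\lambda t)^{j-i}/(j-i)!\), which provides a convenient check; the rate and grid below are arbitrary choices:

```python
import numpy as np
from math import factorial

lam = 1.2                                 # constant birth rate (arbitrary)
ts = np.linspace(0.0, 3.0, 3001)          # time grid for the integrals

# P_{00}(t) = exp(-lambda t); then apply the recursion for j = 1, 2, ...
P = np.exp(-lam * ts)
for j in range(1, 5):
    integrand = np.exp(lam * ts) * P      # e^{lambda_j s} P_{0,j-1}(s)
    steps = (integrand[1:] + integrand[:-1]) / 2 * np.diff(ts)
    integral = np.concatenate(([0.0], np.cumsum(steps)))     # trapezoid rule
    P = lam * np.exp(-lam * ts) * integral                   # P_{0,j}(t)

k, t = 4, ts[-1]
poisson = np.exp(-lam * t) * (lam * t) ** k / factorial(k)
print(f"recursion: {P[-1]:.6f}   Poisson formula: {poisson:.6f}")
```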

4. Limiting Probabilities

​ The probability that a continuous-time Markov chain will be in state \(j\) at time \(t\) often converges to a limiting value that is independent of the initial state. That is, if we call this value \(P_j\), then

\[P_j\equiv\lim_{t\rightarrow\infty}P_{ij}(t) \]

where we are assuming that the limit exists and is independent of the initial state \(i\).

​ Consider the forward equations

\[P'_{ij}(t) = \sum_{k\ne j}q_{kj}P_{ik}(t)-v_jP_{ij}(t) \]

Now, if we let \(t\rightarrow\infty\), then assuming that we can interchange limit and summation, we obtain

\[\begin{align} \lim_{t\rightarrow\infty}P'_{ij}(t) &= \lim_{t\rightarrow\infty}\left[ \sum_{k\ne j}q_{kj}P_{ik}(t)-v_jP_{ij}(t) \right]\\ &= \sum_{k\ne j}q_{kj}P_k-v_jP_j \end{align} \]

Since \(P_{ij}(t)\) is a bounded function, if \(P'_{ij}(t)\) converges, it must converge to \(0\); hence we have

\[\begin{align} &v_jP_j =\sum_{k\ne j}q_{kj}P_k,\quad \mathrm{for\ all\ states\ }j\\ &\sum_jP_j=1 \end{align}\tag{4.1} \]

Thus we can solve for the limiting probabilities.
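
For a finite state space, Equations (4.1) form a linear system that can be solved directly. The sketch below does this for a birth and death chain truncated at level \(N\) (an M/M/1-type queue with finite capacity, hypothetical parameters); for such a chain the limiting probabilities are proportional to \((\lambda/\mu)^j\), which gives a check:

```python
import numpy as np

lam, mu, N = 0.8, 1.0, 20          # hypothetical M/M/1 queue truncated at N

q = np.zeros((N + 1, N + 1))       # q[i, j]: instantaneous rate from i to j
for i in range(N):
    q[i, i + 1] = lam              # births
    q[i + 1, i] = mu               # deaths
v = q.sum(axis=1)                  # v_i: total rate out of state i

# Balance equations (4.1): sum_{k != j} q[k, j] P_k - v_j P_j = 0, sum_j P_j = 1
A = q.T - np.diag(v)
A[-1, :] = 1.0                     # replace one redundant row by the normalization
b = np.zeros(N + 1)
b[-1] = 1.0
P = np.linalg.solve(A, b)

rho = lam / mu                     # for this chain, P_j is proportional to rho^j
geom = rho ** np.arange(N + 1)
print(np.allclose(P, geom / geom.sum()))   # True
```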

Sufficient Conditions for the Existence of Limiting Probabilities:

  1. all states of the Markov chain communicate in the sense that starting in state \(i\) there is a positive probability of ever being in state \(j\), for all \(i,j\) and
  2. the Markov chain is positive recurrent in the sense that, starting in any state, the mean time to return to the state is finite.

​ If the above two conditions hold, then the limiting probabilities will exist and satisfy Equations (4.1). In addition, \(P_j\) also has the interpretation of being the long-run proportion of time that the process is in state \(j\).

5. Time Reversibility

​ Consider a continuous-time Markov chain whose limiting probabilities exist. If we look at the sequence of states visited, ignoring the amount of time spent in each state during a visit, then this sequence constitutes a discrete-time Markov chain with transition probabilities \(P_{ij}\). We call this discrete-time Markov chain the embedded chain. Its limiting probabilities exist and, as we have seen before, satisfy

\[\begin{align} &\pi_j = \sum_i\pi_iP_{ij}\\ &\sum_i\pi_i=1 \end{align} \]

​ Since \(\pi_i\) represents the proportion of transitions that take the process into state \(i\), and because \(1/v_i\) is the mean time spent in state \(i\) during a visit, it seems intuitive that \(P_i\), the proportion of time in state \(i\), should be a weighted average of the \(\pi_i\) where \(\pi_i\) is weighted proportionately to \(1/v_i\). That is

\[P_i =\frac{\pi_i/v_i}{\sum_j\pi_j/v_j} \]
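
A small numerical check of this formula on a hypothetical three-state chain with given rates \(q_{ij}\): build the embedded chain, solve for \(\pi\), form the weighted average above, and verify that the result satisfies the balance equations (4.1):

```python
import numpy as np

q = np.array([[0.0, 2.0, 1.0],     # hypothetical transition rates q_ij
              [0.5, 0.0, 0.5],
              [1.0, 3.0, 0.0]])
v = q.sum(axis=1)                  # v_i
P_embed = q / v[:, None]           # embedded chain: P_ij = q_ij / v_i

# Stationary distribution of the embedded chain: pi = pi P, sum_i pi_i = 1
A = np.vstack([P_embed.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

w = pi / v                         # weight pi_i by the mean holding time 1/v_i
P_limit = w / w.sum()

# P_limit should satisfy the balance equations v_j P_j = sum_{k != j} q_kj P_k
print(np.allclose(v * P_limit, q.T @ P_limit))   # True
```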

​ Suppose now that a continuous-time Markov chain has been in operation for a long time, and that starting at some large time \(T\) we trace the process going backward in time. If the process is in state \(i\) at some large time \(t\), then the probability that it has been in this state for an amount of time greater than \(s\) is \(e^{-v_is}\). That is,

\[P\{\mathrm{in\ state\ }i\mathrm{\ throughout\ }[t-s,t]\ |\ X(t)=i \} = \frac{P\{X(t-s)=i\}e^{-v_is}}{P\{X(t)=i\}}=e^{-v_is} \]

Thus, the sequence of states visited by the reversed process constitutes a discrete-time Markov chain with transition probabilities \(Q_{ij}\) given by

\[Q_{ij}=\frac{\pi_jP_{ij}}{\pi_i} \]

Therefore, a continuous-time Markov chain will be time reversible if the embedded chain is time reversible, that is, if

\[\pi_iP_{ij}=\pi_jP_{ji} \]

Using the fact that \(P_i=\displaystyle{\frac{\pi_i/v_i}{\sum_j\pi_j/v_j}}\), we see that the preceding is equivalent to

\[P_iq_{ij} = P_jq_{ji},\quad \mathrm{for\ all\ }i,j \]

Since \(P_i\) is the proportion of time in state \(i\) and \(q_{ij}\) is the rate at which the process, when in state \(i\), goes to state \(j\), the condition of time reversibility is that the rate at which the process goes directly from state \(i\) to state \(j\) is equal to the rate at which it goes directly from \(j\) to \(i\).
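
Birth and death processes satisfy this condition, since transitions between \(i\) and \(i+1\) can only occur directly across that edge, so their rates must balance in the long run. A quick numerical check for the truncated M/M/1 chain with its geometric limiting probabilities (hypothetical parameters):

```python
import numpy as np

lam, mu, N = 0.8, 1.0, 20                     # hypothetical parameters
P = (lam / mu) ** np.arange(N + 1)            # limiting probabilities, unnormalized
P /= P.sum()

# Detailed balance across each edge: P_i * q_{i,i+1} = P_{i+1} * q_{i+1,i}
print(np.allclose(P[:-1] * lam, P[1:] * mu))  # True
```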

Proposition 5.1 If for some set \(\{P_i\}\)

\[\sum_iP_i=1 \]

and

\[P_iq_{ij}=P_jq_{ji},\quad\mathrm{for\ all\ }i\ne j \]

then the continuous-time Markov chain is time reversible and \(P_i\) represents the limiting probability of being in state \(i\).

Proposition 5.2 A time reversible chain with limiting probabilities \(P_j,j\in S\) that is truncated to the set \(A\subset S\) and remains irreducible is also time reversible and has limiting probabilities \(P_j^A\) given by

\[P_j^A=\frac{P_j}{\sum_{i\in A}P_i},\quad j\in A \]

Proposition 5.3 If \(\{X_i(t),t\ge0\}\) are independent time reversible continuous-time Markov chains, then the vector process \(\{(X_1(t),...,X_n(t)),t\ge0\}\) is also a time-reversible continuous-time Markov chain.

6. Uniformization

​ Consider a continuous-time Markov chain in which the mean time spent in a state is the same for all states. That is, suppose that \(v_i=v\) for all states \(i\). Let \(N(t)\) denote the number of transitions by time \(t\); then \(\{N(t),t\ge0\}\) will be a Poisson process with rate \(v\).

​ To compute the transition probabilities \(P_{ij}(t)\), we can condition on \(N(t)\):

\[\begin{align} P_{ij}(t) &= P\{X(t)=j|X(0)=i \}\\ &= \sum_{n=0}^\infty P\{X(t)=j|X(0)=i,N(t)=n \}P\{N(t)=n|X(0)=i \} \\ &= \sum_{n=0}^\infty P\{X(t)=j|X(0)=i,N(t)=n \}e^{-vt}\frac{(vt)^n}{n!} \end{align} \]

Since the time spent in each state has the same distribution, knowing that \(N(t)=n\) gives no information about which states were visited. Hence,

\[P\{X(t)=j|X(0)=i,N(t)=n \}=P_{ij}^n \]

where \(P_{ij}^n\) is the \(n\)-stage transition probability associated with the discrete-time Markov chain with transition probabilities \(P_{ij}\); and so when \(v_i\equiv v\)

\[P_{ij}(t)=\sum_{n=0}^\infty P_{ij}^n e^{-vt}\frac{(vt)^n}{n!} \]

​ The assumption \(v_i\equiv v\) is quite restrictive in practice, but with the trick of allowing fictitious transitions from a state to itself, most Markov chains can be put in that form. Let \(v\) be any number satisfying

\[v_i\le v,\quad \mathrm{for\ all\ }i \]

Any Markov chain satisfying the above condition can be thought of as a process that spends an exponential amount of time with rate \(v\) in state \(i\) and then makes a transition to \(j\) with transition probability \(P^*_{ij}\), where

\[P^*_{ij}=\left\{ \begin{align} &\frac{v_i}vP_{ij},&&j\ne i\\ &1-\frac{v_i}v,&&j=i \end{align} \right. \]

Hence the transition probabilities can be computed by

\[P_{ij}(t)=\sum_{n=0}^\infty P_{ij}^{*n} e^{-vt}\frac{(vt)^n}{n!} \]
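
A minimal uniformization sketch for a finite-state chain, reusing a hypothetical three-state rate matrix; the truncation point of the Poisson sum is an arbitrary choice:

```python
import numpy as np

q = np.array([[0.0, 2.0, 1.0],            # hypothetical transition rates q_ij
              [0.5, 0.0, 0.5],
              [1.0, 3.0, 0.0]])
v_i = q.sum(axis=1)
v = v_i.max()                             # any v >= max_i v_i will do

P_star = q / v                            # P*_ij = q_ij / v = (v_i/v) P_ij, i != j
np.fill_diagonal(P_star, 1.0 - v_i / v)   # P*_ii = 1 - v_i / v

def transition_matrix(t, n_terms=200):
    """P(t) = sum_n (P*)^n e^{-vt} (vt)^n / n!, truncated after n_terms terms."""
    out = np.zeros_like(P_star)
    power = np.eye(len(P_star))           # (P*)^0
    weight = np.exp(-v * t)               # e^{-vt} (vt)^0 / 0!
    for n in range(n_terms):
        out += weight * power
        power = power @ P_star
        weight *= v * t / (n + 1)
    return out

Pt = transition_matrix(1.5)
print(Pt.sum(axis=1))                     # each row sums to (approximately) 1
```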

7. Computing the Transition Probabilities

​ For any pair of states \(i\) and \(j\), let

\[r_{ij} = \left\{\begin{align}&q_{ij},&&i\ne j\\&-v_i,&&i=j \end{align} \right. \]

So we can rewrite Kolmogorov's backward and forward equations as follows:

\[\begin{align} &P'_{ij}(t) = \sum_kr_{ik}P_{kj}(t)&&(\mathrm{backward})\\ &P'_{ij}(t) = \sum_kr_{kj}P_{ik}(t)&&(\mathrm{forward}) \end{align} \]

This representation is especially revealing when we introduce matrix notation. Define the matrices \(\bold{R}\), \(\bold{P}(t)\), and \(\bold{P}'(t)\) by letting the elements in row \(i\), column \(j\) of these matrices be, respectively, \(r_{ij}\), \(P_{ij}(t)\), and \(P'_{ij}(t)\). Thus

\[\begin{align} &\bold P'(t) = \bold R\bold P(t),&&\mathrm{(backward)}\\ &\bold P'(t) = \bold P(t)\bold R,&&\mathrm{(forward)} \end{align} \]

In analogy with the scalar differential equation \(f'(t)=cf(t)\), whose solution is

\[f(t) = f(0)e^{ct} \]

the forward equation suggests that the solution can be written as

\[\bold P(t) = \bold P(0)e^{\bold Rt} \]

Since \(\bold P(0)=\bold I\), this yields

\[\bold P(t) = e^{\bold Rt} \]

where the matrix \(e^{\bold Rt}\) is defined by

\[e^{\bold Rt} = \sum_{n=0}^\infty\bold R^n\frac{t^n}{n!} \]
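
For a finite state space the matrix exponential can be computed directly, for instance with scipy.linalg.expm; a short sketch with the same hypothetical three-state rates:

```python
import numpy as np
from scipy.linalg import expm

q = np.array([[0.0, 2.0, 1.0],        # hypothetical transition rates q_ij
              [0.5, 0.0, 0.5],
              [1.0, 3.0, 0.0]])
R = q - np.diag(q.sum(axis=1))        # r_ij = q_ij for i != j, r_ii = -v_i

t = 1.5
Pt = expm(R * t)                      # P(t) = e^{Rt}
print(Pt.sum(axis=1))                 # rows of a transition matrix sum to 1
# This should agree with the uniformization computation of Section 6.
```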

From: https://www.cnblogs.com/kaleidopink/p/17892338.html
