===Continuous-time Markov chain===
{{Main|Continuous-time Markov chain}}

A continuous-time Markov chain (''X''<sub>''t''</sub>)<sub>''t'' ≥ 0</sub> is defined by a finite or countable state space ''S'', a [[transition rate matrix]] ''Q'' whose dimensions equal the size of the state space, and an initial probability distribution defined on the state space. For ''i'' ≠ ''j'', the elements ''q''<sub>''ij''</sub> are non-negative and describe the rate at which the process transitions from state ''i'' to state ''j''. The elements ''q''<sub>''ii''</sub> are chosen such that each row of the transition rate matrix sums to zero, whereas the row sums of the transition probability matrix of a (discrete-time) Markov chain all equal one.

There are three equivalent definitions of the process.<ref name="norris1">{{cite book|title=Markov Chains|year=1997|isbn=9780511810633|pages=60–107|chapter=Continuous-time Markov chains I|doi=10.1017/CBO9780511810633.004|last1=Norris|first1=J. R.|author-link1=James R. Norris}}</ref>

====Infinitesimal definition====
[[File:Intensities_vs_transition_probabilities.svg|thumb|The continuous-time Markov chain is characterized by the transition rates, the derivatives with respect to time of the transition probabilities between states ''i'' and ''j''.]]
Let <math>X_t</math> be the random variable describing the state of the process at time ''t'', and assume the process is in state ''i'' at time ''t''. Then, knowing <math>X_t = i</math>, <math>X_{t+h} = j</math> is independent of previous values <math>\left( X_s : s < t \right)</math>, and as ''h'' → 0, for all ''j'' and for all ''t'',
<math display="block">\Pr(X(t+h) = j \mid X(t) = i) = \delta_{ij} + q_{ij}h + o(h),</math>
where <math>\delta_{ij}</math> is the [[Kronecker delta]], using the [[little-o notation]]. The <math>q_{ij}</math> can be seen as measuring how quickly the transition from ''i'' to ''j'' happens.

====Jump chain/holding time definition====
Define a discrete-time Markov chain ''Y''<sub>''n''</sub> to describe the ''n''th jump of the process and variables ''S''<sub>1</sub>, ''S''<sub>2</sub>, ''S''<sub>3</sub>, ... to describe holding times in each of the states, where ''S''<sub>''i''</sub> follows the [[exponential distribution]] with rate parameter −''q''<sub>''Y''<sub>''i''</sub>''Y''<sub>''i''</sub></sub>.

====Transition probability definition====
For any value ''n'' = 0, 1, 2, 3, ..., times indexed up to this value of ''n'': ''t''<sub>0</sub>, ''t''<sub>1</sub>, ''t''<sub>2</sub>, ..., and all states recorded at these times ''i''<sub>0</sub>, ''i''<sub>1</sub>, ''i''<sub>2</sub>, ''i''<sub>3</sub>, ..., it holds that
:<math>\Pr(X_{t_{n+1}} = i_{n+1} \mid X_{t_0} = i_0 , X_{t_1} = i_1 , \ldots, X_{t_n} = i_n ) = p_{i_n i_{n+1}}( t_{n+1} - t_n)</math>
where ''p''<sub>''ij''</sub> is the solution of the [[forward equation]] (a [[first-order differential equation]])
:<math>P'(t) = P(t) Q</math>
with initial condition ''P''(0) equal to the [[identity matrix]].
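As a concrete illustration (a standard two-state worked example, not taken from the cited reference), consider states 1 and 2 with rates ''q''<sub>12</sub> = ''α'' and ''q''<sub>21</sub> = ''β'', so that
<math display="block">Q = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}, \qquad \alpha, \beta > 0.</math>
Solving the forward equation <math>P'(t) = P(t)Q</math> with <math>P(0)</math> the identity matrix gives
<math display="block">P(t) = \frac{1}{\alpha+\beta} \begin{pmatrix} \beta + \alpha e^{-(\alpha+\beta)t} & \alpha\left(1 - e^{-(\alpha+\beta)t}\right) \\ \beta\left(1 - e^{-(\alpha+\beta)t}\right) & \alpha + \beta e^{-(\alpha+\beta)t} \end{pmatrix},</math>
so, for instance, <math>p_{12}(t) = \Pr(X_t = 2 \mid X_0 = 1)</math> rises from 0 toward the stationary value <math>\alpha/(\alpha+\beta)</math> as ''t'' grows.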
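The jump chain/holding time definition also lends itself directly to simulation. The following is a minimal sketch, not part of the article or the cited reference: the two-state rate matrix, the helper name <code>simulate_ctmc</code>, and the use of <code>scipy.linalg.expm</code> to evaluate the forward-equation solution <math>P(t) = e^{tQ}</math> are assumptions made for this illustration. The sketch draws an exponential holding time with rate −''q''<sub>''ii''</sub> in the current state, jumps according to the jump-chain probabilities ''q''<sub>''ij''</sub>/(−''q''<sub>''ii''</sub>), and compares the empirical transition frequency with the matrix exponential.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

# Illustrative two-state transition rate matrix Q (states 0 and 1):
# off-diagonal entries are transition rates, each row sums to zero.
alpha, beta = 2.0, 1.0
Q = np.array([[-alpha, alpha],
              [ beta, -beta]])

def simulate_ctmc(Q, i0, t_max, rng):
    """Simulate a CTMC path via the jump chain / holding time construction:
    hold in state i for an Exp(-q_ii) time, then jump to j with
    probability q_ij / (-q_ii)."""
    t, i = 0.0, i0
    path = [(t, i)]
    while True:
        rate = -Q[i, i]
        if rate == 0:                        # absorbing state
            break
        t += rng.exponential(1.0 / rate)     # holding time ~ Exp(rate)
        if t > t_max:
            break                            # state at t_max is the current one
        probs = Q[i].copy()
        probs[i] = 0.0
        probs /= rate                        # jump-chain transition probabilities
        i = rng.choice(len(probs), p=probs)
        path.append((t, i))
    return path

rng = np.random.default_rng(0)
t, n_paths = 1.5, 20_000

# Empirical estimate of Pr(X_t = 1 | X_0 = 0) from many simulated paths ...
hits = sum(simulate_ctmc(Q, 0, t, rng)[-1][1] == 1 for _ in range(n_paths))

# ... compared with the forward-equation solution P(t) = expm(t Q).
print(hits / n_paths, expm(t * Q)[0, 1])
</syntaxhighlight>

The two printed numbers should agree up to Monte Carlo error, and for large ''t'' both approach the stationary probability ''α''/(''α'' + ''β'') = 2/3 for these rates.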