Minimum power to maintain a nonequilibrium distribution of a Markov chain
DS Pavlichin, Y Quek, T Weissman - arXiv preprint arXiv:1907.01582, 2019 - arxiv.org
Biological systems use energy to maintain non-equilibrium distributions for long times, e.g., of chemical concentrations or protein conformations. What are the fundamental limits on the power used to "hold" a stochastic system in a desired distribution over states? We study the setting of an uncontrolled Markov chain $Q$ altered into a controlled chain $P$ having a desired stationary distribution. Thermodynamic considerations lead to an appropriately defined Kullback-Leibler (KL) divergence rate $D(P\|Q)$ as the cost of control, a setting introduced by Todorov, corresponding to a Markov decision process with mean log loss action cost. The optimal controlled chain minimizes the KL divergence rate subject to a stationary distribution constraint, and the minimal KL divergence rate lower bounds the power used. While this optimization problem is familiar from the large deviations literature, we offer a novel interpretation as a minimum "holding cost" and compute the minimizer more explicitly than previously available. We state a version of our results for both discrete- and continuous-time Markov chains, and find explicit expressions for the important case of a reversible uncontrolled chain, for a two-state chain, and for birth-and-death processes.
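To make the optimization concrete, the sketch below is a minimal numerical illustration, not code from the paper. It assumes the standard definition of the KL divergence rate between discrete-time chains, $D(P\|Q) = \sum_i \pi_i \sum_j P_{ij} \log(P_{ij}/Q_{ij})$, where $\pi$ is the stationary distribution of the controlled chain $P$. For the two-state case mentioned in the abstract, the stationarity constraint $\pi P = \pi$ leaves a single free transition probability, so the minimal holding cost can be found by a one-dimensional search; the matrix `Q` and target `pi` are arbitrary illustrative values.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Uncontrolled two-state chain Q (rows sum to 1); entries are illustrative, not from the paper.
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Desired stationary distribution pi to be "held" by the controlled chain P.
pi = np.array([0.3, 0.7])

def kl_rate(a):
    """KL divergence rate D(P||Q) = sum_i pi_i sum_j P_ij log(P_ij / Q_ij)
    for a two-state controlled chain parameterized by a = P[0, 1].
    Stationarity (pi P = pi) forces P[1, 0] = pi[0] * a / pi[1]."""
    b = pi[0] * a / pi[1]
    P = np.array([[1.0 - a, a],
                  [b, 1.0 - b]])
    return float(np.sum(pi[:, None] * P * np.log(P / Q)))

# The single free parameter a must keep all entries of P strictly inside (0, 1).
a_max = min(1.0, pi[1] / pi[0])
res = minimize_scalar(kl_rate, bounds=(1e-9, a_max - 1e-9), method="bounded")
print("optimal P[0,1]  :", res.x)
print("minimal KL rate :", res.fun)  # the quantity that lower bounds the power used
```

The value printed as the minimal KL rate is, in this hedged reading of the abstract, the quantity that lower bounds the power needed to hold the chain at the target distribution $\pi$; the paper gives this minimizer explicitly rather than numerically.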