Markov decision process

In mathematics, a Markov decision process (MDP) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying optimization problems solved via dynamic programming. MDPs were known at least as early as the 1950s;[1] a core body of research on Markov decision processes resulted from Ronald Howard's 1960 book, Dynamic Programming and Markov Processes.[2] They are used in many disciplines, including robotics, automatic control, economics and manufacturing. The name of MDPs comes from the Russian mathematician Andrey Markov as they are an extension of Markov chains.

At each time step, the process is in some state $s$, and the decision maker may choose any action $a$ that is available in state $s$. The process responds at the next time step by randomly moving into a new state $s'$, and giving the decision maker a corresponding reward $R_a(s, s')$.

The probability that the process moves into its new state $s'$ is influenced by the chosen action. Specifically, it is given by the state transition function $P_a(s, s')$. Thus, the next state $s'$ depends on the current state $s$ and the decision maker's action $a$. But given $s$ and $a$, it is conditionally independent of all previous states and actions; in other words, the state transitions of an MDP satisfy the Markov property.

Markov decision processes are an extension of Markov chains; the difference is the addition of actions (allowing choice) and rewards (giving motivation). Conversely, if only one action exists for each state (e.g. "wait") and all rewards are the same (e.g. "zero"), a Markov decision process reduces to a Markov chain.

Definition

Example of a simple MDP with three states (green circles) and two actions (orange circles), with two rewards (orange arrows)

A Markov decision process is a 4-tuple $(S, A, P_a, R_a)$, where:

  • $S$ is a set of states called the state space,
  • $A$ is a set of actions called the action space (alternatively, $A_s$ is the set of actions available from state $s$),
  • $P_a(s, s') = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$ is the probability that action $a$ in state $s$ at time $t$ will lead to state $s'$ at time $t+1$,
  • $R_a(s, s')$ is the immediate reward (or expected immediate reward) received after transitioning from state $s$ to state $s'$, due to action $a$.

The state and action spaces may be finite or infinite, for example the set of real numbers. Some processes with countably infinite state and action spaces can be reduced to ones with finite state and action spaces.[3]

A policy function $\pi$ is a (potentially probabilistic) mapping from state space ($S$) to action space ($A$).
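For a small finite MDP, this 4-tuple can be written down directly. The following is a minimal sketch in Python; the state names, action names, probabilities, and rewards are invented for illustration and are not taken from the figure above.

    # Hypothetical finite MDP: every name and number below is illustrative only.
    states = ["s0", "s1", "s2"]            # state space S
    actions = ["a0", "a1"]                 # action space A

    # Transition function P_a(s, s'): P[(s, a)][s'] is the probability of moving to s'.
    P = {
        ("s0", "a0"): {"s0": 0.5, "s2": 0.5},
        ("s0", "a1"): {"s2": 1.0},
        ("s1", "a0"): {"s0": 0.7, "s1": 0.1, "s2": 0.2},
        ("s1", "a1"): {"s1": 0.95, "s2": 0.05},
        ("s2", "a0"): {"s0": 0.4, "s2": 0.6},
        ("s2", "a1"): {"s0": 0.3, "s1": 0.3, "s2": 0.4},
    }

    # Reward function R_a(s, s'): immediate reward received for the transition s -> s'.
    R = {
        (s, a): {s_next: 0.0 for s_next in successors}
        for (s, a), successors in P.items()
    }
    R[("s1", "a0")]["s0"] = 5.0            # a couple of non-zero rewards, chosen arbitrarily
    R[("s2", "a1")]["s0"] = -1.0

    # A deterministic policy: a mapping from each state to one action.
    policy = {"s0": "a1", "s1": "a0", "s2": "a0"}

A probabilistic policy would instead map each state to a distribution over the available actions.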

Optimization objective

The goal in a Markov decision process is to find a good "policy" for the decision maker: a function $\pi$ that specifies the action $\pi(s)$ that the decision maker will choose when in state $s$. Once a Markov decision process is combined with a policy in this way, this fixes the action for each state and the resulting combination behaves like a Markov chain (since the action chosen in state $s$ is completely determined by $\pi(s)$, and $\Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$ reduces to $\Pr(s_{t+1} = s' \mid s_t = s)$, a Markov transition matrix).

The objective is to choose a policy $\pi$ that will maximize some cumulative function of the random rewards, typically the expected discounted sum over a potentially infinite horizon:

$$E\left[\sum_{t=0}^{\infty} \gamma^{t} R_{a_t}(s_t, s_{t+1})\right]$$

(where we choose $a_t = \pi(s_t)$, i.e. actions given by the policy), and the expectation is taken over

$$s_{t+1} \sim P_{a_t}(s_t, s_{t+1}),$$

where $\gamma$ is the discount factor satisfying $0 \le \gamma \le 1$, which is usually close to 1 (for example, $\gamma = 1/(1+r)$ for some discount rate $r$). A lower discount factor motivates the decision maker to favor taking actions early, rather than postponing them indefinitely.
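As an illustration of this objective (not of any solution method), the expected discounted sum for a fixed policy can be estimated by simulating trajectories and averaging their discounted returns. The sketch below assumes the dictionary-based representation (P, R, policy) from the earlier hypothetical example.

    import random

    def discounted_return(P, R, policy, s0, gamma=0.9, horizon=200):
        """Simulate one trajectory from s0 under `policy` and return its
        discounted sum of rewards, truncated after `horizon` steps."""
        s, total, discount = s0, 0.0, 1.0
        for _ in range(horizon):
            a = policy[s]
            successors, probs = zip(*P[(s, a)].items())
            s_next = random.choices(successors, weights=probs)[0]   # s_{t+1} ~ P_a(s, .)
            total += discount * R[(s, a)][s_next]                   # gamma^t * R_a(s_t, s_{t+1})
            discount *= gamma
            s = s_next
        return total

    # Monte Carlo estimate of the expectation, starting from the hypothetical state "s0":
    # estimate = sum(discounted_return(P, R, policy, "s0") for _ in range(1000)) / 1000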

A policy that maximizes the function above is called an optimal policy and is usually denoted $\pi^*$. A particular MDP may have multiple distinct optimal policies. Because of the Markov property, it can be shown that the optimal policy is a function of the current state, as assumed above.

Simulator models

In many cases, it is difficult to represent the transition probability distributions, $P_a(s, s')$, explicitly. In such cases, a simulator can be used to model the MDP implicitly by providing samples from the transition distributions. One common form of implicit MDP model is an episodic environment simulator that can be started from an initial state and yields a subsequent state and reward every time it receives an action input. In this manner, trajectories of states, actions, and rewards, often called episodes, may be produced.
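As a sketch of how such an episodic simulator is typically driven (the reset/step method names are a common convention assumed here, not prescribed by the text), an episode is produced by repeatedly feeding actions and recording what comes back:

    def run_episode(env, policy, horizon=100):
        """Produce one episode, a list of (state, action, reward) triples, from an
        episodic environment simulator exposing reset() and step(action)."""
        s = env.reset()                  # start from an initial state
        trajectory = []
        for _ in range(horizon):
            a = policy[s]                # deterministic policy: state -> action
            s_next, r = env.step(a)      # the simulator yields the next state and reward
            trajectory.append((s, a, r))
            s = s_next
        return trajectory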

Another form of simulator is a generative model, a single-step simulator that can generate samples of the next state and reward given any state and action.[4] (Note that this is a different meaning from the term generative model in the context of statistical classification.) In algorithms that are expressed using pseudocode, $G$ is often used to represent a generative model. For example, the expression $s', r \leftarrow G(s, a)$ might denote the action of sampling from the generative model, where $s$ and $a$ are the current state and action, and $s'$ and $r$ are the new state and reward. Compared to an episodic simulator, a generative model has the advantage that it can yield data from any state, not only those encountered in a trajectory.
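In code, a generative model is just a single-step sampling routine: given any state-action pair it returns one sampled next state and reward. The sketch below builds such a model from the explicit dictionaries of the earlier hypothetical example; the function name G follows the pseudocode convention mentioned above.

    import random

    def G(P, R, s, a):
        """Generative model: sample (s', r) for any state-action pair (s, a)."""
        successors, probs = zip(*P[(s, a)].items())
        s_next = random.choices(successors, weights=probs)[0]
        return s_next, R[(s, a)][s_next]

    # s_next, r = G(P, R, "s1", "a0")    # can be queried from any state, not only states seen on a trajectory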

These model classes form a hierarchy of information content: an explicit model trivially yields a generative model through sampling from the distributions, and repeated application of a generative model yields an episodic simulator. In the opposite direction, it is only possible to learn approximate models through regression. The type of model available for a particular MDP plays a significant role in determining which solution algorithms are appropriate. For example, the dynamic programming algorithms described in the next section require an explicit model, and Monte Carlo tree search requires a generative model (or an episodic simulator that can be copied at any state), whereas most reinforcement learning algorithms require only an episodic simulator.
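The easy directions of this hierarchy are short to write down: the sketch just above already turns an explicit model into a generative model by sampling, and the class below (again with assumed reset/step names) turns a generative model into an episodic simulator by applying it repeatedly while carrying the current state along.

    class EpisodicFromGenerative:
        """Wrap a one-step generative model, callable as g(s, a) -> (s', r),
        into an episodic simulator with reset() and step(action)."""

        def __init__(self, g, initial_state):
            self.g = g
            self.initial_state = initial_state
            self.state = initial_state

        def reset(self):
            self.state = self.initial_state
            return self.state

        def step(self, action):
            s_next, reward = self.g(self.state, action)
            self.state = s_next
            return s_next, reward

    # Hypothetical wiring with the earlier sketches:
    # env = EpisodicFromGenerative(lambda s, a: G(P, R, s, a), "s0")
    # episode = run_episode(env, policy)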

References

  1. ^ Bellman, R. (1957). "A Markovian Decision Process". Journal of Mathematics and Mechanics. 6 (5): 679–684. JSTOR 24900506.
  2. ^ Howard, Ronald A. (1960). Dynamic Programming and Markov Processes. The M.I.T. Press.
  3. ^ Wrobel, A. (1984). "On Markovian Decision Models with a Finite Skeleton". Mathematical Methods of Operations Research. 28 (February): 17–27. S2CID 2545336. doi:10.1007/bf01919083.
  4. ^ Kearns, Michael; Mansour, Yishay; Ng, Andrew (2002). "A Sparse Sampling Algorithm for Near-Optimal Planning in Large Markov Decision Processes". Machine Learning. 49: 193–208. doi:10.1023/A:1017932429737.
