{{Short description|Problem optimization method}}
{{Distinguish|Dynamic programming language|Dynamic problem}}
{{bots|deny=OAbot}}<!-- To prevent re-addition of bogus pmc -->
[[File:Shortest path optimal substructure.svg|thumb|upright=0.8|'''Figure 1.''' Finding the shortest path in a graph using optimal substructure; a straight line indicates a single edge; a wavy line indicates a shortest path between the two vertices it connects (among other paths, not shown, sharing the same two vertices); the bold line is the overall shortest path from start to goal.]]
=== Mathematical optimization ===
In terms of mathematical optimization, dynamic programming usually refers to simplifying a decision by breaking it down into a sequence of decision steps over time.
This is done by defining a sequence of '''value functions''' ''V''<sub>1</sub>, ''V''<sub>2</sub>, ..., ''V''<sub>''n''</sub> taking ''y'' as an argument representing the '''[[State variable|state]]''' of the system at times ''i'' from 1 to ''n''.
The definition of ''V''<sub>''n''</sub>(''y'') is the value obtained in state ''y'' at the last time ''n''.
The values ''V''<sub>''i''</sub> at earlier times ''i'' = ''n'' − 1, ''n'' − 2, ..., 2, 1 can be found by working backwards, using a [[Recursion|recursive]] relationship called the [[Bellman equation]].
For ''i'' = 2, ..., ''n'', ''V''<sub>''i''−1</sub> at any state ''y'' is calculated from ''V''<sub>''i''</sub> by maximizing a simple function (usually the sum) of the gain from a decision at time ''i'' − 1 and the function ''V''<sub>''i''</sub> at the new state of the system if this decision is made.
Since ''V''<sub>''i''</sub> has already been calculated for the needed states, the above operation yields ''V''<sub>''i''−1</sub> for those states.
Finally, ''V''<sub>1</sub> at the initial state of the system is the value of the optimal solution. The optimal values of the decision variables can be recovered, one by one, by tracking back the calculations already performed.
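A minimal sketch of this backward pass, for a hypothetical finite problem: the state space, decision set, and the <code>gain</code>, <code>transition</code> and <code>terminal_value</code> functions here are illustrative assumptions, not part of any particular application.

<syntaxhighlight lang="python">
# Hypothetical sketch of the backward pass described above; V[i][y]
# plays the role of the value function V_i(y).
def backward_induction(states, decisions, n, gain, transition, terminal_value):
    V = {n: {y: terminal_value(y) for y in states}}  # V_n(y): value at the last time n
    policy = {}
    for i in range(n - 1, 0, -1):  # work backwards: i = n-1, ..., 1
        V[i] = {}
        policy[i] = {}
        for y in states:
            # Maximize the gain from the decision at time i plus the
            # already-computed value V_{i+1} at the resulting state.
            best = max(decisions,
                       key=lambda d: gain(i, y, d) + V[i + 1][transition(i, y, d)])
            V[i][y] = gain(i, y, best) + V[i + 1][transition(i, y, best)]
            policy[i][y] = best  # stored so the optimal decisions can be traced forwards
    return V, policy
</syntaxhighlight>

Here <code>V[1]</code> at the initial state gives the value of the optimal solution, and the stored policy recovers the optimal decisions one by one, as described above.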
''Optimal substructure'' means that the solution to a given optimization problem can be obtained by the combination of optimal solutions to its sub-problems. Such optimal substructures are usually described by means of [[recursion]]. For example, given a graph ''G=(V,E)'', the shortest path ''p'' from a vertex ''u'' to a vertex ''v'' exhibits optimal substructure: take any intermediate vertex ''w'' on this shortest path ''p''. If ''p'' is truly the shortest path, then it can be split into sub-paths ''p<sub>1</sub>'' from ''u'' to ''w'' and ''p<sub>2</sub>'' from ''w'' to ''v'' such that these, in turn, are indeed the shortest paths between the corresponding vertices (by the simple cut-and-paste argument described in ''[[Introduction to Algorithms]]''). Hence, one can easily formulate the solution for finding shortest paths in a recursive manner, which is what the [[Bellman–Ford algorithm]] or the [[Floyd–Warshall algorithm]] does.
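A minimal sketch of the [[Bellman–Ford algorithm]]'s relaxation loop, which applies this substructure directly; the edge-list representation of the graph is an illustrative choice.

<syntaxhighlight lang="python">
# Hypothetical Bellman-Ford sketch, assuming the graph is given as an
# edge list of (u, v, weight) triples. It exploits the optimal
# substructure above: a shortest path to v extends a shortest path to
# one of v's predecessors.
import math

def bellman_ford(vertices, edges, source):
    dist = {v: math.inf for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):  # shortest simple paths use at most |V|-1 edges
        for u, v, w in edges:
            if dist[u] + w < dist[v]:   # relaxation: reuse the optimal sub-path to u
                dist[v] = dist[u] + w
    return dist  # distances from source (negative cycles not detected in this sketch)
</syntaxhighlight>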
''Overlapping'' sub-problems means that the space of sub-problems must be small, that is, any recursive algorithm solving the problem should solve the same sub-problems over and over, rather than generating new sub-problems. For example, consider the recursive formulation for generating the Fibonacci sequence: ''F''<sub>''i''</sub> = ''F''<sub>''i''−1</sub> + ''F''<sub>''i''−2</sub>, with base cases ''F''<sub>1</sub> = ''F''<sub>2</sub> = 1. The expansions of ''F''<sub>''i''</sub> and ''F''<sub>''i''−1</sub> both require ''F''<sub>''i''−2</sub>, so a naive recursion recomputes the same values many times.
[[Image:Fibonacci dynamic programming.svg|thumb|108px|'''Figure 2.''' The subproblem graph for the Fibonacci sequence. The fact that it is not a [[tree structure|tree]] indicates overlapping subproblems.]]
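A memoized version of this recursion solves each subproblem only once, turning the exponential-time recursion into a linear number of subproblem evaluations:

<syntaxhighlight lang="python">
# Top-down dynamic programming: overlapping subproblems are cached and
# reused instead of being recomputed.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(i):
    if i <= 2:
        return 1  # base cases F_1 = F_2 = 1
    return fib(i - 1) + fib(i - 2)  # repeated subproblems are served from the cache
</syntaxhighlight>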
=== Bioinformatics ===
Dynamic programming is widely used in bioinformatics for tasks such as [[sequence alignment]], [[protein folding]], RNA structure prediction and protein-DNA binding. The first dynamic programming algorithms for protein-DNA binding were developed in the 1970s independently by [[Charles DeLisi]] in the US<ref>{{citation
| last = Delisi | first = Charles
| date = July 1974
| doi = 10.1002/bip.1974.360130719
| issue = 7
| journal = Biopolymers
| pages = 1511–1512
| title = Cooperative phenomena in homopolymers: An alternative formulation of the partition function
| volume = 13}}</ref> and by Georgii Gurskii and Alexander Zasedatelev in the [[Soviet Union]].<ref>{{citation
| last1 = Gurskiĭ | first1 = G. V.
| last2 = Zasedatelev | first2 = A. S.
| date = September 1978
| issue = 5
| journal = Biofizika
| pages = 932–946
| pmid = 698271
| title = Precise relationships for calculating the binding of regulatory proteins and other lattice ligands in double-stranded polynucleotides
| volume = 23}}</ref> Recently these algorithms have become very popular in bioinformatics and [[computational biology]], particularly in the studies of [[nucleosome]] positioning and [[transcription factor]] binding.
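Sequence alignment illustrates the same tabular pattern. Below is a minimal sketch of global alignment scoring in the style of the [[Needleman–Wunsch algorithm]]; the match, mismatch and gap scores are illustrative assumptions.

<syntaxhighlight lang="python">
# Hypothetical Needleman-Wunsch-style scoring sketch.
def alignment_score(a, b, match=1, mismatch=-1, gap=-1):
    # score[i][j] = best score for aligning the prefixes a[:i] and b[:j]
    score = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        score[i][0] = i * gap  # a[:i] aligned entirely against gaps
    for j in range(1, len(b) + 1):
        score[0][j] = j * gap
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            score[i][j] = max(
                score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch),
                score[i - 1][j] + gap,  # gap in b
                score[i][j - 1] + gap,  # gap in a
            )
    return score[len(a)][len(b)]
</syntaxhighlight>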
== Examples: computer algorithms ==
== History of the name ==
The term ''dynamic programming'' was originally used in the 1940s by [[Richard Bellman]] to describe the process of solving problems where one needs to find the best decisions one after another. By 1953, he refined this to the modern meaning, referring specifically to nesting smaller decision problems inside larger decisions,<ref>Stuart Dreyfus. [https://web.archive.org/web/20050110161049/http://www.wu-wien.ac.at/usr/h99c/h9951826/bellman_dynprog.pdf "Richard Bellman on the birth of Dynamic Programming"].</ref> and the field was thereafter recognized by the [[IEEE]] as a [[systems analysis]] and [[engineering]] topic. Bellman's contribution is remembered in the name of the [[Bellman equation]], a central result of dynamic programming which restates an optimization problem in [[Recursion (computer science)|recursive]] form.
The word ''dynamic'' was chosen by Bellman to capture the time-varying aspect of the problems, and because it sounded impressive.<ref name="Eddy">{{cite journal |last=Eddy |first=S. R. |author-link=Sean Eddy |title=What is Dynamic Programming? |journal=Nature Biotechnology |volume=22 |issue= 7|pages=909–910 |year=2004 |doi=10.1038/nbt0704-909 |pmid=15229554 |s2cid=5352062 }}</ref> The word ''programming'' referred to the use of the method to find an optimal ''program'', in the sense of a military schedule for training or logistics. This usage is the same as that in the phrases ''[[linear programming]]'' and ''mathematical programming'', a synonym for [[mathematical optimization]].<ref>{{cite book |last1=Nocedal |first1=J. |last2=Wright |first2=S. J. |title=Numerical Optimization |url=https://archive.org/details/numericaloptimiz00noce_639 |url-access=limited |page=[https://archive.org/details/numericaloptimiz00noce_639/page/n21 9] |publisher=Springer |year=2006 |isbn=9780387303031 }}</ref>
The above explanation of the origin of the term may be inaccurate: according to Russell and Norvig, the above story "cannot be strictly true, because his first paper using the term (Bellman, 1952) appeared before Wilson became Secretary of Defense in 1953."<ref>{{cite book |last1=Russell |first1=S. |last2=Norvig |first2=P. |title=Artificial Intelligence: A Modern Approach |edition=3rd |publisher=Prentice Hall |year=2009 |isbn=978-0-13-207148-2 }}</ref>
== See also ==
<!-- alphabetical order please [[WP:SEEALSO]] -->
<!-- please add a short description [[WP:SEEALSO]], via {{subst:AnnotatedListOfLinks}} or {{Annotated link}} -->
{{div col|colwidth=30em|small=yes}}
* {{Annotated link |Convexity in economics}}
* {{Annotated link |Greedy algorithm}}
{{external links|date=March 2016}}
* [http://mat.gsia.cmu.edu/classes/dynamic/dynamic.html A Tutorial on Dynamic programming]
* [https://ocw.mit.edu/courses/6-006-introduction-to-algorithms-spring-2020/resources/lecture-15-dynamic-programming-part-1-srtbot-fib-dags-bowling/ MIT course on algorithms] – Includes 4 video lectures on DP, lectures 15–18
* [http://web.mit.edu/15.053/www/AMP.htm Applied Mathematical Programming] by Bradley, Hax, and Magnanti, [http://web.mit.edu/15.053/www/AMP-Chapter-11.pdf Chapter 11]
* [http://www.csse.monash.edu.au/~lloyd/tildeAlgDS/Dynamic More DP Notes]
* [http://www.topcoder.com/tc?module=Static&d1=tutorials&d2=dynProg Dynamic Programming: from novice to advanced] A TopCoder.com article by Dumitru on Dynamic Programming
* [https://bibiserv.cebitec.uni-bielefeld.de/adp/welcome.html Algebraic Dynamic Programming] – a formalized framework for dynamic programming, including an [https://bibiserv.cebitec.uni-bielefeld.de/cgi-bin/dpcourse entry-level course] to DP, University of Bielefeld
* Dreyfus, Stuart, "[http://www.cas.mcmaster.ca/~se3c03/journal_papers/dy_birth.pdf Richard Bellman on the birth of Dynamic Programming.] {{Webarchive|url=https://web.archive.org/web/20201013233916/http://www.cas.mcmaster.ca/~se3c03/journal_papers/dy_birth.pdf |date=2020-10-13 }}"
* [https://web.archive.org/web/20080626183359/http://www.avatar.se/lectures/molbioinfo2001/dynprog/dynamic.html Dynamic programming tutorial]
* [http://www.cambridge.org/resources/0521882672/7934_kaeslin_dynpro_new.pdf A Gentle Introduction to Dynamic Programming and the Viterbi Algorithm]