
Dynamic programming. Finding the longest common subsequence.


Dynamic programming

From Wikipedia, the free encyclopedia








In computer science, dynamic programming is a method of solving problems exhibiting the properties of overlapping subproblems and optimal substructure (described below) that takes much less time than naïve methods.



Optimal substructure means that optimal solutions of subproblems can be used to find the optimal solutions of the overall problem. For example, the shortest path to a goal from a vertex in an acyclic graph can be found by first computing the shortest path to the goal from all adjacent vertices, and then using this to pick the best overall path, as shown in Figure 1. In general, we can solve a problem with optimal substructure using a three-step process:



1. Break the problem into smaller subproblems.

2. Solve these problems optimally using this three-step process recursively.

3. Use these optimal solutions to construct an optimal solution for the original problem.
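As a concrete sketch of this three-step process, here is the shortest-path example from Figure 1 in Python (the graph, names, and edge costs below are illustrative assumptions, not from the article):

```python
def shortest_path(graph, vertex, goal):
    """Length of the shortest path from vertex to goal in a DAG."""
    if vertex == goal:
        return 0  # simple case: already at the goal
    # 1. break into subproblems: the shortest path from each successor;
    # 2. solve each subproblem by the same three-step process (recursion);
    # 3. combine: edge cost plus the best continuation.
    return min(cost + shortest_path(graph, nxt, goal)
               for nxt, cost in graph[vertex])

# illustrative acyclic graph: vertex -> list of (successor, edge cost)
dag = {'a': [('b', 2), ('c', 1)], 'b': [('d', 1)], 'c': [('d', 5)], 'd': []}
print(shortest_path(dag, 'a', 'd'))  # → 3 (via a-b-d)
```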



The subproblems are, themselves, solved by dividing them into sub-subproblems, and so on, until we reach some simple case that is easy to solve.





Figure 2. The subproblem graph for the Fibonacci sequence. That it is not a tree but a DAG indicates overlapping subproblems.



To say that a problem has overlapping subproblems is to say that the same subproblems are used to solve many different larger problems. For example, in the Fibonacci sequence, F3 = F1 + F2 and F4 = F2 + F3; computing each number involves computing F2. Because both F3 and F4 are needed to compute F5, a naïve approach to computing F5 may end up computing F2 twice or more. This applies whenever overlapping subproblems are present: a naïve approach may waste time recomputing optimal solutions to subproblems it has already solved.
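The duplicated work is easy to observe by counting calls in a naive recursive implementation (a sketch of mine, with the convention F0 = 0, F1 = 1):

```python
calls = {}  # how many times each F(n) is computed

def fib(n):
    calls[n] = calls.get(n, 0) + 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(5)
print(calls[2])  # → 3: F2 is recomputed, once per path to it in Figure 2
```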



In order to avoid this, we instead save the solutions to problems we have already solved. Then, if we need to solve the same problem later, we can retrieve and reuse our already-computed solution. This approach is called memoization (not memorization, although this term also fits). If we are sure we won't need a particular solution anymore, we can throw it away to save space. In some cases, we can even compute the solutions to subproblems we know that we'll need in advance.
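A minimal memoized version of the same recursion (my sketch, not from the article):

```python
def fib_memo(n, memo={}):  # the memo dict persists across calls
    if n not in memo:
        # solve the subproblem once and save the answer
        memo[n] = n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
    return memo[n]         # retrieve and reuse already-computed solutions

print(fib_memo(50))  # → 12586269025, in O(n) calls instead of O(2^n)
```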



In summary, dynamic programming makes use of:



* Overlapping subproblems

* Optimal substructure

* Memoization



Dynamic programming usually takes one of two approaches:



* Top-down approach: The problem is broken into subproblems, and these subproblems are solved and the solutions remembered, in case they need to be solved again. This combines recursion with memoization.

* Bottom-up approach: All subproblems that might be needed are solved in advance and then used to build up solutions to larger problems. This approach is slightly better in stack space and number of function calls, but it is sometimes not intuitive to figure out all the subproblems needed for solving the given problem.
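For the Fibonacci numbers, the bottom-up approach solves the subproblems in order of increasing n; since only the last two values are ever needed again, earlier solutions can be thrown away to save space (sketch of mine):

```python
def fib_bottom_up(n):
    a, b = 0, 1          # F0, F1
    for _ in range(n):
        a, b = b, a + b  # slide the window: keep only the last two values
    return a             # no recursion, no stack growth, O(1) extra space

print(fib_bottom_up(10))  # → 55
```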



Some programming languages with special extensions [1] can automatically memoize the result of a function call with a particular set of arguments, in order to speed up call-by-name evaluation (this mechanism is referred to as call-by-need). Some languages (e.g., Maple) have automatic memoization built in. This is only possible for functions that have no side effects, which is always true in pure functional languages but seldom true in imperative languages.
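In Python, for instance, the standard library offers memoization as a decorator; this works cleanly here because the function has no side effects:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every (argument -> result) pair
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # → 354224848179261915075, computed without recomputation
```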

Finding the longest common subsequence



A subsequence is obtained from a given string by (possibly) omitting some of its symbols. The subsequences of any string include the empty string and the string itself; in the worst case there are 2^n variants in total, where n is the length of the string (the worst case occurs, for example, when all symbols are distinct).



Let a string s of length m and a string t of length n be given. Find a string u of length k such that u is a subsequence of both s and t, and k is the maximum over all possibilities. The solution to this problem need not be unique: there may be several longest common subsequences of the same length (but the length k is uniquely determined).



Note the following facts:



1. If s and t end with the same symbol, then this common ending is also the ending of the solution u, and we can reduce the problem to the case of the strings shortened by that ending. If this were not so, we could lengthen u by it, but u is by assumption the longest.

2. If the last symbols of s and t do not coincide, then we pick the one of s and t whose last symbol does not coincide with the last symbol of u, remove that symbol (in fact, symbols could be removed from the end until an occurrence of the last symbol of u is reached), and reduce the problem to the case of the shorter string (u stays the same).



Thus, the solution could be found with the following (inefficient) recursive algorithm:
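The listing itself did not survive on this page; the two observations above translate directly into the following naive recursion (a Python sketch of mine, returning only the length k):

```python
def lcs_len(s, t):
    if not s or not t:                       # empty string: k = 0
        return 0
    if s[-1] == t[-1]:                       # observation 1: common last symbol
        return 1 + lcs_len(s[:-1], t[:-1])
    # observation 2: drop the last symbol of one string or the other,
    # and keep whichever choice yields the longer common subsequence
    return max(lcs_len(s[:-1], t), lcs_len(s, t[:-1]))

print(lcs_len("ABCBDAB", "BDCABA"))  # → 4 (e.g. "BCBA")
```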





This is very inefficient, because the same subproblems are solved over and over. It would be sensible to remember the answers to the subproblems and use them in solving the problem (cf. computing the Fibonacci numbers). Such an approach to an algorithm with overlapping subproblems is called dynamic programming.



For this problem we build an m × n auxiliary matrix whose element with indices i, j is the length of the longest common subsequence of s|i and t|j (the prefixes of s of length i and of t of length j). As a boundary condition we add to the matrix a zeroth row and a zeroth column whose elements are all zero. We compute this table up front: complexity O(mn).
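Filling that table bottom-up could look like this (my sketch; L[i][j] holds the length of the longest common subsequence of the prefixes s|i and t|j):

```python
def lcs_table(s, t):
    m, n = len(s), len(t)
    # (m+1) x (n+1) table; the zeroth row and column are the zero boundary
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1          # common last symbol
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L  # L[m][n] is the answer k; each cell is filled once: O(mn)

print(lcs_table("ABCBDAB", "BDCABA")[-1][-1])  # → 4
```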