
Difference Between a Divide and Conquer Algorithm and Dynamic Programming

itboxs 2020. 7. 10. 08:12



What is the difference between a divide and conquer algorithm and a dynamic programming algorithm? How are the two terms different? I don't understand the difference between them.

Please give a simple example that explains the difference between the two, and on what grounds they seem similar.


Divide and Conquer

Divide and Conquer works by dividing the problem into sub-problems, conquering each sub-problem recursively, and combining these solutions.
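
A classic instance of this pattern is merge sort, which also comes up later in this thread. As a minimal illustrative sketch (in Python, not part of the original answer): split the list in half, sort each half recursively, then combine the two sorted halves by merging.

def merge_sort(items):
    # divide: split the list in half
    if len(items) <= 1:
        return items                  # base case: already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # conquer the left half
    right = merge_sort(items[mid:])   # conquer the right half
    return merge(left, right)         # combine the two solutions

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 3]))   # [1, 2, 3, 5, 8]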

Dynamic Programming

Dynamic programming is a technique for solving problems with overlapping sub-problems. Each sub-problem is solved only once, and the result of each sub-problem is stored in a table (generally implemented as an array or hash table) for future reference. These sub-solutions may be used to obtain the original solution, and the technique of storing the sub-problem solutions is known as memoization.

You can think of it as DP = recursion + re-use.

The classic example for understanding the difference is to look at both approaches to computing the nth Fibonacci number. Check the material from MIT.


(Image: divide and conquer approach)

(Image: dynamic programming approach)


Other differences between divide and conquer and dynamic programming are:

Divide and Conquer:

  1. It does more work on the sub-problems and hence consumes more time.
  2. In divide and conquer, the sub-problems are independent of each other.

Dynamic Programming:

  1. It solves the sub-problems only once and then stores the results in a table.
  2. In dynamic programming, the sub-problems are not independent.

Sometimes, when programming recursively, you call the function with the same parameters multiple times, which is unnecessary.

The famous Fibonacci number example:

           index: 1,2,3,4,5,6...
Fibonacci number: 1,1,2,3,5,8...

function F(n) {
    if (n < 3)
        return 1
    else
        return F(n-1) + F(n-2)
}

Let's run F(5):

F(5) = F(4) + F(3)
     = {F(3)+F(2)} + {F(2)+F(1)}
     = {[F(2)+F(1)]+1} + {1+1}
     = 1+1+1+1+1

So we called F(4) 1 time, F(3) 2 times, F(2) 3 times, and F(1) 2 times.
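
If you want to check these counts yourself, a small Python sketch (illustrative, not from the original answer) can count how often the naive recursion is invoked for each argument:

from collections import Counter

calls = Counter()

def F(n):
    calls[n] += 1                 # record every invocation of F with this argument
    if n < 3:
        return 1
    return F(n - 1) + F(n - 2)

F(5)
print(sorted(calls.items()))      # [(1, 2), (2, 3), (3, 2), (4, 1), (5, 1)]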

Dynamic programming approach: when you call a function with the same parameters more than once, save the result in a variable so you can access it directly the next time. The iterative way:

function F(n) {
    if (n == 1 || n == 2)
        return 1
    else
        f1 = 1, f2 = 1
        for i = 3 to n
            f = f1 + f2
            f1 = f2
            f2 = f
        return f
}

Let's call F(5) again:

fibo1 = 1
fibo2 = 1 
fibo3 = (fibo1 + fibo2) = 1 + 1 = 2
fibo4 = (fibo2 + fibo3) = 1 + 2 = 3
fibo5 = (fibo3 + fibo4) = 2 + 3 = 5

As you can see, whenever multiple calls would be needed, you access the corresponding variable to get the value instead of recalculating it.

By the way, dynamic programming doesn't mean converting recursive code into iterative code. You can also save the sub-results in a variable if you want to keep the code recursive. In this case the technique is called memoization. For our example it looks like this:

// declare and initialize a dictionary
var dict = new Dictionary<int,int>();
for i=1 to n
    dict[i] = -1

function F(n) {
    if (n < 3)
        return 1
    else
    {
        if (dict[n] == -1)
            dict[n] = F(n-1) + F(n-2)

        return dict[n]                
    }
}
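
If you happen to be writing Python rather than the C#-style pseudocode above, the same memoization can be done with an explicit dict or simply with the standard library decorator functools.lru_cache; a minimal sketch:

from functools import lru_cache

@lru_cache(maxsize=None)   # unbounded cache: every distinct n is computed only once
def F(n):
    if n < 3:
        return 1
    return F(n - 1) + F(n - 2)

print(F(50))   # 12586269025, with each F(k) evaluated a single time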

So the relationship to Divide and Conquer is that D&C algorithms rely on recursion, and some of them have this "multiple function calls with the same parameters" issue. Search for "matrix chain multiplication" and "longest common subsequence" for examples where DP is needed to improve the running time T(n) of a D&C algorithm.


Dynamic Programming and Divide-and-Conquer Similarities

As I see it for now, I can say that dynamic programming is an extension of the divide and conquer paradigm.

I would not treat them as something completely different, because they both work by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.

So why do we still have different paradigm names, and why did I call dynamic programming an extension? It is because the dynamic programming approach may be applied to a problem only if the problem has certain restrictions or prerequisites. And after that, dynamic programming extends the divide and conquer approach with a memoization or tabulation technique.

Let’s go step by step…

Dynamic Programming Prerequisites/Restrictions

As we’ve just discovered, there are two key attributes that a divide and conquer problem must have in order for dynamic programming to be applicable:

  • Optimal substructure — optimal solution can be constructed from optimal solutions of its subproblems

  • Overlapping sub-problems — problem can be broken down into subproblems which are reused several times or a recursive algorithm for the problem solves the same subproblem over and over rather than always generating new subproblems

Once these two conditions are met, we can say that this divide and conquer problem may be solved using a dynamic programming approach.

Dynamic Programming Extension for Divide and Conquer

The dynamic programming approach extends the divide and conquer approach with two techniques (memoization and tabulation) that both have the purpose of storing and re-using sub-problem solutions, which may drastically improve performance. For example, a naive recursive implementation of the Fibonacci function has a time complexity of O(2^n), while the DP solution does the same with only O(n) time.
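
A rough way to see those bounds: the naive recursion satisfies T(n) = T(n-1) + T(n-2) + O(1), which is at most 2·T(n-1) + O(1) and therefore O(2^n) (the tight bound is on the order of φ^n, with φ ≈ 1.618), while with memoization or tabulation each of the n distinct sub-problems is computed exactly once with O(1) extra work, giving O(n).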

Memoization (top-down cache filling) refers to the technique of caching and reusing previously computed results. The memoized fib function would thus look like this:

memFib(n) {
    if (mem[n] is undefined) {
        if (n < 2) result = n
        else result = memFib(n-2) + memFib(n-1)

        mem[n] = result
    }
    return mem[n]
}

Tabulation (bottom-up cache filling) is similar but focuses on filling the entries of the cache. Computing the values in the cache is easiest done iteratively. The tabulation version of fib would look like this:

tabFib(n) {
    mem[0] = 0
    mem[1] = 1
    for i = 2...n
        mem[i] = mem[i-2] + mem[i-1]
    return mem[n]
}

You may read more about memoization and tabulation comparison here.

The main idea you should grasp here is that because our divide and conquer problem has overlapping sub-problems, caching of the sub-problem solutions becomes possible, and thus memoization/tabulation steps onto the scene.

So What Is the Difference Between DP and DC After All?

Since we’re now familiar with DP prerequisites and its methodologies we’re ready to put all that was mentioned above into one picture.

(Image: dynamic programming vs divide and conquer)

If you want to see code examples, you may take a look at a more detailed explanation here, where you'll find two algorithm examples, Binary Search and Minimum Edit Distance (Levenshtein Distance), that illustrate the difference between DP and DC.
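
To give a taste of that second example, here is a compact and purely illustrative Python sketch of the tabulated Levenshtein distance, where dist[i][j] holds the edit distance between the first i characters of a and the first j characters of b, and each cell is filled from already-computed neighbouring cells:

def levenshtein(a, b):
    # dist[i][j] = edit distance between a[:i] and b[:j]
    dist = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dist[i][0] = i                     # delete all i characters
    for j in range(len(b) + 1):
        dist[0][j] = j                     # insert all j characters
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution or match
    return dist[len(a)][len(b)]

print(levenshtein("kitten", "sitting"))   # 3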


I assume you have already read Wikipedia and other academic resources on this, so I won't recycle any of that information. I must also caveat that I am not a computer science expert by any means, but I'll share my two cents on my understanding of these topics...

Dynamic Programming

Breaks the problem down into discrete subproblems. The recursive algorithm for the Fibonacci sequence is an example of Dynamic Programming, because it solves for fib(n) by first solving for fib(n-1). In order to solve the original problem, it solves a different problem.

Divide and Conquer

These algorithms typically solve similar pieces of the problem, and then put them together at the end. Mergesort is a classic example of divide and conquer. The main difference between this example and the Fibonacci example is that in a mergesort, the division can (theoretically) be arbitrary, and no matter how you slice it up, you are still merging and sorting. The same amount of work has to be done to mergesort the array, no matter how you divide it up. Solving for fib(52) requires more steps than solving for fib(2).


I think of Divide & Conquer as a recursive approach and Dynamic Programming as table filling.

For example, Merge Sort is a Divide & Conquer algorithm, as in each step, you split the array into two halves, recursively call Merge Sort upon the two halves and then merge them.

Knapsack, on the other hand, is a Dynamic Programming algorithm, because you fill in a table representing optimal solutions to sub-problems of the overall knapsack. Each entry in the table corresponds to the maximum value you can carry in a bag of a given weight using items 1 through j.
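
To make that table concrete, here is a minimal 0/1 knapsack tabulation sketch in Python (the values, weights and capacity are made-up illustration data, not from the original answer); table[j][w] is the best value achievable using the first j items with capacity w:

def knapsack(values, weights, capacity):
    n = len(values)
    # table[j][w] = maximum value using the first j items with weight limit w
    table = [[0] * (capacity + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        for w in range(capacity + 1):
            table[j][w] = table[j - 1][w]                # option 1: skip item j
            if weights[j - 1] <= w:                      # option 2: take item j if it fits
                table[j][w] = max(table[j][w],
                                  table[j - 1][w - weights[j - 1]] + values[j - 1])
    return table[n][capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))   # 220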

Reference URL: https://stackoverflow.com/questions/13538459/difference-between-divide-and-conquer-algo-and-dynamic-programming
