http://edman.github.io/dynamic-programming-with-trees
https://gist.github.com/anonymous/d609fa7e1d692c48d755a7790b1795bf
Trees in a pointer-based implementation tend not to work well with traditional dynamic-programming memoization based on arrays. How can we make this less complex? Do we absolutely need arrays at all? With some thought I quickly realized that the algorithm scheme shown in the previous section could be improved by using the tree structure itself as the storage for the memoization matrix. This way, access to the memoization matrix happens implicitly, as opposed to through an explicit array.
In this implementation there are neither arrays to allocate nor a mapping from nodes to integers to build. By storing the memoization as a payload alongside each tree node, the actual computation related to the problem's solution can begin right away.
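A minimal sketch of this idea (my own, not the post's actual code): each node carries its own memo fields, and the recursion reads and writes them directly instead of indexing an external array. The concrete DP shown here, the "no two adjacent nodes may be selected" problem discussed later in these notes, and the names Node and solve are illustrative assumptions.

```cpp
// Memoization stored as a payload inside pointer-based tree nodes.
#include <algorithm>
#include <vector>

struct Node {
    int value;
    std::vector<Node*> children;
    long long dp[2] = {-1, -1};   // memo payload: dp[taken ? 1 : 0], -1 = not computed
};

// Best sum obtainable in u's subtree when u is (not) taken, no two adjacent taken.
long long solve(Node* u, bool taken) {
    if (u->dp[taken] != -1) return u->dp[taken];          // implicit memo lookup
    long long best = taken ? u->value : 0;
    for (Node* c : u->children) {
        if (taken) best += solve(c, false);               // adjacent child must be skipped
        else       best += std::max(solve(c, false), solve(c, true));
    }
    return u->dp[taken] = best;                           // write the memo into the node
}
// Answer: std::max(solve(root, false), solve(root, true)).
```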
https://codeforces.com/blog/entry/20935
https://discuss.codechef.com/questions/113081/dp-on-trees-lecture-series-tutorial
https://www.iarcs.org.in/inoi/online-study-material/topics/dp-trees.php
http://edman.github.io/dynamic-programming-with-trees
https://www.geeksforgeeks.org/dynamic-programming-trees-set-1/
https://www.geeksforgeeks.org/dynamic-programming-trees-set-2/
https://blog.csdn.net/eagle_or_snail/article/details/50987044
https://www.commonlounge.com/discussion/8573ee40c4cb4673824c867715a5bc7b
https://www.geeksforgeeks.org/dynamic-programming-trees-set-1/
Given a tree with N nodes and N-1 edges, calculate the maximum sum of the node values from root to any of the leaves without re-visiting any node.
The problem can be solved using dynamic programming on trees. Start memoizing from the leaves: for every sub-tree, add the maximum over its children's results to the value at its root. At the last step only the root and the sub-trees under it remain, and adding the value at the root to the maximum over its sub-trees gives the maximum sum of node values from the root to any of the leaves.
Let DP_i be the maximum sum of node values on a downward path from node i to any leaf of its subtree. Traverse the tree with a DFS; for each node, take the maximum of DP over its children and add the node's own value. At the end, DP_1 (node 1 being the root) holds the maximum sum of node values from the root to any of the leaves without re-visiting any node.
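A short sketch of that DFS (mine, not GeeksforGeeks' code), assuming a 1-indexed adjacency list adj, node values val, and the tree rooted at node 1; leaves are handled explicitly so negative values would also be safe:

```cpp
#include <algorithm>
#include <climits>
#include <vector>
using namespace std;

vector<vector<int>> adj;     // adj[u] = neighbours of u (1-indexed)
vector<long long> val, dp;   // val[i] = value at node i, dp[i] as defined above

void dfs(int u, int parent) {
    long long bestChild = LLONG_MIN;
    for (int v : adj[u]) {
        if (v == parent) continue;
        dfs(v, u);
        bestChild = max(bestChild, dp[v]);   // best downward sum among children
    }
    // Leaf: the path ends at u; otherwise extend the best child path upward.
    dp[u] = (bestChild == LLONG_MIN) ? val[u] : val[u] + bestChild;
}
// After sizing the vectors and calling dfs(1, 0), dp[1] is the answer.
```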
https://blog.csdn.net/eagle_or_snail/article/details/50987044
Tree DP is dynamic programming built on the tree data structure. The states are usually not hard to come up with; the transitions are maintained via DFS, either from the root down to the leaves or from the leaves up to the root.
hdu 4514: find the diameter of a tree
hdu 4126 Genghis Khan the Conqueror: MST + tree DP, a classic
hdu 4756 Install Air Conditioning: MST + tree DP, same idea as above
hdu 3660 Alice and Bob's Trip: somewhat like adversarial (minimax) search
CF 337D Book of Evil: uses the tree-diameter idea; requires some insight
Problems of the following kind often appear in informatics olympiads: given a tree, complete a specified operation at minimum cost (or with maximum profit). Many problems extend and strengthen this combination of trees and optimality, which turns them into thorny problems.
These problems usually have large input sizes, so enumeration is too slow and greedy algorithms do not yield the optimal solution; dynamic programming is therefore required.
As with general dynamic programming problems, solving them involves three steps.
1. Define the state
Almost all problems need to record information about the subtree rooted at a given node, but whether to add extra dimensions to the state, how many, and which ones depends on the specific problem.
2. Work out the state transitions
The transitions vary a great deal and must be analysed problem by problem; this is the focus of the worked examples in this article.
3. Implement the algorithm
Thanks to the tree structure, memoized search (recursion with memoization) is straightforward to implement.
Since the model is built on a tree, this is tree dynamic programming (tree DP).
As the name suggests, tree DP is dynamic programming on the "tree" data structure.
Most DP problems are linear. Linear DP has two directions, forward and backward, and correspondingly two methods, forward recurrence and backward recurrence. Tree DP is built on a tree and likewise has two directions:
Root -> leaves: the root passes useful information down to its child nodes, after which the optimal solution is derived.
Leaves -> root: the root's child nodes pass useful information up to the root, after which the root derives the optimal solution. Problems of this kind are the most common and the approach is widely applicable; a leaf-to-root sketch follows below.
Of course, there is also a class of problems that needs both traversal directions at once; the first problem in this article belongs to this class.
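As a concrete illustration of the leaves -> root direction, here is a sketch (not code from the original article) in the spirit of the tree-diameter problem listed above (hdu 4514), assuming unweighted edges and hypothetical names adj, down, diameter: each node receives the longest downward path from each child and combines the two best of them.

```cpp
#include <algorithm>
#include <vector>
using namespace std;

vector<vector<int>> adj;   // adjacency list of the tree
vector<int> down;          // down[u] = longest downward path starting at u, in edges
int diameter = 0;

void dfs(int u, int parent) {
    int best1 = 0, best2 = 0;                  // two largest values of down[child] + 1
    for (int v : adj[u]) {
        if (v == parent) continue;
        dfs(v, u);                             // each child passes its result up to u
        int cand = down[v] + 1;
        if (cand > best1) { best2 = best1; best1 = cand; }
        else if (cand > best2) best2 = cand;
    }
    down[u] = best1;                           // information passed up to u's parent
    diameter = max(diameter, best1 + best2);   // best path bending at u
}
// After sizing the vectors and calling dfs(root, 0), diameter holds the answer.
```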
The phone rings at Chris's home, and the anxious voice of Chris's teacher comes through: "Hello, is this Chris's family? Your child has skipped class again. Doesn't he want to take the exam?" At the mention of an exam, Chris's parents become frantic and decide to find Chris in as little time as possible. They tell the teacher: "From past experience, Chris is certainly hiding at his friend Shermie's or Yashiro's home playing The King of Fighters. We will set out from home to look for him right now, and as soon as we find him we will call you." With that, they slam down the phone.
The city Chris lives in consists of N residential points connected by a number of bidirectional streets, and traversing street x takes T_x minutes. It is guaranteed that between any two points there is exactly one path. Chris's home is at point C, and Shermie and Yashiro live at points A and B respectively. Both the teacher and Chris's parents have a map of the city, but only the parents know the exact locations of A, B, and C.
To find Chris as quickly as possible, his parents follow two rules:
If A is closer to C than B is, the parents first go to Shermie's home to look for Chris, and if he is not there they then go to Yashiro's home; and vice versa.
The parents always walk along the unique path between two points.
Clearly, the teacher knows that the parents will follow these two rules while searching, but since he does not know the exact locations of A, B, and C, he wants you to tell him: in the worst case, how long will it take the parents to find Chris?
https://www.commonlounge.com/discussion/8573ee40c4cb4673824c867715a5bc7b
Consider a tree in which every node has a positive value written on it. The task is to select several nodes from the tree (you may also select none), so that the sum of values of the nodes you select is maximized, and no two adjacent nodes are selected. The desired solution has complexity O(n).
Dynamic Programming (DP) is a technique for solving problems by breaking them down into overlapping sub-problems that exhibit optimal substructure. We all know various problems that use DP, like subset sum, knapsack, coin change, etc. We can also use DP on trees to solve some specific problems.
We define functions over the nodes of the tree, which we calculate recursively based on the children of a node. One of the states in our DP is usually a node i, denoting that we are solving for the subtree rooted at node i.
A common idea in DP on Trees is to have the sub-problems represent answers for subtrees (+ some state, we'll see what this means in a bit). What can be a base case? Surely, it is the smallest subtree, a node. Let us think about what can be the answer for a node.
But there is another key point here: note that all the logic for a subtree is derived from whether its parent node is taken or not. None of the other ancestors play a part in the calculation.
Inspired by this, let us define our DP "state" to be not just a subtree, but a subtree + a boolean representing whether its parent is taken or not.
Now let's see what happens (we write dp[x][true/false] for the answer for node x, where true/false indicates whether its parent is taken or not). Let's start with node d again.
dp[d][false] = 1 as you can take d (parent not taken).
dp[d][true] = 0 as you cannot take d.
The answers for e and f are computed similarly and filled into this still-incomplete "DP tree".
So, in conclusion, this is our DP:
dp[leaf][ptaken] = 0 if ptaken else v_leaf
dp[node][true] = sum of dp[c][false] over all children c of node
dp[node][false] = max(v_node + sum of dp[c][true] over all children c of node, sum of dp[c][false] over all children c of node)
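Translated directly into code, the recurrence looks roughly like this (a sketch, not the original post's implementation; children, value_ and dp are assumed containers for a rooted tree):

```cpp
#include <algorithm>
#include <array>
#include <vector>
using namespace std;

vector<vector<int>> children;      // children lists of the rooted tree
vector<long long> value_;          // value written on each node
vector<array<long long, 2>> dp;    // dp[x][parent taken ? 1 : 0]

void dfs(int x) {
    long long takeX = value_[x];   // take x: every child sees "parent taken"
    long long skipX = 0;           // skip x: every child sees "parent not taken"
    for (int c : children[x]) {
        dfs(c);
        takeX += dp[c][1];
        skipX += dp[c][0];
    }
    dp[x][1] = skipX;              // parent taken -> x cannot be taken
    dp[x][0] = max(takeX, skipX);  // parent free  -> take x or skip x
}
// Answer for the whole tree: dp[root][0] after dfs(root).
```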
Simple Extension
Now let us think about how this problem could be made harder. A simple extension would be to disallow selecting not only a node's parent but also its parent's parent.
Consider a solution to such a problem. Your DP state must carry information about the parent and the parent of the parent. "Information" in this case means whether the node is taken or not. Also note that this information is sufficient: if you know both of them, nothing in the tree outside of this sub-tree can influence the answer for the sub-tree. In that case the state would look like dp[v][parent taken][parent-of-parent taken]. The complexity would be proportional to the size of the dp, that is 2*2*n, or O(n).
In general, suppose no ancestor within a distance of k can be taken. Then the state would be dp[v][parent taken][parent-of-parent taken]...[k-th ancestor taken]. Such an algorithm will be of order O(n * 2^k).
But this can be improved. Consider the state dp[v][distance to the closest taken ancestor, or 0 if no ancestor within distance k is taken]. This takes time O(n * k); a sketch follows below.
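A sketch of what that O(n * k) DP could look like (my own completion, not code from the post), assuming k >= 1 and that the constraint is "a taken node may have no taken ancestor within distance k":

```cpp
#include <algorithm>
#include <vector>
using namespace std;

int K;                           // the distance bound k (assumed >= 1)
vector<vector<int>> children;    // children lists of the rooted tree
vector<long long> value_;        // value written on each node
vector<vector<long long>> dp;    // dp[v][d], d = 0..K as described above

void dfs(int v) {
    for (int c : children[v]) dfs(c);
    dp[v].assign(K + 1, 0);
    for (int d = 0; d <= K; ++d) {
        // Option 1: skip v. A child's closest taken ancestor is then at
        // distance d + 1 if that is still <= K, otherwise irrelevant (0).
        int childState = (d >= 1 && d < K) ? d + 1 : 0;
        long long skipV = 0;
        for (int c : children[v]) skipV += dp[c][childState];
        dp[v][d] = skipV;
        // Option 2: take v, legal only when no taken ancestor is within K.
        if (d == 0) {
            long long takeV = value_[v];
            for (int c : children[v]) takeV += dp[c][1];   // v is at distance 1
            dp[v][d] = max(dp[v][d], takeV);
        }
    }
}
// Answer: dp[root][0] after sizing the containers and calling dfs(root).
```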
Better Extension
Another extension: no two nodes within distance at most 2 of each other can be taken. This is significantly more interesting, for the following reason:
This cannot be captured by any state that looks like dp[v][any-information-about-ancestors]: two children of the same node are at distance 2 from each other, so the constraint now also acts between siblings, not just along ancestor chains. Take a few minutes to see if you can come up with a solution.
Let's consider a part of a tree: a node a whose children are b, c, d, and e.
Let our DP state be dp[v][s], where s = 1 means v's parent is taken, s = 2 means v's parent of parent is taken, and s = 0 means neither of them is taken. Suppose we've calculated all of these for b, c, d, e. We want to calculate dp[a][0], dp[a][1], dp[a][2].
If we're calculating dp[a][1], then a's parent is taken, so we cannot take a, b, c, d, or e (a's parent is at distance 1 from a and at distance 2 from each child). So our answer for this case is: dp[a][1] = dp[b][2] + dp[c][2] + dp[d][2] + dp[e][2].
If we're calculating dp[a][2], a's parent of parent is taken. So we can take b, c, d, e, but cannot take a. Note, however, that we can take at most one of b, c, d, e, since any two of them are at distance 2 from each other through a. So take the best among them, i.e. dp[a][2] = max(dp[b][0] + dp[c][2] + dp[d][2] + dp[e][2], ...), and symmetrically with c, d, e in place of b.
There is an interesting point here. Why did we use dp[c][2]? After all, c's parent of parent is not actually taken. Well, if b is taken, its effect on the calculation for c is the same as if c's parent of parent were taken: c itself cannot be taken (it is at distance 2 from b), while c's children, at distance 3 from b, are unaffected.
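The excerpt stops here, so the following is a hedged completion of the DP as a sketch: dp[v][1] and dp[v][2] implement exactly the transitions above, while the dp[v][0] transition (either take v, which blocks all children, or skip v, which leaves the same situation as dp[v][2]) is my own reconstruction and an assumption, not the post's text.

```cpp
#include <algorithm>
#include <array>
#include <vector>
using namespace std;

vector<vector<int>> children;     // children lists of the rooted tree
vector<long long> value_;         // value written on each node
vector<array<long long, 3>> dp;   // dp[v][0 / 1 / 2] as defined above

void dfs(int v) {
    for (int c : children[v]) dfs(c);

    // s = 1: v's parent is taken, so v and every child of v are blocked.
    long long allBlocked = 0;
    for (int c : children[v]) allBlocked += dp[c][2];
    dp[v][1] = allBlocked;

    // s = 2: v's grandparent is taken; v is blocked, at most one child may be taken.
    long long bestOneFree = allBlocked;   // also covers "no child taken"
    for (int c : children[v])
        bestOneFree = max(bestOneFree, allBlocked - dp[c][2] + dp[c][0]);
    dp[v][2] = bestOneFree;

    // s = 0 (assumed transition): take v, so the children see "parent taken",
    // or skip v, which is the same situation as s = 2.
    long long takeV = value_[v];
    for (int c : children[v]) takeV += dp[c][1];
    dp[v][0] = max(takeV, bestOneFree);
}
// Answer: dp[root][0] after dfs(root).
```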