Editorial for CEOI '19 P2 - Dynamic Diameter
Submitting an official solution before solving the problem yourself is a bannable offence.
Subtask 1
In the first subtask, the limits are small enough to do literally what is written in the problem statement – after each update, we run a DFS from each vertex to compute all pairwise distances and report the largest one.
Complexity $O(n^2 q)$.
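A minimal sketch of this brute force (the adjacency structure `adj`, the weight array `w` and the function names are illustrative, not taken from an official solution):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Hypothetical global state: adj[v] holds (neighbour, edge index),
// w[e] holds the current weight of edge e.
vector<vector<pair<int,int>>> adj;
vector<long long> w;

// DFS computing the distance from the initial vertex to every vertex.
void dfs(int v, int parent, long long dcur, vector<long long>& dist) {
    dist[v] = dcur;
    for (auto [to, e] : adj[v])
        if (to != parent) dfs(to, v, dcur + w[e], dist);
}

// After each update, recompute all pairwise distances: O(n^2) per call.
long long diameter(int n) {
    long long best = 0;
    vector<long long> dist(n + 1);
    for (int v = 1; v <= n; v++) {
        dfs(v, 0, 0, dist);
        for (int u = 1; u <= n; u++) best = max(best, dist[u]);
    }
    return best;
}
```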
Subtask 2
In the second subtask, we need to compute the diameter in $O(n)$ per update. There are two ways:
- The first one is a somewhat standard trick. Run a DFS from vertex $1$ to find the most distant vertex – let's call it $u$. The diameter is then the distance to the most distant vertex from $u$, which can again be found using a DFS (a sketch appears below this list).
- The second approach is using dynamic programming on a tree, which will be easier to generalise. Root the tree in vertex $1$. For each vertex $x$, compute $d(x)$ – the maximum distance from $x$ to a leaf in the subtree of $x$. This can be done in $O(n)$ recursively via $d(x) = \max_c \big(d(c) + w(x, c)\big)$ over the children $c$ of $x$. Then the answer is just the maximum (through all vertices $x$) of $d_1(x) + d_2(x)$, where $d_1(x)$ and $d_2(x)$ denote the two largest values of $d(c) + w(x, c)$ among the children $c$ of $x$ (taken as $0$ when there are fewer than two children).
Complexity $O(nq)$.
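A sketch of the first approach (the adjacency representation is an assumption, as before):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Assumed representation: g[v] = list of (neighbour, weight).
using Graph = vector<vector<pair<int,long long>>>;

// Returns (farthest vertex, its distance) from `start`; iterative DFS,
// so a path-shaped tree cannot overflow the call stack.
pair<int,long long> farthest(const Graph& g, int start) {
    int n = g.size();
    vector<long long> dist(n, -1);
    vector<int> stk = {start};
    dist[start] = 0;
    pair<int,long long> best = {start, 0};
    while (!stk.empty()) {
        int v = stk.back(); stk.pop_back();
        if (dist[v] > best.second) best = {v, dist[v]};
        for (auto [to, w] : g[v])
            if (dist[to] == -1) { dist[to] = dist[v] + w; stk.push_back(to); }
    }
    return best;
}

// Two-DFS trick: the vertex farthest from any vertex is an endpoint
// of some diameter.
long long diameter(const Graph& g) {
    int u = farthest(g, 0).first;
    return farthest(g, u).second;
}
```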
Subtask 3
When the graph is a star, the diameter consists of only two edges, both of which are incident to vertex $1$. We can thus simply maintain the weight of each edge in a multiset and output the sum of the two largest values as the answer. Be careful about $n = 2$, where the tree has a single edge and the diameter is just its weight.
Complexity $O((n + q) \log n)$.
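A sketch of this bookkeeping. The input handling here is illustrative only – in the actual problem the edges come with endpoints and the queries are XOR-encoded with the previous answer (forced online), which is omitted here:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, q;
    cin >> n >> q;
    vector<long long> w(n - 1);          // weights of the star's edges
    multiset<long long> weights;
    for (auto& x : w) { cin >> x; weights.insert(x); }
    while (q--) {
        int e; long long nw;
        cin >> e >> nw;                  // set weight of (0-indexed) edge e
        weights.erase(weights.find(w[e]));
        weights.insert(w[e] = nw);
        long long ans = *weights.rbegin();           // largest weight
        if (n > 2) ans += *next(weights.rbegin());   // plus second largest
        cout << ans << '\n';             // n = 2: single edge, no second term
    }
}
```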
Subtask 4
In the fourth subtask, we modify the DP from the second subtask slightly by introducing a new value $t(x)$ – the diameter of the subtree rooted at $x$. Denoting the children of $x$ by $l$ and $r$ (each vertex has at most two), we can see that
$$t(x) = \max\big(t(l),\ t(r),\ d(l) + w(x, l) + d(r) + w(x, r)\big),$$
where terms involving a missing child are omitted, and the diameter equals $t(1)$.
Upon changing the weight of the edge between $b$ and its parent, only the values $d(x)$ and $t(x)$ where $x$ is an ancestor of $b$ need to be updated. Each such value can be recomputed in constant time (the number of immediate children of each vertex is at most $2$), and as the tree is balanced, its depth is $O(\log n)$.
Overall, this needs $O(n + q \log n)$ time.
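A sketch of this DP, assuming the children of vertex $x$ are $2x$ and $2x + 1$ (the balanced-tree numbering of this subtask) and that `w[x]` holds the weight of the edge between $x$ and its parent; all names are illustrative:

```cpp
#include <bits/stdc++.h>
using namespace std;

int n;                          // number of vertices, 1-indexed
vector<long long> w, d, t;      // w[x]: edge above x; d, t as in the text

// Contribution of child c to paths passing through its parent (0 if absent).
long long up(int c) { return c <= n ? d[c] + w[c] : 0; }

// Recompute d(x) and t(x) from already-correct children values.
void pull(int x) {
    int l = 2 * x, r = 2 * x + 1;
    d[x] = max(up(l), up(r));
    t[x] = up(l) + up(r);                 // longest path bending at x
    if (l <= n) t[x] = max(t[x], t[l]);
    if (r <= n) t[x] = max(t[x], t[r]);
}
// Initialisation: size the arrays to n + 1 and call pull(x) for x = n..1.

// Change the weight of the edge above vertex b, then fix all ancestors.
long long update(int b, long long nw) {
    w[b] = nw;
    for (int x = b / 2; x >= 1; x /= 2) pull(x);
    return t[1];                          // the current diameter
}
```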
Subtask 5
Assume that the diameter always goes through vertex $1$. This means that the diameter is found by picking two children of vertex $1$ – $c_1$ and $c_2$ – such that $D(c_1) + D(c_2)$ is maximised, where $D(c)$ denotes the maximum distance from vertex $1$ to a vertex in the subtree of $c$.
To calculate each value of $D(c)$, we can use a segment tree that maintains the distance from each vertex to the root. When the vertices are ordered using DFS, each subtree becomes a contiguous range, so an edge-weight update becomes an addition on a range and $D(c)$ becomes a range maximum; any segment tree with lazy propagation is sufficient.
To calculate the maximum of the sum above, we use the approach outlined in subtask 3 – we maintain a multiset of the values $D(c)$ and pick the two largest.
In subtask 5, the constraints guarantee that this assumption holds, so we can simply root the tree at vertex $1$. This needs $O((n + q) \log n)$ time.
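A sketch of such a lazy segment tree (range add, range maximum, 0-indexed positions in DFS order); the surrounding bookkeeping is assumed to follow the text above:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Lazy segment tree: add a value on a range, query the maximum on a range.
struct SegTree {
    int n;
    vector<long long> mx, lz;
    SegTree(int n) : n(n), mx(4 * n), lz(4 * n) {}
    void apply(int i, long long v) { mx[i] += v; lz[i] += v; }
    void push(int i) { apply(2*i, lz[i]); apply(2*i+1, lz[i]); lz[i] = 0; }
    void add(int i, int lo, int hi, int l, int r, long long v) {
        if (r < lo || hi < l) return;
        if (l <= lo && hi <= r) { apply(i, v); return; }
        push(i);
        int mid = (lo + hi) / 2;
        add(2*i, lo, mid, l, r, v);
        add(2*i+1, mid+1, hi, l, r, v);
        mx[i] = max(mx[2*i], mx[2*i+1]);
    }
    long long query(int i, int lo, int hi, int l, int r) {
        if (r < lo || hi < l) return LLONG_MIN;
        if (l <= lo && hi <= r) return mx[i];
        push(i);
        int mid = (lo + hi) / 2;
        return max(query(2*i, lo, mid, l, r),
                   query(2*i+1, mid+1, hi, l, r));
    }
    // add v to every depth in the subtree interval [l, r]
    void add(int l, int r, long long v) { add(1, 0, n - 1, l, r, v); }
    // maximum depth in the subtree interval [l, r]
    long long query(int l, int r) { return query(1, 0, n - 1, l, r); }
};
```

With preorder intervals `tin`/`tout`, an update of the edge above vertex $b$ by $\Delta$ is `add(tin[b], tout[b], Δ)`, and $D(c)$ is `query(tin[c], tout[c])`; when such a value changes, the old value is erased from the multiset and the new one inserted.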
Subtask 6
In subtask 6, we cannot pick any single vertex through which all the diameters pass. Nevertheless, we can still pick a vertex $r$ and find the longest path that goes through $r$ to get a lower bound on the answer. Then, we remove $r$ from the tree, leaving us with some number of smaller trees, and apply this approach to them recursively.
Performed naively, this leads to quadratic performance – for example, consider a path on $n$ vertices where we always select a leaf as the root. We need to limit the depth of this recursion in some way.
We do this by always picking a centroid as the root. Recall that a centroid is a vertex which, when removed, splits the tree into a number of subtrees each having at most $\lfloor s/2 \rfloor$ vertices, where $s$ is the size of the original tree. Every tree has $1$ or $2$ centroids, and they can be found in linear time using a simple DP on subtree sizes.
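A sketch of that DP for one component (illustrative names; `removed` marks vertices already deleted during the recursion):

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> adj;   // an unweighted view of the tree suffices here
vector<bool> removed;      // vertices already used as centroids
vector<int> sz;

int calc_size(int v, int p) {
    sz[v] = 1;
    for (int to : adj[v])
        if (to != p && !removed[to]) sz[v] += calc_size(to, v);
    return sz[v];
}

// Walk towards the heavy side while some component would exceed total/2.
int centroid(int v, int p, int total) {
    for (int to : adj[v])
        if (to != p && !removed[to] && 2 * sz[to] > total)
            return centroid(to, v, total);
    return v;
}

int find_centroid(int root) {
    return centroid(root, 0, calc_size(root, 0));
}
```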
If we always pick a centroid, then in $O(\log n)$ iterations we end up with trees of size $1$, for which the answer is $0$; since every vertex appears in $O(\log n)$ of the smaller trees, this decomposition can be built in $O(n \log n)$.
Now, we simply maintain the segment tree from subtask 5 for each centroid. As each edge is contained in $O(\log n)$ of the trees, each update can be performed in $O(\log^2 n)$. The overall complexity is $O(n \log n + q \log^2 n)$.
Subtask 6 – alternative approach
Let's root the tree. Recall the trick from subtask 2: to compute the diameter, we should find the deepest (most distant from the root) vertex of the tree – let's call it $u$ – and then find the maximum distance from $u$. Since we cannot assume anything about $u$, this problem can also be formulated as just finding the farthest vertex from a given vertex.
Instead of centroid decomposition, we can use heavy-light decomposition. For each vertex $u$, there is at most one son connected to $u$ by a heavy edge; denote it $h(u)$. Let $M(u)$ denote the maximum of the depths of all descendants of $u$ which are not descendants of $h(u)$, $u$ inclusive. On each path of the HLD, we should build a segment tree which stores the values $M(u) - 2d(u)$, where $d(u)$ is the depth of $u$. Also, let's measure these depths from the top of the path containing $u$ instead of from the root.
When looking for the maximum distance from vertex $v$, let's move from $v$ to the root and look at all ancestors $a$ of $v$ (including $v$ itself). Take the son $c$ from which we reached $a$ and compute the maximum $m$ of the depths of the descendants of $a$ which aren't descendants of $c$ (if $a = v$, nothing is excluded, because $c$ doesn't exist). Then the maximum of the lengths of all paths from $v$ which have $a$ as the LCA is $\mathrm{dist}(v, a) + m - d(a)$, where the depths are again measured from the top of the path containing $a$. The maximum distance from $v$ is the maximum of all these lengths, and it's easy to extend the process that computes it to also find the vertex at this distance.
Of course, we can't afford to do that by brute force, but HLD helps. If the last edge we traversed in order to reach an ancestor $a$ was a heavy edge, then $m = M(a)$. This isn't the case only when we have just reached a new heavy path; there are $O(\log n)$ heavy paths on any path to the root and therefore just $O(\log n)$ such vertices. When we're moving from $v$ to the root and reach a path $P$ in a vertex $a$, we need to compute $m$ for this vertex directly. Then, we know that we must visit all vertices $u$ above $a$ on this path, and for each of them $\mathrm{dist}(v, u) + M(u) - d(u) = \mathrm{dist}(v, t) + M(u) - 2d(u)$, where $t$ is the top of $P$; so the best of them is a prefix maximum from the segment tree of path $P$, plus $\mathrm{dist}(v, t)$.
How to compute $m$? With preorder numbering of the vertices, each subtree becomes a segment, so we just need a global segment tree that stores depths from the root; $m$ is then the maximum over two segments – the subtree of $a$ with the subtree of $c$ cut out – minus the depth of the top of the current path. This segment tree lets us find the depths of individual vertices as well.
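A sketch of this computation, reusing the `SegTree` from the subtask 5 sketch as the global preorder tree of depths; `tin`, `tout`, `pathTop` and the helper names are illustrative assumptions:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Assumed global state, maintained elsewhere:
vector<int> tin, tout;     // inclusive preorder interval of each subtree
vector<int> pathTop;       // top vertex of each vertex's heavy path
SegTree globalDepth(1);    // range-add/range-max tree over preorder
                           // positions, storing depths from the root

long long depth_of(int v) { return globalDepth.query(tin[v], tin[v]); }

// Maximum depth, measured from the top of a's heavy path, over descendants
// of a (a inclusive) that are not descendants of c; pass c = -1 to exclude
// nothing (the case a == v).
long long max_excluded_depth(int a, int c) {
    long long best = depth_of(a);                        // a itself counts
    if (c == -1)
        best = max(best, globalDepth.query(tin[a], tout[a]));
    else {
        best = max(best, globalDepth.query(tin[a], tin[c] - 1));
        if (tout[c] < tout[a])                           // part after c
            best = max(best, globalDepth.query(tout[c] + 1, tout[a]));
    }
    return best - depth_of(pathTop[a]);                  // shift to path top
}
```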
Now, we just need to handle updates. When we add $\Delta$ to the weight of an edge from $a$ to $b$ ($b$ is a son of $a$), we add $\Delta$ to the depths from the root of all vertices in the subtree of $b$. The values of $M$ can only change for those ancestors of $b$ which are reached by a light edge when walking from $b$ to the root (this includes $a$ itself if the updated edge is light). There are only $O(\log n)$ of them, so we can afford to recompute $M$ for each of them. Recall that we're storing $M(u) - 2d(u)$ in the tree, so we need to also subtract $\Delta$ from all vertices on the heavy path containing the updated edge which are located below that edge ($d(u)$, measured from the path top, changes only for these vertices – this is why we aren't using depths from the root). Therefore, the segment trees built over the HLD paths need to support the operations "update value", "add to segment" and "get maximum of segment".
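All three operations fit into one tree; a sketch extending the `SegTree` from the subtask 5 sketch with a point assignment (pending range additions must be pushed down on the way to the leaf):

```cpp
// Extension of the earlier SegTree: assign an absolute value to one
// position, honouring the pending lazy additions while descending.
struct HldTree : SegTree {
    HldTree(int n) : SegTree(n) {}
    void assign(int i, int lo, int hi, int pos, long long v) {
        if (lo == hi) { mx[i] = v; return; }
        push(i);                      // clear pending adds before descending
        int mid = (lo + hi) / 2;
        if (pos <= mid) assign(2*i, lo, mid, pos, v);
        else            assign(2*i+1, mid+1, hi, pos, v);
        mx[i] = max(mx[2*i], mx[2*i+1]);
    }
    // set the stored value M(u) - 2 d(u) at path position pos
    void assign(int pos, long long v) { assign(1, 0, n - 1, pos, v); }
};
```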
The time complexity of this approach is $O((n + q) \log^2 n)$, since we're handling $O(\log n)$ paths per query and doing a constant number of segment tree operations, each costing $O(\log n)$, per path.