Editorial for DMOPC '18 Contest 3 P6 - Bob and Suffering
Submitting an official solution before solving the problem yourself is a bannable offence.
For the first subtask, brute-force all possible subarrays for each query.
For the second subtask, consider each element. Using two stacks, we can determine the index of the rightmost element to the left of it which is strictly less than it, and the leftmost element to the right of it which is strictly less than it. Then for each query, we only need to loop through each element instead of all subarrays.
For the third subtask, note that the answer to each query is either the suffering of the longest subarray which contains only the greater value, or it is the suffering of the entire subarray. The first value can be computed using a segment tree. The other value is easy to compute.
There are two solutions to the fourth subtask. One simply extends the solution to the third subtask. The other, which we will discuss, sheds light on the full solution.
Let $l_i$ be the index of the rightmost element to the left of $i$ which is strictly less than $a_i$, and $r_i$ be the index of the leftmost element to the right of $i$ which is strictly less than $a_i$. As mentioned in the solution to subtask 2, these values can be computed in $\mathcal{O}(N)$. Also, remember that for a minimum element at index $i$, we only need to consider its subarray from $l_i + 1$ to $r_i - 1$.
Consider the optimal subarray for a query range $[L, R]$. There are three cases: the subarray contains the leftmost element of the range, it contains the rightmost element, or it contains neither. For the first case, note that we only need to consider when $L$, $r_L$, $r_{r_L}$, and so on are the indices of the minimum elements, since the minimum of $[L, x]$ only changes at those indices as $x$ grows. Let $V$ be the number of distinct values. The values along this chain strictly decrease, so the chain has at most $V$ indices; since $V$ is small in this subtask, this can be done quickly. The second case is handled similarly using the $l$ values.
For the third case, the only candidates are the subarrays $[l_i + 1, r_i - 1]$, so we only need to consider indices $i$ with $L \le l_i + 1$ and $r_i - 1 \le R$. We can use a segment tree and handle the queries offline.
Some users have noticed that the subtasks were taken from IOI '18 F. Indeed, the full solution bears some similarities.
For the final subtask, we speed up the solution described for subtask 4. Specifically, the first two cases must be sped up. Note that the $l_i$ and $r_i$ values implicitly build two trees: in one, the parent of $i$ is $l_i$; in the other, it is $r_i$. Also, when stopping at a certain index $j$ (reached by repeatedly following $r$) in the first case, the suffering obtained is $a_j \cdot (r_j - L) = -a_j \cdot L + b$, where $b = a_j \cdot r_j$, a linear function of the query's left endpoint $L$.
So the first two cases become a convex hull problem on a path in the two trees. This can be done by performing heavy-light decomposition and building a segment tree of convex hulls. Sorting the queries by left/right endpoint (depending on the case) can speed up the $\mathcal{O}(\log^3 N)$ per query to $\mathcal{O}(\log^2 N)$ per query (during testing, there was very little change in performance from this optimization).