Editorial for DMOPC '21 Contest 2 P6 - Strange Function


Remember to use this editorial only when stuck, and not to copy-paste code from it. Please be respectful to the problem author and editorialist.
Submitting an official solution before solving the problem yourself is a bannable offence.

Author: zhouzixiang2004

Throughout this editorial, we refer to f(1,N) as just f. We also use 1-indexing for all indices.

Subtask 1

For subtask 1, it suffices to simulate the function f explicitly for each type 1 query and to handle type 2 and 3 queries naively. However, the implementation of f given in the problem statement is \mathcal{O}(N^2), which is not fast enough. There are many ways to optimize f to \mathcal{O}(N); one of them is to observe that the indices i involved in the recursive calls of f(1,N) are exactly the indices where the suffix minimum changes, and to find these indices with e.g. a stack.
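
For concreteness, here is a minimal sketch of one possible implementation of this subtask (input parsing is omitted and the helper names are ours). It uses the right-to-left bubble pass from the second interpretation under subtask 4 as a linear-time form of f, and answers type 2 and type 3 queries by a direct lookup and a linear scan, respectively.

    #include <bits/stdc++.h>
    using namespace std;

    // One call of f(1, N) in O(N): a single right-to-left bubble pass over p[1..N].
    void apply_f(vector<int>& p) {
        int N = (int)p.size() - 1;              // p is 1-indexed
        for (int i = N; i > 1; i--)
            if (p[i - 1] > p[i]) swap(p[i - 1], p[i]);
    }

    // Type 2 query: the current value at index i.
    int query_value(const vector<int>& p, int i) { return p[i]; }

    // Type 3 query: the current index of value v, found by a linear scan.
    int query_position(const vector<int>& p, int v) {
        for (int j = 1; j < (int)p.size(); j++)
            if (p[j] == v) return j;
        return -1; // unreachable for a valid permutation
    }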

Time complexity: \mathcal{O}(NQ)

Subtask 2

For subtask 2, observe that p_1 = 1 after one call to f, and similarly p_i = i for all 1 \le i \le k after k calls to f. Hence p converges to the identity permutation after only N-1 calls to f, so we don't need to continue updating p after this point. To handle type 3 queries in constant time, we also maintain the inverse of p and update this after each of the first N-1 type 1 queries.
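
A minimal sketch of this bookkeeping, assuming apply_f from the subtask 1 sketch above (the State wrapper and its member names are illustrative only):

    #include <bits/stdc++.h>
    using namespace std;

    void apply_f(vector<int>& p); // the O(N) pass from the subtask 1 sketch

    struct State {
        vector<int> p, inv; // 1-indexed permutation and its inverse
        int calls = 0;      // number of calls to f applied so far

        State(vector<int> p0) : p(move(p0)), inv(p.size()) {
            for (int j = 1; j < (int)p.size(); j++) inv[p[j]] = j;
        }
        void type1() {
            int N = (int)p.size() - 1;
            if (calls >= N - 1) return;                 // p has already converged to the identity
            apply_f(p);                                 // O(N)
            for (int j = 1; j <= N; j++) inv[p[j]] = j; // rebuild the inverse in O(N)
            calls++;
        }
        int type2(int i) const { return p[i]; }   // value at index i
        int type3(int v) const { return inv[v]; } // index of value v
    };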

Time complexity: \mathcal{O}(N^2 + Q)

Subtask 3

For subtask 3, we essentially need to efficiently simulate k calls to f in a row.

Consider where the maximum element of p ends up after k calls. Until it has been "pushed" all the way to the right end of the permutation, it is never selected as an index i in a call to f, so it moves one index to the right with each call. Hence if p_j = N initially, then p_{\min(j+k,N)} = N after k calls. As for the other elements, the maximum element does not interfere with their relative order during a call to f, so we can delete the maximum element, find the final permutation as if it didn't exist, and insert it back into that final permutation at the correct position. Repeating this reasoning with the second maximum element, the third maximum element, and so on, we can simulate all of these insertions efficiently with a data structure such as an order statistic tree (policy-based data structures in C++), as sketched below.
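
Here is a hedged sketch of this simulation using GNU policy-based data structures as the order statistic tree; the function name after_k_calls is ours. It computes the whole permutation after k calls in \mathcal{O}(N \log N), after which type 2 and 3 queries are simple lookups.

    #include <bits/stdc++.h>
    #include <ext/pb_ds/assoc_container.hpp>
    #include <ext/pb_ds/tree_policy.hpp>
    using namespace std;
    using namespace __gnu_pbds;

    typedef tree<int, null_type, less<int>, rb_tree_tag,
                 tree_order_statistics_node_update> ordered_set;

    // p is 1-indexed (p[1..N]); returns the permutation after k calls to f(1, N).
    vector<int> after_k_calls(const vector<int>& p, long long k) {
        int N = (int)p.size() - 1;
        vector<int> pos(N + 1), q(N + 1);
        ordered_set remaining;   // original positions of the values not yet placed
        ordered_set free_slots;  // final positions not yet assigned
        for (int j = 1; j <= N; j++) {
            pos[p[j]] = j;
            remaining.insert(j);
            free_slots.insert(j);
        }
        for (int v = N; v >= 1; v--) {
            // Rank of v inside the sub-permutation formed by the values 1..v.
            long long r = (long long)remaining.order_of_key(pos[v]) + 1;
            // The maximum of that sub-permutation moves one step right per call,
            // stopping once it reaches the right end (the sub-permutation has length v).
            long long t = min(r + k, (long long)v);
            // Its final position is the t-th smallest position not taken by a larger value.
            int fin = *free_slots.find_by_order((size_t)(t - 1));
            q[fin] = v;
            remaining.erase(pos[v]);
            free_slots.erase(fin);
        }
        return q;
    }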

Time complexity: \mathcal{O}(N \log N + Q)

Subtask 4

For subtask 4, we need to handle general type 2 queries.

Some observations/ideas that motivate the solution:

It might help to think of the queries as "given i and k, what is p_i after k calls to f?" instead of trying to maintain p after each individual call to f. This suggests handling all the queries offline.

It also might help to find a simple way of thinking about what f does. Two interpretations include:

  • Consider drawing a histogram with N positions in a row and p_i square blocks stacked at position i. Then f is equivalent to pushing every block one unit to the right, with a "wall" just after the rightmost position: blocks stopped by the wall, or by already-stopped blocks in the same row, stay where they are.
  • f is also equivalent to one iteration of a bubble-sort-like loop, sweeping from right to left:
    void f(int l, int r) {
        // one right-to-left pass: swap each adjacent out-of-order pair as we go
        for (int i = r; i > l; i--) {
            if (p[i-1] > p[i]) swap(p[i-1], p[i]);
        }
    }
    

Now for the solution:

For a number m, consider writing a 1 at index i if p_i \ge m and a 0 if p_i < m. Focusing on what f does to this binary "slice" of p, we find that k calls to f are equivalent to moving every 1 by k positions to the right, then "pushing" any 1's that overflow past the right end back into the array, so that the array ends with a suffix consisting entirely of 1's. If we have the set of all 1's in the original p stored in a data structure like a binary indexed tree, then we can check whether index i (1 \le i \le N) is a 1 after k calls with two cases:

  1. i is not in the suffix. We check whether index i-k exists (i.e. i > k) and is a 1.
  2. i is in the suffix. This occurs if there are at least N-i+1 1's between indices \max(i-k,1) and N.

Using this check, we can binary search on the answer m for each query. Since the binary indexed tree for a particular threshold m is built incrementally by sweeping m from high to low, we perform all of these binary searches simultaneously (a parallel binary search), which requires only \mathcal{O}(\log N) sweeps in total.
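
As an illustration, here is a hedged sketch of the per-index check used inside that parallel binary search. The helper ones(l, r) is an assumption on our part: for the current threshold m it should return the number of indices j in [l, r] with p_j \ge m (e.g. a Fenwick tree range query).

    #include <bits/stdc++.h>
    using namespace std;

    // Assumed helper: number of indices j in [l, r] with p_j >= m for the
    // current threshold m (for example, a Fenwick tree prefix-sum query).
    int ones(int l, int r);

    // Is index i a 1 after k calls to f, for the current threshold m?
    bool is_one_after_k(int i, long long k, int N) {
        // Case 2: i lies inside the all-ones suffix.
        int from = (int)max(1LL, (long long)i - k);
        if (ones(from, N) >= N - i + 1) return true;
        // Case 1: the 1 originally at index i - k (if that index exists) moved to i.
        if ((long long)i - k >= 1) {
            int j = (int)(i - k);
            return ones(j, j) == 1;
        }
        return false;
    }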

Remark: The second way of thinking about f means that f is a "layer" of a bubble sorting network. A standard idea regarding sorting networks is the 0-1 principle. This might help motivate the simultaneous binary search approach above.

Time complexity: \mathcal{O}((N+Q) \log^2 N)

Subtask 5

For subtask 5, we need to handle general type 3 queries.

We can process the queries in decreasing order of the queried value i while maintaining a binary indexed tree with a 1 at index j if p_j \ge i and 0 otherwise, similarly to subtask 4. For each query, binary search on this binary indexed tree for the length of the suffix that consists entirely of 1's after k calls; every 1 that ends up before this suffix is simply shifted exactly k positions to the right. Using this and another binary indexed tree storing sums of indices, we can find the sum of all indices j with p_j \ge i after k calls. By creating two events for each query, one at i and one at i+1, we can subtract the two corresponding sums to answer the query.
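
Here is a hedged sketch of the per-threshold computation (the event handling is omitted). The helpers cnt(l, r) and sum_idx(l, r) are assumptions: for the current threshold m they should return the number of indices j in [l, r] with p_j \ge m and the sum of those indices, maintained by two Fenwick trees as m sweeps downwards. A type 3 query for value i is then the value of this function at threshold i minus its value at threshold i+1.

    #include <bits/stdc++.h>
    using namespace std;

    // Assumed helpers over two Fenwick trees for the current threshold m:
    int cnt(int l, int r);            // number of indices j in [l, r] with p_j >= m
    long long sum_idx(int l, int r);  // sum of those indices

    // Sum of all indices j with p_j >= m after k calls to f.
    long long sum_of_indices_after(int N, long long k) {
        // Binary search for the start t of the all-ones suffix (t = N + 1 if it is empty):
        // index t is in the suffix iff [max(t - k, 1), N] contains at least N - t + 1 ones.
        int lo = 1, hi = N + 1;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            int from = (int)max(1LL, (long long)mid - k);
            if (cnt(from, N) >= N - mid + 1) hi = mid;
            else lo = mid + 1;
        }
        long long t = lo;
        // Indices t..N are all ones after k calls.
        long long res = (t + N) * (N - t + 1) / 2;
        // Every other one simply moved from its original index j to j + k.
        long long upto = t - 1 - k;   // ones with original index <= t - 1 - k end up before the suffix
        if (upto >= 1) res += sum_idx(1, (int)upto) + k * (long long)cnt(1, (int)upto);
        return res;
    }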

Time complexity: \mathcal{O}((N+Q) \log^2 N)

Subtask 6

For subtask 6, we can combine the solutions for type 2 and 3 queries described in subtasks 4 and 5.

Time complexity: \mathcal{O}((N+Q) \log^2 N)

Even Faster Solutions

It is actually possible to solve both type 2 and 3 queries in \mathcal{O}((N+Q) \log N) if we analyze what the above algorithms are really calculating.

During the simultaneous binary search for type 2, if i \le k then the answer is simply i as it lies in the converged prefix. Otherwise, case 1 is equivalent to checking if p_{i-k} \ge m and case 2 is equivalent to checking if the (k+1)-th smallest p_j with i-k \le j \le N is at least m. It follows that the answer for a type 2 query (i,k) is equal to \max(p_{i-k},(k+1)\text{-th smallest in } \{p_{i-k}, p_{i-k+1}, \dots, p_N\}) if i > k and i if i \le k. These (k+1)-th smallest queries can be answered offline by sweeping through i-k in decreasing order and maintaining an order statistic tree.
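
A hedged sketch of this sweep, answering type 2 queries offline with a policy-based order statistic tree (the Query struct, the function name, and the offline format are ours):

    #include <bits/stdc++.h>
    #include <ext/pb_ds/assoc_container.hpp>
    #include <ext/pb_ds/tree_policy.hpp>
    using namespace std;
    using namespace __gnu_pbds;

    typedef tree<int, null_type, less<int>, rb_tree_tag,
                 tree_order_statistics_node_update> ordered_set;

    struct Query { int i; long long k; int idx; }; // idx = position of the query in the input

    vector<int> answer_type2(const vector<int>& p, vector<Query> qs) {
        int N = (int)p.size() - 1; // p is 1-indexed
        vector<int> ans(qs.size());
        // Queries with i <= k lie in the converged prefix: the answer is simply i.
        vector<Query> rest;
        for (const Query& q : qs) {
            if (q.i <= q.k) ans[q.idx] = q.i;
            else rest.push_back(q);
        }
        // Process the remaining queries in decreasing order of l = i - k.
        sort(rest.begin(), rest.end(), [](const Query& a, const Query& b) {
            return a.i - a.k > b.i - b.k;
        });
        ordered_set vals; // currently holds p_l, ..., p_N
        int l = N + 1;
        for (const Query& q : rest) {
            long long ql = q.i - q.k;
            while (l - 1 >= ql) { l--; vals.insert(p[l]); }
            // Answer = max(p_{i-k}, (k+1)-th smallest of {p_{i-k}, ..., p_N}).
            int kth = *vals.find_by_order((size_t)q.k); // 0-indexed, so this is the (k+1)-th smallest
            ans[q.idx] = max(p[(int)ql], kth);
        }
        return ans;
    }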

For type 3, the binary search to find the length of the suffix is equivalent to finding the smallest index l such that there are exactly k 0's between l and N. We can do this in \mathcal{O}(\log N) with order statistics as well.
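
For instance, if the positions of the 0's for the current threshold are kept in an order statistic tree (the same policy-based tree as in the subtask 3 sketch), that smallest index l can be read off directly; zeros and suffix_boundary below are illustrative names.

    #include <bits/stdc++.h>
    #include <ext/pb_ds/assoc_container.hpp>
    #include <ext/pb_ds/tree_policy.hpp>
    using namespace std;
    using namespace __gnu_pbds;

    typedef tree<int, null_type, less<int>, rb_tree_tag,
                 tree_order_statistics_node_update> ordered_set;

    // zeros holds the indices j with p_j below the current threshold.
    // Returns the smallest l such that [l, N] contains at most k 0's;
    // when there are more than k 0's in total, that range contains exactly k of them.
    int suffix_boundary(ordered_set& zeros, long long k) {
        long long z = (long long)zeros.size();
        if (z <= k) return 1;                          // the whole array already qualifies
        // The (z - k)-th smallest zero is the last zero that must be excluded from [l, N].
        return *zeros.find_by_order((size_t)(z - k - 1)) + 1;
    }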

There exist some simpler \mathcal{O}((N+Q) \log N) solutions for type 3 as well, based on observing that each number will always move right by 1 until it reaches a suffix of numbers greater than itself. After that, it will move left every time a larger number before it "pushes" it to the left. Finding the number of times the number gets "pushed" to the left turns out to be equivalent to some order statistic queries too.

Time complexity: \mathcal{O}((N+Q) \log N)

Further Thoughts

There is a very unexpected connection between this problem and k-th smallest queries, for both type 2 and type 3 queries. It is not obvious that the values produced by the type 2 formula above do indeed still form a permutation of 1, 2, \dots, N!

The solution presented above for type 3 queries can just as easily answer queries of the form 3 l r: Output the sum of the indices j with l \le p_j \le r. It is possible to generalize type 2 queries as well (2 l r: Output the sum of all p_i with l \le i \le r) with a solution based on splitting the sum into two parts: the numbers in the suffix and the numbers that are not, using some more data structures to handle these.

