Observe that each operation affects only even-indexed elements or only odd-indexed elements. We can maintain two lazy segment trees supporting range increase updates, range set updates, and point queries, one for each parity.
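As a minimal sketch of such a tree (names and operation format are our own, not from the original solution), here is a lazy segment tree that stores only tags and resolves them at the leaf on a point query; one copy would be kept per parity, with array index $i$ mapped to position $\lfloor i/2 \rfloor$ in the tree for parity $i \bmod 2$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch: lazy segment tree supporting range assign, range add, point query.
// Only tags are stored; a point query pushes tags down and resolves the leaf.
struct LazyTree {
    int n;
    vector<long long> base, setv, addv;
    vector<char> has_set;
    LazyTree(const vector<long long>& a)
        : n((int)a.size()), base(a), setv(4*n), addv(4*n), has_set(4*n) {}

    void apply_set(int x, long long v) { has_set[x] = 1; setv[x] = v; addv[x] = 0; }
    void apply_add(int x, long long v) { if (has_set[x]) setv[x] += v; else addv[x] += v; }
    void push(int x) {                        // move x's tags onto its children
        for (int c : {2*x, 2*x+1}) {
            if (has_set[x]) apply_set(c, setv[x]);
            apply_add(c, addv[x]);
        }
        has_set[x] = 0; addv[x] = 0;
    }
    // op = 0: add v on [l, r]; op = 1: assign v on [l, r]  (inclusive, 0-indexed)
    void update(int x, int lo, int hi, int l, int r, int op, long long v) {
        if (r < lo || hi < l) return;
        if (l <= lo && hi <= r) { op ? apply_set(x, v) : apply_add(x, v); return; }
        push(x);
        int mid = (lo + hi) / 2;
        update(2*x, lo, mid, l, r, op, v);
        update(2*x+1, mid+1, hi, l, r, op, v);
    }
    long long query(int x, int lo, int hi, int i) {
        if (lo == hi) return has_set[x] ? setv[x] : base[lo] + addv[x];
        push(x);
        int mid = (lo + hi) / 2;
        return i <= mid ? query(2*x, lo, mid, i) : query(2*x+1, mid+1, hi, i);
    }
    void add(int l, int r, long long v)    { update(1, 0, n-1, l, r, 0, v); }
    void assign(int l, int r, long long v) { update(1, 0, n-1, l, r, 1, v); }
    long long get(int i) { return query(1, 0, n-1, i); }
};
```

The key detail is tag composition: an assign tag overwrites any pending tags on a node, while an add tag arriving after an assign is folded into the assigned value.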
We can represent such an update as the combination of two updates of the form handled in subtask 1; the rest of the solution is unchanged.
Imagine that all values of $k$ were the same. We can then do something very similar to subtask 1. Suppose we rearrange the array with indices sorted by the tuple $(i \bmod k,\; i)$, so that indices of the same congruence class modulo $k$ are adjacent and in ascending order. For example, with $N = 6$ and $k = 2$ (0-indexed), we would reorder the indices in the order $0, 2, 4, 1, 3, 5$. Every update now corresponds to a single subarray of this rearranged array. We can build a lazy segment tree over this rearranged array, supporting range increase updates and point queries.
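The rearrangement can be described by an explicit position function (a sketch in our own notation, 0-indexed; a closed form without the loop exists, but the loop over at most $k$ residues is clearer):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Position of index i in the array rearranged by the key (i mod k, i):
// all residue classes r < i mod k come first, and within its own class
// i is the (i / k)-th element.
long long pos_in_rearranged(long long i, long long k, long long n) {
    long long before = 0;                // total size of earlier residue classes
    for (long long r = 0; r < i % k; r++)
        before += (n - r + k - 1) / k;   // ceil((n - r) / k) indices in class r
    return before + i / k;
}
```

An update touching indices $l, l+k, l+2k, \ldots$ then lands on the contiguous positions starting at `pos_in_rearranged(l, k, n)`, which is why a single range update on the rearranged array suffices.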
Now, we will consider the original problem. We could build a segment tree for every possible value of $k$, updating the relevant segment tree depending on the $k$ of each update, and answer queries by performing a point query in every segment tree and summing the results. However, this is too slow.
We should also consider the naive solution, where we simply loop through the affected elements and update their values. An operation then takes $O(\frac{N}{k})$ time, which is fast when $k$ is large.
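Concretely (assuming an update adds $v$ to every $k$-th element of a range; the exact operation format is our guess from context):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naive update: add v to a[l], a[l+k], a[l+2k], ... while staying <= r.
// Touches (r - l) / k + 1 elements, i.e. O(N / k) work per operation.
void naive_update(vector<long long>& a, int l, int r, int k, long long v) {
    for (int i = l; i <= r; i += k) a[i] += v;
}
```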
We can actually combine our segment tree solution with the naive solution: pick a threshold $B$, create segment trees corresponding to updates with $k \le B$, and naively perform updates with $k > B$. We can then perform updates in $O(\log N + \frac{N}{B})$ and queries in $O(B \log N)$, giving a final complexity of $O(Q\sqrt{N \log N})$. Here, it is optimal to choose $B = \sqrt{N / \log N}$.
Observe that updates only affect a single subarray, whereas each query performs $B$ point queries. We can therefore sacrifice speed on our range update operations in order to improve the complexity of our point queries. This can be done using a SQRT bucketing structure, which allows us to perform range updates in $O(\sqrt{N})$ and point queries in $O(1)$. Our final complexity is therefore $O(Q\sqrt{N})$. Here, it is optimal to choose $B = \sqrt{N}$.
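A minimal sketch of such a bucketing structure (range add only; names are our own): a range add writes directly into partially covered blocks and stamps a single per-block tag on fully covered ones, so a point query only reads one element plus one block tag.

```cpp
#include <bits/stdc++.h>
using namespace std;

// SQRT bucketing: O(sqrt(N)) range add, O(1) point query.
struct SqrtBuckets {
    int n, b;                            // b = block size, about sqrt(n)
    vector<long long> val, blk;          // per-element and per-block additions
    SqrtBuckets(int n)
        : n(n), b(max(1, (int)sqrt((double)n))), val(n), blk(n / b + 1) {}

    void add(int l, int r, long long v) {            // inclusive [l, r]
        int bl = l / b, br = r / b;
        if (bl == br) { for (int i = l; i <= r; i++) val[i] += v; return; }
        for (int i = l; i < (bl + 1) * b; i++) val[i] += v;  // left partial block
        for (int i = bl + 1; i < br; i++) blk[i] += v;       // full blocks: O(1) each
        for (int i = br * b; i <= r; i++) val[i] += v;       // right partial block
    }
    long long get(int i) { return val[i] + blk[i / b]; }
};
```

One such structure would be kept per value of $k \le B$, over that $k$'s rearranged array.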
It's also possible to pass just by constant-optimising an earlier subtask's solution, using Fenwick trees instead of segment trees.
As a final note, we can reduce the space complexity to $O(N + Q)$ by solving the problem independently for each value of $k$, and then combining the results.
The reason why our earlier solution does not work here is that the updates are no longer commutative: a range set update may partially invalidate some of the previous range increase updates. However, if we can determine, for each query, the last range set update which affected the queried element, then we can subtract off the contribution of all range increase updates which occurred before that set update.
We can use essentially the same technique as before to compute, for each index, the last range set update which affected it. Once we've done this, we simply run through the operations again, but this time with twice as many queries: each original query becomes the difference of two new queries, one of which is made at the time of the last range set update.
Note that there is an online solution where we use persistent lazy segment trees instead of performing a second pass through the operations, but this is more complicated and doesn't extend to subtask 6.
Combine subtasks 4 and 5.