Editorial for COCI '16 Contest 2 #3 Nizin


Remember to use this editorial only when stuck, and not to copy-paste code from it. Please be respectful to the problem author and editorialist.
Submitting an official solution before solving the problem yourself is a bannable offence.

Let's first analyze the suboptimal solutions from the SCORING section.

For 30\% of points, the task could have been solved using an exhaustive search. In other words, we could have tried every possible sequence of moves on the array and, in the end, output the minimal number of moves after which the array becomes a palindrome. Since we can make N-1 different moves on an array of length N, and since each move decreases the length of the array by 1, we conclude that the complexity of this solution is \mathcal O(N!).
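
As an illustration (this is a minimal sketch, not the official code, and it assumes the input is N followed by the N numbers), the exhaustive search can be written as a recursion that tries every possible adjacent join and keeps the smallest number of moves that produces a palindrome:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Checks whether the array is currently a palindrome.
bool isPalindrome(const vector<long long>& a) {
    for (size_t i = 0, j = a.size() - 1; i < j; ++i, --j)
        if (a[i] != a[j]) return false;
    return true;
}

// Tries every possible adjacent join; exponential, only feasible for tiny N.
int solveBrute(const vector<long long>& a) {
    if (isPalindrome(a)) return 0;
    int best = INT_MAX;
    for (size_t i = 0; i + 1 < a.size(); ++i) {
        vector<long long> b(a);
        b[i] += b[i + 1];              // join positions i and i+1
        b.erase(b.begin() + i + 1);
        best = min(best, 1 + solveBrute(b));
    }
    return best;
}

int main() {
    int n;
    scanf("%d", &n);
    vector<long long> a(n);
    for (auto& x : a) scanf("%lld", &x);
    printf("%d\n", solveBrute(a));
    return 0;
}
```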

Now let's look at the first and last element of the array. Since we wish to transform the array into a palindrome, the first and last element must be equal in the end. If they are not equal at a given moment, at least one of them obviously must be joined with its neighbour. This line of thinking leads us to a recursive formulation of the solution. Let f(l, r) denote the minimal number of moves it takes for the elements A[l], A[l+1], \dots, A[r] to be transformed into a palindrome. Given the previous remarks,

\displaystyle \begin{align*}
f(l,r) &= 0 && \text{if } l > r, \\
f(l,r) &= \min\bigl(1+f(l+1, r),\ 1+f(l, r-1),\ \mathbf{f(l+1, r-1)}\bigr) && \text{if } l \le r,
\end{align*}

where the part written in bold is taken into consideration only if A[l] = A[r] holds at that moment. Notice that, at every step of the algorithm, the value at position l corresponds to the prefix sum A[1] + A[2] + \dots + A[l] and the value at position r corresponds to the suffix sum A[r] + \dots + A[N], while the elements in the middle are never modified; after a matched pair is removed, the prefix sum at the new left end and the suffix sum at the new right end overcount the true end values by the same amount, so comparing these two sums still decides A[l] = A[r] correctly. Using a dynamic programming approach, more precisely the memoization technique, the complexity of such a solution is \mathcal O(N^2), which is enough for 60\% of points.
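
Below is a sketch of how the memoized recursion might be implemented for the 60\% subtask (the names pre, suf and memo are mine, not from the official source). Following the observation above, the test A[l] = A[r] is performed by comparing the prefix sum pre[l] with the suffix sum suf[r]:

```cpp
#include <bits/stdc++.h>
using namespace std;

// pre[i] = A[1] + ... + A[i], suf[i] = A[i] + ... + A[N] (1-indexed).
int n;
vector<long long> pre, suf;
vector<vector<int>> memo;

int f(int l, int r) {
    if (l >= r) return 0;                      // at most one element left: already a palindrome
    int &res = memo[l][r];
    if (res != -1) return res;
    res = 1 + min(f(l + 1, r), f(l, r - 1));   // join the left end or the right end inward
    if (pre[l] == suf[r])                      // end values are equal, match them for free
        res = min(res, f(l + 1, r - 1));
    return res;
}

int main() {
    scanf("%d", &n);
    vector<long long> a(n + 1);
    pre.assign(n + 2, 0);
    suf.assign(n + 2, 0);
    for (int i = 1; i <= n; ++i) {
        scanf("%lld", &a[i]);
        pre[i] = pre[i - 1] + a[i];
    }
    for (int i = n; i >= 1; --i) suf[i] = suf[i + 1] + a[i];
    memo.assign(n + 2, vector<int>(n + 2, -1));  // O(N^2) memory, fine only for small N
    printf("%d\n", f(1, n));
    return 0;
}
```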

In order to obtain all points, we need to use the fact that the numbers in the input are positive. As in the previous paragraph, we approach the task from the outside in. When we focus on an interval [l, r], we distinguish a few cases:

If A[l] = A[r], then we don't need to join either of the outer elements; instead, we continue solving the interval [l+1, r-1].

If A[l] < A[r], then joining the elements A[r] and A[r-1] can never pay off: since all the numbers are positive, the right end would only grow even further above A[l], so the only way to make the ends equal is to join A[l] with its neighbour. Therefore, we join the elements A[l] and A[l+1] and continue solving the interval [l+1, r]. The case A[l] > A[r] is handled analogously.

Since the interval shrinks by at least 1 in each step of the algorithm, we conclude that its complexity is \mathcal O(N), sufficient for all points.
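
Putting the cases together, a minimal sketch of this greedy two-pointer solution could look as follows (again assuming the input is N followed by the N numbers):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<long long> a(n);
    for (auto& x : a) scanf("%lld", &x);

    int l = 0, r = n - 1, moves = 0;
    long long left = a[l], right = a[r];   // current values at the two ends
    while (l < r) {
        if (left < right) {                // left end is smaller: join it with its right neighbour
            left += a[++l];
            ++moves;
        } else if (left > right) {         // right end is smaller: join it with its left neighbour
            right += a[--r];
            ++moves;
        } else {                           // the ends match: fix them and move inward
            ++l; --r;
            if (l <= r) { left = a[l]; right = a[r]; }
        }
    }
    printf("%d\n", moves);
    return 0;
}
```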

