Editorial for Baltic OI '02 P2 - Tennis Club


Remember to use this editorial only when stuck, and not to copy-paste code from it. Please be respectful to the problem author and editorialist.
Submitting an official solution before solving the problem yourself is a bannable offence.

It's quite easy to recognize that this task is about reconstructing a graph from its degree sequence, a classic problem solved by the Havel–Hakimi algorithm.

The key result is the following:

Hakimi (1962) and Havel (1955) showed that if a degree sequence is graphic, then there exists a graph $G$ in which a vertex $v$ of maximum degree $d$ is adjacent to the $d$ vertices of next highest degree.

Applying the above result recursively to the graph $G - v$, we can conclude that the problem is solvable using the following greedy algorithm:

$$\begin{aligned}
& \text{order the vertices in non-increasing order of degree} \\
& \text{let $v$ be the first vertex in the ordered list (i.e. the one with maximal degree)}\\
& \text{let $d$ be the degree of $v$}\\
& \text{if $d = 0$ then stop: we're done}\\
&\text{for $i = 1$ to $d$ do}\\
&\quad \text{let $v'$ be the $(i + 1)^{\text{th}}$ vertex in the ordered list}\\
&\quad \text{let $d'$ be the degree of $v'$}\\
&\quad \text{if $d' = 0$ then stop: we have a contradiction}\\
&\quad \text{add edge $(v, v')$ to the graph}\\
&\quad \text{set the degree of $v'$ to $d' - 1$}\\
&\text{end for}\\
&\text{set the degree of $v$ to $0$}\\
&\text{start over from the beginning}\\
\end{aligned}$$

When implemented naively, the above algorithm can take $N^3$ steps in the worst case: the list of vertices is sorted $N$ times, with $N^2$ steps spent on each sort. An obvious optimization is to use a better sorting algorithm, which yields $N^2 \log N$ steps in the worst case.
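
To make this concrete, here is a minimal C++ sketch of the greedy, re-sorting with `std::sort` on every pass, i.e. the $N^2 \log N$ variant. The I/O format (a count and a line of degrees in; `NO` on failure, otherwise one line of neighbour indices per vertex) is an assumption for illustration, not necessarily what the judge expects.

```cpp
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    int n;
    if (std::scanf("%d", &n) != 1) return 1;
    // {remaining degree, vertex index}, 1-based indices
    std::vector<std::pair<int, int>> deg(n);
    for (int i = 0; i < n; i++) {
        std::scanf("%d", &deg[i].first);
        deg[i].second = i + 1;
    }
    std::vector<std::vector<int>> adj(n + 1);
    for (;;) {
        // order the vertices in non-increasing order of degree
        std::sort(deg.begin(), deg.end(),
                  [](const std::pair<int, int>& a, const std::pair<int, int>& b) {
                      return a.first > b.first;
                  });
        int d = deg[0].first, v = deg[0].second;
        if (d == 0) break;            // all degrees satisfied: we're done
        if (d > n - 1) {              // not enough other vertices to connect to
            std::puts("NO");
            return 0;
        }
        for (int i = 1; i <= d; i++) {
            if (deg[i].first == 0) {  // contradiction: sequence is not graphic
                std::puts("NO");
                return 0;
            }
            adj[v].push_back(deg[i].second);  // add edge (v, v')
            adj[deg[i].second].push_back(v);
            deg[i].first--;                   // set degree of v' to d' - 1
        }
        deg[0].first = 0;                     // set degree of v to 0
    }
    for (int v = 1; v <= n; v++) {
        std::sort(adj[v].begin(), adj[v].end());
        for (int u : adj[v]) std::printf("%d ", u);
        std::printf("\n");
    }
    return 0;
}
```

Note that the final sorting of the adjacency lists also costs up to $N^2 \log N$ in the worst case; the optimizations below address both this and the per-pass sort.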

Another optimization takes advantage of the fact that after each pass the list of vertices is already mostly sorted. Leaving out the head element, which we can ignore during subsequent iterations, the rest of the list consists of two sorted sub-lists with known endpoints. Normally such a list could be re-sorted with a merge algorithm in linear time; here we need even less than a full merge, because the list was sorted before the pass, each item in the first sub-list had its degree reduced by exactly one, and the second sub-list did not change at all.
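
Under the same assumptions as the sketch above, the re-merge step might look like the following (`Item` and `remerge` are illustrative names). After a pass, the decremented sub-list and the untouched tail are both sorted non-increasingly, so a standard linear merge recombines them; replacing the per-pass `std::sort` with this call brings the reconstruction itself down to $N^2$.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// {remaining degree, vertex index}
using Item = std::pair<int, int>;

// a[lo..mid-1] (each degree just reduced by one) and a[mid..hi] (untouched)
// are both sorted in non-increasing order of degree; recombine them into a
// single sorted run in O(hi - lo) time.
void remerge(std::vector<Item>& a, int lo, int mid, int hi) {
    std::vector<Item> merged;
    merged.reserve(hi - lo + 1);
    int i = lo, j = mid;
    while (i < mid && j <= hi)
        merged.push_back(a[i].first >= a[j].first ? a[i++] : a[j++]);
    while (i < mid) merged.push_back(a[i++]);
    while (j <= hi) merged.push_back(a[j++]);
    std::copy(merged.begin(), merged.end(), a.begin() + lo);
}
```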

The above optimization does not give us a better worst-case complexity if the indices have to be sorted on each output line: for a complete or near-complete graph, sorting the list of indices takes about $N \log N$ operations per vertex, for a total of $N^2 \log N$. However, because the indices come from a very narrow range ($1$ to $N$), we can sort them in linear time with a counting sort, which truly reduces the worst-case complexity to $N^2$. Alternatively, we could construct the graph directly in an adjacency matrix, which makes even the counting step of the counting sort unnecessary, as sketched below.
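
With the adjacency-matrix variant, the per-vertex sorting disappears entirely, because scanning a matrix row already visits neighbour indices in increasing order. A sketch (the output format is again an assumption):

```cpp
#include <cstdio>
#include <vector>

// adj[v][u] was set to 1 whenever the greedy added the edge (v, u).
// Scanning each row visits the neighbour indices in increasing order,
// so every output line comes out sorted with no extra work.
void print_neighbours(const std::vector<std::vector<char>>& adj, int n) {
    for (int v = 1; v <= n; v++) {
        for (int u = 1; u <= n; u++)
            if (adj[v][u]) std::printf("%d ", u);
        std::printf("\n");
    }
}
```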

Yet another optimization is to compute the total sum of the degrees while reading the input and use its parity as a quick rejection test: every edge contributes to exactly two degrees, so the degree sum of any graph is even, and an odd sum can be rejected immediately. This does not change the worst-case complexity, but it could still be handy in a real-life application, provided the input specification does not rule out such inputs by definition.
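
The check itself is a few lines at input time; a sketch, with the `NO` output again assumed:

```cpp
#include <cstdio>

int main() {
    int n;
    if (std::scanf("%d", &n) != 1) return 1;
    long long sum = 0;
    for (int i = 0; i < n; i++) {
        int d;
        std::scanf("%d", &d);
        sum += d;
    }
    // Every edge contributes to exactly two degrees, so the degree sum
    // of any graph is even; an odd total cannot be graphic.
    if (sum % 2 != 0) {
        std::puts("NO");
        return 0;
    }
    // ... proceed with the greedy reconstruction ...
    return 0;
}
```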

