Editorial for Baltic OI '05 P6 - Polygon


Remember to use this editorial only when stuck, and not to copy-paste code from it. Please be respectful to the problem author and editorialist.
Submitting an official solution before solving the problem yourself is a bannable offence.

Since the task accepts any convex polygon as the output, there are many different ways to approach it.

The one taken by the example solution is to lay the vertices out on a circle. With this approach, there are three cases the program must handle:

  • if the longest edge is at least as long as the others combined, then there's no solution;
  • if the longest edge of the polygon is "sufficiently long", the center of the circle will be outside of the polygon; in this case, we will say the polygon is "thin";
  • otherwise, we will say the polygon is "round"; a sketch of this classification follows the list.
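Here is a minimal sketch of that classification (the helper names and the convention that the longest edge is stored last are inventions of this sketch, not the official code). The key observation: on the smallest admissible circle, whose diameter equals the longest edge, the longest edge subtends a central angle of exactly \pi, so the polygon is "round" exactly when the central angles of the remaining edges add up to at least \pi.

```cpp
#include <cmath>
#include <vector>

const double PI = std::acos(-1.0);

// Central angle subtended by a chord of length `len` in a circle of
// diameter `d` (valid while len <= d).
double centralAngle(double len, double d) {
    return 2.0 * std::asin(len / d);
}

// Hypothetical classifier: `a` holds the edge lengths, with the
// longest edge moved to the back.  Returns -1 for "no solution",
// 0 for "thin" and +1 for "round".
int classify(const std::vector<double>& a) {
    int n = (int) a.size();
    double longest = a[n - 1], rest = 0.0;
    for (int i = 0; i < n - 1; i++) rest += a[i];
    if (longest >= rest) return -1;  // longest edge >= the others combined

    // On the smallest admissible circle (diameter = longest edge) the
    // longest edge subtends exactly pi.  If the remaining edges cannot
    // cover the other pi, the centre must lie outside the polygon.
    double sum = 0.0;
    for (int i = 0; i < n - 1; i++) sum += centralAngle(a[i], longest);
    return sum >= PI ? 1 : 0;
}
```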

As there is no way to compute the correct diameter of the circle directly from the edge lengths, we start from a known minimum (for the polygon to fit in the circle, the diameter must be at least the length of the longest edge) and keep doubling the diameter until we get a circle that is too big. After that, we can find the correct size of the circle using binary search between the last diameter that was too small and the first that was too big.
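In code, the search could look like the following sketch (err and solveDiameter are names invented here; in both cases err is positive while the circle is still too small and negative once it is too big):

```cpp
#include <cmath>
#include <vector>

const double PI = std::acos(-1.0);

// Positive while the circle of diameter d is still too small,
// negative once it is too big (`a` is as above: longest edge last).
double err(const std::vector<double>& a, double d, bool round) {
    int n = (int) a.size();
    double sum = 0.0;  // total central angle of all edges but the longest
    for (int i = 0; i < n - 1; i++) sum += 2.0 * std::asin(a[i] / d);
    double amax = 2.0 * std::asin(a[n - 1] / d);
    // "round": all central angles must add up to exactly 2*pi;
    // "thin":  the other edges must span the same arc as the longest one.
    return round ? (sum + amax) - 2.0 * PI : amax - sum;
}

double solveDiameter(const std::vector<double>& a, bool round) {
    double lo = a.back();            // the circle must contain the longest edge
    double hi = lo;
    while (err(a, hi, round) > 0) {  // keep doubling until the circle is too big
        lo = hi;
        hi *= 2.0;
    }
    for (int it = 0; it < 100; it++) {  // then binary search on the bracket
        double mid = (lo + hi) / 2.0;
        if (err(a, mid, round) > 0) lo = mid; else hi = mid;
    }
    return (lo + hi) / 2.0;
}
```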

Let's first consider the case when the polygon is "round": if all edges have equal length (say, A), then we start from the diameter A and end up with a diameter of about \frac{N \times A}{\pi}. Therefore, no more than \mathcal O(\log N) doublings are needed, with \mathcal O(N) work to be done for each doubling.
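The figure \frac{N \times A}{\pi} follows from comparing the perimeter of the polygon with the circumference of the circle it approaches:

N \times A \approx \pi \times d \implies d \approx \frac{N \times A}{\pi},

so, starting from the diameter A, about \log_2 \frac{N}{\pi} = \mathcal O(\log N) doublings suffice.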

In the "thin" case, the diameter of the circle grows as the polygon gets thinner. Therefore, the worst case occurs when the difference between the length of the longest edge and the total of others is minimal, that is, when we have the edge lengths 10\,000, 5\,000, and 5\,001. It may be computed even with just pencil and paper that in this case, the final diameter of the circle is about 35 times the initial, which means only about \log 35 doublings, which is even less than for the worst "round" case.

Thus, in the worst case, the first too-big diameter is about \frac{2 \times N \times A}{\pi}, and we have to perform the binary search over a range of size \frac{N \times A}{\pi}. If we estimate the required precision P of the diameter to be about the same as the required precision of the edge lengths, this means \mathcal O(\log(\frac{N \times A}{P})) steps in the search, with \mathcal O(N) work to be done for each step.

Altogether, we have to perform about N \times \log(\frac{N \times A}{P}) operations, which is about 30\,000 under the limits given for N, A, and P in the task text. Even though each operation is rather expensive (several floating-point computations, function calls, etc.), the algorithm is still quite efficient.
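For completeness, once the correct diameter is known, producing the output is one more \mathcal O(N) pass: walk around the circle, advancing by each edge's central angle. Below is a sketch under the same assumptions as above (longest edge stored last; restoring the original edge order, if the task requires it, is omitted). Conveniently, the same walk handles both the "round" and the "thin" case, since the chord that closes it has exactly the length of the longest edge.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Place the vertices on a circle of diameter d and print them.
// `a` again holds the edge lengths with the longest edge last;
// that edge is the chord that closes the walk.
void printVertices(const std::vector<double>& a, double d) {
    int n = (int) a.size();
    double r = d / 2.0, phi = 0.0;
    for (int i = 0; i < n - 1; i++) {
        std::printf("%.6f %.6f\n", r * std::cos(phi), r * std::sin(phi));
        phi += 2.0 * std::asin(a[i] / d);  // advance by this edge's angle
    }
    // Last vertex; the edge back to the first vertex is the longest one.
    std::printf("%.6f %.6f\n", r * std::cos(phi), r * std::sin(phi));
}
```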

