Editorial for WC '15 Finals S4 - Server Hacking
Submitting an official solution before solving the problem yourself is a bannable offence.
For starters, we need to be able to compare a pair of public keys and determine which one is weaker. This can be done by prime factorizing each one to compute its private key, and then comparing the private keys by the earliest index at which they differ. The process can be simplified and sped up in the average case by prime factorizing both values simultaneously, stopping as soon as one of the private keys has been found to be lexicographically smaller than the other, though this is not necessary.
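As a small illustration of the comparison step (the exact representation of a private key isn't restated here, so the lists below assume a private key is the public key's prime factors in increasing order, with multiplicity, and that lexicographically smaller means weaker):

```python
# Hypothetical private keys, assumed to be each public key's prime factors
# listed in increasing order with multiplicity.
key_a = [2, 2, 3, 7]  # public key 84 = 2^2 * 3 * 7
key_b = [2, 3, 3, 5]  # public key 90 = 2 * 3^2 * 5

# Python compares lists lexicographically: the earliest index at which they
# differ (here index 1, where 2 < 3) decides the comparison.
weaker = key_a if key_a < key_b else key_b
```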
Prime factorizing a single integer $v$ is a standard procedure which can be accomplished in $O(\sqrt{v})$ time by looping a variable $i$ up from $2$ to $\lfloor \sqrt{v} \rfloor$. At each iteration, if $v$ is divisible by $i$, then $i$ must be a prime factor of $v$, at which point we can repeatedly divide $v$ by $i$ as long as it's still divisible by it in order to determine $i$'s exponent in the factorization. Once $i$ has exceeded $\sqrt{v}$, if the final value of $v$ is still larger than $1$, it must be a prime factor itself (with an exponent of $1$).
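A direct Python translation of this trial-division procedure might look like the following (the function name is arbitrary):

```python
def factorize(v):
    """Prime factorization of v by trial division, as (prime, exponent) pairs."""
    factors = []
    i = 2
    while i * i <= v:          # loop i up to sqrt(v)
        if v % i == 0:         # i must be prime here: all smaller factors are gone
            exp = 0
            while v % i == 0:  # divide out i repeatedly to find its exponent
                v //= i
                exp += 1
            factors.append((i, exp))
        i += 1
    if v > 1:                  # any leftover value > 1 is itself a prime factor
        factors.append((v, 1))
    return factors
```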
A naive solution consists of testing whether each computer is a valid weakpoint, until one is found. In the worst case, this requires comparing every pair of adjacent computers, which requires prime factorizing all $N$ public keys at least once. As such, the time complexity is $O(N \sqrt{M})$, where $M$ is the largest value of any computer's public key. Given the limits on $N$ and $M$, the worst-case number of steps taken will be in the billions, which is too slow. The $\sqrt{M}$ factor can be optimized through advanced prime factorization techniques such as the Sieve of Eratosthenes, Fermat's method, or Pollard's rho algorithm. However, these techniques can be dubious to depend on during a live contest. To achieve a better time complexity, we can instead try to optimize the $N$ factor – in other words, find a way to somehow not have to prime factorize every public key.
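A sketch of this naive scan, under the same assumptions as before (a private key is the sorted list of prime factors with multiplicity, a weakpoint's private key is smaller than both neighbours', and a missing neighbour is treated as larger); the function names are illustrative:

```python
def private_key(v):
    """Prime factors of v in increasing order, with multiplicity (trial division)."""
    factors = []
    i = 2
    while i * i <= v:
        while v % i == 0:
            factors.append(i)
            v //= i
        i += 1
    if v > 1:
        factors.append(v)
    return factors

def find_weakpoint_naive(keys):
    """Test every computer in order; factorizes all N keys in the worst case."""
    priv = [private_key(k) for k in keys]
    n = len(keys)
    for i in range(n):
        if (i == 0 or priv[i] < priv[i - 1]) and (i == n - 1 or priv[i] < priv[i + 1]):
            return i  # 0-based index of a weakpoint
    return -1
```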
We can formulate this problem in another way. If we let $p_i$ be the private key of the $i$-th computer, then we're looking for any local minimum of the function $f(i) = p_i$. Despite the fact that $f$ is only defined at integer values, we can use a technique common for finding local minima of continuous curves – namely, ternary search. At all points in the ternary search, we'll maintain the invariant that at least one local minimum must exist in the range of indices $[lo, hi]$. Initially, $lo = 1$ and $hi = N$, and we'll stop once $lo = hi$. Let's now compare the values of the function one-third and two-thirds of the way between $lo$ and $hi$ – specifically, if we let $m_1 = \lfloor (2 \cdot lo + hi) / 3 \rfloor$ and $m_2 = \lceil (lo + 2 \cdot hi) / 3 \rceil$, we want to compare $f(m_1)$ to $f(m_2)$ ($m_1$ is rounded down while $m_2$ is rounded up so that, when $hi - lo = 1$, we'll have $m_1 = lo$ and $m_2 = hi$). Now, if $f(m_1) < f(m_2)$, then there must be at least one local minimum in the range of indices $[lo, m_2 - 1]$, so we can set $hi = m_2 - 1$. Otherwise, if $f(m_1) \ge f(m_2)$, we can set $lo = m_1 + 1$. At each iteration of this ternary search, the size of the range reduces by approximately $1/3$, meaning that it will be reduced to size one after $O(\log N)$ iterations. Therefore, the time complexity of this algorithm is $O(\sqrt{M} \log N)$, which is sufficient to pass.
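The ternary search above can be sketched as follows (0-based indices; `private_key` again assumes the sorted-factor-list representation, and caching ensures each key is factorized at most once):

```python
def private_key(v):
    """Prime factors of v in increasing order, with multiplicity (trial division)."""
    factors = []
    i = 2
    while i * i <= v:
        while v % i == 0:
            factors.append(i)
            v //= i
        i += 1
    if v > 1:
        factors.append(v)
    return factors

def find_weakpoint_ternary(keys):
    """Ternary search for a local minimum of f(i) = private key of computer i."""
    cache = {}
    def f(i):
        if i not in cache:
            cache[i] = private_key(keys[i])
        return cache[i]

    lo, hi = 0, len(keys) - 1
    while lo < hi:
        m1 = (2 * lo + hi) // 3      # one third of the way, rounded down
        m2 = (lo + 2 * hi + 2) // 3  # two thirds of the way, rounded up
        if f(m1) < f(m2):
            hi = m2 - 1  # some local minimum lies in [lo, m2 - 1]
        else:
            lo = m1 + 1  # some local minimum lies in [m1 + 1, hi]
    return lo
```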
The ternary search can also be replaced with a very similar binary search. Consider a range $[lo, hi]$ of the array $p$. First, we can check to make sure that neither $p_{lo}$ nor $p_{hi}$ (the endpoints) are weakpoints. If even one of them is a weakpoint, then we are done. Otherwise, we can find the midpoint $mid = \lfloor (lo + hi) / 2 \rfloor$ to break the interval in half. There are two cases:
- $f(mid) < f(mid + 1)$:
  `\.../.../`
  in which case we should recursively examine the left subinterval $[lo, mid]$
- $f(mid) > f(mid + 1)$:
  `\...\.../`
  in which case we should recursively examine the right subinterval $[mid + 1, hi]$
Eventually, we must have found a local minimum when the length of the interval is reduced to $1$. In fact, a careful implementation will render it unnecessary to check the endpoints. The time complexity of this algorithm is also $O(\sqrt{M} \log N)$, which is faster than the previous ternary search solution by only a constant factor.
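One such careful implementation, which halves the interval without explicit endpoint checks, might look like this (same assumed `private_key` representation as above; it relies on adjacent private keys being distinct, which holds if all public keys are distinct):

```python
def private_key(v):
    """Prime factors of v in increasing order, with multiplicity (trial division)."""
    factors = []
    i = 2
    while i * i <= v:
        while v % i == 0:
            factors.append(i)
            v //= i
        i += 1
    if v > 1:
        factors.append(v)
    return factors

def find_weakpoint_binary(keys):
    """Binary search for a local minimum of the private keys (0-based index)."""
    cache = {}
    def f(i):
        if i not in cache:
            cache[i] = private_key(keys[i])
        return cache[i]

    lo, hi = 0, len(keys) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if f(mid) < f(mid + 1):
            hi = mid      # ascending at mid: a local minimum lies in [lo, mid]
        else:
            lo = mid + 1  # descending at mid: a local minimum lies in [mid + 1, hi]
    return lo
```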